
Tracking supermarket prices with Playwright

brikym
26 replies
20h2m

I have been doing something similar for New Zealand since the start of the year with Playwright/TypeScript, dumping parquet files to cloud storage. I've just been collecting the data; I have not yet displayed it. Most of the work is getting around reverse proxy services like Akamai and Cloudflare.

At the time I wrote it I thought nobody else was doing this, but now I know of at least 3 startups doing the same in NZ. It seems the inflation really stoked a lot of innovation here. The patterns are about what you'd expect. Supermarkets are up to the usual tricks of arbitrarily making pricing as complicated as possible, using 'sawtooth' methods to segment time-poor people from poor people. Often they'll segment on brand loyalty vs price-sensitive people; there might be 3 popular brands of chocolate and every week only one of them will be sold at a fair price.

seoulmetro
8 replies
18h48m

Legality of this is rocky in Australia. I dare say that NZ is the same?

There are so many scrapers that come and go doing this in AU but are usually shut down by the big supermarkets.

It's a cycle of usefulness and "why doesn't this exist", except it has existed many times before.

jaza
2 replies
12h14m

Aussie here. I hadn't heard that price scraping is only quasi-legal here and that scrapers get shut down by the big supermarkets - but then again I'm not surprised.

I'm thinking of starting a little price comparison site, mainly to compare select products at Colesworths vs Aldi (I've just started doing more regular grocery shopping at Aldi myself). But as far as I know, Aldi don't have any prices / catalogues online, so my plan is to just manually enter the data myself in the short term, and to appeal to crowdsourcing the data in the long term. And the plan is to just make it a simple SSG site (e.g. Hugo powered), with the data all in simple markdown / json files, all sourced via github pull requests.

Feel free to get in touch if you'd like to help out, or if you know of anything similar that already exists: greenash dot net dot au slash contact

sumedh
0 replies
9h18m

get shut down by the big supermarkets

How do they shut them down?

jasomill
0 replies
10h0m

But as far as I know, Aldi don't have any prices / catalogues online

There are a few here, but more along the lines of a flyer than a catalog:

https://www.aldi.com.au/groceries/

Aldi US has a more-or-less complete catalog online, so it might be worth crowdsourcing suggestions to the parent company to implement the same in Australia.

russelg
1 replies
18h17m

I think with the current climate wrt the big supermarkets in AU, now would be the time to push your luck. The court of public opinion will definitely not be on the supermarkets' side, and the government may even step in.

timrkn
0 replies
12h38m

Agreed. Should we make something and find out?

timrkn
0 replies
12h40m

Agreed. Hopefully the gov's price gouging mitigation strategy includes the free flow of information (allowing scraping for price comparison).

I’ve been interested in price comparison for Australia for a while. I'm a product designer/manager with a concept prototype design, looking for others interested in working on it. My email is on my profile if you are.

sumedh
0 replies
9h18m

Legality of this is rocky in Australia. I dare say that NZ is the same?

You might be breaking the site's terms and conditions, but that does not mean it's illegal.

Dan Murphy uses a similar thing, they have their own price checking algorithm.

walterbell
7 replies
19h46m

Those who order grocery delivery online would benefit from price comparisons, because they can order from multiple stores at the same time. In addition, there's only one marketplace that has all the prices from different stores.

gruez
4 replies
17h20m

Those who order grocery delivery online would benefit from price comparisons, because they can order from multiple stores at the same time.

Not really, since the delivery fees/tips that you have to pay would eat up all the savings, unless maybe you're buying for a family of 5 or something.

seaal
3 replies
10h52m

Instacart, Uber Eats, DoorDash all sell gift cards of $100 for $80 basically year round — when you combine that 20% with other promotions I often have deliveries that are cheaper than shopping in person.

maccard
1 replies
8h34m

Uber Eats and Deliveroo both have list prices that are 15-20+% above the shelf price in the same supermarket. Add a delivery fee, plus the "service charge", and I've _never_ found it to be competitive, let alone cheaper.

walterbell
0 replies
7h52m

Some vendors offer "in-store prices" on Instacart.

walterbell
0 replies
7h53m

Are those usually discounted via coupon sites?

teruakohatu
1 replies
19h39m

I think the fees they tack on for online orders would ruin ordering different products from different stores. It mostly makes sense with staples that don't perish.

With fresh produce I find Pak n Save a lot more variable with quality, making online orders more risky despite the lower cost.

walterbell
0 replies
17h36m

For those who have to order online (e.g. elderly), they are paying the fees anyway. They can avoid minimum order fees with bulk purchases of staples. Their bot/app can monitor prices and when a staple goes on sale, they can order multiple items to meet the threshold for a lower fee.

ustad
2 replies
11h44m

Can anyone comment on how supermarkets exploit customer segmentation by updating prices? How do the time-poor and the poor-poor generally respond?

“Often they'll segment on brand loyalty vs price-sensitive people; there might be 3 popular brands of chocolate and every week only one of them will be sold at a fair price.”

brikym
1 replies
11h27m

Let's say there are three brands of some item. Each week one of the brands is rotated to $1 while the others are $2. And let's also suppose that the supermarket pays 80c per item.

The smart shopper might buy in bulk only once every three weeks, when their favourite brand is at the lower price, or switch to the cheapest brand every week. A hurried or lazy shopper might just pick their favourite brand every week. If they each buy one item a week, the lazy shopper will have spent $5 over the three weeks, while the smart shopper has only spent $3.

They've made 60c off the smart shopper and $2.60 off the lazy shopper. By segmenting out the lazy shoppers they've made an extra $2. The whole idea of rotating the prices has nothing to do with the cost of goods sold; it's all about making shopping a pain in the ass for busy people and catching them out.
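
A minimal sketch of that arithmetic in TypeScript, using only the numbers from the comment above ($1/$2 rotation, 80c cost, three weeks):

    // sawtooth.ts - model three brands where one rotates to $1 each week
    const COST = 0.8;   // what the supermarket pays per item
    const SALE = 1.0;   // the rotating "fair" price
    const FULL = 2.0;   // the everyday price
    const WEEKS = 3;    // one full rotation

    // Smart shopper: always buys whichever brand is on sale this week (or stocks up).
    const smartSpend = WEEKS * SALE;              // $3.00
    // Lazy shopper: always buys their favourite brand, on sale only 1 week in 3.
    const lazySpend = SALE + FULL * (WEEKS - 1);  // $5.00

    const smartMargin = smartSpend - WEEKS * COST; // $0.60
    const lazyMargin = lazySpend - WEEKS * COST;   // $2.60
    console.log({ smartSpend, lazySpend, extra: lazyMargin - smartMargin }); // extra ≈ $2.00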

j45
0 replies
11h10m

Bingo, an extra 50 cents to 2 dollars per item in a grocery order adds up quick, sooner or later.

Also, in-store pricing can be cheaper or not advertised as a sale, while the online price is often higher, even if it's pickup and not delivery.

"In-store pricing" on websites/apps is an interesting thing to track as well; it feels like the more a grocery store leans on "in-store pricing", the higher it is.

This would be great to see in other countries.

Dev102
2 replies
10h18m

I built one called https://bbdeals.in/ for India. I mostly use it to buy just fruits, and it's saved me about 20% of my spending, which is not bad in these hard times.

Building the crawlers and the infra to support it took no more than 20 hours.

alwinaugustin
1 replies
6h15m

Does this work for HYD only?

Dev102
0 replies
5h3m

Yes. Planning to expand it to other major cities.

teruakohatu
1 replies
19h42m

I was planning on doing the same in NZ. I would be keen to chat to you about it (email in HN profile). I am a data scientist.

Did you notice anything pre and post the Whittakers price increase(s)? They must have a brilliant PR firm on retainer for every major news outlet to more or less push the line that increased prices are a good thing for the consumer. I've noticed more aggressive "sales" recently, but I'm unsure if I am just paying more attention.

My prediction is that they will decrease the size of the bars soon.

scubadude
0 replies
14h35m

I think Whittaker's changed their recipe some time in the last year. Whittaker's was what Cadbury used to be (good), but now I think they have both followed the same course: markedly lower quality. This is for the 200g blocks fwiw, not sure about the wee 50g peanut slab.

pikelet
0 replies
19h48m

As a kiwi, are you able to name any of these (or your own) projects? I'm quite interested.

RasmusFromDK
17 replies
10h35m

Nice writeup. I've been through problems similar to yours with my contact lens price comparison website https://lenspricer.com/ that I run in ~30 countries. I have found, like you, that websites changing their HTML is a pain.

One of my biggest hurdles initially was matching products across 100+ websites. Even though you think a product has a unique name, everyone puts their own twist on it. Most can be handled with regexes, but I had to manually map many of these (I used AI for some of it, but had to manually verify all of it).

I've found that building the scrapers and infrastructure is somewhat the easy part. The hard part is maintaining all of the scrapers and figuring out, when a product disappears from a site, whether that's because my scraper has an error, my scraper is being blocked, the site made a change, the site was randomly down for maintenance when I scraped it, etc.

A fun project, but challenging at times, and annoying problems to fix.

heap_perms
6 replies
9h21m

I'm curious, can you wear contact lenses while working? I notice my eyes get tired when I look at a monitor for too long. Have you found any solutions for that?

RasmusFromDK
2 replies
9h16m

I use contact lenses basically every day, and I have had no problems working in front of screens. There's a huge difference between the different brands. Mine is one of the more expensive ones (Acuvue Oasys 1-Day), so that might be part of it, but each eye is compatible with different lenses.

If I were you I would go to an optometrist and talk about this. They can also often give you free trials for different contacts and you can find one that works for you.

theryan
0 replies
3h26m

FWIW, that is the same brand that I use and was specifically recommended for dry-eyes by my optometrist. I still wear glasses most of the time because my eyes also get strained from looking at a monitor with contacts in.

I'd recommend a trial of the lenses to see how they work for you before committing to a bigger purchase.

alexpotato
0 replies
3h46m

Acuvue Oasys 1-Day

I don't often wear contacts at work but I can second that these are great for "all day" wear.

siamese_puff
0 replies
3h37m

I cannot, personally. They dry out

pavel_lishin
0 replies
3h15m

This is very likely age-dependent.

When I was in my 20s, this was absolutely not a problem.

When I hit my 30s, I started wearing glasses instead of contacts basically all the time, and it wasn't a problem.

Now that I'm in my 40s, I'm having to take my glasses off to read a monitor and most things that are closer than my arm's reach.

kristianbrigman
0 replies
2h32m

My eye doctor recommended wearing “screen glasses”. They are a small prescription (maybe 0.25 or 0.5) with blue blocking. It’s small but it does help; I wear normal glasses at night (so my eyes can rest) and contacts + screen glasses during the day, and they are really close.

shellfishgene
2 replies
8h10m

For Germany, below the prices it says "some links may be sponsored", but it does not mark which ones. Is that even legal? Also there seem to be very few shops, are maybe all the links sponsored? Also idealo.de finds lower prices.

RasmusFromDK
1 replies
7h58m

When I decided to put the text like that, I had looked at maybe 10-20 of the biggest price comparison websites across different countries, because I of course want to make sure I respect all the regulations there are. I found that many of them don't even write anywhere that the links may be sponsored, and you have to go to the "about" page or similar to find this. I think that I actually go further than most of them when it comes to making it known that some links may be sponsored.

Now that you mention idealo, there seems to be no mention at all on a product page that they are paid by the stores; you have to click the "rank" link in the footer to be brought to a page https://www.idealo.de/aktion/ranking where they write this.

shellfishgene
0 replies
6h57m

Fair enough, I had assumed the rules would be similar to those for search engines.

ludvigk
2 replies
10h28m

Isn’t this a use-case where LLMs could really help?

RasmusFromDK
1 replies
10h20m

Yeah, it is to some degree. I tried to use it as much as possible, but there are always those annoying edge cases that make me not trust the results, so I have to check everything, and it ended up being faster just building a simple UI where I can easily classify the name myself.

Part of the problem is simply due to bad data from the websites. Just as an example - there's a 2-week contact lens called "Acuvue Oasys". And there's a completely different 1-day contact lens called "Acuvue Oasys 1-Day". Some sites have been bad at writing this properly, so both variants may be called "Acuvue Oasys" (or close to it), and the way to distinguish them is to look at the image to see which actual lens they mean, look at the price etc.

It's true that this could probably also be handled by AI, but in the end, classifying the lenses takes like 1-2% of the time it takes to make a scraper for a website so I found it was not worth trying to build a very good LLM classifier for this.

alexpotato
0 replies
3h37m

It's true that this could probably also be handled by AI, but in the end, classifying the lenses takes like 1-2% of the time it takes to make a scraper for a website so I found it was not worth trying to build a very good LLM classifier for this.

This is true for technology in general (in addition to specifically for LLMs).

In my experience, the 80/20 rule comes into play in that MOST of the edge cases can be handled by a couple lines of code or a regex. There is then this asymptotic curve where each additional line of code handles a rarer and rarer edge case.

And, of course, I always seem to end up on projects where even a small, rare edge case has some huge negative impact if it gets hit, so you have to keep adding defensive code and/or build a catch-all bucket that alerts you to the issue without crashing the entire system, etc.

siamese_puff
1 replies
3h37m

Doing the work we need. Every year I get fucked by my insurance company when buying a basic thing - contacts. Pricing is all over the place and coverage is usually 30% done by mail in reimbursement. Thanks!

RasmusFromDK
0 replies
2h58m

Thanks for the nice words!

brunoqc
0 replies
3h43m

Do you support Canada?

bane
0 replies
2h25m

One of my biggest hurdles initially was matching products across 100+ websites. Even though you think a product has a unique name, everyone puts their own twist on it. Most can be handled with regexes, but I had to manually map many of these (I used AI for some of it, but had to manually verify all of it)

In the U.S. at least, big retailers will have product suppliers build slightly different SKUs for them to make price comparisons tricky. Costco is somewhat notorious for this: almost all the electronics (and many other products) sold in their stores are custom SKUs -- often with a slightly different product configuration.

sakisv
6 replies
23h11m

Oh nice!

A thorny problem in my case is that the same item is named in 3 different ways across the 3 supermarkets, which makes it very hard and annoying to do a proper comparison.

Did you have a similar problem?

nosecreek
4 replies
23h0m

Absolutely! It’s made it difficult to implement some of the cross-retailer comparison features I would like to add. For my charts I’ve just manually selected some products, but I’ve also been trying to get a “good enough but not perfect” string comparison algorithm working.

sakisv
2 replies
22h52m

Ah yes.

My approach so far has been to first extract the brand names (which are also not written the same way for some fcking reason!), update the strings, and then compare what remains.

If they have a high similarity (e.g. >95%) then they could be automatically merged, and then anything between 75%-95% can be reviewed manually.
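
A minimal sketch of that thresholding, using a simple bigram (Dice) similarity so it stays dependency-free; the example names and the 95%/75% cut-offs are just illustrative:

    // match.ts - bucket product-name pairs into auto-merge / review / different
    function bigrams(s: string): Set<string> {
      const t = s.toLowerCase().replace(/[^a-z0-9]+/g, " ").trim();
      const out = new Set<string>();
      for (let i = 0; i < t.length - 1; i++) out.add(t.slice(i, i + 2));
      return out;
    }

    function similarity(a: string, b: string): number {
      const A = bigrams(a), B = bigrams(b);
      let overlap = 0;
      for (const g of A) if (B.has(g)) overlap++;
      return (2 * overlap) / (A.size + B.size || 1);
    }

    // Strip the (already extracted) brand before comparing, as described above.
    function bucket(nameA: string, nameB: string, brand: string): string {
      const strip = (s: string) => s.toLowerCase().replace(brand.toLowerCase(), "").trim();
      const score = similarity(strip(nameA), strip(nameB));
      if (score > 0.95) return "auto-merge";
      if (score >= 0.75) return "manual-review";
      return "different";
    }

    console.log(bucket("Lurpak Butter Unsalted 250gr", "LURPAK butter unsalted 250 g", "Lurpak"));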

victornomad
1 replies
22h4m

I am not by any means an expert, but maybe using some LLMs or a sentence transformer here could help do the job?

sakisv
0 replies
21h32m

I gave it a very quick try with ChatGPT, but wasn't very impressed with the results.

Granted it was around January, and things may have progressed...

(But then again, why take the easy approach when I can waste a few afternoons playing around with string comparisons)

project2501a
0 replies
21h56m

would maintaining a map of products product_x[supermarket] with 2-3 values work? I don't suspect that supermarkets would be very keen to change the name (but they might play other dirty games)

I am thinking of doing the same thing for linux packages in debian and fedora

seszett
0 replies
20h13m

I have built a similar system for myself, but since it's small scale I just have "groups" of similar items that I manually populate.

I have the additional problem that I want to compare products across France and Belgium (Dutch-speaking side), so there is no hope at all of grouping products automatically. My manual system allows me to put together say 250g and 500g packaging of the same butter, or of two of the butters that I like to buy, so I can always see easily which one I should get (it's often the 250g that's cheaper by weight these days).

Also the 42000 or so different packagings for Head and Shoulders shampoo. 250ml, 270ml, 285ml, 480ml, 500ml (285ml is usually cheapest). I'm pretty sure they do it on purpose so each store doesn't have to match price with the others because it's a "different product".
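
For comparisons like that, normalising everything to a price per 100g/100ml is what makes the pack sizes comparable. A tiny sketch (the sizes and prices below are made up):

    // unitPrice.ts - normalise different pack sizes to a price per 100 ml / 100 g
    type Listing = { name: string; price: number; size: number; unit: "ml" | "g" };

    const pricePer100 = (l: Listing): number => (l.price / l.size) * 100;

    const shampoos: Listing[] = [
      { name: "H&S 250ml", price: 3.49, size: 250, unit: "ml" },
      { name: "H&S 285ml", price: 3.79, size: 285, unit: "ml" },
      { name: "H&S 500ml", price: 6.99, size: 500, unit: "ml" },
    ];

    const cheapest = [...shampoos].sort((a, b) => pricePer100(a) - pricePer100(b))[0];
    console.log(cheapest.name, pricePer100(cheapest).toFixed(2), "per 100ml");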

kareemm
1 replies
17h53m

Was looking for one in Canada. Tried this out and it seems like some of the data is missing from where I live (halifax). Got an email I can hit you up at? Mine's in my HN profile - couldn't find yours on HN or your site.

nosecreek
0 replies
17h15m

For sure, just replace the first dot in the url from my profile with an @

snac
0 replies
21h42m

Love your site! It was a great source of inspiration with the amount of data you collect.

I did the same and made https://grocerygoose.ca/

Published the API endpoints that I “discovered” to make the app https://github.com/snacsnoc/grocery-app (see HACKING.md)

It’s an unfortunate state of affairs when devs like us have to go to such great lengths to track the price of a commodity (food).

maxglute
0 replies
13h52m

Excellent work.

xyst
10 replies
21h15m

Would be nice to have price transparency for goods. It would make processes like this much easier to track by store and region.

For example, compare the price of oat milk at different zip codes and grocery stores. Additionally track “shrinkflation” (same price but smaller portion).

On that note, it seems you are tracking price, but are you also checking the cost per gram (or ounce)? A manufacturer or store could keep the price the same but offer less to the consumer. I wonder if your tool would catch this.

barbazoo
7 replies
21h0m

Grocers not putting per unit prices on the label is a pet peeve of mine. I can’t imagine any purpose not rooted in customer hostility.

baronswindle
4 replies
19h57m

In my experience, grocers always do include unit prices…at least in the USA. I’ve lived in Florida, Indiana, California, and New York, and in 35 years of life, I can’t remember ever not seeing the price per oz, per pound, per fl oz, etc. right next to the total price for food/drink and most home goods.

There may be some exceptions, but I’m struggling to think of any except things where weight/volume aren’t really relevant to the value — e.g., a sponge.

lightbritefight
2 replies
17h9m

What they often do is put different units on the same type of good. Three chocolate bars? One will be in oz, one in lbs, one in "per unit."

They all are labelled, but it's still customer hostile to create comparison fatigue.

jasomill
0 replies
9h17m

Worse, I've seen CVS do things like place a 180-count package of generic medication next to an identically-sized 200-count package of the equivalent name brand, with the generic costing a bit less, but with a slightly higher unit price due to the mismatched quantities.

Rastonbury
0 replies
13h1m

This is such a shame. Anywhere this is mandated, they should mandate it by mass, and for medical/vitamins, per mass of active ingredient.

nosecreek
0 replies
17h11m

In Canada I think they are legally required to, but sometimes it can be frustrating if they don’t always compare like units - one product will be price per gram or 100 grams, and another price per kg. I’ve found with online shopping, the unit prices don’t take into account discounts and sale prices, which makes it harder to shop sales (in store seems to be better for this).

girvo
0 replies
16h12m

It's required by law in Australia, which is nice

dawnerd
0 replies
11h14m

Or when they change what unit to display so you can’t easily cross compare.

sakisv
0 replies
21h0m

I do track the price per unit (kg, lt, etc) and I was a bit on the fence about whether I should show and graph that number instead of the price that someone would pay at the checkout, but I opted for the latter to keep it more "familiar", i.e. the prices people actually see.

Having said that, that's definitely something I could add, and it would show when shrinkflation occurred, if any.

candiddevmike
0 replies
19h25m

Imagine mandating transparent cost-of-goods pricing. I'd love to see that the farmer was paid X, the manufacturer Y, and the grocer added Z.

batata004
10 replies
14h5m

I created a similar website which got lots of interest in my city. I even scrape app and website data, using a single Linode server with 2GB of RAM, 5 IPv4 addresses and 1000 IPv6 addresses (which are free), and every single product is scraped at an interval of at most 40 minutes, never more than that, with an average of 25 minutes. I use curl-impersonate and scrape JSON as much as possible, because 90% of markets serve prices via Ajax calls; for the other 10% I use regex to parse the HTML. You can check it at https://www.economizafloripa.com.br
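
For anyone wondering how the IPv6 side of a setup like this can work (one common approach, not necessarily what this site does): with a routed /64 you can bind each outgoing request to a different source address. A Node/TypeScript sketch, where the prefix and URL are placeholders:

    // ipv6-rotate.ts - bind each request to a random address in an assumed routed /64
    import * as https from "node:https";

    const PREFIX = "2001:db8:1234:5678"; // hypothetical /64 actually routed to this host
    const group = () => Math.floor(Math.random() * 0x10000).toString(16);
    const randomAddress = () => `${PREFIX}:${group()}:${group()}:${group()}:${group()}`;

    function get(url: string): Promise<string> {
      return new Promise((resolve, reject) => {
        const req = https.get(url, { localAddress: randomAddress(), family: 6 }, (res) => {
          let body = "";
          res.on("data", (chunk) => (body += chunk));
          res.on("end", () => resolve(body));
        });
        req.on("error", reject);
      });
    }

    get("https://market.example/api/products.json").then((json) => console.log(json.length));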

latexr
8 replies
12h40m

I even scrape app and website data

And then try to sell it back to businesses, even suggesting they use the data to train AI. You also make it sound like there’s a team manually doing all the work.

https://www.economizafloripa.com.br/?q=parceria-comercial

That whole page makes my view of the project go from “helpful tool for the people, to wrestle back control from corporations selling basic necessities” to “just another attempt to make money”. Which is your prerogative, I was just expecting something different and more ethically driven when I read the homepage.

mechanical_bear
4 replies
12h27m

Where does this lack ethics? It seems that they are providing a useful service that they created with their hard work. People are allowed to make money with their work.

latexr
3 replies
11h53m

Where does this lack ethics?

I didn’t say it lacked ethics, I said I expected it to be driven by ethics. There’s a world of difference. I just mean it initially sounded like this was a protest project “for the people”, done in a way to take back power from big corporations, and was saddened to see it’s another generic commercial endeavour.

People are allowed to make money with their work.

Which is why I said it’s their prerogative.

If you’re going to reply, please strive to address the points made, what was said, not what you imagine the other person said. Don’t default to thinking the other person is being dismissive or malicious.

saaaaaam
2 replies
9h47m

I’m curious to know why you thought it “sounded like this was a protest project ‘for the people’”?

I’ve read the parent post above and looked at the website and see nothing that would make me think it’s a “protest for the people”.

It just seems a little strange when you then go on to say “strive to address… what was said, not what you imagine the other person said”.

latexr
0 replies
5h24m

I’m curious to know why you thought it “sounded like this was a protest project ‘for the people’”?

See the sibling reply by another user, which I think explains it perfectly.

https://news.ycombinator.com/item?id=41179628

It just seems a little strange when you then go on to say “strive to address… what was said, not what you imagine the other person said”.

It’s not strange at all if you pay attention to the words. I did not mischaracterise the author or their goals, I explained what I expected and what I felt regarding what I experienced reading the post and then the website.

In other words, I’m not attacking or criticising the author. I’m offering one data point, one description of an outside view which they’re free to ignore or think about. That’s it.

Don’t take every reply as an explicit agreement or disagreement. Points can be nuanced, you just have to make a genuine effort to understand. Default to not assuming the other person is a complete dolt. Or don’t. It’s also anyone’s prerogative to be immediately reactive. That’s becoming ever more prevalent (online and offline), and in my view it’s a negative way to live.

Lutger
0 replies
9h5m

I had the same thought. This is why:

- 'I created a similar website', so it compares to https://pricewatcher.gr/en/.

- a big part of the discussion is in the context of inflation and price gouging

- pricewatcher presents its data publicly for all consumers to see and use, it is clearly intended as a tool for consumers to combat price gouging strategies

- 'pricewatcher.gr is an independent site which is not endorsed by any shop', nothing suggests this website is making money off consumers

- the 'similar website' however is offering exclusive access to data to businesses, at a price, in order for those businesses to undercut the competition and become more profitable

So the goals are almost opposite. One is to help consumers combat price gouging of supermarkets, the other is to help supermarkets become (even) more profitable. It is similar in the sense that it is also scraping data, but it's not strange to think being similar would mean they would have the same goal, which they don't.

presentation
2 replies
11h4m

It’s almost like people try to do valuable services for others in exchange for money.

latexr
0 replies
2h26m

That was not the argument. Please don’t strawman. Saying “I was just expecting something different” does not mean “what you are doing is wrong”.

I also expected replies to be done in good faith. That they would cover what I said, not that the reader would inject their own worst misconceptions and reply to those. I generally expect better from this website. I am also disappointed when that isn’t the case.

6510
0 replies
4h40m

A good few people can't imagine doing anything for any other reason. The fascinating aspect is that (combined with endless rent seeking) everything gets more expensive and eventually people won't have time to do anything for free. What is also fascinating is that by [excessively] not accounting for things done for free, people shall take them for granted. We are surrounded by useful things done for us by generations long gone.

I started thinking about this when looking at startup micro nations. Having to build everything from scratch turns out to be very expensive and a whole lot of work.

Meanwhile we are looking to find ways to sell the price label separately. haha

When I worked at a supermarket I often proposed to rearrange the shelves into a maze. One can replace the parking lot with a hedge maze with many entrances and exits from the building. Special doorways on timers and remote control so that you need only one checkout. Add some extra floors, mirrors and glass walls.

There are countless possibilities, you could also have different entrances and different exits with different fees, doorways that only open if you have all varieties of a product in your basket, crawling tunnels, cage fights for discounts, double or nothing buttons, slot machines to win coupons.

valuable services for others?

siamese_puff
0 replies
10h44m

How does the ipv6 rotation work in this flow?

odysseus
7 replies
16h24m

I used to price track when I moved to a new area, but now I find it way easier to just shop at 2 markets or big box stores that consistently have low prices.

In Europe, that would probably be Aldi/Lidl.

In the U.S., maybe Costco/Trader Joe's.

For online, CamelCamelCamel/Amazon. (for health/beauty/some electronics but not food)

If you can buy direct from the manufacturer, sometimes that's even better. For example, I got a particular brand of soap I love at the soap's wholesaler site in bulk for less than half the retail price. For shampoo, buying the gallon size direct was way cheaper than buying from any retailer.

bufferoverflow
4 replies
15h53m

In the U.S., maybe Costco/Trader Joe's.

Costco/Walmart/Aldi in my experience.

Trader Joe's is higher quality, but generally more expensive.

shiroiushi
0 replies
14h27m

Trader Joe's also only carries Trader Joe's-branded merchandise, aside from the produce. So if you're looking for something in particular that isn't a TJ item, you won't find it there.

odysseus
0 replies
11h54m

Occasionally you can get the same Trader Joe’s private label products rebranded as Aldi merchandise for even cheaper at Aldi.

dawnerd
0 replies
11h16m

Sam's Club, I've found, beats Costco in some areas, but for some items Costco absolutely undercuts like crazy. Cat litter at Sam's is twice the price when not on sale.

I pretty much just exclusively shop at Aldi/Walmart as they have the best prices overall. Kroger owned stores and Albertsons owned are insanely overpriced. Target is a good middle ground but I can’t stand shopping there now with everything getting locked up.

DontchaKnowit
0 replies
15h33m

Walmart is the undisputed king of low prices, and honestly, in my experience the quality of their store-brand stuff is pretty damn solid - usually like half the price of comparable products. Been living off their Greek yogurt for a while now. It's great.

dexwiz
1 replies
15h22m

You can find ALDIs in the USA, but they are regional. Trader Joe’s is owned by the same family as ALDIs, and until recently (past 10 years) you wouldn’t see them in the same areas.

jasomill
0 replies
9h35m

I'd usually associate the term "regional" with chains like Meijer, Giant Eagle, and Winn-Dixie.

With 2,392 stores in 38 states plus DC[1], I'm not sure Aldi US qualifies.

[1] https://stores.aldi.us

langsoul-com
7 replies
18h35m

The hard thing is not scraping, but getting around the increasingly sophisticated blockers.

You'll need to constantly rotate residential proxies (highly rated ones) and make sure not to exhibit data-scraping patterns. Some supermarkets don't show the network requests in the network tab, so you cannot just grab that API response.

Even then, mitm attacks with mobile app (to see the network requests and data) will also get blocked without decent cover ups.

I tried, but realised it isn't worth it due to the costs and constant dev work required. In fact, some of the supermarket price comparison services just have (cheap labour) people scrape it manually.

__MatrixMan__
2 replies
17h40m

I wonder if we could get some legislation in place to require that they publish pricing data via an API so we don't have to tangle with the blockers at all.

zackmorris
0 replies
3h57m

I'd prefer that governments enact legislation that prevents discriminating against IP addresses, perhaps under net neutrality laws.

For anyone with some clout/money who would like to stop corporations like Akamai and Cloudflare from unilaterally blocking IP addresses, the way that works is you file a lawsuit against the corporations and get an injunction to stop a practice (like IP blacklisting) during the legal proceedings. IANAL, so please forgive me if my terminology isn't quite right here:

https://pro.bloomberglaw.com/insights/litigation/how-to-file...

https://www.law.cornell.edu/wex/injunctive_relief

Injunctions have been used with great success for a century or more to stop corporations from polluting or destroying ecosystems. The idea is that since anyone can file an injunction, that creates an incentive for corporations to follow the law or risk having their work halted for months or years as the case proceeds.

I'd argue that unilaterally blocking IP addresses on a wide scale pollutes the ecosystem of the internet, so can't be allowed to continue.

Of course, corporations have thought of all of this, so have gone to great lengths to lobby governments and use regulatory capture to install politicians and judges who rule in their favor to pay back campaign contributions they've received from those same corporations:

https://www.crowell.com/en/insights/client-alerts/supreme-co...

https://www.mcneeslaw.com/nlrb-injunction/

So now the pressures that corporations have applied on the legal system to protect their own interests at the cost of employees, taxpayers and the environment have started to affect other industries like ours in tech.

You'll tend to hear that disruptive ideas like I've discussed are bad for business from the mainstream media and corporate PR departments, since they're protecting their own interests. That's why I feel that the heart of hacker culture is in disrupting the status quo.

immibis
0 replies
16h53m

Perhaps in Europe. Anywhere else, forget about it.

seanthemon
1 replies
18h33m

And you couldn't use OCR and simply take an image of the product list? Not ideal, but difficult or impossible to track depending on your method.

langsoul-com
0 replies
13h13m

You'll get blocked before even seeing the page most times.

sakisv
0 replies
10h41m

Thankfully I'm not there yet.

Since this is just a side project, if it starts demanding too much of my time too often I'll just stop it and open both the code and the data.

BTW, how could the network request not appear in the network tab?

For me the hardest part is to correlate and compare products across supermarkets

eddyfromtheblok
0 replies
10h57m

Crowdsource it with a browser extension

grafraf
7 replies
10h51m

We have been doing it for the Swedish market for more than 8 years. We have a website, https://www.matspar.se/ , where the customer can browse all the products of all major online stores, compare the prices and add the products they want to buy to the cart. At the end of the journey, the customer can compare the total price of that cart (including shipping fees) and export the cart to the store they want to order from.

I'm also one of the founders and the current CTO, so there has been a lot of scraping and maintaining over the years. We are scraping over 30 million prices daily.

showsover
5 replies
10h47m

Do you have a technical writeup of your scraping approach? I'd love to read more about the challenges and solutions for them.

grafraf
4 replies
10h17m

Unfortunately no, but I can share some insights that I hope will be of value:

- Tech: Everything is hosted on AWS. We use Golang in Docker containers that do the scraping. They run on ECS Fargate spot instances when needed, triggered by a cronjob. The scraping results are stored as Parquet in S3 and processed in our RDS PostgreSQL. We need to be creative and have some methods to identify that a particular product A in store 1 is the same as product A in store 2 so they are mapped together. Sometimes it needs to be verified manually. The data of interest for the user/site is indexed into Elasticsearch.

Things that might be of interest:

- We always try to avoid parsing the HTML and instead call the sites' APIs directly to reduce scraping time. We also try to scrape the category listings to access multiple prices with one request; this can reduce the total from over 100,000 requests to maybe fewer than 1,000.

- We also try to avoid scraping the sites during peak times and respect their robots.txt. We add some delay to each request. The scrapes are often done during night/early morning.

- The main challenge is that stores can redesign or modify their sites, which makes our scrapers fail, so we need to be fast and adapt to the new changes.

- Another major hidden challenge is that the stores have different prices for the same product depending on your zip code, so we have our ways of identifying the stores' different warehouses and which zip codes belong to a specific warehouse, and we do a scrape per warehouse. A store might have 5 warehouses, so we need to scrape it 5 times with different zip codes.

There is much more, but I hope that gave you some insight into the challenges and some solutions!
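
A minimal sketch of the "category listing, once per warehouse zip code" idea from the list above - in TypeScript rather than Go, to match the rest of the thread; the endpoint, parameters and response shape are hypothetical:

    // categories.ts - fetch whole category listings once per warehouse/zip code
    type Offer = { sku: string; name: string; price: number };

    const ZIP_PER_WAREHOUSE = ["11120", "41310", "90325"]; // one zip per identified warehouse
    const CATEGORIES = ["dairy", "bread", "produce"];

    async function fetchCategory(zip: string, category: string): Promise<Offer[]> {
      const url = `https://store.example.com/api/category/${category}?zip=${zip}`;
      const res = await fetch(url, { headers: { accept: "application/json" } });
      if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
      return (await res.json()) as Offer[]; // hundreds of prices in a single request
    }

    async function main() {
      for (const zip of ZIP_PER_WAREHOUSE) {
        for (const category of CATEGORIES) {
          const offers = await fetchCategory(zip, category);
          console.log(zip, category, offers.length, "offers");
          await new Promise((r) => setTimeout(r, 2_000)); // be polite: pause between requests
        }
      }
    }

    main();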

sumedh
1 replies
9h14m

Have the sites tried to shut you down?

grafraf
0 replies
8h28m

We received some harsh words at the start, but everything we are doing is legal and by the book.

We try to establish good relationships with the stores, as customers don't always focus on the price; sometimes they want a specific product. We are helping both the stores and the customers find each other. We have sent millions of users to the stores over the years (not unique, of course, as there are only 9 million people living in Sweden).

showsover
1 replies
9h53m

Interesting stuff, thanks for the reply!

Do you run into issues where they block your scraping attempts or are they quite relaxed on this? Circumventing the bot detection often forces us to go for Puppeteer so we can fully control the browser, but that carries quite a heavy cost compared to using a simple HTTP requester.

grafraf
0 replies
8h34m

We have been blocked a couple of times over the years; usually using a proxy has been enough. We try to reach out to the stores and establish a friendly relationship. The feelings have been mixed depending on which store we are talking to.

filleokus
0 replies
5h38m

On the business side, what's your business model, how do you generate revenue? What's the longer term goals?

(Public data shows the company has a revenue of ≈400k USD and 6 employees https://www.allabolag.se/5590076351/matspar-i-sverige-ab)

lotsofpulp
6 replies
23h54m

In the US, retail businesses are offering individualized and general coupons via the phone apps. I wonder if this pricing can be tracked, as it results in significant differences.

For example, I recently purchased fruit and dairy at Safeway in the western US, and after I had everything I wanted, I searched each item in the Safeway app, and it had coupons I could apply for $1.5 to $5 off per item. The other week, my wife ran into the store to buy cream cheese. While she did that, I searched the item in the app, and “clipped” a $2.30 discount, so what would have been $5.30 to someone that didn’t use the app was $3.

I am looking at the receipt now, and it is showing I would have spent $70 total if I did not apply the app discounts, but with the app discounts, I spent $53.

These price obfuscation tactics are seen in many businesses, making price tracking very difficult.

mcoliver
5 replies
23h42m

I wrote a chrome extension to help with this. Clips all the coupons so you don't have to do individual searches. Has resulted in some wild surprise savings when shopping. www.throwlasso.com
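
The core of an extension like that is usually just a content script that finds every "clip" button and clicks it with a small delay. A rough sketch of the idea (the button text and pacing are assumptions; a real store site needs its own selectors):

    // clipAll.ts - content-script sketch: click every visible "Clip Coupon" button
    const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

    async function clipAllCoupons(): Promise<void> {
      // Hypothetical: coupon cards render a button whose text is "Clip Coupon".
      const buttons = Array.from(document.querySelectorAll<HTMLButtonElement>("button"))
        .filter((b) => /clip coupon/i.test(b.textContent ?? ""));

      for (const button of buttons) {
        button.click();
        await sleep(500 + Math.random() * 1000); // pace the clicks so the site keeps up
      }
      console.log(`Clipped ${buttons.length} coupons`);
    }

    clipAllCoupons();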

koolba
1 replies
23h19m

Ha! I have the same thing as a bookmarklet for specific sites. It’s fun to watch it render the clicks.

Larrikin
0 replies
7h40m

Could you share the bookmarklet?

Larrikin
1 replies
23h34m

This looks amazing. Do you have plans to support Firefox and other browsers?

mcoliver
0 replies
19h26m

It's published as a Firefox extension and you should be able to find it by searching for Lasso but I think I need to push the latest version and update the website. Thanks for the reminder. Which other browsers would you like?

lotsofpulp
0 replies
19h19m

Wow! This is amazing, thank you. I usually use Safari, but will give it a try.

ikesau
4 replies
23h53m

Ah, I love this. Nice work!

I really wish supermarkets were mandated to post this information whenever the price of a particular SKU updated.

The tools that could be built with such information would do amazing things for consumers.

sakisv
2 replies
23h13m

Thanks!

If Greece's case is anything to go by, I doubt they'd ever accept that as it may bring to light some... questionable practices.

At some point I need to deduplicate the products and plot the prices across all 3 supermarkets on the same graph as I suspect it will show some interesting trends.

project2501a
1 replies
21h59m

fyi, I posted this on /r/greece

sakisv
0 replies
21h50m

Thanks!

robotnikman
0 replies
22h19m

As someone who actively works on these kinds of systems: it's a bit more complicated than that. The past few years we worked on migrating from an old system from the '80s, designed for LAN use only, to a cloud-based item catalogue system that finally gave us the ability to easily make pricing info more available to consumers, such as through an app.

pcblues
3 replies
11h5m

This is interesting because I believe the two major supermarkets in Australia can create a duopoly in anti-competitive pricing by just employing price-analysis AI algorithms on each side; the algorithms will likely end up cooperating to maximise profit. This can probably be done legally through publicly obtained prices, and illegally by sharing supply cost or profit-per-product data. The result is likely to be similar. Two trained AIs will maximise profit in weird ways using (super)multidimensional regression analysis (which is all AI is), and the consumer will pay for maximised profits to ostensible competitors. If the pricing data can be obtained like this, not much more is needed to implement a duopoly-focused pair of machine learning implementations.

pcblues
1 replies
11h0m

The word I was looking for was collusion, but done with software and without people-based collusion.

avador
0 replies
8h37m

Compusion.

TrackerFF
0 replies
7h28m

Here in Norway, what is called the "competition authority" (https://konkurransetilsynet.no/norwegian-competition-authori...) is frequently critical of open and transparent (food) price information, for that exact reason.

The rationale is that if all prices are out there in the open, consumers will end up paying a higher price, as the actors (supermarkets) will end up pricing their stuff equally, at a point where everyone makes a maximum profit.

For years said supermarkets have employed "price hunters", which are just people that go to competitor stores and register the prices of everything.

Here in Norway you will oftentimes notice that supermarket A will have sale/rebates on certain items one week, then the next week or after supermarket B will have something similar, to attract customers.

joelthelion
3 replies
8h11m

We should mutualize scraping efforts, creating a sort of Wikipedia of scraped data. I bet a ton of people and cool applications would benefit from it.

sakisv
2 replies
8h9m

Haha all we have to do is agree on the format, right?

joelthelion
0 replies
6h14m

Honestly I don't think that matters a lot. Even if all sites were scraped in a different format, the collection would still be insanely useful.

The most important part is being able to consistently scrape every day or so for a long time. That isn't easy.

Spivak
0 replies
7h56m

We already did. The format supports attaching related content, the scraped info, with the archive itself. So you get your data along with the means to generate it yourself if you want something different.

https://en.m.wikipedia.org/wiki/WARC_(file_format)

haolez
3 replies
22h56m

I heard that some e-commerce sites will not block scrapers, but poison the data shown to them (e.g. subtly wrong prices). Does anyone know more about this?

marginalia_nu
0 replies
11h4m

Yeah, by far the most reliable way of preventing bots is to silently poison the data. The harder you try to fight them in a visible fashion, the harder they become to detect. If you block them, they just come back with a hundred times as many IP addresses and u-a fingerprints.

barryrandall
0 replies
21h52m

I never poisoned data, but I have implemented systems where clients who made requests too quickly got served data from a snapshot that only updated every 15 minutes.

MathMonkeyMan
0 replies
16h59m

This HN post had me playing around with Key Food's website. A lot of information is wrapped up in a cookie, but it looks like there isn't too much javascript rendering.

But when I hit the URLs with curl, without a cookie, I get a valid looking page, but it's just a hundred listings for "Baby Bok Choy." Maybe a test page?

After a little more fiddling, the server just responded with an empty response body. So, it looks like I'll have to use browser automation.

seanwilson
2 replies
18h19m

They change things in a way that doesn't make your scraper fail. Instead the scraping continues as before, visiting all the links and scraping all the products. However the way they write the prices has changed and now a bag of chips doesn't cost €1.99 but €199. To catch these changes I rely on my transformation step being as strict as possible with its inputs.

You could probably add some automated checks to not sync changes to prices/products if a sanity check fails e.g. each price shouldn't change by more than 100%, and the number of active products shouldn't change by more than 20%.
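
A minimal sketch of that kind of gate, assuming you keep yesterday's snapshot around to compare against (the thresholds are the ones suggested above):

    // sanityCheck.ts - refuse to publish a scrape that looks broken
    type Snapshot = Map<string, number>; // productId -> price

    function looksSane(previous: Snapshot, current: Snapshot): boolean {
      // The number of active products shouldn't swing by more than 20%.
      const countRatio = current.size / Math.max(previous.size, 1);
      if (countRatio < 0.8 || countRatio > 1.2) return false;

      // No individual price should change by more than 100%.
      for (const [id, oldPrice] of previous) {
        const newPrice = current.get(id);
        if (newPrice === undefined) continue; // delisted items are handled elsewhere
        if (newPrice > oldPrice * 2 || newPrice < oldPrice / 2) return false;
      }
      return true;
    }

    // usage: only sync when the new scrape passes the checks
    // if (looksSane(yesterday, today)) publish(today); else alert("scrape quarantined");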

z3t4
0 replies
12h16m

Sanity checks in programming are underrated: not only are they cheap performance-wise, they catch bugs early that would otherwise poison the state.

sakisv
0 replies
10h46m

Yeah, I thought about that, but I've seen cases where a product jumped more than 100%.

I used this kind of heuristic to check whether a scrape was successful, by checking that the number of products scraped today is within ~10% of the average of the last 7 days or so.

hk1337
2 replies
20h42m

I would be curious if there were a price difference between what is online and physically in the store.

flir
0 replies
20h8m

Next step: monitoring the updates to those e-ink shelf edge labels that are starting to crop up.

devjab
0 replies
12h33m

In Denmark there often is, with things like localised sales the 4-8 times a year a specific store celebrates its birthday or similar. You can scan their PDF brochures, but you would need image recognition for most of them, and some well-trained recognition to boot, since they often alter their layouts and prices are listed differently.

The biggest sales come from the individual store “close to expiration” sales where items can become really cheap. These aren’t available anywhere but the stores themselves though.

Here I think the biggest challenge might be the monopoly supermarket chains have on the market. We basically have two major corporations with various brands. They are extremely similar in their pricing, and even though there are two low-price competitors, these don't seem to affect the competition with the two major corporations at all. What is worse is that one of these two major corporations is "winning", meaning that we're heading more and more toward what will basically be a true monopoly.

andrewla
2 replies
22h29m

One problem that the author notes is that so much rendering is done client side via javascript.

The flip side to this is that very often you find that the data populating the site is in a very simple JSON format to facilitate easy rendering, ironically making the scraping process a lot more reliable.

sakisv
1 replies
21h18m

Initially that's what I wanted to do, but the first supermarket I did was sending back HTML rendered on the server side, so I abandoned this approach for the sake of "consistency".

Lately I've been thinking to bite the bullet and Just Do It, but since it's working I'm a bit reluctant to touch it.

andrewla
0 replies
21h14m

For your purposes scraping the user-visible site probably makes the most sense since in the end, their users' eyes are the target.

I am typically doing one-off scraping and for that, an undocumented but clean JSON api makes things so much easier, so I've grown to enjoy sites that are unnecessarily complex in their rendering.

Alifatisk
2 replies
8h7m

Some stores don’t have an interactive website but instead send out magazines to your email with news for the week.

How would one scrape those? Anyone experienced?

psd1
1 replies
7h49m

IMAP library to dump the attachment, pandoc to convert it to HTML, then a DOM library to parse it statically.

Likely easier than website scraping.

Alifatisk
0 replies
7h44m

I’ll try this approach, thanks! Most magazines I’ve noticed are using a grid design, so my first thought was to somehow detect each square and then OCR the product name with its price.

xnx
1 replies
23h32m

Scraping tools have become more powerful than ever, but bot restrictions have become stricter in equal measure. It's hard to scrape reliably under any circumstances, or even consistently without residential proxies.

sakisv
0 replies
23h22m

When I first started it there were a couple of instances where my IP was blocked - despite being a residential IP behind CGNAT.

I then started randomising every aspect of the scraping process that I could: the order in which I visited the links, the sleep duration between almost every action, etc.

As long as they don't implement a strict fingerprinting technique, that seems to be enough for now
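
A minimal Playwright sketch of that kind of randomisation - shuffled visit order plus jittered sleeps. The URLs are placeholders and the delay ranges are arbitrary, not the author's actual values:

    // randomisedCrawl.ts - shuffle the visit order and jitter every pause
    import { chromium } from "playwright";

    const productUrls = [
      "https://supermarket.example/p/1",
      "https://supermarket.example/p/2",
      "https://supermarket.example/p/3",
    ];

    const jitter = (minMs: number, maxMs: number) =>
      new Promise((r) => setTimeout(r, minMs + Math.random() * (maxMs - minMs)));

    function shuffle<T>(items: T[]): T[] {
      const out = [...items];
      for (let i = out.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [out[i], out[j]] = [out[j], out[i]];
      }
      return out;
    }

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();
      for (const url of shuffle(productUrls)) {
        await page.goto(url, { waitUntil: "domcontentloaded" });
        // ... extract prices here ...
        await jitter(2_000, 8_000); // random pause between page visits
      }
      await browser.close();
    })();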

ptrik
1 replies
4h44m

While the supermarket that I was using to test things every step of the way worked fine, one of them didn't. The reason? It was behind Akamai and they had enabled a firewall rule which was blocking requests originating from non-residential IP addresses.

Why did you pick Tailscale as the solution for proxy vs scraping with something like AWS Lambda?

anamexis
0 replies
4h13m

Didn't you answer your own question with the quote? It needs to originate from a residential IP address

ptrik
1 replies
4h46m

My CI of choice is [Concourse](https://concourse-ci.org/) which describes itself as "a continuous thing-doer". While it has a bit of a learning curve, I appreciate its declarative model for the pipelines and how it versions every single input to ensure reproducible builds as much as it can.

What's the thought process behind using a CI server - which I thought is mainly for builds - for what essentially is a data pipeline?

sakisv
0 replies
1h21m

Well I'm just thinking of concourse the same way it describes itself, "a continuous thing doer".

I want something that will run some code when something happens. In my case that "something" is a specific time of day. The code will spin up a server, connect it to tailscale, run the 3 scraping jobs and then tear down the server and parse the data. Then another pipeline runs that loads the data and refreshes the caches.

Of course I'm also using it for continuously deploying my app across 2 environments, or its monitoring stack, or running terraform etc.

Basically it runs everything for me so that I don't have to.

moohaad
1 replies
20h21m

Cloudflare Workers has a Browser Rendering API.

pencilcode
0 replies
19h17m

It’s pretty good actually. Used it on a small scraping site and it worked without a hitch.

maerten
1 replies
2h5m

Nice article!

The second kind is nastier. They change things in a way that doesn't make your scraper fail. Instead the scraping continues as before, visiting all the links and scraping all the products.

I have found that it is best to split the task of scraping and parsing into separate processes. By saving the raw JSON or HTML, you can always go back and apply fixes to your parser.

I have built a similar system and website for the Netherlands, as part of my master's project: https://www.superprijsvergelijker.nl/

Most of the scraping in my project is done with simple HTTP calls to JSON APIs. For some websites, a Playwright instance is used to get a valid session cookie and circumvent bot protection and captchas. The rest of the crawler/scraper, the parsers and the APIs are built using Haskell and run on AWS ECS. The website is NextJS.

The main challenge I have been trying to work on, is trying to link products from different supermarkets, so that you can list prices in a single view. See for example: https://www.superprijsvergelijker.nl/supermarkt-aanbieding/6...

It works for the most part, as long as at least one correct barcode number is provided for a product.
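
A minimal sketch of that scrape/parse split - the scraper only persists timestamped raw responses, and a separate parser step reads them back, so a parser bug never costs you a scrape. Paths and the response shape are placeholders:

    // scrapeThenParse.ts - keep fetching and parsing as two independent steps
    import { mkdir, writeFile, readdir, readFile } from "node:fs/promises";
    import * as path from "node:path";

    const RAW_DIR = "raw/2024-08-07"; // one folder per scrape run

    // Step 1: scrape - just persist whatever the site returned, untouched.
    async function saveRaw(productId: string, body: string): Promise<void> {
      await mkdir(RAW_DIR, { recursive: true });
      await writeFile(path.join(RAW_DIR, `${productId}.json`), body, "utf8");
    }

    // Step 2: parse - can be re-run at any time with a fixed/improved parser.
    async function parseRun(): Promise<{ id: string; price: number }[]> {
      const results: { id: string; price: number }[] = [];
      for (const file of await readdir(RAW_DIR)) {
        const raw = JSON.parse(await readFile(path.join(RAW_DIR, file), "utf8"));
        results.push({ id: path.basename(file, ".json"), price: Number(raw.price) });
      }
      return results;
    }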

sakisv
0 replies
1h48m

Thanks!

I have found that it is best to split the task of scraping and parsing into separate processes. By saving the raw JSON or HTML, you can always go back and apply fixes to your parser.

Yes, that's exactly what I've been doing and it saved me more times than I'd care to admit!

hnrodey
1 replies
3h51m

Nice job getting through all this. I kind of enjoy writing scrapers and browser automation in general. Browser automation is quite powerful and under explored/utilized by the average developer.

Something I learned recently, which might help your scrapers, is the ability in Playwright to sniff the network calls made through the browser (basically, programmatic API to the Network tab of the browser).

The boost is that you allow the website/webapp to make the API calls and then the scraper focuses on the data (rather than allowing the page to render DOM updates).

This approach falls apart if the page is doing server side rendering as there are no API calls to sniff.
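
For reference, that sniffing in Playwright is just a response listener that reads the JSON bodies as the app makes its own API calls; the URL filter below is a made-up example:

    // sniff.ts - read the app's own API responses instead of scraping the DOM
    import { chromium } from "playwright";

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();

      // Collect every JSON response whose URL looks like a product API call.
      page.on("response", async (response) => {
        if (!response.url().includes("/api/products")) return; // hypothetical endpoint
        if (!(response.headers()["content-type"] ?? "").includes("application/json")) return;
        const data = await response.json();
        console.log("captured", response.url(), Array.isArray(data) ? data.length : 1, "items");
      });

      await page.goto("https://supermarket.example/category/dairy");
      await page.waitForLoadState("networkidle");
      await browser.close();
    })();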

sakisv
0 replies
1h44m

...or worse, if there _is_ an API call but the response is HTML instead of JSON

gadders
1 replies
3h31m

This reminds me a bit of a meme that said something along the lines of "I don't want AI to draw my art, I want AI to review my weekly grocery shop, work out which combination of shops saves me money, and then schedule the deliveries for me."

sakisv
0 replies
1h46m

Ha, you can't imagine how many times I've thought of doing just that - it's just that it's somewhat blocked by other things that need to happen before I even attempt to do it

antman
1 replies
20h15m

Looks great. Perhaps comparisons over more than 30 days would be interesting. Or customizable ranges - it should be fast enough with a DuckDB backend.

sakisv
0 replies
10h25m

When you click on a product you get its full price history by default.

I did consider adding a 3 and 6 month button, but for some reason I decided against it, don't remember why. It wasn't performance because I'm heavily caching everything so it wouldn't have made a difference. Maybe aesthetics?

Scrapemist
1 replies
14h13m

What if you add all products to your shopping cart and save it as “favourites” and scrape that every other day.

nilsherzig
0 replies
12h17m

You would still need a way to add all items and to check if there are new ones

PigiVinci83
1 replies
4h20m

Nice article, enjoyed reading it. I’m Pier, co-founder of https://Databoutique.com, which is a marketplace for web-scraped data. If you’re willing to monetize your data extractions, you can list them on our website. We just started with the grocery industry and it would be great to have you on board.

redblacktree
0 replies
3h49m

Do you have data on which data is in higher demand? Do you keep a list of frequently-requested datasets?

6510
1 replies
5h29m

Can someone name the South American country where they have a government price comparison website? Listing all products was required by law.

Someone showed me this a decade ago. The site had many obvious issues but it did list everything. If I remember correctly it was started to stop merchants pricing things by who is buying.

I forget which country it was.

scarredwaits
0 replies
5h37m

Great article and congrats on making this! It would be great to have a chat if you like, because I’ve built Zuper, also for Greek supermarkets, which has similar goals (and problems!)

raybb
0 replies
6h19m

Anyone know of one of these for Spain?

ptrik
0 replies
4h47m

The data from the scraping are saved in Cloudflare's R2 where they have a pretty generous 10GB free tier which I have not hit yet, so that's another €0.00 there.

I wonder how the data from R2 is fed into the frontend?

ptrik
0 replies
4h47m

I went from 4vCPUs and 16GB of RAM to 8vCPUs and 16GB of RAM, which reduced the duration by about ~20%, making it comparable to the performance I get on my MBP. Also, because I'm only using the scraping server for ~2h the difference in price is negligible.

Good lesson in cloud economics. Below a certain threshold you get a linear performance gain from a more expensive instance type. It is essentially the same amount of spending, but you save time by running the same workload on a more expensive machine for a shorter period of time.

mt_
0 replies
9h27m

What about networking costs? Is it free in Hetzner?

mishu2
0 replies
5h53m

Playwright is basically necessary for scraping nowadays, as the browser needs to do a lot of work before the web page becomes useful/readable. I remember scraping with HTTrack back in high school and most of the sites kept working...

For my project (https://frankendash.com/), I also ran into issues with dynamically generated class names which change on every site update, so in the end I just went with saving a crop area from the website as an image and showing that.
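
Playwright supports that crop directly via the screenshot clip option - something like the following, where the URL and coordinates are of course placeholders specific to the target page:

    // cropScreenshot.ts - save a fixed region of a page as an image
    import { chromium } from "playwright";

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage({ viewport: { width: 1280, height: 1024 } });
      await page.goto("https://example.com/dashboard"); // placeholder URL

      // Coordinates of the widget to capture; brittle only if the layout moves,
      // not when class names are regenerated on every deploy.
      await page.screenshot({
        path: "widget.png",
        clip: { x: 40, y: 200, width: 600, height: 300 },
      });

      await browser.close();
    })();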

jonatron
0 replies
11h32m

If you were thinking of making a UK supermarket price comparison site, IIRC there's a company that owns all the product photos; read more at https://news.ycombinator.com/item?id=31900312

cynicalsecurity
0 replies
8h22m

My first thought was to use AWS, since that's what I'm most familiar with, but looking at the prices for a moderately-powerful EC2 instance (i.e. 4 cores and 8GB of RAM) it was going to cost much more than I was comfortable to spend for a side project.

Yep, AWS is hugely overrated and overpriced.

Stubbs
0 replies
7h51m

I did something very similar, but for the price of wood from sellers here in the UK, and instead of Playwright, which I'd never heard of at the time, I used NodeRED.

You just reminded me, it's probably still running today :-D

SebFender
0 replies
6h57m

I've worked with similar solutions for decades (for a completely different need) and in the end web changes made the solution unscalable. A fun idea to play with, but with too many error scenarios.

NKosmatos
0 replies
17h36m

Hey, thanks for creating https://pricewatcher.gr/en/ very much appreciated.

Nice blog post and very informative. Good to read that it costs you less than 70€ per year to run this and hope that the big supermarkets don’t block this somehow.

Have you thought of monetizing this? Perhaps with ads from the 3 big supermarkets you scrape ;-)

Closi
0 replies
7h1m

This is great! Would be great if the website would give a summary of which shop was actually cheapest (e.g. based on a basket of comparable goods that all retailers stock).

Although might be hard to do with messy data.