You can't leak users' data if you don't hold it

jmward01
52 replies
20h15m

I agree with the core idea: avoid saving info so you can't ever leak it. I personally think our legal framework should be based on consequences, to encourage this mentality more. If you are hacked, I don't care even a little that you did everything right; I just care that my information got taken. You should be held liable even if you did what the industry thought was right.

mooreds
31 replies
18h47m

avoid saving info so you can't ever leak it

I think that this is a good idea. It's similar to the principle of least privilege: keep only what you need to offer the service you are providing. Less risk for the provider, less risk for the consumer.

However, at least in the USA, I've noticed an increasing number of companies who have determined that personal data is worth good money. This is why most stores have rewards programs, where they capture what you've bought and when you have bought it.

If it was just about repeat business, they'd still be using punch cards, like they did a decade ago.

I don't have any insight into how they use the data, but why would they offer free things (restaurants offer appetizers, grocery stores offer discounts, etc) unless the value they received was more than the cost of the incentive?

Affric
11 replies
18h38m

I am still waiting for digital identity.

Not sure why I can’t authenticate myself with these companies based on a private key, with any details they want disclosed to them, for whatever reason, coming ephemerally from my server.

Obviously, you could also have a third party acting in this space for the non-tech savvy.

Right now all my data is held by corporate types who don’t give a shit.

mooreds
8 replies
18h31m

Passkeys are a start. They have their issues (I wrote about some here: https://ciamweekly.substack.com/p/on-webauthn-and-passkeys ) but at least it is widespread, well supported, standardized, (possibly) anonymous public private key cryptography.

__MatrixMan__
6 replies
17h39m

It seems likely that you know passkeys better than I do (you wrote a blog about them after all), so I've got a question.

My impression is that there's a server side component. It's not just a key in your device, it's a key in your device that's blessed by someone who maintains a server. My further impression is that the people who manage the servers (either the authenticating-you server or the supporting-auth-for-you server) will be able to configure allow/deny lists for each other. They can say:

sorry passkeys.jimbobsmomsbasement.com, you're not on the list of servers that I trust, so I'm not going to accept this key

Quoting GP here:

Right now all my data is held by corporate types who don’t give a shit.

Is it true that passkey providers will be able to use this feature to band together and prevent passkey providers that they don't like from being useful? It was something about attestations--I didn't fully get it the first time it was explained to me.

If so, doesn't that make the "corporate types who don't give a shit" problem worse? At least with a password the corporate types couldn't deny you the right to authenticate because in their estimation your password provider isn't corporate enough.

hirsin
3 replies
15h22m

It's not a server-side component but something integral to the passkey itself. Everything is on the key and there's no third party - just the key, and the site you're authing to. So when you auth with a passkey, it includes some details around "this came from a Feitian model x or yubico model y or bitwarden or...". That's blank, IIRC, if it's Apple.

And then if a corporate type has said "well we only gave our employees Yubikey 3s", they can instruct their idp to only accept passkeys for Yubikey 3s.
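
In code, that IdP policy boils down to something like this (AAGUID is real WebAuthn - a 16-byte identifier for the authenticator model, surfaced at registration - but the allowlist value and helper below are made up):

    # Hypothetical sketch of an IdP-side attestation policy check.
    # The AAGUID identifies the authenticator model; the value below
    # is a placeholder, not any real vendor's AAGUID.
    ALLOWED_AAGUIDS = {
        "00000000-0000-0000-0000-000000000001",  # "Yubikey 3" (placeholder)
    }

    def accept_registration(aaguid: str) -> bool:
        # Reject passkeys whose authenticator model isn't on the corporate list.
        return aaguid in ALLOWED_AAGUIDS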

This is a tradeoff - their employees can't register legitimate, useful personal passkeys like their phone, but an arbitrary (not dedicated nation-state) attacker is a bit less likely to somehow take over the account and add a Feitian key. That mismatch is a nice signal to block the user.

But in general, outside of corporate identities, no one has any interest in caring what kind of passkey you use. GitHub doesn't, certainly, outside of resident key requirements, a different kettle of fish.

(and yes, Dan is definitely great at this stuff, read his blog on this stuff).

__MatrixMan__
2 replies
13h52m

I see, thank you. I suppose it also lets you reject keys from hardware platforms that are later determined to be insecure.

The tinfoilhattery which I heard (and regret tangentially supporting on occasion) was that it was a slippery slope towards something like SSO, where some company (1Password, say) is your identity provider. And then other companies might reject your identity because hypothetically 1Password only collects a retinal scan, not also DNA, or somesuch, and now we're in a race to the bottom re: just how authenticated one can be.

juped
1 replies
12h18m

It's hardly tinfoil hattery. If it isn't that (and I'm still not convinced by all the above that it isn't), then whoever is pushing it needs to explain exactly how it isn't: loudly, repeatedly, clearly, and very prominently.

__MatrixMan__
0 replies
11h25m

The messaging around it sure has been heavy on the "you should do this it's better" and light on the "and here are the implications".

mooreds
1 replies
13h27m

There's a server side component--a public key tied to the private key on a device you control. That public key is also tied to the website that you register the passkey to.

My further impression is that the people who manage the servers (either the authenticating-you server or the supporting-auth-for-you server) will be able to configure allow/deny lists for each other.

I haven't seen that in the specification. Every host is assigned one or more public keys (again, corresponding to private keys kept in the device), and I don't think there is a common identifier that could be shared between different hosts.

Attestation is not required and seems atypical outside of enterprise use cases. I haven't seen it used at all for consumer use cases.
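
To sketch what I mean (illustrative Python, not spec language), the record a site keeps is scoped to that site, so there's no shared handle to correlate on:

    # What a relying party roughly ends up storing per passkey registration.
    # A different site gets a different key pair, so there is no common
    # identifier to match users across hosts.
    from dataclasses import dataclass

    @dataclass
    class StoredCredential:
        rp_id: str           # the site the key is bound to, e.g. "example.com"
        user_handle: bytes   # opaque, site-chosen identifier
        credential_id: bytes
        public_key: bytes    # public half only; the private key stays on-device
        sign_count: int      # anti-cloning counter reported by the authenticator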

__MatrixMan__
0 replies
2h46m

So I did some hunting in the spec (which I should've done when I first heard this concern), so that I can be specific. I turned up this from https://w3c.github.io/webauthn/#attestation-object

An important component of the attestation object is the attestation statement. This is a specific type of signed data object, containing statements about a public key credential itself and the authenticator that created it. It contains an attestation signature created using the key of the attesting authority (except for the case of self attestation, when it is created using the credential private key).

The concern was about how creatively parties with a market interest in providing authentication services (1Password, Okta, Apple, Google) can use this field in service of goals that the user doesn't share, such as preventing competition.

It's already the case that if you don't have a phone number you're in some ways a non-person, because you can't 2fa with many services that require it. The same dynamic could be used to guarantee that everybody has a relationship with one of a small handful of providers, such that they don't have to care about whether we consent to whatever new requirements they dream up. Maybe. I'll have to think about it a bit more.

For instance, could this object one day contain an attestation that the user has a credit score above a certain threshold? That's the sort of thing that's new compared to passwords.

yau8edq12i
0 replies
12h43m

Passkeys simply authenticate a device + maybe a PIN. Not a person.

shicholas
0 replies
17h23m

The Schluss project out of the Netherlands is trying to do just that, really cool stuff! (Schluss.org)

canadiantim
0 replies
16h51m

For a third party acting in this space who could provide a server for the non-tech savvy, what do you think the ideal setup is, infrastructure and architecture wise?

eddd-ddde
6 replies
18h37m

Even without reward programs, couldn't they just associate purchases with a given credit card or similar?

jmward01
1 replies
18h19m

This has been my thinking too. When I make any purchase I am giving everything about myself away now. A long time ago I had the idea to create an 'accountability' e-mail server/credential generator that generated emails/credentials with unique ids along with a note about what it was used for so that you could see exactly who sold your info when you saw junk mail come in. There are now ways to do this but it is still a bit manual. It would be great to have this tied in with auto opt-out and auto reporting of abusers. Of course creating a personal e-mail server now is almost impossible.
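
Roughly what I had in mind, as a sketch (the domain and log file are made up):

    # Mint a unique alias per service and record who got it, so spam
    # arriving at that alias identifies whoever sold or leaked it.
    import json
    import secrets
    import time

    def mint_alias(service: str, domain: str = "example.com") -> str:
        alias = f"{service}-{secrets.token_hex(4)}@{domain}"
        with open("alias-log.jsonl", "a") as f:
            f.write(json.dumps({"alias": alias, "service": service,
                                "created": time.time()}) + "\n")
        return alias

    # mint_alias("acme-store") -> e.g. "acme-store-9f3a1c2b@example.com"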

caddy
0 replies
7h1m

Not really. I rented a box from Hetzner and have run Mail-in-a-Box for over a year now. The IP I was assigned was on the Microsoft shitlist for a little bit, but I sent them an email explaining why I was using the server and they took me off the list. It also took a few emails to Gmail addresses to get off their spam filter, but sending a few emails to family and asking them to report not-spam (around 3 emails) was all it took. The only server I've ever been rejected by is a corporate email server with very strict settings; every other corporate email has been fine.

trimethylpurine
0 replies
17h40m

We don't actually care. We're interested in how the product is doing so we know how little we can get away with ordering (to the warehouse). I think the rewards are for sharing with other brands or stores that are owned by the same parent. I doubt they would sell to a competitor. That would directly reduce their own sales. Selling to another industry doesn't make sense either. No one wants more liability for retaining irrelevant data.

mooreds
0 replies
18h29m

Absolutely. And I am sure they do.

But rewards programs might make matching easier (across different credit cards and household members).

They also are owned by the service provider and don't reveal anything to credit card companies.

Again, I'm speculating here.

jethro_tell
0 replies
16h30m

Note that you usually give consent as part of the sign-up for the rewards program.

They can track your cards, and if they want, track those back to an address and match them all up; not a big deal. And they can sell it if you agree to their terms.

NegativeK
0 replies
17h39m

They control the registers; they can identify or at least bin users by the selection and quantity of products they buy.

ryan_lane
2 replies
16h46m

I don't have any insight into how they use the data, but why would they offer free things (restaurants offer appetizers, grocery stores offer discounts, etc) unless the value they received was more than the cost of the incentive?

Ignoring selling the data to brokers, it's not hard to think of some ways to use the data that's beneficial to both you and them:

- Inventory management: You buy something low-demand, but you do it consistently. If they can match up all the purchases of that item to specific folks, and they know the general frequency at which people buy that item, they can ensure it's stocked when you need it, without the need to greatly overstock it. If you stop coming, they also know they may be able to reduce that stock safely. (Sketched in code after the list.)

- Price sensitivity: They do a price increase. Which customers have stopped purchasing the product? Do they need to do a sale on the item for you to purchase it again? Do they need to drop the price? This is more of a benefit on their side than yours, but knowing the most frequent purchasers stopped purchasing it due to an increase could lead to a decrease, where this is harder to determine without good data.

- How effective are their sales? Are they targeting them correctly?

The discount they offer isn't really a discount. It's a price hike for folks without the loyalty program.
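
To make the inventory and price-sensitivity examples concrete, a rough sketch assuming a purchases table of (customer_id, sku, date); the file name and cutoff date are invented:

    import pandas as pd

    purchases = pd.read_csv("purchases.csv", parse_dates=["date"])

    # Inventory management: median days between repeat purchases of each
    # SKU, per customer, to predict when to restock.
    repeat_freq = (purchases.sort_values("date")
                   .groupby(["customer_id", "sku"])["date"]
                   .apply(lambda d: d.diff().dt.days.median()))

    # Price sensitivity: customers who purchased before a price change
    # but never after it.
    cutoff = pd.Timestamp("2024-01-01")  # hypothetical price-increase date
    before = set(purchases[purchases.date < cutoff].customer_id)
    after = set(purchases[purchases.date >= cutoff].customer_id)
    churned = before - after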

mooreds
0 replies
4h26m

Thanks. Are these benefits based on your experience/direct knowledge or supposition?

They definitely seem reasonable to me, though I'm not sure they need individually identifiable info for most of them.

The discount they offer isn't really a discount. It's a price hike for folks without the loyalty program.

Sure, that's just messaging. But as a consumer it feels like a discount when I get $0.50/lb off the chicken I buy when I use my card.

Nullabillity
0 replies
2h18m

None of those need to track individuals, they're all really just SKU sales metrics dressed up in different ways.

pc86
2 replies
5h1m

Do you mean 3-4 decades ago? I've been grocery shopping for myself since the 00s and have never seen a physical punch card being used in a grocery store.

berniedurfee
1 replies
4h58m

Our local convenience stores still use them for various items. Milk card and ice cream scoops!

But no, supermarket chains are all heavily data driven now.

pc86
0 replies
4h39m

Oh man I would love to go somewhere where they just gave you a physical card.

UniverseHacker
2 replies
18h21m

Yep... I think already companies have a profit incentive to not keep data they don't think will provide them value. They are keeping the data not because they are not thinking about security, but because the data is valuable to them, either to sell to a 3rd party or to better target/market to specific customers.

alpaca128
0 replies
9h22m

Real hacks and leaks have shown that even if there is an incentive to not keep data and minimize damage, it is negligible. If the consequences don't go beyond slightly higher DB sizes nobody cares. The potential cost needs to be much higher than potential profits from selling the data.

RajT88
0 replies
17h57m

A lot of companies seem to pay to keep data they are not using.

Some because they forgot, some because they have not figured out how to monetize it yet.

m-p-3
1 replies
18h34m

personal data is worth good money

Then let's make the penalties substantially higher, commensurate with the risk it incurs.

jghn
0 replies
14h30m

Even before the Great Sell Off, there was a growing notion of "we may need this some day so let's store and/or log it". And it makes sense. If you store it and don't need it, it's relatively cheap, at least in the last couple decades. But if you *don't* store it and need it, you're screwed.

_heimdall
6 replies
16h45m

I agree with the sentiment here, but I'm not sure how that could ever really be implemented.

Our laws shouldn't punish people for honestly doing the best they know how to, especially with a caveat that it doesn't matter if it was industry standard. Not only is that confusing and at serious risk of punishing all the wrong people, it creates incentives to help hack your competition and throw them to the legal wolves.

cuu508
3 replies
12h0m

Our laws shouldn't punish people for honestly doing the best they know how to

Sure, but holding on to data you do not strictly need is not doing the best.

_heimdall
1 replies
6h35m

Who gets to decide what data is needed?

I'd argue that my bank doesn't need to retain details of my transactions, only the amount paid/owed. For sure they don't need that data indefinitely. Do I legally go after them because I don't think they need that data? And does the defendant in such a case just need to provide any potential use case for a specific piece of data they store?

cuu508
0 replies
2h53m

What you should do when you believe your bank is mishandling your personal information depends on your local laws. In European countries, you would first contact your bank and get them to clarify precisely what data they are storing about you, and what legal basis they are relying on to do so (they should have this already available in their privacy policy, so perhaps you would just go and read that). If the bank refuses to answer, or if you disagree with them about the lawfulness of processing your data, you would then file a complaint with your local data protection authority. After the DPA is done with the case, if you disagree with their conclusion, the next step would be to go to court.

AnthonyMouse
0 replies
7h29m

Sometimes there is data you do strictly need but it's still sensitive. Then someone finds a 0-day and relieves you of it. It would not be just or prudent for this to bankrupt anyone it happens to, who is just as much a victim as the person whose data it is.

vharuck
1 replies
3h9m

The government should assign punishing penalties for leaks that scale with the quantity and invasiveness of the data, and help grow an insurance industry for paying those penalties. That way, the law dictates how "bad" leaks are with fines, and the insurers encourage/require best practices. Externalities become internalities.
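
To illustrate the scaling idea (all numbers invented; this is just the proposal sketched out, not any actual law):

    # Toy penalty function: scales with record count and data invasiveness.
    INVASIVENESS = {"email": 1, "address": 5, "health": 50, "ssn": 100}

    def leak_penalty(records: int, fields: list[str], per_unit: float = 0.10) -> float:
        return records * sum(INVASIVENESS[f] for f in fields) * per_unit

    # leak_penalty(1_000_000, ["email", "ssn"]) -> $10,100,000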

_heimdall
0 replies
12m

Wouldn't assigning leaks a specific value and creating an industry to insure against it make it easier to weigh the relative risk of cutting corners and ignoring risk?

If anything, an insurance policy for it allows the companies to externalize and distribute the fines.

A4ET8a8uTh0
2 replies
20h9m

Obviously, IANAL

In a sense, that gradation is present for other offenses. You kill a man by accident? It may end up being involuntary manslaughter. You kill that same man with malice and planning? The charge will move to aggravated, premeditated murder.

At the end of the day, a life was taken and some level of judicial review should take place. That does not appear to happen for 'hack' events.

ribosometronome
0 replies
18h32m

Are involuntary manslaughter cases not usually predicated on some sort of negligence or other criminal activity on the part of the person committing manslaughter? I don't think I've ever heard of anyone being punished for being the person in the wrong place when someone else decides to commit suicide by train or vehicle, for example.

If we take the comparison down a step (and into what I think is maybe a more comparable situation), I routinely see warnings in parking lots that they are not liable for stolen property or damage and imagine those are generally probably pretty accurate outside of, again, some negligence on the side of the business.

gretch
0 replies
18h27m

The difference in this situation is that you still committed an action, as opposed to simply being acted upon.

A better analogy would be a bank holding people's money. Armed assailants break into the bank and take it. Is the bank liable because it did not have adequate defenses for the attack?

But the bank situation is easy to solve because it's money and money is fungible, so you solve it with insurance.

Leaked data can't be revoked and it's hard to quantify.

If you make the policy wrong, you essentially end up punishing a victim party (the end users are victims, but the firm that was attacked is also a victim).

andoando
1 replies
17h36m

I disagree. I don't see any reason users wouldn't equally share the risk if neither party did anything wrong. I suppose if the company makes a claim like "We won't share your data with anyone" then you could make that point.

teeray
0 replies
14h46m

That assumes users fully understood what they were consenting to. Agreements and Privacy Policies are written to exhaust and bamboozle the users so they just hit “accept.”

aeternum
1 replies
17h41m

The problem is the core idea is flawed.

The same concept (don't store the data) was applied to credit card account data 10 years ago in many point-of-sale systems. Malware simply evolved to log the data itself.

Not collecting user data in the first place might be a solution, but don't let simply not storing it create a false sense of security. Your users' data is still very much at risk.

dj_mc_merlin
0 replies
16h10m

That's an improvement. It changes the risk profile from "company whose security practices you have no clue about getting hacked" to "one of my personal devices getting infected with malware", which you can at least do something about.

fvdessen
0 replies
16h55m

This is exactly what the GDPR is all about btw.

Terr_
0 replies
20h14m

Disclosure liability insurance, low premiums if there was nothing to leak.

Of course, that assumes a different world where companies actually pay for screwing up in the first place.

Nursie
0 replies
14h4m

That’s basically what the GDPR tried to do, and a great plan.

Instead of making the retention of data a good thing, make it toxic, make it risky. You don’t want PII in your logs, on anyone’s workstation or anywhere else it’s not absolutely necessary.

Make it radioactive.

Nextgrid
0 replies
19h27m

The root cause behind the proliferation of privacy breaches is that the legal framework against spying/hacking turns out to have a massive vulnerability.

Developing & spreading spyware that collects people's personal data without permission is illegal (you don't even need to leak the collected data for it to be illegal), but wrap it in some flashy marketing and dozens of pages of inscrutable ToS and "privacy" policy, and suddenly not only does your spyware operation become legal, you can even leak or sell the data with total impunity.

This is valid in Europe as much as the US. Keep in mind that even before the GDPR, most countries had some sort of legislation around personal data processing, use and storage, but none of it was enforced. The GDPR is no better in terms of enforcement, which is why you see tons of (non-compliant) "consent" flows and spying continues as usual for the most part since businesses entirely based on non-consensual data processing are still alive and kicking.

GoblinSlayer
0 replies
5h1m

What the industry thinks is right? We don't have a sound solution for breaches. 20% of the attack surface can be covered with formally verified software; that's a lot of effort for a smallish gain, so risk should be mitigated somehow, and minimizing data hoarding is an available mitigation.

1vuio0pswjnm7
0 replies
18h5m

Data centers could be closed, removing their negative environmental impacts.

forgotmyinfo
28 replies
20h32m

What happens when Matter gets acquired? I'm sorry, but all this self back-patting is a bit too little too late for this jaded guy, especially because a thousand other companies have made the same promises in cheery blog posts, before something happens and my social security number winds up on a sticky note on some hacker's monitor in Belarus. Hell, I've worked for companies where I was forced to break users' trust because some executive critter told me to when it was clear the profit faucet wasn't opened nearly enough.

So thanks, but this isn't enough anymore. We need laws that will guarantee that every company that handles our data will do it thoughtfully and safely. In the meantime, I'm not expecting much.

defen
12 replies
20h26m

What happens when Matter gets acquired

That's a major point that's addressed in the blog post, did you read it?

abound
11 replies
19h48m

Not sure about GP, but I did read the post. If they get acquired, I don't see anything stopping the acquirer from pushing an update that decrypts stuff and sends the plaintext to the servers.

filleduchaos
9 replies
19h13m

I'm rather baffled at this level of nitpicking. Yes, if the software were being written by completely different people with completely different goals, they might then start to acquire user data, but what does that have to do with the point (that data this team and this management don't have cannot be leaked)?

travisjungroth
6 replies
18h50m

Because their current solution doesn’t meet their own stated goals.

even if we are competent enough to prevent a leak from ever happening, and even if our users trust us to do what we say, we must be resilient to being strong-armed by a future controlling power (e.g. if someone we don't trust buys us)

They could be strong armed into collecting data and then handing it over.

photonthug
2 replies
18h30m

Is there any kind of legal promise that could be made and not rescinded by the board, not swept away by mergers and acquisitions? I assume not but that’s almost what is needed here more than a software architecture fix which, no matter how well designed, is only as stable as the whims of internal stakeholders.

travisjungroth
0 replies
16h35m

I think you could make the alerting louder. Hire an independent auditing firm and create some contract around what to do if they fail.

A closed source client is fundamentally incompatible with the claims they want to make.

filleduchaos
0 replies
17h11m

Legal promises don't enforce themselves.

filleduchaos
2 replies
17h12m

And you as a customer can simply stop using their services if you no longer trust their intentions, and (this is a very clear and straightforward point) the company and the new controlling power would have nothing on you, because the data did not exist in the first place.

Why are so many of you so keen for them to be "wrong"? Like what even is the alternative approach here supposed to be? Don't build a product in the first place?

travisjungroth
0 replies
13h36m

And you as a customer can simply stop using their services if you no longer trust their intentions

The original premise of the article is we don’t have to be concerned with their intentions. This is false.

Why are so many of you so keen for them to be "wrong"?

I’d rather they were right. But they are wrong.

Like what even is the alternative approach here supposed to be? Don't build a product in the first place?

Don’t say you can do things you can’t.

I don’t have to offer a solution to point out they’re not offering one either.

lebean
0 replies
15h34m

No you see, at any moment this company can get acquired and can start pushing malware as updates. It's quite an elementary mistake to not account for this possibility, and the author of this post should hang their head in shame for even pretending to have a solution. /s

stvltvs
1 replies
18h53m

I don't track management changes for the apps I use because who has the time? How can we protect ourselves against a malicious company changing their tech behind the scenes?

abound
0 replies
14h30m

My personal solution is just to self-host anything important or sensitive (password manager, file storage, photo storage, etc).

Audit the code as desired, pin to specific versions, run on a private network.

ndr
0 replies
19h39m

This. And if users value the app's data, how likely is it that an acquirer would be willing to wipe all of that data from users' phones before taking over?

jacurtis
5 replies
19h59m

before something happens and my social security number winds up on a sticky note on some hacker's monitor in Belarus

Isn't the point of the article that they can't leak something they don't have?

So if I never get your social security number from you, then I have zero risk of leaking it or exposing it to hackers. I can't give them (intentionally or unintentionally) something that I don't possess.

The author says:

Given these criteria and extremes, we decided that our best course of action is to just never have our users' private data.

---

To your next question on what happens if Matter is acquired. Well, the app might stop working, or change how it works, or have a new logo in the corner, but your data never left your device, so you don't really have to worry about it being leaked to Belarus.

ndr
4 replies
19h47m

Well the app might [...] change how it works [...] but your data never left your device, so you don't really have to worry about it being leaked to Belarus.

You're one update away from having an app that has access to all its data and can ship it anywhere. Do you keep updates off?

AlienRobot
3 replies
19h17m

I wish I could keep my updates off.

Every time I turn on my computer dnfdragora is like "there are 35 new updates!"

Oh yeah? But my computer is working. Those updates could fix problems that I don't have but could break stuff I have as well. I'm not updating until they release Fedora 40. (my nvidia driver stopped working when I upgraded to 39, again...)

To begin with who thought these notifications were a good idea? Just appear meekly on the system tray when you have something to say. The only time a popup is acceptable is if it says "yo, your computer is on fire." Anything less is unnecessary distraction.

I want my software as-is, changing only when I want it to change. The other day a Windows update removed my "show desktop" button from the task bar to insert a copilot button. Who asked for this? The taskbar changes when I say it changes! I started using the program because I liked the way it was. If it wasn't the way it was I wouldn't have started using it, so why change?

To make matters worse, I don't think there has ever been a time software updated and I said "they finally added X!" It just never happens. It's insane. Things really only get worse with time. Ten years ago GIMP didn't have nondestructive editing. It still doesn't. I'm still waiting for it. They said it will come in GIMP 3, for years. I feel like that update is just never coming. We're at GIMP 2.99.18 now. Can you believe it? 2.99.18. Who even reaches minor version .99?!

albumen
1 replies
19h0m

I can do anecdata too. Photoshop added content-aware fill, and then generative fill. These have been useful additions for me, saving time that was previously tedious stamp-tool work.

photonthug
0 replies
18h38m

You missed the point. It's not about whether updates are good or bad, because yes, anecdotally it could be either. It's about whether they are consensual, and whether constant harassment that you can't opt out of counts as consent when you give up or misclick.

ndr
0 replies
19h1m

This is viable when your app works completely offline.

As soon as it has a server to talk to, things change. It becomes rather expensive to maintain a server that has to support every previous client, or to support all the users who don't know how to update.

In practice auto-update is the best default. Android lets you turn it off. On iOS I don't know.

scoates
4 replies
20h27m

Hello. Agreed that we need comprehensive privacy reform.

You should probably read the article, though. (-;

I have access to everything (on the tech side) at Matter, and if you put your social security number into the app, I wouldn't be able to access it to write it on a sticky note. That's the whole point.

PS I'm also old and jaded. (-;

travisjungroth
3 replies
20h14m

I don’t think the article really answers this. All these decisions you’ve made to not store data are decisions that you could unmake.

To put it concretely: if everyone at Matter tomorrow became malevolent and wanted user data, what happens? For example, if you push an app that sends home my private text, how would I know? Could you?

defen
2 replies
18h27m

Isn't this an argument against putting any personal information into any app? Signal could turn malevolent tomorrow and start sending all your chats to their servers, which could have life-threatening implications for people vs just potentially being embarrassing.

travisjungroth
1 replies
13h34m

I put data into Google Docs knowing it lives on their servers. So there’s no problem there.

Signal has an open source client. Big difference for these claims.

defen
0 replies
2m

Do you compile your own open source client for your phone? Or do you install it from an app store? Most people are going to install it from the app store, so I believe my point still stands. What correlation is there between what's in the app store vs what is published in the open source repo? e.g. how do you validate that the app store client was compiled from a specific commit in the open source repo?

hn_throwaway_99
1 replies
20h11m

While I agree with your sentiment, after re-reading the post and looking at some of the blog posts, I think you missed a major point, that being:

1. You can't leak what you don't have.

That is, even if the company gets bought out or is hacked, if they don't have the data, there is nothing to leak. This point is also at least partially enforced by another point from the post:

2. Advanced app users can audit their network traffic from the app

Now, granted, I wouldn't expect many users to do this, but highlighting it at least serves as a warning that it should be harder for the app to surreptitiously change what is sent to the server (and to emphasize, I know this can be worked/hacked around, but I don't think working around this could ever be done with plausible deniability).

Given the fact that companies and products jettison their high-minded policies as soon as it becomes economically inconvenient, the only other thing I'd recommend for the author is to have a good, simple export tool, e.g. something that dumps all the "memories" to a directory or PDF file. The post talks about backup and restore, but if I were a potential user I'd like to know that if the company does kick their privacy policy to the curb at some point that I could get all of the investment and data out of the app without needing to continue to rely on the app for at least the base data I put into it.

nicksloan
0 replies
19h31m

Hi, I also work at Matter. Our current backup/restore implementation exports a zip file of complete JSON data. We will improve backups in the future, but no pull request will be merged to remove the existing implementation for at least as long as I’m leading the app team.

tschwimmer
0 replies
20h26m

Hey, no need to cast aspersions on the infosec practices of Belarusian hackers, I bet they store their stolen credentials in an encrypted SQLite database as per industry best practice.

bamnet
0 replies
18h0m

This isn't just a theoretical point. Chrome extensions are the canonical example of products which start off with the best intentions, get acquired, and then ...

okeuro49
19 replies
20h35m

Our first product is an iOS app designed to help you capture the best moments in your life

I have increasingly come to the belief that mediating our life experiences and social interactions through apps isn't good for us.

Your website "Matter" [1], to be honest, seems completely dystopian to me and an indication of all that's wrong with the relationship between technology and society.

[1] https://matter.xyz/

hn_throwaway_99
14 replies
20h6m

While I actually liked the primary point of the author regarding privacy controls, and I feel some of the commenters here are being a bit harsh regarding tangential issues or missing some of the crux of what he wrote, I also so strongly agree with your point that I couldn't let it go.

Apps aren't going to make you happier. If you want to be happier, go for a walk outside and go hang out with your friends, in person.

Retric
13 replies
20h3m

That assumes quite a bit of freedom that may not exist for many people.

An Uber driver waiting for their next passenger isn’t able to go hang out with friends, but they can play on their phone, or read a book.

dylan604
9 replies
19h44m

Before devices, people were able to find ways to kill time at work just fine. As humans, we're perfectly capable of surviving without the device. I know that seems antithetical to zillennials, but it will be more than okay to look away from the screen for an extended period of time.

Personally, having a driver sit there counting the number of red cars or the number of different state license plates serves the same purpose as doomscrolling. In fact, it's probably much less detrimental to their mental health.

It saddens me a wee bit that people think the devices must be attended to to this extent.

throwawaysleep
8 replies
19h34m

Users didn’t consider them just fine, as they immediately abandoned them when given the chance.

albumen
7 replies
19h3m

That doesn't mean it's not harming them. Have addicts made the right choice by indulging their weakness?

Retric
6 replies
16h58m

New and fun isn't the same as addictive.

We don't consider people watching TV, listening to the radio, or reading books to be addicts, yet people decried each of these, as they became ever more popular, for ruining the youth, etc. People like stimulation and seem to indulge when it's new, but most people also get used to it relatively quickly.

dylan604
5 replies
15h43m

I think you're trying to move the goalposts. The apps now are absolutely, without a shadow of a doubt, trying to make their users addicted. To even suggest they are not addictive is beyond the pale, and makes you look like you're not making a serious attempt at the discussion, or worse.

Your second sentence is null after your first.

Retric
4 replies
14h59m

Pay-to-win apps aiming for addiction get a tiny number of whales out of millions of people. So yes, apps try very hard, but the vast majority of people just don't get addicted long term, just obsessed for a short period.

Compare retention numbers between smoking and clash of clans, a clearly successful pay to win game, and it’s not even close.

We could talk about various other apps, but nobody has actually nailed addiction for the general population, just for the people highly susceptible to such behaviors.

dylan604
3 replies
14h53m

Are you willfully being obtuse to the behaviors of the social apps, FB, TikTok, Twit...er, X, and their ilk? Games of course are trying to be addictive to get you to buy more IAP/lootbox/otherHandInPocket concepts.

It's like you think a device is only able to play games, which is clearly ludicrous.

Retric
2 replies
14h45m

I simply used gaming to make it clear attempting to be addictive wasn’t the same as actually being addictive. Social Apps rise and fall in popularity just like everything else.

Meta has billions of users but that’s split across 4 platforms FB, WhatsApp, Instagram, and Messenger. So clearly popularity isn’t the same as addiction otherwise their FB app would be all they need.

Go back a few years and Netflix seemed addictive as people binged shows, but I doubt anyone is seriously suggesting it is addictive today.

Q.E.D. New and fun isn't the same as addictive.

dylan604
1 replies
14h6m

The only QED is your circular logic

Retric
0 replies
1h48m

I accept your concession.

josephg
1 replies
19h54m

Sure; but it’s probably true for most of us most of the time.

I’m on HN right now instead of playing piano or going for a walk. I think I need the reminder sometimes.

djbusby
0 replies
19h44m

Go for a walk right now!

gremlinunderway
0 replies
19h49m

Yeah, but saying "you ought to go outside / relax / rest / spend time with family" and then saying "not everyone can because shit sucks" is missing the point. Shit does suck, for a lot of people, in a lot of ways. Acknowledging that is good, but then saying the alternative is for people to play on their phones? How about advocating for people to be able to earn a living and have time to hang out with friends?

YeBanKo
2 replies
19h16m

Their website is the epitome of the idiocy of modern web product design. They want my email, because we are all "stardust". There is no clear explanation of what their product is or how it's different from a photo app and a notebook. Instead of a proper description of their business, they send you to a Business Insider article about the founder, who wants to prevent unhappiness.

alistairw
1 replies
18h49m

I was reading your comment as a classic overly negative HN comment, but then went straight to that website and wow, yep, that's bad.

I can't tell if they're trying to sell me groundbreaking new brain scan technology or very dodgy supplements. Possibly one of the most vague product sites I've seen and there are a lot that come up on here.

I'm surprised this happens in tech so much. I feel I'm always very aware that people in the real world have no idea what I work on, so I always need to give background and context. I've noticed that people in other industries haven't experienced that their whole lives, so they often rattle off jargon that means nothing to me.

Looks like all the founders are US based, maybe it's a cultural thing.

YeBanKo
0 replies
18h7m

This is definitely a thing.

A few years ago I was at a cloud conference and met someone who worked in a tech position for a fairly large, well-known security company. They have all kinds of offerings. While he was talking to someone else, I decided to speak to one of their sales guys. I shared a bit about what my company did and what kind of infra we were using, and then asked him what they offered, and specifically what they could offer to us. This was one of the dumbest conversations I have ever had. The guy had no f*cking clue what his company was selling beyond "we sell turn-key enterprise security solutions"; it was so painful. I even tried to steer him into trying to sell us some vulnerability scans or traffic analysis for threat detection.

Yeah, I get it: it may be niche, he is in a non-technical role, they have many offerings, and he hadn't been there too long (IIRC about a year, which is long enough for sales), but it was still unacceptable. To me it's the same phenomenon - a lack of clarity in communication - though I'm not exactly sure what the root cause is.

gumby
0 replies
19h15m

Back when my kid was born I bought a video cam. I upgraded the videocam a couple of times, but I realized I didn't use it much, because when I did I was videoing instead of being part of what was going on.

As a result there isn't much video of the kid, but I don't regret it. By the time we switched to video cams in our pockets, I had pretty much given up, and apart from a few few-second captures, I haven't shot any video in years. And TBH I don't miss it at all.

Apps like this are simply more of the same.

wolverine876
4 replies
18h20m

From their privacy policy: https://matter.xyz/privacy

If we make changes to this privacy policy, we will update it here and update the effective date at the top. (We can’t email you about changes because we don’t collect everyone’s email addresses.) Changes to this policy will not apply retroactively.

Effectively they can change it any time and you probably won't know.

If they violate it, what power do you have to enforce it? Pay an attorney six figures? For what damages under what law?

Also, I'm not sure what 'retroactively' means here, legally: They have my data and change the policy; can they tomorrow use my data according to the new policy? (Not that it matters much, because I won't know about the changes anyway.)

refulgentis
2 replies
18h10m

This is exactly as good as it gets for a privacy extremist: if they don't have your email, they can't email you about changes, and if they change the policy, the changes won't apply to you retroactively.

They explicitly ruled out using your data according to the new policy - retroactively means that, if they update it, the updated version doesn't apply to the data you've already given them.

wolverine876
0 replies
17h56m

if they don't have your email, they won't send it

They could post a notice in their app, for example.

They explicitly ruled out using your data according the new policy - retroactively means, if they update it, the updated version doesn't apply to you.

I think it's much more vague than that. Maybe it's not retroactive to prior actions of theirs or to prior data they had. You're assuming it's not to prior users.

kelnos
0 replies
17h13m

My read there is that it won't apply retroactively to data you've already given them, but will apply to data you give them in the future, regardless of whether they are able to notify you of the change or not.

andrewnicolalde
0 replies
18h16m

Perhaps it means that a change to the policy can’t be used to justify actions taken that would have broken a prior version of the policy but which don’t violate the new one. Granted, I don’t have a complete understanding of the legal enforceability of a company’s privacy policy.

stpn
3 replies
19h42m

I really love this sentiment.

Unfortunately, it also seems really hard to build many kinds of applications in a way that follows this line of thinking. I've been building a personal finance app with privacy in mind, but there are some places where you might begrudgingly "hold" a user's data that are just unavoidable. For instance, if we want to be a serious competitor and have bank integrations, then Plaid etc. will require you to run a server that can see the data, even if you don't want it.

We also don't collect names in our app, just an email, but good luck collecting payments, avoiding fraud or reporting taxes without collecting name and address.

We've built our system to be as minimally invasive as possible (e.g. in the above, financial data is only proxied to the user's device, never stored on the server), but that's only the "intention" part - there's just not a way to take the full measure.
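
Concretely, the proxying looks roughly like this (a sketch; the upstream URL is made up, not Plaid's actual API):

    from flask import Flask, jsonify, request
    import requests

    app = Flask(__name__)
    UPSTREAM = "https://provider.example.com/transactions"  # hypothetical

    @app.route("/transactions")
    def transactions():
        # Forward the user's own credential upstream and relay the response
        # to the device; nothing is persisted or logged on the way through.
        resp = requests.get(
            UPSTREAM,
            headers={"Authorization": request.headers.get("Authorization", "")},
            timeout=10,
        )
        return jsonify(resp.json()), resp.status_code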

stephenr
1 replies
12h19m

good luck collecting payments, avoiding fraud or reporting taxes without collecting name and address

A client of mine collected just shy of $2M last calendar year (2023), and we only store an email address, and a password.

The trick is to let the organisations that (apparently) need that extra data collect it themselves. Payments are offered via PayPal, Stripe and Amazon Pay, using their hosted payment pages. It works amazingly well. The Stripe and PayPal options can even be achieved without JS, if you wish.

I believe you could also achieve a similar lack of PII using a JS-heavy 'embedded' solution for Stripe and maybe PayPal, but don't quote me on that.
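
For Stripe, the hosted flow is roughly this (real Checkout API; the key, price ID and URLs are placeholders):

    import stripe

    stripe.api_key = "sk_test_..."  # placeholder

    # Create a Checkout Session and redirect the customer to session.url.
    # Name, address and card details are entered on Stripe's page and
    # stay in Stripe's systems, not yours.
    session = stripe.checkout.Session.create(
        mode="payment",
        line_items=[{"price": "price_123", "quantity": 1}],
        success_url="https://example.com/thanks",
        cancel_url="https://example.com/cancel",
    )
    print(session.url)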

stpn
0 replies
11h43m

Oh yes, to be clear we’re not collecting any of that data ourselves, but it is in our stripe account. I suppose that data isn’t in _our system_ per se, but the arrangement is still relying on our privacy intentions over any systematic guarantee (“we simply don’t have this data”) we could give.

MichaelZuo
0 replies
19h11m

You can always make an app that is not competitive with those which do store user data…

jazdw
2 replies
18h55m

Except you can leak data even if you don't hold it. You are focusing on data at rest. Not storing the data obviously helps massively, but a bug or maliciously inserted code could lead to user data becoming compromised.

xu_ituairo
1 replies
18h51m

We shouldn’t let perfect be the enemy of good

jazdw
0 replies
18h13m

We shouldn't make blanket statements

indymike
2 replies
17h44m

You can't leak users' data if you don't hold it

False. You can't leak a user's data if you never have it to begin with. If you process it, you are at risk of leaking it.

kerkeslager
1 replies
17h27m

False. In order to process a user's data, you must inherently hold it while processing it.

Alternatively, stop being pedantic.

indymike
0 replies
15h2m

False != True.

mhuffman
1 replies
18h16m

But then how can I make billions with targeted advertising using their metadata?

pembrook
0 replies
5h43m

While I’m sure there is some, I can’t think of a single B2B SaaS product that sells its users’ data for targeted advertising.

That’s more of a consumer problem.

SV_BubbleTime
2 replies
19h14m

The best infosec advice I ever received was “data is toxic”.

pizzafeelsright
0 replies
3h59m

computers are built to process data and networks were made to share data

infosec does the opposite

nedt
0 replies
9h13m

The even better term is Datenenthaltsamkeit - data abstinence. Not just storing less, but really only storing something if there is no other option.

textinterface
1 replies
7h7m

I've had ideas for side-projects before, but most of the time I never actually got around to building them, because I got too scared of holding private data (sometimes sensitive information, such as financial data).

I thought of just building offline apps for the browser and letting the user sync data using Dropbox or some competitor, but never found an open source project to facilitate that kind of thing (an actual db that syncs through Dropbox). I also heard that locally storing a Dropbox token in the browser could be dangerous (assuming an offline-app-for-the-browser architecture, for example), which meant I'd have to build actual native apps. But then, isn't it dangerous to store a token in the OS as well?

I might never build a side project. But damn, I wish there weren't bad people willing to steal data out there.

loughnane
0 replies
5h50m

I struggle with this too. I've got a side project now that's a django app w/ a typical database setup. I'd _love_ to set up some sort of E2EE along with it, but the support for a novice like me to do that isn't really there.

Maybe once it matures a bit I'll deploy E2EE, but I gather it's a bunch of work.

RaoulP
0 replies
13h20m

Interesting, thanks. See also Fnoord's comment elsewhere in this thread.

numbers
1 replies
20h33m

I am more interested in the author's blog navigation, so cool!

mcdonje
1 replies
14h56m

Semi-related: the point of GDPR opt-outs is that they're offensive to the user, which should give site owners pause. "Is it really worth it to jump through these hoops and give the users annoying popups so we can set trackers?"

Instead, the industry rallied around popups and tried to shift the blame to the GDPR.

iamacyborg
0 replies
7h46m

A worrying number of folks in industry seem to think the GDPR mandates popups in the first place.

erehweb
1 replies
20h6m

The thing I wonder is - how will Matter make money? Is the plan to just get this via subscriptions?

Almondsetat
0 replies
19h37m

how will Matter make money?

it won't, Matter

cheema33
1 replies
15h8m

If one of our users' data became valuable to an evil nation state and they kidnapped my family, I'll be honest, I'd probably have to hand over the data.

How is this prevented when the evil nation state asks you to modify your app's code to do what you said you wouldn't do, i.e. steal the data?

gnicholas
0 replies
15h3m

Easy: have a livestream of his family and if they’re ever unexpectedly absent then assume that a backdoor was just installed.

tppiotrowski
0 replies
16h6m

I try to follow this mantra on my website. I don't want user accounts so if you pay for something I email you a unique link to access it. No login and password and you can use a burner email address if you'd like. I don't care. I don't worry what happens with my user data if I get acquired because no one will ever want to buy a business with 0 registered users. :)
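
The mechanics are about this simple (a sketch; the URL and in-memory store are illustrative):

    import secrets

    TOKENS: dict[str, str] = {}  # purchase_id -> token; a real app would persist this

    def access_link(purchase_id: str) -> str:
        token = secrets.token_urlsafe(32)  # ~256 bits, unguessable
        TOKENS[purchase_id] = token
        return f"https://example.com/access/{token}"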

sonicanatidae
0 replies
4h13m

Sadly, this will never occur because there is too much money to be made by selling this same consumer data and you can't sell what you don't store.

rangestransform
0 replies
15h25m

Good principle but the godforsaken states of America will never allow it, KYC and AML laws force financial service providers to keep pictures of your ID for eternity

mxuribe
0 replies
1h5m

This is why I wish projects like TBL's Solid [https://en.wikipedia.org/wiki/Solid_(web_decentralization_pr...] would take off some more. I happily pay providers for an application (or at least the service/utility that an app provides), because that is often the value they bring to bear for my benefit... but the data, ah, the data is something I don't want anyone to control but me. Projects like Solid pave a possible path forward that *could possibly* enable an ecosystem where we can still legitimately pay a provider for the value they provide, a la an app or service, but where we as users would exert maximum control over our data sovereignty. I hope this author and others continue to keep thinking of data in this way.

montereynack
0 replies
20h20m

I sympathize a lot with the headline statement; it boggles my mind, all the data residency/integrity/confidentiality measures taken around massive data silos (as well as the infra teams companies bring to bear to manage and scale them, and then inevitably publish gospel articles on the web about), when companies could just opt… NOT to collect that data. I really like the model of “It stays on your device, we never see it. At most we get bare-minimum location statistics.” Although I question the assertion that their metrics system won’t be turned against them; it seems obvious that anything programmed can be reprogrammed or updated, especially in the modern update-focused age. I don’t think they addressed that beyond a general statement that they took pains to ensure that their users won’t ever be spied on. Would be interested in a technical article on that.

Side note, we at Sentinel Devices are taking exactly this “we don’t hold your data” approach for industrial machinery. Think automated AI pipelines that are air-gapped. And we’re hiring! If you’re interested, reach out to hello@sentineldevices.com

masterrr
0 replies
18h59m

It's a very sensitive topic for health data! Too many apps are sending data left and right (the 23andMe scandal...); very few apps (e.g. Carrot Care on iOS) adopt such a great philosophy.

krebsonsecurity
0 replies
17h2m

This is the way. You don't have to protect what you don't collect. Mullvad is an excellent example of this. They don't even want you to pick a password, and they're fine if you just mail them cash as payment.

jcalx
0 replies
3h11m

Similarly from Notes From An Emergency [1] by Maciej Cegłowski:

"It's not clear that anyone can secure large data collections over time. The asymmetry between offense and defense may be too great. If defense at scale is possible, the only way to do it is by pouring millions of dollars into hiring the best people to defend it. Data breaches at the highest levels have shown us that the threats are real and ongoing. And for every breach we know about, there are many silent ones that we won't learn about for years.

A successful defense, however, just increases the risk. Pile up enough treasure behind the castle walls and you'll eventually attract someone who can climb them. The feudal system makes the Internet more brittle, ensuring that when a breach finally comes, it will be disastrous."

[1] https://idlewords.com/talks/notes_from_an_emergency.htm

cvalka
0 replies
13h40m

Don't request it. Don't store it. Don't keep it.

arkh
0 replies
9h58m

That's one of the principles highlighted by the GDPR: data minimization. Once you can be fined for losing data, it suddenly is not free. No more "let's store everything and see what we can do with it later".

andai
0 replies
7h8m

Matter is an iPhone app, so we store data on your phone with Core Data, and in a private database that syncs within your iCloud account, but is set up in a way that even we can't access it.

But Apple can? From the title, "We outsourced storing user data to an evil megacorporation" isn't exactly what I was hoping for...

aksss
0 replies
18h41m

For a SaaS app I built, I am using a third-party IDP/IDaaS, and I thought about holding all user data as metadata in that directory, so the only thing my app database stored was a foreign key for the user, letting me really leverage the warrantied security of the provider. In the end I needed faster access to the user metadata, so now I am storing it in the database. Meh. Not sure if the first idea was a good one or not - I know that depending on the type of breach, user info stored in the directory could still be accessed - but I was really attracted to the idea of not having a damn bit of it in my own db, for the purpose of getting just a bit closer to the ideal of not leaking it if it's not there.

SecurityLagoon
0 replies
15h47m

Exactly.

I work in cyber security and I am more convinced by the day that the answer is not having the data to steal rather than attempting to mitigate every possible threat.

This combined with zero trust and 2fa/passkeys will go much further than many other snake oil solutions the industry loves.

ComplexSystems
0 replies
1h31m

When users add memories to the app, they'll usually add content such as images. We don't want to (and we don't) hold these, either—at least not in a way we can see them. We primarily store these images on your device, but because the size of this storage is limited, we do have a system for storing assets such as images that have been encrypted on-device, and the actual photo contents or the decryption keys are never sent to us. We store data for users, here, but to us it looks like random noise (the binary ciphertext), never like a photo of whatever it is you're storing a photo of.

Just for clarification, you are storing on your servers encrypted versions of all of the pictures the user enters? Just not storing keys?

If so - storing encrypted user data is not necessarily the end of the world, but why advertise it as though you aren't storing user data at all? You are doing what many other companies do, which is store encrypted user data. Backblaze does the same thing. Or maybe I am misunderstanding.
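
If I'm reading it right, the scheme is something like this (a sketch using the Python cryptography package as a stand-in for whatever they do on-device; the upload is a placeholder):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # generated and kept on the device
    ciphertext = Fernet(key).encrypt(open("photo.jpg", "rb").read())

    # upload(ciphertext)  # hypothetical: the server stores random-looking bytes
    plaintext = Fernet(key).decrypt(ciphertext)  # only possible with the local key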

ChrisMarshallNY
0 replies
19h56m

This aligns pretty well, with my own PoV.

I can tell you that it has not made me popular with my coworkers. This whole blasted industry has become completely drenched in PID harvesting, and incredibly casual treatment of said PID. My solitaire apps are constantly trying to get me to sign up for leaderboards and challenges.

I have been denying pig-butchering (most likely) signups for our new app, at about a 30% rate. It's pretty damn sobering (each signup is manually vetted. We don't really care about quantity). We are restricted to US, Canada, and India, and have barely made any efforts to promote the app, but the scammers jumped all over it.

Right now, they are primitive (we have a specific demographic that is hard to fake), but I expect that to change.

I have just come to accept that baddies will get in, so it's important that the liquor cabinet be empty, if they try raiding it.

Animats
0 replies
14h9m

Are they willing to contractually commit to not holding the user's data, with penalties? If not, they're not serious.

Remember "Facebook - It's free and always will be."