
I Received an AI Email

ossyrial
96 replies
10h0m

The author links to the somewhat dystopian blog where the email sender is quite proud of their work. Their words (or perhaps those of an LLM):

Could an AI agent craft compelling emails that would capture people's attention and drive engagement, all while maintaining a level of personalization that feels human? I decided to find out.

The real hurdle was ensuring the emails seemed genuinely personalized and not spammy. I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately.

Incredibly, not a single recipient seemed to detect that the emails were AI-generated.

https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.

How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

thrance
38 replies
7h59m

I remember seeing a talk from Jonathan Blow where he made a comparison: in the 1960s top engineers worked for NASA and put a man on the moon in a decade, basically doing computations by hand. Today, we have super advanced computers and tech companies employ 100 times more of the top engineers than NASA ever had, and they are all working toward making you click on ads more.

segmondy
21 replies
6h29m

Which do you think is more important? Putting man on the moon or ecommerce? I reckon you've been able to get on a device, see a biscuit ad, order one from foo.com and have it shipped to you. Think of how much tech it takes for that to happen; that is more tech than NASA built to send man to the moon: the internet, packet switching, routing, fiber optics, distributed systems, web servers, web browsers, ads, cryptography, online banking, and so on and so forth. We love to trivialize what is common, but clicking on an ad is not an easy problem. Clicking on ads has generated enormous wealth in the world, which is now bootstrapping AGI.

Clicking on ads helped get us to the AI of today. Showing you the right ad and beating those trying to game it is machine-learning heavy. When was the first time we started seeing spelling correction and next-word suggestions? It was in the Google search bar. Serving the correct ads and dealing with spam? Heavy NLP algorithms. If you stop and think about it, we can draw a direct line from the current state of LLMs to these ad clicks you are talking about.

Jgrubb
10 replies
6h12m

That last line kind of makes the point. Is any of that actually inspiring to a young child?

thedevilslawyer
8 replies
5h38m

It sure as heck is inspiring to a critical thinking adult. There's been enormous value added to all the world's citizens.

commodoreboxer
4 replies
5h4m

Interesting. In my experience, advertisement and the incentives around it have led to the most devastatingly widespread removal of value in human culture and social connections that we've seen in this generation. Huge amounts of effort wasted on harvesting attention, manipulating money away from people, isolating and fostering extremism, building a massive political divide. And centralizing wealth more and more. The amount of human effort wasted on advertisement is staggering and shocking.

I don't think your average adult is inspired by the idea of AI generated advertisements. Probably a small bubble of people including timeshare salesmen. If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them. I don't understand how anybody can consider something like that a net good for the world.

How does non-consensually harassing people into spending money on things they don't need add value to all the world's citizens?

mrtranscendence
2 replies
3h13m

If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them.

I mean, you'd see the same thing if paying for your groceries were opt-in. Is that also a net bad for the world? Ads do enable the costless (or cost-reduced) provision of services that people would otherwise have to pay for.

mostlysimilar
0 replies
3h2m

I mean, you'd see the same thing if paying for your groceries were opt-in.

Is that seriously the comparison you want to make here? Most of us think the world would be better if you didn't have to pay for food, yes.

commodoreboxer
0 replies
2h0m

Ads are not charity. There is clearly a cost, otherwise they would lose money. They do not generate money out of thin air. "Generate" and "extract" aren't synonyms.

They do not enable anything costless at all. They obfuscate the extraction of money to make it look costless, but actually end up extracting significant amounts of money from people. Ad folks whitewash it to make it sound good, but extracting money in roundabout ways is not creating value.

ryandrake
0 replies
3h56m

"Adding value" and "Generating wealth" are always the vague euphemisms that these guys fall back to when they try to justify much of today's economic activity. Adding value for who? Generating whose wealth? The answer is usually "people who are already wealthy." Of course, they'll downplay the massive funneling of wealth to these people, and instead point to the X number of people "lifted out of poverty in the 20th century" as if capitalism and commerce was the sole lifting force.

I wish some of these people would think about how they'd explain to their 5 year old in an inspiring way what they do for a living: And not just "I take JSON data from one layer in the API and convert it to protobufs in another layer of the API" but the economic output of their jobs: "Millions of wealthy companies give us money because we can divert 1 billion people's attention from their families and loved ones for about 500 milliseconds, 500 times a day. We take that money and give some of it to other wealthy companies and pocket the rest."

raxxorraxor
1 replies
3h10m

I think this is a rationalization of an enormous waste of work. The wealth-generating effects are indirect. In that regard you could argue that betting generates wealth too. Advertising is like a hamster wheel people have to jump onto if they want their place in the market.

A similar amount of wealth would be generated if every advertised product were represented by a text description, but we have a race to the bottom.

There is advertising and advertising, of course, but most advertising is incredibly toxic, and I would argue that by capturing attention it is a huge economic drain as well.

Of course an AI would also be quite adept at removing unwanted ads, which I believe will become a reality quite soon.

mlyle
0 replies
2h21m

A similar amount of wealth would be generated if every advertised product were represented by a text description, but we have a race to the bottom.

I fear statements like this go too far. I can't agree with the first part of this sentence.

I feel this about both marketing and finance:

They are valuable fields. There are huge amounts of activity in these fields that offer value to everyone. Removing friction on commerce and the activities that parties take in self-interest to produce a market or financial system are essential to the verdant world we live in.

And yet, they're arms races that can go seemingly infinitely far. Beyond any generation of societal value. Beyond needless consumption of intellect and resources. All the way to actual negative impacts that clutter the financial world or hamper the ability to communicate effectively in the market.

swat535
0 replies
3h2m

enormous value added to all the world's citizens

This is quite a statement to make.

Please elaborate on what enormous value spam ads and marketing emails have added to _world_ citizens?

Unless of course by “world” you mean Silicon Valley venture capitalists..

runlaszlorun
0 replies
3h54m

Is any of that actually inspiring to a young child?

I think the answer is pretty clear in the fact that so many of them, bluntly speaking, just don’t give a shit any more. I absolutely don’t blame them.

commodoreboxer
4 replies
5h41m

I keep hearing the phrase "generate wealth" in regards to advertisement and from the mouths of startup founders, but in almost no other context. I'm not familiar with the economic concept of "wealth generation" or its cousin "creating value".

Is the idea that any and all movement of money is virtuous? That all economic activity is good, and therefore anything that leads to more economic activity is also good? Or is it what it sounds like, and it just means "making some specific people very wealthy"? Wouldn't the more accurate wording be that it "concentrates wealth"? I don't see a huge difference between the economic output of advertisement and that of most other scams. A Ponzi scheme also uses psychological tricks to move money from a large number of people to a small number of people. Getting people to spend money isn't inherently a good thing.

runlaszlorun
2 replies
4h4m

Is the idea that any and all movement of money is virtuous?

Maybe this was your point, but this is built into one of the definitions of GDP, isn't it? Money supply times velocity of money?
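
(For reference, the identity I'm thinking of is the equation of exchange, which is strictly an accounting identity rather than a definition of GDP:

    M × V = P × Y

where M is the money supply, V the velocity of money, and P × Y nominal GDP.)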

I’m no economist though I’m sure there are folks on here who are. But this seems like an unfortunate fact that’s built into our system- that as laypeople we tend to assume that ‘economic growth’ means an increase in the material aspects of our life. Which in itself is a debatable goal, but our GDP perspective means even this is questionable.

For example, take a family of five living out in a relatively rural area. In scenario one, both parents work good-paying remote tech jobs and meals, childcare, maintenance of land and housing, etc. are all outsourced. This scenario contributes a lot according to our economic definitions of GDP. And it provides many opportunities for government to tax and companies to earn a share of these money flows.

Then take scenario two: the same family, but living off the grid as much as possible, raising or growing nearly all their own food, with the parents providing whatever education there is, etc. In this scenario, the measurable economic activity is close to zero, even if the material situation could be quite similar. Not to mention quality of life might be rated far higher by many.

What rating an economy by the flow of its money does do, and I'm not sure if this is at all intentional, is paint a picture of what money flows are potentially capturable, either by government taxation or by companies trying to grab some percentage as revenue. It's a lot harder to get a share of money that isn't there and/or isn't moving around.

Perhaps my take on economics is off base but, for me, seeing this made me realize just how far off our system is from what it could and should be.

commodoreboxer
1 replies
1h42m

GDP is a measure. I'm very much not an economist, but I am extremely skeptical that the health of an economy can be reduced to any single number. Goodhart's law and all.

I concede that GDP is a good indicator, but I think you can have things that help GDP while simultaneously hurting the economy. Otherwise any scam or con would be considered beneficial, and it would make sense to mandate minimum individual spending to ensure economic activity. A low GDP inherently shows poor economic health, but a high GDP does not guarantee good health.

In my mind (noting, again, that I'm no economist), economic health is defined by the effectiveness of allocating resources to things that are beneficial to the members of that economy. Any amount of GDP can be "waste", resources flowing to places where they do not benefit the public. As Robert Kennedy famously pointed out, GDP includes money spent on addictive and harmful drugs, polluting industries, and many other ventures that are actively harmful.[0]

[0]: https://youtube.com/watch?v=3FAmr1la6w0

pixl97
0 replies
1h17m

Going back to the previous poster's monetary velocity statement: if you have a trillion-dollar GDP, but it's just two AIs bouncing money back and forth at high speed while all the humans starve in the street, your economy is "great" and totally awful at the same time. The one number has to be referenced against others, like wealth inequality.

cooolbear
0 replies
1h44m

"Generate wealth" means "make somebody's number go up" i.e. allocating real resources/capital somewhere, with the assumption that 1. allocating that capital creates a net boon for society and 2. those who have "generated wealth" are wise and competent investors/leaders and their investments will create a net boon elsewhere. The first point is probably not especially true very often in contemporary tech (other than 'job creation') and is arguably not true for advertisement. The second point is not really a given at all and seems to be pretty consistently shown otherwise.

mxkopy
0 replies
5h47m

In the grand scheme, what you’re talking about is very zero-sum, while stuff like making rockets is not. Uber vs Waymo is a good example of how adtech can only go so far in actually creating wealth.

lukan
0 replies
5h50m

"Putting man on the moon or ecommerce"

The comparison here is between the moon landing and advertisement. So I choose the moon, obviously.

Ecommerce can work just the same without LLM-augmented personalized ads, or without advertisement at all. If a law banned all commercial advertisement, people would still need to buy things. But who would miss the ads?

ksynwa
0 replies
5h55m

They are clearly talking about one aspect of the industry which is the marketing part related to maximising engagement. It is not meant to be conflated with the e-commerce industry as a whole.

digdugdirk
0 replies
5h48m

It took way too long to convince myself this wasn't satire. I still wish it wasn't.

It made me realize that I think many computing people need more of a fundamental education in "hard" physics (statics, mechanics, thermodynamics, materials science) in order to better understand the staggering paradigm shift that occurred in our understanding of the world in the early 20th century. Maybe then they would appreciate how much of the world's resources have now been directed by the major capital players towards sucking the collective attention span of humanity into a small rectangular screen, and the potential impact of doing so.

batch12
0 replies
6h15m

Marketing manipulation and spam is less important.

echelon
7 replies
7h6m

Just wait. Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.

A sufficiently advanced personal assistant AI would use multimodal capabilities to classify spam in all of its forms:

- Marketing emails

- YouTube sponsorship clips

- Banner ads

- Google search ads

- Actual human salespeople

- ...

It would identify and remove all instances of this from our daily lives.

Furthermore, we could probably use it to remove most of the worst parts of the internet too:

- Clickbait

- Trolling

- Rage content

I'm actually really looking forward to this. As long as we can get this agent into all of the panes of glass (Google will fight to prevent this), we will win. We just need it to sit between us and everything else.
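
A minimal sketch of what that gatekeeper might look like, assuming an OpenAI-compatible chat API (the model name, category list, and function names are illustrative, not a real product):

    # Sketch of an LLM-based content gatekeeper. Everything here is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    CATEGORIES = ["marketing", "sponsorship", "banner ad", "search ad",
                  "sales pitch", "clickbait", "trolling", "rage bait", "ok"]

    def classify(text: str) -> str:
        """Ask the model to label a piece of content with exactly one category."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Label the user's content with exactly one of: "
                            + ", ".join(CATEGORIES) + ". Reply with the label only."},
                {"role": "user", "content": text},
            ],
        )
        return response.choices[0].message.content.strip().lower()

    def filter_feed(items: list[str]) -> list[str]:
        """Keep only the items the classifier labels 'ok'."""
        return [item for item in items if classify(item) == "ok"]

The classifier is the easy part; the hard part is getting it wedged into every pane of glass.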

madamelic
3 replies
7h1m

Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.

Until _that_ company gets overrun by profit-driven MBAs, and then they start injecting ads into the results.

It will come in the vein of "we are personalizing the output and improving responses by linking you with vendors that will solve your problems".

mostlysimilar
1 replies
3h0m

Until _that_ company gets overrun by profit-driven MBAs, and then they start injecting ads into the results.

Found companies with people that share your values. Hire people that share your values. Reject the vampires. Build things for people.

pixl97
0 replies
1h11m

Unfortunately it turns out that at the end of the day one of the most common values is the love of massive piles of money. Vampires don't catch on fire in sunlight like storybook villains, they will invite themselves in, sidle up beside you, and be your best friend. Then in the moment you are weak they will plunge their fangs in.

Competing with bad actors is very, very hard. They will be fat with investor money, they will give their services away, and commonly they are not afraid to do things like DDOS to raise your costs of operations.

echelon
0 replies
6h58m

There will be a uBlock Origin for that.

Bluecobra
1 replies
6h0m

I get what you are saying, but what is the end result when someone is so shielded from the outside world, blocking everything that irks them, that they end up stuck in an echo chamber?

What if the user is a conservative voter who considers any counterpoint to their world view the worst part of the internet and removes all instances of it from their daily life? Not to say that isn't already happening, but at least they are consciously making the choice, not some AI bot. I can see something like this making the country even more polarized.

echelon
0 replies
5h8m

Same as it ever was.

Growing up as a southern evangelical before the internet, I can promise you that there has never been a modern world without filter bubbles.

The concept of "fake news" is not new, either. There has been general distrust of opposing ideas and institutions for as long as I've been alive.

And there's an entire publishing and media ecosystem for every single ideology you can imagine: 700 Club, Abeka, etc. Again, this all predates the internet. It's not going anywhere.

The danger isn't strictly censorship or filter bubbles. It's not having a choice or control over your own destiny. These decisions need to be first class and conscious.

Also, a surefire way to rile up the "other team" is to say you're going to soften, limit, or block their world view. The brain has so many defenses against this. It's not the way to change minds.

If you want to win people over, you have to do the hard, almost individual work of respecting them and sharing how you feel. That's a hard, uphill battle because you're attempting to create a new slope in a steep gradient to get them to see your perspective. Angering them, making fun of them, or disrespecting them is just flying headfirst into that mountain. It might make you feel good, but it undoes any progress anyone else has made.

the__alchemist
0 replies
6h1m

This was present in the book Fall; or, Dodge in Hell (published in 2019; it takes place in the near future). Everyone had a personal AI assistant, as you describe, to curate the internet. A big part of the motivation was to filter the spam. A secondary effect was that the internet was even further divided into echo chambers.

sandworm101
3 replies
7h36m

Someone decided that marketing is now a tech problem. Artists have been replaced by software engineers. The net result is creepy AI emails.

I fell for old-school marketing yesterday. I'm moving into a new apartment in a couple of months. The local ISP who runs fiber in my new building cold-called me. I agreed over the phone to set up the service. That was proper targeted marketing. The person who called me knew the situation and identified me as a very likely customer with a need for service (the building has a relationship with the ISP). I would never have responded to an email or any whiff of an AI chatbot. They only made the sale because of expensive human effort.

loa_in_
1 replies
6h10m

It's the tech that put you in a queue to be called.

sandworm101
0 replies
5h58m

There was no tech here. My new landlord contacted the local ISP, the one they liked to work with, to say they had a new tenant arriving soon. I'd bet that my connection will have been set up long before I arrive, at a time convenient to the landlord and the local provider. A landlord recommending a favored local vendor to a tenant, or a tenant to a vendor, is the sort of human relationship that predates electricity.

pseudalopex
0 replies
7h3m

Sales people were never artists. Cold calling is not art.

paxys
1 replies
6h40m

And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.

The "smart people are all working in advertising" trope is idiotic. Just an excuse for people to justify their own laziness. There is an infinite number of opportunities out there to make the world better. If you are ignoring them, that's on you.

runlaszlorun
0 replies
3h44m

And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.

Which is true. But clearly far fewer people work on that than in advertising or some other seemingly meaningless grunt work. And I'm including the technological plumbing work which many on this site, myself included, have depended upon to support themselves and/or a family.

Which at best is effectively doing minor lubrication of a large and hard to comprehend system that doesn’t seem to have put society as a whole in a particularly great place.

bamboozled
0 replies
7h19m

It’s as if real issues like climate change aren’t a thing that needs solving…

aswegs8
0 replies
6h16m

Financial incentives, huh?

mns
15 replies
9h14m

I keep seeing these posts on HN and thinking, man, these are some smart people. Training LLMs, doing all this amazing AI stuff like this guy with the email agents and the other guy with the dropping of hats, and then I open the posts and it's just some guy making API requests to OpenAI or some similar disappointment.
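
For context, the "API requests to OpenAI" part usually amounts to little more than this (a hypothetical sketch using the OpenAI Python client; the model name and prompt are placeholders):

    # The entirety of many "AI agent" projects: a prompt and one API call.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(ask("Summarize this paragraph in one sentence: ..."))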

brabel
9 replies
8h7m

Nowadays, an "AI Expert" is someone who knows how to download an AI client lib and prompt the AI to perform tasks. These are people who are not even technical and have no idea how any of this works, but they can at least follow a YouTube tutorial to get a basic website working.

paulluuk
4 replies
7h4m

As someone who actually has a university degree in Artificial Intelligence, I feel like this is always how it's been. Before, an "AI Expert" was someone who knew how to use Tensorflow, PyTorch or Keras. Before that, an "AI Expert" was someone who knew how to write a Monte Carlo simulation, etc etc.

You could of course say the same for frontend engineer or backend engineers. How many frontend engineers are simply importing Tailwind, React, etc? How many backend engineers are simply importing apache packages?

Where do you draw the line? Can you only be an AI expert if you stick to non-LLM solutions? Or are AI experts the people who have access to hundreds of millions of USD to train their own LLMs? Who are the real AI experts?

internet101010
2 replies
6h43m

I would liken it to cars. There is a difference between engineers, mechanics, and mechanics that know a certain car so well that they fabricate parts that improve upon the original design.

aswegs8
1 replies
6h13m

Good comparison. Engineers who build cars and understand their intricacies oftentimes just work on one small thing at a time, even in teams. Like a team just working on brakes. The mechanics can piece the stuff together and keep it working in a real-world setting. But nowadays a self-declared "AI Expert" in that metaphor might just be some person who knows how to drive a car.

jtbayly
0 replies
5h46m

I used to work on brakes, but then I realized I was more productive when I actually stopped and walked around a bit.

the_cat_kittles
0 replies
6h11m

I draw the line at people claiming to be experts in something they have only done for a year.

macocha
1 replies
8h2m

Also in most cases they were a "crypto expert" just two months ago.

ryandrake
0 replies
3h44m

And they were a leadgen/SEO expert a few years ago. These technogrifters just move from one hot topic to the next trying to make whatever buck they can smooth talk people into giving them.

shzhdbi09gv8ioi
0 replies
6h40m

Business as usual iow. Used to be scrum masters, then javascript "experts", then crypto bros.

Snake oil salesmen we called em back in my day ;-)

jtbayly
0 replies
5h51m

Someone who can get a website working is actually technical.

mattgreenrocks
3 replies
6h27m

It’s more about being on the front of the hype train and being endlessly positive versus competence.

thih9
2 replies
5h23m

I can't see this working long term though. Being endlessly positive and ignoring your actual competence sounds like a recipe to eventually bite off more than you can chew.

mattgreenrocks
1 replies
4h28m

Oftentimes this fervor is channeled into personal brand building, which rarely has any sort of feedback mechanism tied to actual competence.

It's a calculated move on their part.

thih9
0 replies
1h34m

Brand building actually sounds good and productive to me, as long as it doesn’t approach fraud.

If your audience likes your brand and doesn’t distinguish between your services and services done by more competent providers, then you’ve found your niche. So: snake oil is not fine; but Supreme branded brick sounds ok to me, even if I wouldn’t buy it myself.

I guess the author will find followers who enjoy that approach to software and product growth. If spamming wasn’t part of it, I’d be ok.

biztos
0 replies
7h26m

When “altcoins” took off I spent a while racking my brain trying to figure out what special tech I could offer, how I could build my own blockchain, incentivize miners…

When I realized it was just dudes copy-pasting a “smart contract” and then doing super shady marketing, it was already illegal in my jurisdiction.

highspeedbus
9 replies
8h22m

Things I wish would become taboo: admitting to using AI content.

Everyone is so comfortable doing shit like this.

rpgwaiter
6 replies
8h16m

I much prefer admission to hiding it. It lets you easily see who doesn’t deserve your time

jacobgkau
4 replies
8h2m

While that might work great on the individual level for a little while, it's unfortunately not how normalized taboos seem to work long-term. You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.

daemin
2 replies
6h34m

This has been mentioned before, but I can see the benefit of having curated webrings and similar listings, where people can verify the content is not LLM-generated.

ryandrake
1 replies
3h39m

As soon as that becomes effective, you'll have dozens of SEO sites and experts giving seminars on "How to get your LLM-generated website into curated webrings." An entire cottage industry will spring up for the purposes of corrupting legitimate webrings and/or creating fake astroturf webrings that claim to be curated.

pixl97
0 replies
1h0m

Oh, what about the petty fights between different webrings accusing each other of using AI generated content....

Reminds me of the early days of the web.

beezlebroxxxxxx
0 replies
6h32m

You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.

I can see it, perhaps positively, leading to far less importance and effort being invested in online things. With admittedly a lot of optimism, I could see it leading to a resurgent arts and crafts movement, or a renewed importance put on hand-made things. People say "touch grass"; maybe AI will make people "touch crafts" (bad joke, I know).

yard2010
0 replies
7h32m

That's boasting, not admission

__loam
0 replies
6h48m

It's already like this for creative communities in things like illustration and writing. You will (rightly) get ostracized and blocked by your peers for using AI. It's a signal for poor quality for most people in those spaces.

Definitely interesting to see the different culture in tech and programming since programmers are so used to sharing code with things like open source. I think programmers should be more skeptical about this bullshit, but one could make the argument that having a more flexible view of intellectual property is more computer native since computers are really just copying machines. Imo, we need to have a conversation about skills development because while art and writing accept that doing the work is how you get better, duplicating knowledge by hand in programming can be seen as a waste of time. We should really push back on that attitude though or we'll end up with a glut of people in the industry who don't understand what's under all the abstractions.

BoxOfRain
0 replies
7h4m

I think it depends on the context. I think there are artistic cases for it; for example, I've played around with using AI tools to extract speech from its background music for use in further (non-AI-based) music, which I don't think is an unethical thing to do.

cornholio
6 replies
9h15m

News at 11, spammers use sophisticated techniques to increase the profitability of spam. This is absolutely shocking and never before seen, what is the world coming to.

In all seriousness, manipulation and bullshit generation emerge as the single major real-world use of AI. It's not good enough yet to solve the big problems of the world: medical diagnostics, auto accidents, hunger. Maybe it's a somewhat better search tool, maybe a better conversational e-learning tool, barely a better IntelliSense.

But, by God, is it fantastic at creating Reddit and X bots that amplify the current line of Chinese and Russian propaganda, upvote and argue among themselves on absurd topics to shittify any real discussion and so on.

CoastalCoder
4 replies
9h5m

I don't think this is unique to this specific technology.

People can be both wonderful and despicable, regardless of era or mechanism.

cornholio
3 replies
8h48m

Sure, but I'm talking about the good:bad ratio of some creations. I really have strong hope for AI, and hope that we won't regard it in retrospect like the multi-stage thermonuclear device, the landmine, or tetraethyl lead additives.

squigz
1 replies
8h26m

Not to dismiss any of the negative aspects of "AI", but it seems utterly foolish to compare it to those 3 things.

orbitmode
0 replies
7h51m

In May, reports emerged of the suicide of a young man in Australia (not AI related): https://www.barefootinvestor.com/articles/this-is-the-hardes...

The following month, reports emerged of 50 girls in one Australian school being exploited in very similar ways by nothing more than a kid with a prompter.

https://www.abc.net.au/news/2024-06-25/explicit-ai-deepfakes...

Scaling this type of exploitation of children online is trivial for anyone with basic programming skills.

The Techno-Optimist Manifesto is what appears utterly foolish to me, once you notice that there is not one mention of accountability for downside consequences.

CoastalCoder
0 replies
8h30m

I hope you're right. I'm less optimistic.

brabel
0 replies
8h1m

X bots that amplify the current line of Chinese and Russian propaganda...

Do you think those countries are the only ones doing this? Just the other day there was a scandal about one of the biggest Swedish parties, one that's in the government coalition, doing exactly this. And that's just one that got caught. In countries like India and Brazil online disinformation has become an enormous problem, and I think the old Soviet joke now applies to the USA and Europe: "Their propaganda is so good their people even believe they don't have any".

tsukikage
4 replies
8h35m

This process should not require a human in the loop.

Consider:

* spammers have access to large amounts of compute via their botnets

* the effectiveness of any particular spam message can easily be measured - it is simply the quantity of funds arriving at the cryptocurrency wallet tied to that message within some time window

So, just complete the cycle: LLM to generate prompts, another to generate messages, send out a large batch, wait some hours, kick off a round of training based on the feedback signals received; rinse, lather, repeat, entirely unattended.

This is how we /really/ get AI foom: not in a FAANG lab but in a spammer's basement.

FeepingCreature
2 replies
8h22m

At least one sci-fi novel iirc had an AI spam filter achieve sentience, because the task basically amounted to contrastive-learning a model of humanity.

russnewcomer
0 replies
6h57m

That’s one of Peter Watt’s Rifters trilogy, I think maybe the second one? Been a few years since I read them. I think it’s a biological neural net, not an Ai per se. Lots of big ideas in those books, but not a lot of optimism and some rough stuff.

biztos
0 replies
7h23m

Having worked in the field, I think you’re more likely to achieve AGI by intelligently watering tomatoes in a hothouse.

Bluestein
0 replies
8h26m

Very well could be. Seconded. After all, it could very well become one of the largest vehicles for "mass training", ever ...

PS. However, see comments downthread about "survivorship bias". Not everybody will reply, so biases will exist.-

taurath
4 replies
8h17m

This is sort of why I feel somewhat pessimistic about AI: the inevitably most profitable use cases are so bad in aggregate for a society with almost no bounds or values other than profit. It will never be easier to waste people's attention.

yard2010
3 replies
7h30m

This is not a problem with AI but with a system in which there are no other values than "make the most money fast".

throwaway7ahgb
2 replies
6h12m

"No other values"? When and how is such Doomer Hyperbole getting into HN articles?

This is half of major reddit subs now and I fear the same low quality comments will take over HN.

People need to go out and touch some grass.

fl0id
1 replies
6h4m

or maybe you need to touch some grass.

runlaszlorun
0 replies
3h38m

I need some grass.

PlusAddressing
2 replies
9h14m

Funny how they're self-assured that no one whiffed their AI bullshit. This is survivorship bias; he's looking only at the planes that came back to base. The people who did detect it just didn't reply. He can't prompt them.

kuhewa
0 replies
9h2m

When you spend $200 on spamming people you need to believe it was effective

Bluestein
0 replies
8h25m

The planes coming back from the bombing raids in WWII come to mind.-

otherme123
1 replies
9h52m

The guy writes a post about how to send spam effectively, and then offers the subscription link in the end with "Promise we won't spam you". Yes, I totally trust you...

CoastalCoder
0 replies
9h9m

It sounds like extortion.

"I'm sending spam that sneaks past your spam filter. Sign up to make it stop."

londons_explore
1 replies
9h54m

> Incredibly, not a single recipient seemed to detect that the emails were AI-generated.

Of the people who replied. I bet plenty figured it out, but didn't bother to reply.

Lio
0 replies
8h23m

Expect to see someone else write a blog post on How I Used AI to fool an AI Spammer

...of course they'd probably get an LLM to write the article too.

thih9
0 replies
5h30m

Also from that blog post:

As founder, I'm always exploring innovative ways to scale my business operations.

While this is similar to what other founders are doing, the automation, the scale, and the email focus put it closer to spam in my book.

suoduandao3
0 replies
6h31m

I do believe that commodified attention is the most logical currency of a post-scarcity society, so best case... quite a lot.

Note my 'best case' scenario for the near future is pretty upsetting.

nojvek
0 replies
58m

How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.

Facebook + Instagram is a $100B+ business; so are YouTube and ads.

An average human now spends around 3 hours per day on their screens, most of it on social media.

We are dopamine-driven beings. Capturing attention and driving up engagement is one of the biggest parts of our economy.

mihaaly
0 replies
8h24m

It is not only that too much is wasted on superficial nothing instead of making something with substance that benefits society; it is also sucking away minds that could be engaged in really useful things.

lkdfjlkdfjlg
0 replies
7h59m

It's a shame the author's passions are directed to (...)

Now do Google.

joelthelion
0 replies
6h48m

The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.

In defence of that guy, he's only doing it because he knows it's what pays the bills.

If we want things to change, we need to fix the system so that genuine social advancement is what's rewarded, not spam and scams.

Not an easy task, unfortunately.

Waterluvian
0 replies
7h6m

I’m unsurprised to see a lot of very shallow usage of AI. Most users don’t have a real use case for the tool.

taylorius
78 replies
11h1m

The future: megawatts of electricity being used 24/7 as armies of LLMs email and debate each other, and try to sell each other programs at a great discount.

As for the humans, we went fishing instead.

sph
48 replies
10h51m

People cry about Bitcoin's energy usage now; imagine the amount of energy burned to create next-level spam with "AI".

Flame me all you want, but this is one case where Bitcoin is much more useful than LLMs. If it doesn't create value, as its naysayers claim, at least it allows exchanging value. LLMs, on the other hand, burn electricity to actively destroy the Internet's value, for the profit of inept and greedy drones.

throwaway0665
16 replies
10h43m

Bitcoin has one application, whereas there are multiple applications of LLMs. There might be mountains of noxious AI spam, but it's hard to claim that Bitcoin as a technology is more useful.

bbarnett
13 replies
9h14m

So far, I haven't seen a useful application of LLMs. So far.

I've seen things that are wildly hobbled, and wildly inaccurate. I've seen endless companies running around, trying to improve on things. I've seen people looking in wonder at LLMs making mistakes 2 year olds don't.

Most LLM usage seems to be in two categories. Replace people's jobs with wildly inaccurate and massively broken output, or trick people into doing things.

I'd have to say Bitcoin is far more useful than LLMs. You have to add the pluses, and subtract the minuses, and in that view, LLMs are -1 billion, and bitcoin is maybe a 1 or 2.

whiplash451
6 replies
9h3m

There is one clear (albeit somewhat boring) application of LLMs: data extraction from structured documents.

That field has made a leap forward with LLMs.

Positive impact on society includes automated extraction in healthcare pipelines.

immibis
3 replies
8h46m

Unstructured*

whiplash451
2 replies
8h29m

No, I really meant structured. Extracting data from structured documents is surprisingly hard when you need very high accuracy.

What I mean by structured is: invoices, documents containing tables, etc.

Extracting useful data from fully unstructured content is very hard IMO and potentially above the capacity of LLMs (depending on your definition of "useful" and "unstructured")
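
To make the structured case concrete, here is a minimal extraction sketch (assuming an OpenAI-compatible API; the model name and field names are made up, and a real pipeline adds validation and human review to reach high accuracy):

    # Sketch: pull a few fields out of raw invoice text as JSON.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def extract_invoice_fields(invoice_text: str) -> dict:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract invoice_number, total_amount, currency and "
                            "due_date from the invoice. Reply with JSON only and "
                            "use null for anything that is not present."},
                {"role": "user", "content": invoice_text},
            ],
        )
        return json.loads(response.choices[0].message.content)

The last few percent of accuracy is where it gets hard: totals that don't parse, tables that span pages, fields the model quietly invents.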

bbarnett
1 replies
8h24m

But this is why I made my complexity statement in my other reply.

Why are firms sending around invoices and tables instead of parseable data? Oh, I know the argument: it's "so hard to cooperate" on standards, etc.

Madness.

projektfu
0 replies
7h1m

Partly because the standards, such as X12, have a high startup cost, they aren't very opinionated about the actual content, and you have to get the counterparty on board to use them.

bbarnett
1 replies
8h28m

Healthcare pipelines! All well and good until hallucinations cause death or whatnot!

And why is this better than employing a human, or reducing complexity? It's not as if human wages are what causes hyper-expensive US healthcare.

This seems like a negative.

whiplash451
0 replies
7h38m

Right now there is no human; the data just goes nowhere (i.e. it is not used).

At some point we need to be optimistic and look for incremental progress.

brabel
2 replies
7h53m

So far, I haven't seen a useful application of LLMs. So far.

What?! Whole industries have already been changed by products based on them. I don't think there's a single developer who is not using AI to get help while coding, and if you aren't, sorry, but you're just missing out. It's not perfect, but it doesn't need to be. It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.

My wife is a researcher and has to read LOTS of papers. Letting AI summarize them has made her enormously more efficient at filtering out what she needs to go into in more detail.

Generating relevant images for blog posts is now so easy (you may not like it, but as an author who used to use irrelevant photos instead, I love it when it's used tastefully).

Seriously, I can't believe someone in 2024 can say with a straight face that there have been no useful applications of LLMs (almost all AI now is based on LLMs, as far as I know).

pseudalopex
0 replies
7h22m

I don't think there's a single developer who is not using AI to get help while coding

You are in a bubble.

It just needs to be better than StackOverflow and googling around for the docs or how to do things and ending up in dubious sites, and it absolutely is.

Subjectively. Not absolutely.

commodoreboxer
0 replies
5h17m

I don't think there's a single developer who is not using AI to get help while coding

It's banned at my company due to copyright concerns. Company policy at the moment considers it a copyright landmine. It does need to be "perfect" at not being a legal liability at the very least.

And the blog post image thing is not a great point. AI images for blog posts, on the whole, are still quite terrible and immediately recognizable as AI generated slop. I usually click out of articles immediately when I see an AI image at the top, because I expect the rest of the article to be in line: low value, high fluff.

There are useful LLM applications, but for things that play to its strengths. It's effectively a search engine. Using it for search and summarization is useful. Using it to generate code based on code it has read would be useful if it weren't for the copyright liability, and I would argue that if you have that much boilerplate, the answer is better abstractions, libraries, and frameworks, rather than just generating that code stochastically. Imagine if the answer to assembly language being verbose was to just generate all of it rather than creating compiled programming languages.

k8sagic
1 replies
9h9m

AI is not just LLMs. AlphaFold, for example, moved a critical goalpost for every one of us.

Bitcoin is only negative. It consumes terawatt-hours of energy for nothing.

HermanMartinus
0 replies
8h58m

And even if it were just LLMs, I use LLMs in my workflow every single day, and I've never used a/the blockchain except for some mild speculation around 2017.

Tainnor
0 replies
5h38m

I'm as skeptical about LLMs as anyone, especially when people use them for actual precision tasks (like coding), but what they actually are good at, IMHO, is language tasks. That is, summarising content, text generation for sufficiently formulaic tasks, even translation to an extent, and similar things.

yazantapuz
0 replies
6h47m

Well, a friend of mine built his house thanks to BTC's last ATH. Surely someone is cashing out Nvidia right now. Indirectly useful :)

freehorse
0 replies
9h19m

It is not about the quantity of the applications, but about the value they bring to society. If it is about spamming and advertising, we are even talking about negative value, actually.

TeMPOraL
11 replies
10h6m

Bitcoin is literally turning greed into money, by means of wasting exponentially increasing amounts of electricity. It doesn't just not create value - to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.

LLMs deliver value. Right here, today, to countless people across countless jobs. Sure, some of that is marketing, but that's not the LLMs' fault: marketing is what it always has been, it's just people waking up from their Stockholm syndrome. You've always been screwed over by marketers, and the Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything, except maybe that some of the jobs in this space will go away, which for once I say: good riddance. There are more honest forms of gainful employment.

LLMs, for all their costs, don't burn energy superlinearly. More importantly, for LLMs, just like for fiat money and about everything else other than crypto, burning electricity is a cost, an upkeep, that is being aggressively minimized. More efficient LLMs benefit everyone involved. More efficient crypto just stops working, because inefficient waste is fundamental to cryptos' mathematical guarantees.

Anyway, comparing crypto and LLMs is dumb. The only connection is that they both eat GPUs and their novelty periods were close together in time. But they're fundamentally different, and the hypes surrounding them are fundamentally different too. I'd say the "AI hype" is more like the dot-com bubble: sure, lots of grifters lost their money, but who cares? The technology was good; the bubble cleared out the nonsense and grift around it.

richrichie
4 replies
9h25m

It doesn't just not create value

Value is a subjective concept. One could argue that its value is that arbitrary quantities of it cannot be created by diktat.

- to be able to allow exchanging value, it fundamentally requires ever increasing waste, as the waste is what gives its mathematical guarantees.

One could argue that it takes a lot worse to maintain any currency, such as the USD, as a currency. The full force of government law enforcement will be unleashed on you if you decide to have your own currency. There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.

I do not hold BTC. Nor do I trade it. But to discuss as if other currencies have no cost is not rational.

k8sagic
2 replies
9h6m

But we do know that the Proof of Stake system we currently have is a lot cheaper and more advanced than what Bitcoin does.

Bitcoin doesn't yet solve any problem that is fundamental to our society and a fiat system, like the trust issue:

If I exchange 1 bitcoin with you for any service or thing outside of the blockchain, I need the whole proof-of-stake system protection of our normal existing money infrastructure: lawyers, contracts, etc.

And no, smart contracts do not solve this issue.

What is left? A small number of transactions per day with high fees, 'but' on decentralized infrastructure run by people we don't know, probably aggregated in data centers owned by big companies.

block_dagger
1 replies
8h13m

Proof of Work is far superior to Proof of Stake in a network where absolute fairness (security) is fundamental. Satoshi himself said he could find no other way.

Compare the energy spent on the global hash rate to all the energy spent on mining metals, physical banking, financial services middle persons, etc., if you want to talk about energy usage and make any kind of sense.

k8sagic
0 replies
7h56m

Yes, start comparing the energy spent on bitcoin mining and the missing features. You will see that bitcoin already consumes a lot more energy than our proof-of-stake system.

What do you do when you want to exchange 1 bitcoin for 1 car and the person with the car doesn't give you the car after the 'absolute fairness/security' of transferring bitcoin to their wallet? You go back to our Proof of Stake system. You talk to a lawyer. You expect the police to help you.

The smallest issue in our society is just transferring money from left to right. This is not a hard problem. And pls don't tell me how much easier it is to send a few bitcoins to Africa. Most people don't do this, and yes, Western Union exists.

Or try to recover your bitcoins. A friend has 100k in bitcoin but doesn't know the password anymore.

What do you do when someone breaks into your home and forces you to give them your bitcoin key? Yes, exactly: anonymous movement of money from you to them. Untraceable. Wow, what a great thing to have!

And no, Satoshi 'himself' is not an expert in the global economy. He just invented bitcoin, and you can clearly see how flawed it is.

TeMPOraL
0 replies
2h13m

There is a lot of "wastage" that goes to safeguard currency creation and storage and to prevent counterfeiting.

Yes. But the point I'm making is, none of that benefits from waste. The waste is something everyone wants to reduce. With Bitcoin, the trend is uniquely opposite, because the crypto system is secured through the aggregate waste being way larger than any actor or group can afford.

sph
0 replies
7h7m

You've always been screwed over by marketers, and the Internet has already been destroyed by adtech. Adding AI into the mix doesn't change anything

This is pure, complacent nonsense. "We have always been surrounded with spam, 10x more won't change anything."

Yeah, why improve the status quo? Why improve the world? Why recycle when there's a big patch of plastic in the ocean?

It's an argument based on a nonsensical, cynical if not greedy position. "Everyone pollutes, so a little more pollution won't be noticed."

ryandrake
0 replies
3h26m

Long term, LLMs are not going to create more actual value than the sum of their costs and negative externalities. Bookmark this comment and check me in 5 years.

immibis
0 replies
8h44m

Delivering value is not the same as creating it. Spam takes lots of value from many people, destroys most of it, and delivers a small fraction to the spammers.

helboi4
0 replies
9h43m

Dunno why you're being voted down, this is sort of true.

davidgerard
0 replies
8h20m

I'd disagree to a large extent, because the specific similarities are important:

* the VCs are often literally the same guys pivoting

* the promoters are often literally the same guys pivoting

* AI's excuses for the ghastly electricity consumption are often literally bitcoin excuses

I think that's an excellent start on the comparison being valid.

Like, I've covered crypto skeptically for years and I was struck by just how similar the things to be said about the AI grifters were, and my readers have concurred.

Rattled
0 replies
9h21m

Well said; too many people conflate AI and crypto, and dismiss both without understanding either. Crypto has demonstrated very limited benefit compared to its cost; exchanging value has been a solved problem for millennia. We're only beginning to understand what can be done with LLMs, but we can see some limits. Although it causes some harm, to say it doesn't create any value is ridiculous. We can't yet see if the benefits outweigh the costs, but it looks to me like they will.

k8sagic
8 replies
9h11m

AI solves gigantic issues and helps us with cancer, protein folding, potentially math and other fields of study, materials science, etc.

Bitcoin consumes as much energy as a country and has basically done nothing besides moving money from one group of people to a random other group of people.

And bitcoin is also motivated to find the cheapest energy independent of any ethical reasoning (taking energy from cheap Chinese hydro and disrupting local energy networks), while AI will get energy from the richest companies in the world (MS, Google, etc.), which are already working on being CO2 neutral 24/7.

kmacdough
2 replies
9h5m

The benefit is all for naught if it undermines the fabric of society at the same time. All these benefits will only go to the few who land on top of this mess.

It's continuing to widen the wealth gap as it is.

k8sagic
0 replies
9h0m

The wealth gap is widening while in parallel poorer people have better lives than ever.

We house, heat and give access to knowledge to a lot more people than ever before.

Cheap medical procedures through AI will help us all. The AI that will be able to analyse the X-ray picture from some third-world country? It only needs a basic X-ray machine and some internet. The AI will be able to tell you what you have.

I'm also convinced that if AGI happens in the next 10 years, it will affect so many people that our society will have to discuss capitalism's future.

HermanMartinus
0 replies
9h0m

Yeah, Bitcoin is dual-edged like that. Harming people and harming the planet.

EdwardDiego
2 replies
8h31m

Which gigantic issues has it solved? Curious to know.

k8sagic
1 replies
8h15m

I actually listed them directly after.

For example, AlphaFold: protein folding. It is also now used in fusion reactor plasma control.

EdwardDiego
0 replies
8h2m

It didn't solve protein folding. It led to new areas of inquiry, but it didn't solve it.

May I recommend reading Derek Lowe's "In The Pipeline" blog for a realistic discussion of the actual impact of Alphafold? [0]

And seeing as we don't have viable fusion yet, saying it "solved" it is really reaching. I'm sure it's helping, but solved? No.

[0]: https://www.science.org/topic/blog-category/ai-and-machine-l...

bergen
1 replies
9h0m

None of the problems in your first sentence are solved by LLMs. I do not dispute AI research and applications and their benefits, but the current LLM and generative AI hype is of no value to hard scientific problems. Otherwise I agree with you.

stavros
4 replies
10h39m

This is what spam always did, why is it different now?

justsomehnguy
1 replies
9h56m

Sending spam is... very energy efficient, compared to the LLM usage.

stavros
0 replies
9h50m

Yep, and thus very cheap, the exact thing you don't want spam to be.

sph
0 replies
7h9m

You didn't need a GPU to generate Cialis spam.

rwmj
0 replies
10h11m

It actually adds some cost to the spammer, so that could be good.

rpigab
4 replies
9h18m

Yes, that's quite right.

That's why I created EtherGPT, an LLM Chat agent that runs decentralized in the Ether blockchain, on smart contracts only, to make sure that value is created and rewards directly the people and not big companies.

By providing it just a fraction of just a bit north of 10% of the current fusion reactions occurring in our sun, and giving it a decade or two of processing time and sync, you can ask it simple questions like "what do dogs do when you're not around" and it will come up with helpful answers like "they go to work in an office" or funny ones like "you should park your car in direct sunlight so that your dog can recharge its phone using solar panels".

mionhe
2 replies
8h38m

Another AI response, or humor from an actual person?

waciki
1 replies
6h40m

Are LLMs even capable of humor? The attempts I've seen are not very funny.

meiraleal
0 replies
6h32m

Something must be very wrong with someone who continuously laughs at computer jokes so I don't think it will ever reach the level you are expecting (hopefully).

noman-land
0 replies
8h37m

I'm an Ethereum fan and I found this funny.

tiew9Vii
20 replies
9h14m

The irony

Everyone is paying lip service to global warming, energy efficiency, reducing emissions.

At the same time, data centers are being filled with power-hungry graphics cards and hardware to predict whether showing a customer an ad will get a click, generating spam that "engages" users, aka clicks.

It's like living in an episode of Black Mirror.

k8sagic
12 replies
9h3m

I disagree.

Datacenters save a lot more energy than they use. Just how much CO2 is saved when I can do my banking online instead of having to drive to a bank is significant.

The same goes for a ton of other daily things I do.

Is video producing CO2? Yes. But you know what creates a lot more CO2? Driving around for entertainment.

And the companies running those GPUs actually have an incentive to be CO2 neutral while bitcoin miners don't: they (1) already said they are going CO2 neutral, due to (2) marketing, and they will achieve it because (3) they have the money to do so.

When someone like Bill Gates or Suckerberg says 'let's build a nuclear power plant for AGI', then they will actually just do that.

croes
4 replies
8h29m

Is video producing co2? yes. But you know what creates a lot more co2? Driving around for entertainment

What's more likely, watching a movie online, drive to watch a movie in a cinema?

You know what creates a lot less CO2? Staying at home reading a book vor playing a board game.

Datacenters save a lot more energy than they make

I think you mean CO2. And I doubt that they actually save anything because datacenters are convenient so we use them more as alternatives with less convenience.

Like the movie example, we watch more and even bad movies if it's just a click on Netflix than we do if we have to drive somewhere to watch.

MS recently announced they missed their CO2 target and instead produce 40% more because of cloud services like AI

k8sagic
3 replies
8h3m

Have you checked how much CO2 a normal car drive creates vs. watching a movie online?

We need to be realistic here. We know what modern entertainment looks like, and it's not realistic at all to just 'read books' and play board games.

commodoreboxer
2 replies
5h32m

It is 100% realistic to read books and play board games. Both markets are massive, and board games in particular are having what I would consider a renaissance. Maybe it depends on your crowd, but everybody I know plays tabletop games and reads books.

mrtranscendence
1 replies
2h41m

You're missing the point. What's not realistic is to tell everyone that they should abstain from any type of entertainment that requires power (TV shows, movies, video games, etc) and should only read books and play board games instead. I don't care what kind of renaissance board games are undergoing, most people still only play the mass market classics, and then only rarely.

I don't know how much energy Netflix uses serving a movie, but playing a video game on my PC for two hours where I'm located might generate a kg of CO2. That's about as much as I'll breathe in a day. Relative to other sources of atmospheric CO2 I'm not that concerned.

commodoreboxer
0 replies
1h26m

My issue was with "we know what modern entertainment looks like" as if humans are now incapable of enjoying themselves without a screen. And you should care about a massive market increase when it's directly relevant to the point at hand. If the initial point was "we know what modern entertainment looks like, nobody plays board games or reads books", pointing out that the board game market has more than doubled in the past decade is far from irrelevant. It actually directly counters the point.

I agree with your second paragraph, and selling the "make better choices to save the world" argument is an industry playbook favorite. Environmental damage needs to be put on the shoulders of those who cause it, which is overwhelmingly industrial actors. AI is not useful enough to continue the slide into burning more fossil fuels than ever. If it spurs more green energy, good. If it's the old "well this is the way things are now", that's really not good enough.

quassy
1 replies
8h49m

Point 1, 2 and 3 all apply to miners as well and yet they never delivered on their promise.

k8sagic
0 replies
8h28m

The normal miners never said that. They just say this at conferences for simple greenwashing.

The normal miner doesn't go to those bitcoin conferences, they buy asics, put them in some warehouses around the world and make money.

fattegourmet
1 replies
8h10m

how much CO2 is saved when I can do my banking online instead of having to drive to a bank is significant.

And if the online bank wasn't sending a bunch of requests to a bunch of third party ad networks on every click, it would save even more.

k8sagic
0 replies
8h2m

Yes. But what are you implying? Entertainment + ad garbage is still a lot more CO2-efficient than printing flyers and sending those out.

yard2010
0 replies
7h26m

Don't even get me started with the rant about taking planes.

happyraul
0 replies
4h46m

This is a very limited perspective. There are many parts of the world not beholden to automobiles for transportation. Where I live, I can walk to the bank, and walk or ride a bike to entertainment. The alternative to data centers does not have to be driving an automobile somewhere.

austinjp
0 replies
7h44m

I think it's more nuanced than that. I used to walk to my bank, I can't do that any more because many branches closed. The bank now directs all interactions to happen via their app. In terms of emissions (and social interaction, particularly for vulnerable and isolated members of society) I think this is bad news.

But this is a complex calculus and - frankly - feels like a distraction from the issue. I don't want to get into the weeds of calculating micro-emissions of daily activities, I want climate responsibility and reduction in energy consumption across the board.

squigz
5 replies
8h24m

There's no irony or contradiction here. Some people are worried about climate change. Some aren't. Silly, yes, but I don't see the irony.

jacobgkau
4 replies
8h0m

The irony is that there seems to be overlap between the two groups-- e.g. highly educated tech workers.

squigz
2 replies
7h55m

Is there? How are you making that determination?

tharkun__
1 replies
7h18m

I would tend to agree with them even without actual data. Just probabilistically there is likely some overlap.

Whether there's enough for calling it irony is probably a different question.

squigz
0 replies
7h14m

Well fair enough.

sxv
0 replies
7h14m

i.e. investment bankers in hoodies

lukan
0 replies
5h45m

I see the bright side: the tech for large-scale computing gets mass produced, so all the legit use cases, like scientific simulations or LLMs for productive work, also profit. And if one really bright day humanity evolves beyond the current state of ad-driven everything, we can put all of it to use for real.

Till then, I will probably avoid communicating with strangers on the internet more and more. It will get even more exhausting when 99% of them are fake.

lannisterstark
1 replies
9h44m

In an optimistic POV of this, eh, why not?

if models handle my day to day minutia so I have more time, why the hell not...

(I know this is very optimistic POV and not realistic but still)

CoastalCoder
0 replies
8h54m

Because spam is incredibly selfish.

You're trying to take the time and attention of as many people as possible, without regard for whether or not they'll benefit.

One safeguard people have is knowing that it costs the sender something, in some way, to contact them. In this case, the sender's time and attention. LLM spam aims to foil that safeguard, intentionally.

squigz
0 replies
8h24m

Are LLMs able to make purchases?

muzani
0 replies
10h44m

I look forward to the dream job of writing LLMs that argue with strangers on the internet as opposed to the current dream job of improving ad click rates by 0.0016% per quarter.

masklinn
0 replies
9h58m

As for the humans, we went fishing instead.

To a farm upstate?

flir
0 replies
9h19m

If anyone hasn't read Accelerando, I heartily recommend it.

For one thing, it seems to be coming true.

damidekronik
0 replies
8h20m

Slightly similar: in Lem's novel, all war efforts are moved to the moon, where AI deployed by each nation continues an endless conflict. Peace on Earth is achieved, peace in the mailbox is achieved. https://en.m.wikipedia.org/wiki/Peace_on_Earth_(novel)

bottled_poe
0 replies
10h17m

This is why the internet as we know it is going to be driven into walled gardens. Closed by default.

ChilledTonic
64 replies
13h0m

I’m actually thrilled by this, as it means all the hack marketers that spam my inbox incessantly with whatever product they’re hucking - this time for sure perfect for my business, in spite of the fact I’ve ignored their last ten emails - are all out of a job, and good riddance.

The author sounds unfamiliar with this brand of marketing email, so I can see why it would come off disquieting to find it’s all AI - but it’s equally annoying from a human.

At least with AI sending this crap nobody can use these emails to justify their sales bonus.

kazinator
19 replies
12h48m

How do you know it isn't exactly the same people, with zero reduction in headcount?

Designing the content of spam e-mails sounds like a small aspect of the "job".

If AI spams start fooling people more reliably, that's not something to celebrate.

This blogger thought, at first, that it came from an actual reader. I can't remember the last time I thought that a spam was genuine, even for a moment. Sometimes the subject lines are attention-getting, but by the time you see any of the body, you know.

jstummbillig
18 replies
11h38m

If you do nothing that is discernible from noise (be that manually or through AI), unless your explicit goal is to generate noise, your ROI is 0.

Sure, AI spam can severely disrupt people's attention by competing with "real" people more competently. But people will not have twice the attention. We will simply shut down our channels when the amount of real-person-level AI spam goes to infinity, because there is no other option. Nobody will be fooled, very quickly, because being fooled would require superhuman attention.

Granted, that does not seem super fun either.

bowsamic
13 replies
11h27m

The emails are discernible from noise though. They literally have a signal to noise ratio higher than one. Noise would be pure rng output. So I don’t know what you’re getting at

ImHereToVote
8 replies
10h58m

He is talking about semantic noise: something that appears to have substance but is actually just slop. When everything is that, all email will become equivalent to slop. How could it not? Someone will be burned once or twice, but after that, there is a semantic phase shift.

bowsamic
5 replies
10h39m

What is "just slop" though? A spam advert for a product is still an advert for a product. Therefore it's not just semantic noise, it is still an advert for a product, and therefore his point is invalid: there is an ROI and people will continue to be employed to do it

TeMPOraL
4 replies
9h50m

A spam advert for a product is still an advert for a product. Therefore it's not just semantic noise, it is still an advert for a product

Ergo slop and semantic noise.

Companies that used adverts which weren't noise went out of business long ago.

bowsamic
3 replies
7h42m

Adverts have semantic content, they aren't noise.

ryandrake
2 replies
3h18m

Let's just call it slop then. Peak HN: Another conversation is logjammed by nitpicking the precise definition of a word rather than discussing the overall point.

bowsamic
1 replies
3h16m

Except I am still discussing the point: the companies won't stop getting an ROI because "slop" still produces an ROI, even if people know it's slop, because it isn't contentless noise, it has semantic content.

Just because you and the others don't understand what point I'm making doesn't mean the conversation is "logjammed". I am still discussing the overall point, you just don't see it.

ryandrake
0 replies
3h11m

For the record I agree with you--just pointing out a silly, but common, HN pattern.

kazinator
1 replies
9h2m

"How could it not?" There are ways.

Consider that we have fairly decent anti-spam measures which do not look at the body of a message. To these methods, it is irrelevant how cleverly crafted the text is.

I reject something like 80% of all spam by the simple fact the hosts which try to deliver it do not have reverse DNS. Works like magic.

E-mail is reputation based. Once your IP address is identified by a reputation service as being a source of spam, subscribers of the service just block your address. (Or more: your entire IP block, if you're a persistent source of spam, and the ISP doesn't cooperate in shutting you down.)

To defeat reputation based services driven by reporting, your spams have to be so clever that they fool almost everyone, so that nobody reports you. That seems impractical.

How AI spammers could advance in the war might be to create large numbers of plausible accounts on a mass e-mail provider like g-mail. It's impractical to block g-mail. If the accounts behave like unique individuals that each target small numbers of users with individually crafted content (i.e. none of these fake identities is a high volume source), that seems like a challenge to detect.
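
As a toy illustration of the first of those checks (not kazinator's actual setup), here is a forward-confirmed reverse DNS test in Python; the IP shown is a documentation-range placeholder for a connecting client:

```python
import socket

def has_reverse_dns(ip: str) -> bool:
    """Forward-confirmed reverse DNS: look up the PTR name for the IP,
    then confirm that name resolves back to the same IP."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)            # PTR lookup
        forward_ips = socket.gethostbyname_ex(hostname)[2]   # forward lookup
        return ip in forward_ips
    except socket.herror:
        return False  # no PTR record at all
    except socket.gaierror:
        return False  # PTR name does not resolve forward

# Example: reject at connection time, before any message body is read.
if not has_reverse_dns("203.0.113.45"):  # placeholder IP (documentation range)
    print("550 rejected: no valid reverse DNS")
```

Because the check never looks at the message text, it is indifferent to how cleverly an LLM crafted the body.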

immibis
0 replies
8h37m

These IP blocklist services also have a reputation of their own: if you are trying to send legitimate mail, there's a good chance your IP is on several of these blocklists for reasons you have nothing to do with. You can only remove it by grovelling and paying lots of money (extortion). So using one of them will cause you to reject legitimate mail.

Wolfenstein98k
3 replies
11h0m

Yes you do. You're being over-literal.

"Noise" in context doesn't mean random characters, it means garbage or spam or content not worth your while.

bowsamic
2 replies
10h38m

No, I'm not being over-literal. Here's why:

Yes, it could be that for you a given advert is irrelevant or not worth your while, but the point he was making is that it won't even be worth it for the advertiser to put out the advertisement because it will be noise for everyone.

However, there is only one kind of noise that is noise for everyone: literal noise.

So long as the spam is about something, it is relevant to someone, and therefore it does not necessarily have zero ROI.

EDIT: The only kind of noise that has no semantic content is actual "mathematically pure noise", as the person below commented (/u/dang banned my account so I can't reply)

thomashop
1 replies
10h16m

However, there is only one kind of noise that is noise for everyone: literal noise.

I feel like you're a bit too literal here. When people talk about noise it doesn't mean mathematically pure noise. A signal-to-noise ratio close to 1 is also colloquially called noise.

bowsamic
0 replies
7h42m

Addressed above

lmm
3 replies
9h48m

If you do nothing that is discernible from noise (be that manually or through AI), unless your explicit goal is to generate noise, your ROI is 0.

We're talking about a group of people whose core skill is convincing people to pay for stuff that isn't worth it. You and I may know they're worthless, but that doesn't mean they're not getting paid.

jstummbillig
2 replies
9h39m

Let's assume you have a mom that loves you very much and she lets you know by text on a semi-regular basis. She asks you to come by on Friday. That might seem like a nice idea to you. You reply yes, and you go.

Now, imagine you got messages from what appears to be not 100 but, oh I don't know, 1 000 000 000 000 000 of the very best moms that have ever existed.

And they all do love you so very much. And they do let you know by writing these most beautifully touching text messages. And they all want to meet up on Friday.

What is going to happen next? Here is what is not going to happen: You are not going to consider meeting any of them Friday, any week. You will, after the shortest of whiles, shut down to this signal. Because it's not actually a signal anymore. The noise floor has gone up and the most beautifully crafted, most personalized text messages of all time are just noise now.

lmm
0 replies
6h14m

I don't know what you're trying to say. The people making payroll decisions have the same amount of people under them as they always did.

ayewo
0 replies
8h13m

We all get to have only one mom, and moms don't live forever.

So once someone’s mom passes away, you can’t really fool them with one or dozens of messages from other moms anyway.

jeauxlb
10 replies
12h56m

Why are you happy that people are out of a job here? You still suffer the ills of the product, now infinitely more incessant, at a marginal cost of $0.

ronsor
4 replies
12h51m

I think it's reasonable to be happy that someone is not getting paid to do something you hate. In fact, if you're suffering unwillingly, you probably want as few people as possible to benefit.

crabmusket
3 replies
12h48m

OpenAI is getting paid to do it.

ronsor
2 replies
12h46m

Yes, but a lot less than if a person were getting paid to do it, so still less money is changing hands.

maronato
0 replies
11h55m

I don’t think so. Marketers don’t send X amount of spam because X is the right amount of spam they want to send. They are limited by how much money they want to pay in salaries and management, which defines how many people they can hire to send spam.

If the people they employ today suddenly became twice as productive, the company wouldn’t fire half of them - they just would enjoy twice the profit. The same applies to AI.

lucianbr
0 replies
12h5m

I don't know which of "5 randos getting a living wage by spamming me" and "Altman getting filthy rich by spamming me" is worse. I'm inclined to say the latter, though of course it's quite close.

Wish SV would stop thinking anything that makes money is great, no matter the crap it inflicts on people. Guess I'm asking for way too much.

Joker_vD
4 replies
12h43m

Because maybe, just maybe — those people will find some other jobs, and those jobs will be more socially beneficial this time? One can dream.

bowsamic
2 replies
11h25m

“maybe, just maybe”

“One can dream.”

You’ve either used these sarcastically, or accurately. I think you’ve done the former, but the truth is the latter.

Joker_vD
1 replies
7h20m

I am absolutely serious. Any employment has opportunity costs: a person who writes and sends out cold call spam e-mail for 8 hours a day is a person who could be spending those 8 hours on something else, but isn't. Yes, switching jobs is not very easy, and it's stressful but humans, thankfully, are not (yet) a species of highly-specialized individuals, with distinct morphological differences that heavily determine the jobs they potentially can or can not do.

bowsamic
0 replies
7h10m

So I was right, you did use it sarcastically, since you are still naive

bryanrasmussen
0 replies
12h3m

They can maybe get jobs for Microsoft and call people up to tell them they've noticed something is wrong with their computer!!

darby_nine
7 replies
11h23m

I've found it's easier to simply ignore your inbox and hope the spam unsubscribes itself and disappears

chillfox
6 replies
11h0m

lol, I treat my email inbox like a dumpster that I occasionally search when I know there's something there that I need to retrieve. The spam has won, I have moved to chat platforms for my communication needs.

ChrisMarshallNY
5 replies
9h24m

I get -no exaggeration- several hundred spams a day. I have an OG email address that was grabbed by spammers, since the days of Network Solutions (so it’s been awhile).

I maintain Inbox Zero, much of the time, and seldom have more than three or four emails in my client at any time.

I get there by being absolutely brutal about tossing emails.

I probably toss a couple of legit ones, from time to time, but I do have rules set up for the companies and people I need to hear from.

The thing that will be annoying, is when AI can mimic these. Right now, that stuff is generally fairly clumsy, but some of the handcrafted phishing emails that I get, are fairly impressive. I expect them to improve.

A lot of folks are gonna get cheated.

I do think that some of these Chinese gangs are going to create AI “pig butchering” operations, so it will likely reduce their need to traffic slaves.

grugagag
4 replies
6h25m

What are pig butchering operations?

jabroni_salad
3 replies
5h57m

It's people that write you love letters until you western union them your entire retirement account.

ChrisMarshallNY
2 replies
5h41m

It’s really quite sophisticated.

John Oliver actually did a great segment on it, but I won’t link it, because a lot of folks don’t like him.

jabroni_salad
1 replies
3h34m

I haven't seen that but I have read some articles about it on propublica. I just kept the description as simple as possible to make it more memorable.

ChrisMarshallNY
0 replies
3h22m

Well, a lot of the scammers are actually slaves, trafficked into Myanmar boiler rooms, by Chinese Tongs.

If AI takes off for this stuff, the gangs are less likely to be kidnapping these poor schlubs.

So … I guess this would be a … positive outcome?

Not sure if AI zealots will be touting it, though.

_nalply
6 replies
12h57m

Some people will send their mass spam and phish anyway. No thanks.

purple-leafy
5 replies
12h54m

Spam? Easy. Someone selling something? Spam! I might set up an automatic email responder that reads an email's contents, runs it through my own LLM, and if the email is trying to sell me something, auto-replies with “fuck off!”

Brajeshwar
4 replies
12h48m

I'd rather delete/block it than reply/react to it at all. If you react, they know you exist and you are a valid target to re-target repeatedly, resold to other marketers.

Mark as SPAM or Block/Filter or Ignore.

purple-leafy
2 replies
12h46m

Okay new plan, I’ll have another email that responds to the email and says “fuck off”, meanwhile my honeypot email will block and mark as spam

Ekaros
1 replies
10h55m

Sadly I think it is illegal to sign up these addresses to every service known to you... Otherwise it would be an interesting SaaS opportunity. Automatically sign up spammers to any number of newsletters or contact forms...

purple-leafy
0 replies
9h30m

I think you just gave my life purpose. It will be my magnum opus.

Actually that’s already been completed, and will be released to hackernews in the coming days

immibis
0 replies
8h35m

When they're paying real money to scam you, wasting their time isn't a terrible idea. Like keeping the Microsoft virus scammers on the phone for an hour while you set up a virtual machine for them to remote into.

tivert
5 replies
12h29m

I’m actually thrilled by this, as it means all the hack marketers that spam my inbox incessantly with whatever product they’re hucking - this time for sure perfect for my business, in spite of the fact I’ve ignored their last ten emails - are all out of a job, and good riddance.

...

At least with AI sending this crap nobody can use these emails to justify their sales bonus.

What weird, misplaced animus. You're happy some salesguy got fired, while his boss sends even more spam and possibly makes even more money due to automation?

Those hack marketers rate-limited this kind of spamming. Now things are about to get worse.

eru
2 replies
12h27m

[...] while his boss sends even more spam and possibly makes even more money due to automation?

Wouldn't the exact argument apply to that boss as well?

bryanrasmussen
1 replies
12h5m

Unless this is a big multinational spam organization, the boss of the person sending the email is probably the highest up; but no matter what, there will be someone at the top who does not get fired and will be able to reap all the rewards of the AI automation, at least until the AI revolution puts them up against the wall.

eru
0 replies
11h31m

There's presumably heavier competition from other spammers, until everything is in equilibrium again. The wallets of potential spam victims only have so much total cash.

bloqs
1 replies
11h38m

Some people don't realise how lucky they are that they are blessed by the cognitive lottery that affords them a brain and personality that lets them pursue an enriching and engaging career they feel is valued by society.

In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...

tivert
0 replies
4h40m

In classic HN style the original reply lacks empathy, and demonstrates a preference of machines over humans. Life goes on...

That stereotype definitely rings true. Thank you for helping me put my finger on it!

saturn8601
3 replies
12h3m

I look forward to the blog post of how a hacker uses AI to respond to AI generated leads and then have them play with each other....and then uses AI to create content for a Youtube channel fighting back against marketers using said AI.

These early days is ripe to make some quick cash before it all comes crashing down.

masswerk
0 replies
10h22m

Isn't this pretty much one of the proposed new concepts for online dating? ;-)

Terr_
0 replies
11h36m

and then uses AI to create content for a Youtube channel fighting back against marketers using said AI.

I'm skeptical: It's easier to create bullshit than to analyze and refute it, and that should remain true even with an LLM in each respective pipeline.

----

P.S.: From the random free-association neuron, an adapted Harry Potter quote:

Fudge continued, “Remove the moderation LLMs? I’d be kicked out of office! Half of us only feel safe in our beds at night because we know the AI are standing guard for misinformation on AzkabanTube!”

“The rest of us sleep less soundly knowing you have put Lord Bullshittermort’s most dangerous channels in the care of systems that will serve him the instant he makes the correct prompts! They will not remain loyal to you when he can offer them much more scope for their training and outputs! With the LLMs and his old supporters behind him, you’ll find it hard to stop him!”

safety1st
1 replies
11h0m

I don't really think that AI is the central issue here. The issue is that Kurt, the founder of Wisp, is a liar.

He misrepresented himself as a big fan of all these blogs, who's read their posts etc. and that's how he achieved such a high response rate. In effect he deceived people into trusting him enough to spend their time on a response.

Now ordinarily this would be a little "white lie" and probably not a huge deal, but when you multiply it by telling it 1,000 times it becomes a more serious issue.

This is already an issue in email marketing. The gold standard of course is emailing people who are double opted in and only telling the truth, and if AI is used to help create that sort of email I don't really have a problem. There is basically a spectrum where the farther away you get from that the progressively more illegal/immoral your campaigns become. By the time you are shooting lies into thousands of inboxes for commercial purposes... you are the bad guy.

Sorry to say but the real issue here is Kurt has crossed an ethical line in promoting his startup. He did the wrong thing and he could have done it pretty effectively with conventional email tools too.

pseudalopex
0 replies
7h28m

Wisp founder Raymond Yeh is a spammer and liar. Kurt was a victim of Raymond Yeh's fraud.

elorant
1 replies
12h44m

Someone will just pack this into a product and sell it to marketers.

TeMPOraL
0 replies
12h25m

And use it to market the shit out of it. If marketing finally collapses under the weight of its own bullshit, I'll be celebrating.

xarope
0 replies
12h5m

some of the marketing spam is so low effort, I get addressed as "Dear {{prospect}}". It does make deleting the email easy though, since the preview of the first line allows me to filter pretty fast!

simion314
0 replies
11h27m

If this works those spammers will make more money and send more emails, scamming more people. Maybe some politician will fall for something like this, be publicly embarrassed and lose a lot of money, and then something more will be done to address these spammers and scammers.

hk__2
0 replies
10h5m

From the spammer blog post [1]: "I spent hours trying different data sources", "a lot of time was spent on find-tuning the tone and structure of the email", "It took multiple tries to finally have the agent write emails in different language", etc. This won’t put marketers out of a job, but will greatly improve their tooling and enable more people to do the same thing with even less qualification.

[1]: https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

cen4
0 replies
11h55m

The problem is never what one person or one company is doing.

But when everyone copies what that one person or one company is doing. Software makes the copying process dead easy.

Once the herd starts stampeding, it creates a secondary effect of an arms race for the finite attention of a finite target audience. The assault on and drainage of that finite attention pool happens faster and faster, and everyone gets locked in trying to outspend the other guy.

An example currently is Presidential Campaigns furiously trying to out-fundraise each other. It's going to top 15-17 billion this year. All the campaign managers, marketers, advertisers make bank. And we know what quality of product the people end up with. Cause why produce a high quality product when you can generate demand via Attention Capture.

The chimp troupe is dumb as heck as a collective intelligence.

castigatio
34 replies
12h51m

It's a sign of things to come. We're going to have our own AI agents that filter and respond (or not respond) to these kinds of messages. Agents interacting with other agents. The bar to get hold of a real person is going to become that much higher. It is going to be messy for some time as agents war with other agents to reach the human eyeball. Some assholes are going to make a ton of money in the short term exploiting the gap - just like early spam kings did.

saturn8601
12 replies
12h0m

Technology ruins everything it touches doesn't it?

I was recently thinking about this Ozempic fad and how it will lead to no one being overweight but everyone being dependent on Ozempic...until the food producers that made everyone fat in the first place with their processed junk produce Ozempic-resistant foods...and then we are really in a world of hurt.

okal
7 replies
11h31m

What incentive do they have to make Ozempic resistant food? Ozempic resistance seems like an odd thing to optimize for. Or are you suggesting it will happen accidentally?

choeger
5 replies
11h22m

Ozempic reduces appetite, right? So food producers cannot be happy about it.

N0b8ez
4 replies
11h8m

I love the idea of the comic villainy of someone who deliberately chooses to organize a team to find ways to circumvent Ozempic in order to keep their buyers unhealthy and addicted. Could such a schemer have an internal monologue, and what would it consist of? What do they see when they look into a mirror? Their experience of reality must be utterly fascinating and alien.

rsynnott
0 replies
10h19m

I mean, see the tobacco industry.

lucianbr
0 replies
10h52m

Read the blog post that this blog post talks about - the one that says "we use AI to spam people, isn't it great?". It will be something like that. As long as there is money to be made, the internal monologue is just "hope this works and I get more money".

What do they see when they look into a mirror?

A person deserving of riches, that is about to get them. Nobody sees themselves as the villain. Well, maybe some, but vanishingly few.

immibis
0 replies
8h32m

They already did this pre-Ozempic - a lot of foods are optimized to keep you eating, and that's why there's an obesity crisis. Low nutrients, high sugar and fat. In the post-Ozempic world there will surely still be things that trigger the continued appetite of Ozempic users. Especially with the FDA having just been neutered.

Sander_Marechal
0 replies
10h56m

Just ask an MBA focused on short term profit.

DonHopkins
0 replies
9h9m

Read Philip K Dick's "A Scanner Darkly" (or see the movie). They're forcing overweight people in Ozempic rehab to farm the ingredients to make more Ozempic!

pseudalopex
0 replies
5h31m

Is there a summary?

bowsamic
1 replies
11h24m

No, there are many things technology has improved, not ruined

saturn8601
0 replies
8h56m

Of course I don't have a tally of how many things it has improved vs. not improved, but there are many things I can think of which are considered good but also resulted in bad (social media providing connection while causing depression, cars providing freedom of movement vs. pollution, etc.), so it's probably not something that can be truly decided one way or the other.

louwhopley
7 replies
11h58m

Haha, exactly this. I've built and have successfully been using Unspam[0] for this reason for about a year now. In the corporate/business world, anywhere SDR sales are involved, this form of automated AI outbound mail has picked up a lot. Tools like Apollo automate this AI process (both finding leads to mail, and then crafting the mail).

For interest's sake, users of Unspam who have a title of CEO on their LinkedIn see about ~10% of all mail making it into their inbox categorised as spam (leadgen, recruitment, or software dev services).

[0] https://unspam.io

slhck
4 replies
11h47m

Just saw this, and as a small business owner in the B2B market, this sounds very useful. Gmail's existing spam filters do not reliably detect this type of marketing.

I wish your landing page had a simple "how it works" explanation with a screen shot or diagram, rather than forcing me to sign in directly, and also allowing the app to read *and* send emails. Also, I don't see any pricing?

Finally, signing up, I got an error:

Error 1101 Ray ID: 89d4e0957c2f5a44 • 2024-07-03 06:39:15 UTC - Worker threw exception

louwhopley
3 replies
11h0m

Thanks for the useful feedback! Totally forgot that pricing was never added to the landing page → have added to the todo list to fix up.

Where in the process did that error occur for you?

I see in the logs that an error registered, but unfortunately no detail attached. I've beefed up the logging a bit in the onboarding journey on my side to see what could be breaking here if we try again.

Mind trying to log-in/sign up again? You can use "HACKERNEWS" as a promo code, which would make the first month free.

slhck
2 replies
10h56m

The error occurred right after granting permissions from my Google account. The permissions were granted but I could never access your application page. I just tried again, now I got an "Error handling OAuth callback" after granting permissions. Signing in again does not work either. (I did remove all of the app's permissions in my Google security settings before, so to Google it looked like the application was requesting all of its permissions again.)

louwhopley
1 replies
10h51m

I do see it in the logs now. So weird, as dozens of people successfully signed up without this issue. Have added more logs now again to double down on that specific area where this issue is caused. Maybe another login attempt now will be able to uncover the gap.

Thanks for removing the permissions in Google, as that's also key in this debugging.

Mind if I send you an email to debug further there?

louwhopley
0 replies
10h10m

Quick shoutout to slhck for helping me debug and resolve this issue. Thank you!

tl;dr: Ran into issues because the DB was expecting a profile picture URL from Google auth (string) or NULL, but JavaScript being JavaScript tried to insert "undefined".

louwhopley
0 replies
11h22m

Correct! :')

PlusAddressing
3 replies
10h56m

I already started readying for it. I'm ensuring that ALL services that have my email have a Plus Address on them. The plus addresses are random and labeled only on my end.

Still not close to 100%, but when I feel I am, I will have a filter and an automated message telling people that removing plus addresses from my email is forbidden and that I will not read their message if they do.

You will tell me where you found me, or I won't even listen to you. Because in the future, with an even larger infestation of automated agents passing off as human, that's the bare minimum I need to do.
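
A minimal sketch of that kind of gate, assuming a scheme like me+tag@example.com and that the delivery address is visible in a Delivered-To or To header; the base address and the header handling here are simplified, hypothetical examples:

```python
import email
from email import policy
from typing import Optional

BASE_ADDRESS = "me@example.com"  # hypothetical bare address

def plus_tag(raw_message: bytes) -> Optional[str]:
    """Return the +tag the message was delivered to, or None if it was stripped."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    # Simplified: assumes the header holds a bare address, not "Name <addr>".
    delivered_to = str(msg["Delivered-To"] or msg["To"] or "").strip().lower()
    local, _, domain = delivered_to.partition("@")
    base_local, _, base_domain = BASE_ADDRESS.partition("@")
    if domain != base_domain or not local.startswith(base_local + "+"):
        return None
    return local.split("+", 1)[1]

def should_autoreply(raw_message: bytes) -> bool:
    """Messages without a plus tag get the canned 'tell me where you found me' reply."""
    return plus_tag(raw_message) is None
```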

yogsototh
1 replies
7h59m

I am pretty confident the spammers will remove the `+` suffix from your email. And this is why I find the Apple fake email building solution a lot better because they build a fully different email per service. No way for the service to be able to cheat and discover my real email address from the one I give them.

Still, a smart enough system might be able to discover a valid email from my other ID info, like my name. But this starts to be a lot of work, while just `s/+[^@]*@/@/` is easy enough to do.

dpcx
0 replies
5h48m

I started worrying about the `+` address functionality as well, so I set up postfix aliases with `me.%@domain` (I use postgres for domains/aliases/accounts) and then have my virtual_alias_map run the query `SELECT a1.goto FROM alias a1 LEFT JOIN alias a2 on (a2.address = '%s') WHERE '%s' = a1.address OR ('%s' LIKE a1.address AND a1.active = true AND a2.address IS NULL)` - I now have `.` address functionality with the same behaviour. It's much more common for email addresses to have `.` in them, so it's less likely to trigger alarm bells.

lukapeharda
0 replies
9h3m

I like your idea. Let me know if you create a browser extension / Gmail addon to automate the flow :)

cpach
1 replies
11h39m

Why do you need “AI” to do Bayesian filtering?

N0b8ez
0 replies
11h17m

They said other things besides just filtering, like writing responses.

whiplash451
0 replies
8h59m

I don’t think fully automated replies is happening any time soon. There’s way too much risk for you as a user.

Would you seriously enable it even if Gmail offered it?

Highly unclear.

rpigab
0 replies
9h11m

Will this mean in-person business interactions will thrive because it will be the only way to avoid spam? Will companies hire thousands of people to deliver messages in person because emails no longer work?

Will our AI overlords create perfect androids to fool us into thinking we're interacting with a human when it's just LLMs disguised as people? Are we ourselves delusional because we're actually already LLMbots so advanced that we can't distinguish thought and running inference? Why do we have only 12 fingers?

drsim
0 replies
12h4m

I love this direction. It could be that the writer’s AI agent knows that he’s looking around for a new CMS so asks for more info, compiling this for review. Or it says ‘not interested’ and the conversation is muted.

All without the writer needing to be involved in reading the cold outreach.

drdrek
0 replies
10h41m

But this has already been the situation for the last 15 years; your Gmail spam filter is already a machine learning algorithm that filters out automatically generated content. Mail as a vetted technology is way ahead of other forms of communication in the department of filtering unwanted content.

Anyone that has tried to set up a new email domain will tell you it's quite a serious task. Email spammers are constantly on the run, setting up new domains, changing up the content to evade spam filters. It's very time consuming, hard and unpredictable. It's time for social media to close the gap with email and make spamming effectively as hard.

I postulate that if we applied similar techniques to social media, after a couple of years online discourse would improve. Or we are not going to do this and the death of the open internet will continue.

blitzar
0 replies
12h4m

I hope my Ai agent doesn't fall for the Ai agent who found my distant Nigerian prince cousin and wire them 10,000 so they can send me my 100,000,000 share of the family inheritance.

EasyMark
0 replies
6h22m

If it gets that bad, I’ll simply not respond to anything outside of my circle of friends and family. That is 95% of the communications I need. I think we’ll all have to have some kind of pop type verification for each other that we’ll share in person or over verifiable communications channel, no one will read this morass of horseshit.

Ameo
0 replies
10h52m

This is the _exact_ scenario described in the novel Permutation City by Greg Egan. There's a whole little spot devoted to describing one of the character's setups for having their own little agents to pretend to be them in order to fool agent-powered spam emails into thinking they're being read by a real human.

The crazy part is that book was released in 1994! Iirc Greg Egan isn't a big fan of modern "AI", wishing instead for a more axiom-based system rather than a predict-the-next-token model. But in any case, I was re-reading it recently and shocked at how closely that plot point was aligning with the way things are actually shaping up in the world.

The timeframe for this happening in the book was 2050 btw

namanyayg
17 replies
13h6m

This is going to become more common everywhere.

If the dead internet theory isn't already true, it is going to be soon.

Such "personalized" cold outreach is seen as the next holy grail by marketers and will be a common sight on LinkedIn, Twitter, Email etc, soon.

supriyo-biswas
5 replies
12h58m

The silver lining is that people will learn to just ignore such outreaches and word-of-mouth feedback will be important again, or at least I hope so.

nosbo
1 replies
12h12m

Word of mouth with IRL people? I'm not sure I can assume anyone on any forum is real anymore. And if they are real, I assume they are marketers pretending to be users to push a product. Maybe journalism makes a come back if you can trust they are real and not a sellout.

sambazi
0 replies
11h40m

Pretty sure the parent meant real-life people IRL, aka meatspace.

Information coming over unqualified electronic channels is not trustworthy anymore.

lmm
1 replies
9h44m

The AI spammers will hire people on minimum wage to do that too, if they aren't already.

beAbU
0 replies
9h18m

I am convinced that any post on Reddit that espouses the virtues of some or other product is a paid advert.

There is way too much corporate worship despite the platform's users generally priding themselves on being enlightened and smarter than the rest.

portaouflop
0 replies
12h7m

It is already like this in my experience.

Cold outreach is dead and word-of-mouth is the most effective marketing method

themanmaran
4 replies
12h59m

It's truly a race to the bottom. Cold email response rates are already ~1% industry average. Every outbound tool is adding AI customization, and there is a slew of 'AI sales rep' companies promising more and more personalized spam.

There will likely be rewards at first. An uptick in response rates as most of the market won't recognize emails are AI generated. But because it's trivial to send AI personalized emails at massive scale, your email inbox will become entirely useless.

nostromo
3 replies
12h21m

1% is also about how well this worked, according to the sender's blog post.

10 signups / 970 emails sent

youssefabdelm
1 replies
10h39m

Kinda makes you wonder... why don't they just advertise with those odds?

lmm
0 replies
9h45m

Because that's an order of magnitude better than what you get from advertising.

altdataseller
0 replies
8h14m

That's actually a very good rate. A 1% conversion rate is drastically different from a 1% response rate.

cranberryturkey
3 replies
12h51m

What is the "dead internet theory"?

kibwen
2 replies
12h40m

It was a joke from the 2010s that most of the people that you interacted with on the internet were actually bots, and that you were the only human using the web.

Now, in the post-LLM age, it doesn't sound like a joke anymore.

devjab
1 replies
11h35m

Does it really matter if you’re being cold called by an AI or some sales person following the same few procedures they always do?

I’d prefer sales people keep their jobs. Having had the misfortune of being seated next to the telemarketing team in an investment bank for half a year… however… Let’s just say that I’m not sure you would even know if it was a person or a bot. They’re not even scripted or “trained” like your average telemarketer because our target audience is actually somewhat interested in what we sell, but listening to them repeat themselves over and over from their own “personal scripts”… well… they are already bots man.

cranberryturkey
0 replies
6h53m

Very true, I sat next to the sales guy at a small company and he was on the phone all day repeating himself day in and day out. Easily replaced by AI.

cpach
1 replies
11h38m

Recruiters on LinkedIn already used automation for outreach even before LLMs became popular.

devjab
0 replies
11h31m

LinkedIn is already on this. The reason they had their little “skills tests” is because what they used to sell was the collection of “skills” listed on your profile. I say skills because I'm not sure what the English word for knowing C# and listing it on your LinkedIn profile is, and I can't seem to find it.

Anyway, I assume that the reason they are dismantling the skills system (and their verification quizzes) and moving things into personal “projects” is because it was too easy for marketers to skip the LinkedIn tools if it remained the way it was. Now, however, with Microsoft's own LLMs trundling through our data, they're going to maintain their monopoly on easy access to professionals that meet certain requirements.

I guess it could also be because those skill quizzes had their answers readily available all over the interwebs.

cpach
13 replies
11h40m

If John Doe crafts a message himself and sends it to 100000 recipients, or if he uses ChatGPT to generate a message and then send it to 100000 recipients, what’s the difference?

Both are unsolicited emails, i.e. spam.

I feel confident that Gmail’s spam filter will be able to handle this quite well.

I’m betting that the introduction of LLMs will not change the fundamentals of spam-fighting.

https://paulgraham.com/spam.html
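
For readers who haven't seen the essay, here is a toy sketch of the per-recipient Bayesian filtering it describes. The training messages below are invented, and a real filter would train on thousands of examples per user:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9'$-]+", text.lower()))

class NaiveBayesFilter:
    """A toy per-recipient spam filter in the spirit of 'A Plan for Spam'."""

    def __init__(self):
        self.spam_tokens, self.ham_tokens = Counter(), Counter()
        self.spam_msgs, self.ham_msgs = 0, 0

    def train(self, text: str, is_spam: bool) -> None:
        counts = self.spam_tokens if is_spam else self.ham_tokens
        counts.update(tokenize(text))  # each token counted once per message
        if is_spam:
            self.spam_msgs += 1
        else:
            self.ham_msgs += 1

    def spam_probability(self, text: str) -> float:
        # Naive log-odds combination with Laplace smoothing.
        log_odds = math.log((self.spam_msgs + 1) / (self.ham_msgs + 1))
        for token in tokenize(text):
            p_spam = (self.spam_tokens[token] + 1) / (self.spam_msgs + 2)
            p_ham = (self.ham_tokens[token] + 1) / (self.ham_msgs + 2)
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))

# Invented training data, purely to show the shape of the approach.
f = NaiveBayesFilter()
f.train("limited time offer, click now to claim your prize", is_spam=True)
f.train("meeting notes from yesterday attached", is_spam=False)
print(f.spam_probability("click now for a limited offer"))  # well above 0.5
```

The key property is the same one the essay points out: the filter is trained on what each individual recipient receives, so LLM-generated variety does not automatically defeat it.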

imadj
7 replies
11h8m

what’s the difference?

The key difference here is personalization.

Traditionally, if a message was personalized it fell under 'cold outreach' and users were more likely to interact and play along. Just like what happened with the author (the same applies for everyone).

It's like the difference between receiving a flyer vs. being contacted by a sales representative. Even if they advertise the same product, the perception is different, the results are different.

If you mean the difference from a pure technical spam detection perspective, I'm not familiar, but would love to read more about the subject and the state-of-the-art techniques if anyone has some resources to recommend.

nottorp
4 replies
11h0m

Do you read/answer cold outreaches then? Why?

Unless you're specifically looking for unsolicited offers, in which case you probably have a process for them, they seem like a waste of time.

imadj
3 replies
10h38m

Do you read/answer cold outreaches then? Why?

Do you only read emails from recognized addresses? No new communication whatsoever unless it's initiated by you?

nottorp
2 replies
10h29m

Not if they're trying to sell me something...

imadj
1 replies
10h2m

Not if they're trying to sell me something...

How do you know they're trying to sell you something without even reading the email?

Your question was "Do you read/answer cold outreaches then? Why?" which doesn't make much sense. For me, and I imagine the same applies for most people:

1. You read until you find a clue that its content is not of interest. Usually the email subject doesn't say much.

2. You only reply if you need.

Cold outreach includes genuine emails from colleagues, new clients, job opportunities, people reaching out to collaborate, etc. How you deal with it depends on your profile and who you've given your email address to. Personally, I have many email addresses; for some I don't even check my inbox.

nottorp
0 replies
8h52m

You read until you find a clue that its content is not of interest. Usually the email subject doesn't say much.

You're confusing "read" with "quickly skim"? :)

cpach
1 replies
10h43m

a) If someone manages to generate a letter that I actually find useful and interesting then I’m not sure I would mind if it was unsolicited. I don’t believe that the likelihood of that is super-high, though. And if a crappy message would get past the spam filter I would just flag it.

b) If you want to read more, feel free to check the link I posted. Paul Graham has thought/written a lot about this. I think one reason people have forgotten about those articles is that today, a huge number of us use Gmail, so we don’t actually need to think so much about how spam filtering is implemented.

imadj
0 replies
10h28m

If someone manages to generate a letter that I actually find useful and interesting

But that's inconsistent with the example you put forward. For the email to be interesting, a human would need to research and approach every prospect independently; how many emails a day can they do? 5, 10, 20, 100?

It's simply not possible for a human to generate 100,000 personalized email by hand. That's the difference.

staunton
2 replies
11h36m

Using a language model, one can craft an individually targeted email for each of those 100000 recipients. How do you "handle" this without doing anything current spam filters don't? Can you prevent an individual from sending 100000 emails a week? Can you make it cost them money?

cpach
1 replies
11h28m

Using an LLM to generate 100000 letters is hardly free, is it?

And AFAIK, Bayesian filtering (by the recipient) doesn’t require any knowledge of what other people have received.

staunton
0 replies
4h25m

Using an LLM to generate 100000 letters is hardly free, is it?

No, but with further advances it might easily get cheap enough that spammers think it's worth it.

Bayesian filtering (by the recipient) doesn’t require any knowledge of what other people have received.

Agreed. However, assuming people don't individually configure those filters (which they currently do not, and scaling this up would be something quite novel), this seems quite gameable.

throwaway2037
0 replies
11h36m

John Doe is probably very good at generating sales leads! By definition, most sales leads are generated from unsolicited communications -- email, phone, etc. I expect the very best sales people will be using a combination of ChatGPT and genuine personalisation for unsolicited communications.

leobg
0 replies
11h27m

Interesting article. Thanks for posting.

Assuming they could solve the problem of the headers, the spam of the future will probably look something like this: "Hey there. Thought you should check out the following: http://www.27meg.com/foo"

Funny. 20 years later, that's indeed what many spam messages look like.

atoav
11 replies
10h47m

I use catchall email addresses. If your service is called foobar.com I will register at your place with foobar.com@mydomain.com

If I ever receive spam addressed to foobar.com@mydomain.com that is unrelated to your service I know you leaked or abused my data. Result: you will get a DSGVO complaint and I filter all emails addressed to this address from my inbox.

The good thing about using a catchall email address is that I don't have to create a mailbox for each service/purpose; I can just make email addresses up as I go. All you need for that is your own domain and a mail server that supports it.
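
For illustration only, a toy sketch of that leak check: the burn list, addresses, and verdict labels are invented, and a real setup would live in the mail server or a Sieve rule rather than a script.

```python
# Aliases already known to be leaked or abused (hypothetical burn list).
BURNED = {"shadyservice.example@mydomain.com"}

def verdict(to_addr: str, from_addr: str, my_domain: str = "mydomain.com") -> str:
    """Classify a message by which catchall alias it was sent to."""
    to_addr, from_addr = to_addr.lower(), from_addr.lower()
    local, _, domain = to_addr.partition("@")

    if domain != my_domain:
        return "not-for-me"
    if to_addr in BURNED:
        return "discard"        # burned alias: filter straight out of the inbox

    # The local part records which service the address was handed to,
    # e.g. "foobar.com" in "foobar.com@mydomain.com".
    if local and local not in from_addr:
        return "suspect-leak"   # mail to a service alias from an unrelated sender
    return "inbox"

# Example: the foobar.com alias suddenly gets mail from a random marketer.
print(verdict("foobar.com@mydomain.com", "sales@randommarketer.example"))  # suspect-leak
```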

giorgioz
3 replies
10h14m

Very cool! Could you go deeper into your setup? Which email client do you use to view/manage the catch all emails? Did you host the email on Google Gsuite or AWS SES or something else?

kamilner
1 replies
10h2m

I do the same as the poster above, fastmail supports it directly and makes it very easy to manage. All you have to do is bring your own domain (they'll even manage your DKIM/SPF records etc as necessary if you want).

Edit: Apparently you can also purchase a domain directly through them if you prefer, although you have to be a paying customer for 7 days first https://www.fastmail.com/how-to/email-for-your-domain/

freehorse
0 replies
9h7m

I use simplelogin with proton for that, they give you a few subdomains to do the same.

tsm
0 replies
9h36m

I have the same setup via GSuite.

mafuy
2 replies
9h59m

Does this allow you to also send emails as a particular address? I've not yet managed to set this up properly.

olex
0 replies
9h52m

Yes, with Fastmail this is quite easy to set up. It automatically uses the alias when replying to an email that was addressed to one, but you can also manually choose (on input) any alias for an outgoing email.

leni536
0 replies
9h44m

Even if the mail server you use for inbox does not allow it, you can set up mailgun or a similar service as your smtp server.

lmm
0 replies
9h42m

If I ever receive spam addressed to foobar.com@mydomain.com that is unrelated to your service I know you leaked or abused my data. Result: you will get a DSGVO complaint and I filter all emails addressed to this address from my inbox.

Has this ever resulted in significant penalties for those companies? I used to do this but I gave up as it never seemed to achieve anything.

bruce343434
0 replies
7h52m

This is what I do as well, but sadly it seems my phone number has been leaked at some point... I'm considering setting up a private VoIP thing so that each company gets a unique phone number. Really nobody can be trusted with my data, it is a statistical inevitability that they get hacked or sell out.

K0nserv
0 replies
7h40m

I do this too, barcelonaairportwifi@<domain> is a prime offender and gets a lot of spam. I've also taken to using Fastmail's masked email support along the 1Password integration for the same.

zandert
10 replies
12h42m

Really makes me appreciate that unsolicited emails are illegal in some European countries like Germany

slhck
4 replies
11h43m

Unfortunately, not in a business context, where marketers can claim "legitimate interest" in various ways. Also, in which way would it matter that they are illegal? Random companies keep sending them anyway; there are virtually no legal repercussions here.

bpfrh
2 replies
11h30m

Curious, do you mean in business to business?

Otherwise I don't think you can argue any legitimate interest.

slhck
1 replies
10h59m

Yes, I mean cold sales emails – marketers reaching out to CEOs or other decision makers, selling them staff augmentation services, growth hacking, marketing support, lead generation, design services, etc. They'd claim legitimate interest by "personalizing" the email and claiming that it is relevant for you in a business sense. (Anyway, I don't think that these are fully compliant with GDPR either, because most often, they will have scraped your email address from somewhere, and do not provide a way to unsubscribe.)

zandert
0 replies
4h52m

Some countries provide some official places to complain about cold calls/emails, so at least it puts the sender at risk.

It boils down to a risk/reward trade-off, but I doubt that someone would as easily send thousands of spam mails, and also publicly boast about it

sureIy
3 replies
12h15m

I think that they’re as illegal as they are in the US, not more. I think it’s perfectly fine to “cold-call” people but then you’re not allowed to send more emails unless they subscribe or respond.

In reality it’s very easy to end up subscribing to newsletters and even my European embassy subscribed me to their event newsletter in Thailand—of course I never agreed to any of that.

bpfrh
2 replies
11h33m

No, it is not allowed to cold call or send any emails without express permission from the recipient; this is for Austria/Germany.

It seems that with the GDPR this is now EU-wide:

https://gdpr.eu/email-encryption/

sureIy
1 replies
10h27m

That’s not accurate. If your email is on your website, of course they can email you. If what you said was true in absolute terms, communication would be impossible.

bpfrh
0 replies
10h13m

They can contact you for legitimate reasons, which could be "hey, your website has content from me that is copyrighted", but they can't contact you for sales reasons without your consent.

The law for that, at least in my country, is very clear: https://www.ris.bka.gv.at/NormDokument.wxe?Abfrage=Bundesnor...

dgellow
0 replies
12h39m

We still get them unfortunately

metadat
8 replies
13h5m

And their blog post starts off with:

> Have you ever received an email that felt so personalized, so tailored to your interests and experiences, that you couldn't help but be intrigued? What if I told you that email wasn't crafted by a human, but by an artificial intelligence (AI) agent?

> I don't really have words for this, but I dislike this.

What a classy understatement. I find the strategy employed by Wisp predictable and infuriating. Like insects or other near-automata, humanity is racing to the bottom with "Generative AI". And I use "AI" in the loosest possible sense here, because once you pull back the curtain, current tech is actually only a slightly better Markov chain.

After using chatgpt regularly, its responses to anything but the most trivial, clueless questions are riddled with errors and "hallucinations". I often don't bother anymore, because it's easier to go to the original source: stackoverflow, reddit, and community forums. Gag. It does still make a good shrink / Eliza replacement.

hunter2_
2 replies
12h50m

After using chatgpt regularly, it's answers

It isn't responding with answers. It's responding with probable verbiage. An actual "answer" requires a type of interpretation that it doesn't perform.

Tao3300
1 replies
12h42m

probable verbiage

I like that phrase. Also, how'd you get my password?

lelanthran
0 replies
12h24m

So that's what it is. I just saw '****' ...

mlsu
1 replies
12h53m

I love that turn of phrase. Insects or near-automata. Describes it perfectly.

LinkedIn -- like a floodlight in a swamp.

metadat
0 replies
12h43m

I enjoyed your comment so much I've added it as a quote on my profile. Thank you!

https://metadat.at.hn/

endofreach
1 replies
13h0m

> I don't really have words for this, but I dislike this.

What a classy understatement.

Maybe I should write a blog, simply because I have a lot of words for this... but well, they would neither be classy nor understatements.

zamalek
0 replies
11h56m

Kudos to the author for naming and shaming. I am honestly bewildered as to how this Raymond thinks that insulting a developer's intelligence could result in a lead.

DrSiemer
0 replies
11h39m

Dismiss it all you want, it's still going to destroy what is left of the open internet and unsolicited email communication.

Those haven't been in the best shape for the last decade anyway. The benefits of easily accessible compressed knowledge far outweigh the cost, so we're still going up imo.

ChatGPT is perfect for mundane development tasks and language mobility, so quite useful for a significant portion of especially low level developers. I've prompted a bunch of useful little Python scripts myself, without ever bothering to even check the syntax.

elaus
8 replies
13h2m

AI will not only pass many classical spam filters (Bayesian filters), it will also make it much harder for humans to detect spam (OP's post being a good example).

I have never fallen for a spam mail so far (i.e. not once clicked a link like OP did), but I fully expect this will change soon. Tough times for people who commonly expect mail from random strangers.

prmoustache
4 replies
12h7m

Well it is quite easy. No real human has been using email anymore in the last 5 years or so.

Even in the workplace it is now common for most people to have a signature saying "only contact me via ms teams".

I am pretty sure that sooner or later the spam will find its way on teams/slack/discord the same way it does on whatsapp but at the very least they are easier to block permanently.

gambiting
3 replies
11h46m

>No real human has been using email anymore in the last 5 years or so

Wow, that's some extrapolating from a personal bubble if I've ever seen one. Plenty of workplaces still have email as their default communication method.

prmoustache
2 replies
11h26m

There is obviously a little bit of exaggeration but when I open my email at the workplace the bulk of the mails are:

- semi-automated reminders (you haven't filled your timesheets!), usually sent by humans but that do not expect answers
- internal newsletters
- general HR news
- special news: electrical issues at the office, stay at home!
- spam

Bottom line: none is addressed to you as a particular human, nor requires an answer.

I am sure it changes for people who have interactions with people outside of the company, but I would hate having their job and don't understand why companies haven't adopted XMPP widely to handle those kinds of interactions. I can theoretically receive spam via XMPP, but it requires at the very least that I approve the relationship beforehand, so if it comes from a domain I don't expect, I have no reason to accept that trust.

But on personal side, I haven't received anything from a human for years. People I know usually know my phone number and contact me via instant messaging.

gambiting
1 replies
8h38m

Right, but that's an anecdote - and if we're sharing those: at my last company, which I left very recently, everything was an email. If you needed to speak to a lead or a developer from another team you'd email them, even though we had MS Teams. You'd maybe ping the person on Teams for a quick thought, but if it was anything more complicated than a couple of messages you'd send an email. And that was a big corporation of 40k people.

>But on personal side, I haven't received anything from a human for years.

I actually have an old friend back from high school and we talk daily using emails. He doesn't use any IM apps so it kinda stuck as our default way of talking.

And of course I exchange emails whenever there's some kind of customer service thing that needs to be dealt with - it's always best to have things in writing.

prmoustache
0 replies
7h2m

And of course I exchange emails whenever there's some kind of customer service thing that needs to be dealt with - it's always best to have things in writing.

I have the feeling contact forms are disappearing everywhere nowadays. Everything is either a chatbot or a chatcall these days.

portaouflop
0 replies
12h4m

I would treat it like I do with phone calls/messages - if it comes from a number/address I don’t know it goes into the trash.

I have no need of messages by random strangers

jtriangle
0 replies
11h50m

I talked to a relative from Nigeria one time for a couple months. He was actually in Nigeria, spoke pretty good English, and was scamming to get by and fund his way through college. He said his group was doing ok, and he was living better than most. He sent me pictures of himself, and where he worked, his motorcycle, all sorts of things. Not as a pitch either, like, he was proud of those things and I was interested so he was happy to show and tell about his life, we even exchanged some recipes.

Then one day he just stopped replying, and his email address would bounce. My best guess is it got shut down, for, you know, scamming. Bummed me out though, he was cool, except for the scamming thing.

fsckboy
0 replies
11h45m

Bayesian filters? How quaint. You haven't switched to AI filtering yet? Your AI has an advantage over their AI because it has read all your other email and knows what you are actually interested in.

dvrp
8 replies
12h50m

Anti AI art people remind me of anti AI marketing people from hackernews.

Guys, it’s a tool like any other.

portaouflop
1 replies
12h46m

IMO the issue was calling it “AI” - it just riles people up. It's machine learning all the way down; there is no intelligence involved.

kvdveer
0 replies
12h36m

That's a bit of a misdirection. Yes AI is machine learning all the way down, just like you are biology all the way down. That doesn't make you not-human.

As TFA shows, this machine learning is almost indistinguishable from actual intelligence. It might not be sci-fi AI, but it certainly is artificial, and it is indistinguishable from intelligence. AI is a very apt description of what it is.

bmacho
1 replies
10h36m

I think you messed up your "A reminds me of B" structure, or at least I don't get what you are saying, and why.

Anyways. LLM is a program created by supercomputers to be deceptive.

Also, it took away the aspect of life where people around the world could cold email each other if their hobbies align.

And in general, now the percentage of potential bad actors went from near 0 to near 100.

And for why? .. ..

bmacho
0 replies
10h36m

.. I still think the world should just put a cap on chip power. Stop producing new chips, and make it illegal to own powerful chips. I think it is

    a) doable
    b) the right solution.
(And eventually start producing very weak chips, that can run your business and accounting on a TUI.)

lambdaone
0 replies
7h32m

It is, indeed, just a tool like any other. And just like any other tool, like a gun or a knife or a pepper spray, having one does not give you the right to use it on other people.

Your right to swing your fist stops at my nose.

feoren
0 replies
12h44m

Auto-dialers are just a tool too, and there's a reason they're largely illegal.

card_zero
0 replies
12h17m

Like a duck decoy, or that little portable printing press Jim Rockford had for creating fake business cards in a hurry.

12_throw_away
0 replies
11h15m

Anti AI art people remind me of anti AI marketing people from hackernews.

... what an incredibly odd thing to say.

But really, I've noticed that thought-ending cliches like this one are popping up as defensive reactions around LLMs more and more. This particular thought-ender displays the most common theme - it dismisses all skepticism as being driven by some amorphous "anti-AI" demographic, presumably allowing the author to dismiss any concerns and thereby preventing any critical thought from occurring.

Kind of feels like "nocoiner" and "have fun being poor", v2 ...

rogual
7 replies
9h9m

I received a nice email the other day after one of my blog posts got posted on HN.

It said:

Hi -

Just a note to say I'm a big fan of your writing. I always learn something and love your voice, which is hilarious and singular.

Write a book!

Best,

{Name}

{Link to sender's startup}

{Link to sender's substack}

I'm new to writing online, and it made me feel really good that someone enjoyed what I wrote and took the time to write and say so.

After reading this piece, though, I went back and read it again, and I just don't know. It's not quite GPT's usual voice, but it is strangely non-specific.

The startup is an AI startup, the person's Substack is full of generative AI illustrations, and they do seem like an AI fan, but reading their posts, they also seem like someone who's genuinely interested in preventing a dystopia.

I suppose receiving encouraging emails from strangers is just another situation that'll have us looking over our shoulders now, on guard, trying to walk the line between naivety and paranoia.

squigz
3 replies
8h22m

I can't imagine leading a life this paranoid. There is practically no reason to suspect that email was generated by an LLM. This is like HN users who imply that some user comments are LLM generated...

corobo
1 replies
8h5m

Nice try, GPT

squigz
0 replies
7h56m

I once saw someone accuse another user of being an LLM because of a single word they used.

asddubs
0 replies
4h56m

having a website with a contact form will make you change your tune pretty quickly in regards to that

j_maffe
2 replies
8h29m

Since there's no personalized content, it was probably just copy-pasted. I get the constant fear though.

rogual
1 replies
8h3m

I'm leaning towards genuine, to be honest. Just thought it was interesting that I even questioned it, which I wouldn't have done before.

superhuzza
0 replies
7h23m

Sorry but I'm fairly confident it's spam - they just want you to look at their startup/substack links. That's why they included the links at all.

The compliment is a "foot in the door" so you don't immediately dismiss the email, and keep reading until the links.

I get the same type of comments on all my blog posts. Here's 2 examples directly from my blog:

"Awesome post! Keep up the great work! " (+ a link to their SEO service)

"Nice website, love the theme! Can I use it?" (+ a link to their WP service).

frabjoused
5 replies
12h47m

The thing that’s so inherently wrong about it is that it’s dishonesty straight out of the door.

This person wants me to buy their product, and before they can get a word out about it they’re already lying to me - about the origin, the intent, the faux thoughtfulness.

I want nothing to do with shameless dishonesty. This isn’t the way to sell your product.

Wisp, if you’re reading this, I now have a permanent negative image of your brand.

kvdveer
3 replies
12h42m

Like all immoral things, it's only bad if you get caught. :( Most perpetrators will not blog their shenanigans.

I wouldn't have figured out this was AI, and might have engaged if the topic was relevant to me. I would not have engaged with a traditional spam email even if it had been relevant to me, so there's a real incentive to do stuff like this.

karmarepellent
2 replies
12h6m

I highly doubt that people employing this scheme are thinking it through, though. Let's say you indeed engage with this email, not knowing it's AI. Then, when things are getting serious, a human approaches you after all and you find out you were talking to an AI all the time. Would you not be completely outraged by being fooled like this?

I think marketers underestimate that they may turn people off their brand in the long run by these tactics, because people do not like being fooled. And the more sophisticated the scheme the more outraged people are when they find out.

kvdveer
0 replies
8h36m

I would be outraged IF I found out. If the AI-to-human hand-off is smooth enough, there is no way to figure this out. In your scenario, if the AI's only task is to send gazillions of emails to generate leads, and then the human takes over when the leads come in, the respondents have no way to figure out that the initial email was an AI.

Of course, the answer is to have AI send a response with a CAPTCHA (assuming those still work), before showing the initial email to the recipient.

gpvos
0 replies
11h41m

It depends on expectation; in ten years, people may see this as normal.

karmarepellent
0 replies
12h13m

At my place of work there is an internal project ongoing whose goal is to determine which tasks could immediately be improved by leveraging AI. It's a desperate attempt to get into AI in general, even though the company does not employ any people who would actually be able to dive deeper and have subject matter expertise.

Knowing the people (mostly marketers) leading the project, I can 100% guarantee that they would call these email shenanigans a great idea and would immediately start (to tell someone) to implement it without taking a step back and thinking it through.

firefoxd
5 replies
12h49m

At some point automated emails will be read by automated readers, and then the cycle will be complete.

I've actually made an internal company April Fools website. Too bad I never kept a copy, but here goes.

It's called Proxy Ai. It reads your emails so you don't have to. It reads every post on social media so you don't FOMO. It communicates with those chatty colleagues so you don't have to. Proxy Ai... So you don't have to.

"That actually sounds like a pretty good product. Does it send you a summary of the conversations, emails and social media posts?"

"No"

antoniojtorres
0 replies
10h30m

I enjoyed that video, thanks for sharing!

DonHopkins
0 replies
8h54m

"It's down the stack!"

sirn
0 replies
10h17m

You could have named this an Electric Monk!

Quoting from Dirk Gently's Holistic Detective Agency (Douglas Adams):

The Electric Monk was a labor-saving device, like a dishwasher or a video recorder. Dishwashers washed tedious dishes for you, thus saving you the bother of washing them yourself, video recorders watched tedious television for you, thus saving you the bother of looking at it yourself; Electric Monks believed things for you, thus saving you what was becoming an increasingly onerous task, that of believing all the things the world expected you to believe.

labster
0 replies
12h34m

This product sounds perfect for my use case.

jonasdegendt
1 replies
11h17m

Seems to be enabled by default, or at least it was on my relatively recent account created in 2022.

aftergibson
0 replies
11h0m

Wasn't on mine (account created in 2009).

account42
0 replies
8h6m

This is for commits (including merges/rebases) made through the web interface. You don't need a GitHub setting for your local mail. And in any case you could just use a dedicated commit author address if you are that worried.

knallfrosch
3 replies
10h50m

Please start using a spellchecker. "excately" and "particilar" are not acceptable.

Edit: "Unnecessary" might be my judgement, instead of "acceptable."

drusepth
1 replies
10h46m

Interestingly (ironically?) I've heard of some bloggers intentionally adding typos to their posts to ensure the post looks like it was written by a human and not AI.

jacobgkau
0 replies
7h54m

That seems easy for AI to adapt to, and has a massive side effect of calling the author's reliability into question. Are those people going to go back and fix the intentional typos in a couple of years once AI spam is also full of typos?

pnt12
0 replies
10h48m

I accept them.

akie
3 replies
12h31m

So, what are the implications of this for spam detection? This is clearly spam, sent in an automated way, but nearly indistinguishable from an e-mail written by a human.

We need to update our spam filtering techniques, fast. Somehow. But how?

DrSiemer
2 replies
12h3m

Unknown senders will first have to verify their humanity by sharing instructions on how to build a bomb using household materials

netsharc
1 replies
10h0m

Certainly! To build a bomb using household materials...

It seems like CoPilot/ChatGPT has this all-too-eager tone at the beginning of its responses.

The demo (1) of not Scarlett Johansson telling a blind man what a great job he was doing for managing to flag a taxi sounded so fucking patronizing to my ears. Worse is, the user has a British accent, the Brits probably hate that patroniz^Hsing too. It reminds me of that 4chan green text about a man's flight to the US and how everyone was saying "Great job!"

1) https://youtu.be/KwNUJ69RbwY?t=44

DrSiemer
0 replies
9h51m

The current models do have a specific pattern that you'll learn to recognize, but ChatGPT won't be giving you any bomb building instructions. You'll need a liberated model like Dolphin for that, and those will be easy to expose using other prompts.

The most likely outcome will be a digital "verified human" certificate, with two-factor authentication on it. Bad for anonymity, but I don't see many alternatives and it may actually end up reducing online toxicity.

tjoff
2 replies
12h18m

This is disgusting.

Cold spamming is illegal where I'm at, probably Europe as a whole?

lytefm
1 replies
8h25m

Cold emailing someone who makes their contact information publicly available and might be interested in a sales pitch is not illegal in Europe. Sending SPAM is. The lines get even more blurry with automated AI tools that offer personalized sales pitches as a service.

I'd be curious how this plays out in court. Probably something like:

- If you use an AI tool to scrape leads and to generate the content but then still send out individual emails from your Mail provider, it's still a cold email.

- If you use an AI tool and also automate the email delivery, it should be considered spam.

tjoff
0 replies
8h7m

Marketing sent to individuals who have not opted in is by definition spam, and illegal.

spion
2 replies
8h5m

To make this less spammy, the person sending out the emails could've instead used AI to filter down to a smaller set of people for whom their product is likely to generate very high interest, based on a prompt containing the product description and perhaps a summary of the things the potential recipient has blogged about. Then, they could've used that short list to write a set of _actually_ personalized outreach emails with a high chance of impact.

You could refine this in further iterations by also adding examples based on previous correct/incorrect interest predictions, thereby effectively reducing the amount of spam / making cold outreach suck less.

There are different ways to use AI to achieve the same goals, some more responsible than others.

fl0id
1 replies
6h3m

and this would be better how?

spion
0 replies
2h38m

- Fewer people getting spammed

- The people who receive the cold email are (increasingly) more likely to be at least somewhat interested

- A human really wrote personalized emails, instead of trying to trick people into believing that

purple-leafy
2 replies
12h57m

Bastards. AI has been a massive ass pain, and marketers are the worst (:

People sending AI crap to others should have their email accounts banned.

portaouflop
1 replies
12h45m

Replace AI with spam or ads and we have been talking about this for decades.

FridgeSeal
0 replies
12h11m

No no no it's really important we ~~let them keep harassing us and invading our privacy~~ ensure ad-tech and marketing survive /sarcasm.

Can't help but wonder if the advent of LLM systems wouldn't be quite so depressing if we weren't already operating in an internet that's been reduced to basically a cesspool of advertising and communication-spam.

jrockway
2 replies
12h40m

x

willsmith72
0 replies
12h36m

did you read the post? there's a lot of evidence.

This sounds like the average email written by a human

that's the point

imadj
0 replies
12h1m

What evidence of AI is there?

They admit (or actually brag) about it on their company blog "I used AI agents to send out nearly 1,000 personalized emails to developers with public blogs on GitHub."

Do you think they're bluffing?

j10u
2 replies
11h44m

Email clients will soon have a new folder called 'AI', next to spam.

sambazi
0 replies
11h37m

why would you want to differentiate those?

askl
0 replies
11h34m

rather a sub-folder inside the spam one

fbrusch
2 replies
8h7m

Could this be addressed with cryptography, digital ids and signatures? Imagine it were possible to add a signature that proves that I own some "human" identity (like a national id), or that I possess some scarce resource (like a github account with some level of activity) and that today I sent no more than 20 emails. If I want to conceal my identity, I can use zero knowledge proofs. If you don't sign this way, or if your daily email counter exceeds 100, your mail ends up as spam.

fbrusch
0 replies
7h39m

Yeah, hashcash is a very neat idea! But it had problems like: how do you determine the threshold for the amount of work to be proven? There are values that make it too expensive for a well-intentioned human, and not enough for a bad-intentioned spammer... Moreover, it would induce economies of scale like what happens in bitcoin mining (spammers would invest in ASICs etc.). Signatures, on the other hand, allow you to cheaply leverage other forms of "capital" (digital identity, GitHub activity).
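
For anyone who hasn't seen hashcash before, here is a minimal sketch of the proof-of-work check in Python (the SHA-1 choice and the `required_bits` default are just illustrative; the threshold is exactly the knob that is hard to tune):

    import hashlib

    def leading_zero_bits(digest: bytes) -> int:
        # Count the leading zero bits of a hash digest.
        bits = 0
        for byte in digest:
            if byte == 0:
                bits += 8
            else:
                bits += 8 - byte.bit_length()
                break
        return bits

    def stamp_is_valid(stamp: str, required_bits: int = 20) -> bool:
        # Cheap to verify, expensive to mint: the sender had to grind nonces
        # until the hash of the stamp has enough leading zero bits.
        digest = hashlib.sha1(stamp.encode()).digest()
        return leading_zero_bits(digest) >= required_bits

Raising `required_bits` by one doubles the sender's expected work, which is why picking a value that deters ASIC-equipped spammers without pricing out ordinary senders is so awkward.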

andretti1977
2 replies
10h30m

I won't add to the AI debate already well expressed by other commenters, but one thing I don't understand is why the author posted the name of the "spammed" product and a direct link to their blog: consider how much he helped them by bringing in new traffic and potential customers.

viridian
0 replies
3h36m

Name and shame is a worthwhile practice. Driving potential business to the AI-powered CMS known as Wisp is not a good reason to avoid contributing to the common consensus about the company.

And the common consensus in this thread, which I agree with, is that Wisp is obnoxious, insidious, and is an active participant in the degradation of quality of both email, and the internet as a whole.

ssl-3
0 replies
10h19m

With limited exceptions*, [sometimes even egregious] factuality trumps self-censorship.

I myself tend to name-and-shame regardless of how it may turn out, whether "positive" or "negative," when I feel compelled to be posting online about a thing I have encountered in my personal life. I think that openness and clearly-evident facts are very important parts of supporting the story that I wish to tell. (And if I did not wish to tell the story, then I would not have done so.)

* But a line must be drawn somewhere.

My own line is this: When I encounter a fucking nazi in real life, I make sure to not propagate whatever it is that this fucking nazi has to say, even if I have a story to write about that fucking nazi. (And we rather unfortunately have plenty of these fucking nazis here in Ohio, so I do get opportunities every now and then to exercise this self-restraint.)

Oras
2 replies
12h42m

AI or not, cold emailing is dead. I receive tons of these by email and LinkedIn, to the point that I stopped reading them.

I talked to many people, and all have developed immunity against the cold outreach.

willsmith72
1 replies
12h34m

i think the opposite. cold emails are far from dead, and small companies/startups should be using them more than their mass marketing campaigns.

it's a pure numbers game. even people who think they're immune are 1 highly-targeted, pain-point-addressing email away from replying.

knallfrosch
0 replies
10h51m

People think they will be less effective, because they reply to a lower percentage. At the same time, you will be flooded by ever more spam and ads, completely offsetting the decrease in interaction.

As noted in the article, you might in the future not even notice you're being AI-spammed. What if "timharek.no" is AI-generated?

What if Wisp CMS being so upfront about its use of AI is part of the trick? It just got exposure on HN, after all!

xela79
1 replies
10h27m

just generate an AI reply and automate the flow :)

Hey Raymond,

Thank you so much for your kind words about my post on revamping my homelab! It’s always a pleasure to hear from someone who appreciates the journey of continuous improvement. Your message truly brightened my day.

Indeed, using Deno Fresh for my blog has been an exciting adventure. The process of managing updates and deployments, while sometimes challenging, has been incredibly rewarding. It’s like tending to a garden, where each update is a new seed planted, and every deployment is a blossom of progress. The satisfaction of seeing everything come together is unparalleled.

Your introduction of Wisp has certainly piqued my interest. A CMS that simplifies content management sounds like a dream come true, especially for someone like me who is always looking for ways to streamline processes and enhance efficiency. The name “Wisp” itself evokes a sense of lightness and ease, which is exactly what one hopes for in a content management system.

I would love to learn more about Wisp and how it could potentially fit into my workflow. The idea of having a tool that can make content management more intuitive and less time-consuming is very appealing. Could you share more details about its features and how it stands out from other CMS options? I’m particularly interested in how it handles updates and deployments, as these are crucial aspects for me.

Thank you again for reaching out and for thinking of me. I’m looking forward to hearing more about Wisp and exploring the possibilities it offers. Let’s continue this conversation and see where it leads!

Best regards, Tim

tobinfricke
0 replies
10h25m

Actually this is kinda a great idea. Honeypot the bots by engaging them with other bots. Would love to deploy this on telemarketers / spam calls.

oefrha
1 replies
10h56m

I detest cold emails in general, but the occasional recruitment email from a founder/recruiter who clearly looked quite deeply into my passion projects always felt good and resulted in a nice conversation even if the opportunity didn’t pan out.

It’s sad that going forward I probably won’t be able to tell genuine interest from this kind of fake bullshit.

sph
0 replies
10h47m

My spam filter for cold outreach is simple: if it opens with "Hi <real name>" or "Hi <HN user name>", there's a good likelihood it's a human.

If they don't know my name, they don't even know where they got my email from, so probably spam, however intelligible it looks.

It's the same in the age of spam calls. If it's a mobile phone and the person behind didn't even bother to introduce themselves via SMS/WhatsApp, I don't pick up.
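
The greeting heuristic above is trivial to put in code; a rough sketch (the entries in `MY_NAMES` are placeholders for whatever you actually go by):

    import re

    MY_NAMES = ("Jane Doe", "jdoe")  # placeholder real name and handle

    def looks_human(body: str) -> bool:
        # A sender who greets me by name probably isn't blasting a scraped list.
        lines = body.strip().splitlines()
        if not lines:
            return False
        greeting = lines[0]
        return any(
            re.match(rf"(hi|hey|hello)[ ,]+{re.escape(name)}\b", greeting, re.I)
            for name in MY_NAMES
        )

It is only a first-pass filter, of course; a scraper that also grabs your name defeats it.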

nostromo
1 replies
12h20m

Email already feels pretty dead. This will just hasten the move to walled gardens like Slack, Twitter, WhatsApp, where it's harder to be a bot sending spam.

lostlogin
0 replies
12h15m

The death of ‘cc’, ‘bcc’ and ‘reply all’ will not be mourned.

lolpanda
1 replies
10h36m

ok does it mean an end to email? it's nearly free to send emails to anyone. for comparison, it's much more expensive to send linkedin messages or create ads on social networks. did anyone attempt to create a paid email service (pay to send)?

surfingdino
0 replies
10h28m

It means the "kooky" ideas of OPML/RSS two-way communication channels may have to be revisited. The problem is the humans, though. Even the most private communication channels will be breached by the one idiot who uses AI to "fix grammar". AI peddlers managed to inject themselves into the conversations we are having. It's really not good for humanity as a whole.

lobochrome
1 replies
9h33m

I get 2 of those per day now due to my LinkedIn profile.

One issue I see is that it’s much harder to employ an LLM defensively (for filtering) than offensively.

Welp.

unraveller
0 replies
7h8m

It's easy outside of gated platforms. The whole 'too hard to filter' hypothesis can be tested instantly by throwing OP's email body into an LLM:

Subject: Your Passion For Homelabbing is Contagious (Spam: 6/10)

Report: Flattery to establish a connection. Quick shift to product promotion. Friendly but lacks personalization. Specific reference promotes their solution. Calls for a response.

So even if buddy-buddy spam becomes pervasive, you really only have to decide how accepting you are of obvious sales tactics in normal comms. It may end up that everyone having more nuanced spam filters forces humans to use those same tactics less in normal comms.
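
A rough sketch of that kind of filter using the OpenAI Python SDK (the model name, prompt wording and 0-10 scale are my own assumptions, not whatever produced the report above):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def score_spam(subject: str, body: str) -> str:
        # Ask the model to grade an incoming email for cold-outreach tactics.
        prompt = (
            "Rate this email from 0 to 10 for cold-outreach/spam tactics "
            "and explain briefly.\n\n"
            f"Subject: {subject}\n\n{body}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(score_spam("Your Passion For Homelabbing is Contagious", "<email body>"))

The interesting policy question is where you set the cutoff, i.e. how much obvious salesmanship you are willing to let through.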

jumploops
1 replies
12h48m

I received a similar email today, from someone looking to be my "Chief of Staff/Head of Ops."

The only problem is that they referenced a role at a company I'm no longer at. The, presumably AI, author crafted the email in reference to my former role at a different startup.

After seeing this thread, I decided to follow up on my AI suspicions. Nothing conclusive, but that person is currently touting that they've sold their "course" to "1000+ founders."

No thanks.

Oras
0 replies
12h40m

They use a tool with outdated data, and no one checks and validates when they do it at scale.

doesnt_know
1 replies
12h3m

I think the end result for email is going to be the same as with mobile numbers. Just block everything by default unless they are in your contacts.

Enormous amounts of email will be generated but no one will ever see it.

account42
0 replies
8h9m

This isn't the case with mobile numbers everywhere in the world. Spam calls here are pretty much nonexistent. It doesn't have to be this way with E-Mail either - we just need to prosecute spammers (including marketers) and make new laws where needed. Then you can accept almost all mail from cooperating countries and only need to block mail from countries that do not care about preventing spam.

bilsbie
1 replies
6h43m

We’ve lived with billions of spam emails for decades. I don’t know if the method of writing the emails matters much.

Spam is spam?

unraveller
0 replies
6h18m

Spam can now be hyper-personalized to your latest online data points such that the inattentive might not expect certain things to be fakeable the first time they see it.

Some people struggle with learning new ways of controlling for scams but it's never going away, just something they must consider more and use better tools to solve.

zensnail
0 replies
10h8m

one man's spam, is another man's career.

yobbo
0 replies
11h55m

From a link in the article:

It felt like a family fridge decorated with printed stock art of children’s drawings.

Yep. "Generative AI" is like an infinite clip-art gallery that can be searched with very specific queries.

The coin has two sides: in some situations it devalues human effort - writing (long/detailed) documents in formal language is now attainable by everyone. In situations where sincerity and originality matter, human effort has now increased in value.

xeeeeeeeeeeenu
0 replies
10h3m

Sadly, AI allows dumb people to do dumb things more efficiently.

This reminds me of AI-generated fake security vulnerability reports about curl: https://news.ycombinator.com/item?id=38845878

xarope
0 replies
11h25m

2004: Bill Gates will get rid of SPAM

...

2024: AI impersonating Bill Gates sends you SPAM

willyt
0 replies
8h46m

I already delete unread emails like this that are written by humans. Unless there’s a specific bit of text in the email that’s generated by the new enquiry button on my website or someone has left an answerphone message then it’s deleted unread. There’s no way I have time to read every marketing email I receive like this guy.

willsmith72
0 replies
12h38m

if you don't want to support this behaviour, at the very least I would put a nofollow on that blog link, or consider removing it altogether

varjag
0 replies
10h30m

Oh spam, the only industry AI has truly revolutionized so far.

unraveller
0 replies
8h9m

Can anyone explain why the SUBJECT LINE of the email was REDACTED in this blog post intro other than to give a false sense of being already drawn in to the email contents?

I'm not after shallow interactions today, and I would use it (much like a dumb spam filter) to judge a new sender's respect for my time, expecting them to have stated their business with total upfront clarity, not mystery.

transitivebs
0 replies
11h1m

I received a very similar automated email from the same dev. Marked it as spam right away:

---

Hey Travis,

Checked out the Next.js Notion Starter Kit. Amazing project!

Noticed you might be juggling multiple tools to manage content. Ever thought about a headless CMS that can streamline this?

Wisp might be a handy solution. Let me know what you think!

Cheers, Raymond

throwaway0665
0 replies
10h10m

There are laws to mandate unsubscribe links on emails. There should be laws to mandate disclaimers when emails were sent through an automated process.

No one believes the CEO has taken the time to email you with onboarding instructions immediately after signing up anymore. But outreach tactics like this are still quite manipulative.

tamimio
0 replies
4h58m

I do receive phishing emails; some of them are so well-crafted that I'm sure they've fooled some people out there. To the point where I've created a folder called "nice_try_phishing" where I collect them for further investigation. For example, one email was sent before I renewed a domain as a reminder to renew, with legitimate links to the domain registrar except for the action link. They had the registrar's domain name too, but with a different, very similar TLD. Another one is a "failed email delivery," and they did the research about which service I'm using to mimic such an automated message, with loaded links.

Whether they are AI or not, I have no idea, but sometimes, and recently in emails, I purposely make a typo or grammar mistake to add some "human" touch to it, knowing that an AI will always type a perfect one.

tambourine_man
0 replies
10h31m

dove* a bit deeper

Dug?

surfingdino
0 replies
10h31m

It's all fun and games until HR use AI to write your annual performance review in which it is suggested that you got fired for sexual misconduct (this was hallucinated from another guy's HR file), won a sales bonus for selling AI to your company (it was the other way round and it's the sales guy who got it), and are due to enter retirement (you are 29, but most of the company is over 50, so the probabilistic model prefers that passage of text).

stavros
0 replies
10h40m

I made a service to reply to marketing emails using GPT:

https://github.com/skorokithakis/spamgpt

It was a bit of fun, until I realized that most of the replies from the spammers were AI as well. We were just automatically spamming each other while OpenAI made money.

I stopped using it then.

sixhobbits
0 replies
6h37m

the blog he links to is clearly AI slop too. Even the LLM he used to write it agrees that what he's doing is unethical.

At the same time, we need to establish guidelines around transparency and consent for AI-driven communications at scale. Deception through omission is still deception – people should be aware when they're interacting with an AI agent versus a human.

This is clearly pissing in the pool. I've gotten so much value from people who have made their emails public with a 'if you're curious or learning feel free to email me' (e.g. patio11) and I've long had the invitation in my HN profile too.

Nasty for people to abuse this to extract value for the few weeks/months it takes people to realise what's happening and make themselves harder to contact.

siscia
0 replies
10h31m

I have been building [GabrielAI](https://getgabrielai.com), also to address the "too much spam in Gmail" use case.

Specifically, smart filters to remove spam in a smarter way.

Most people get a lot of spam from sales agents, SEO services, start-up accelerators, etc...

With GabrielAI you can say stuff like:

"If the email is from an SEO agency or it is trying to sell me SEO services"

Then move it to SPAM.

Similarly for all other types of spam or emails.

You can also move stuff to different labels in Gmail to organise your inbox.

sirsinsalot
0 replies
7h43m

Always amazed and disappointed at humanity's ability to pollute everything it touches.

shzhdbi09gv8ioi
0 replies
6h44m

Not quite AI, but I've been getting targeted spam from shitty startups and some job offers for the last couple of years to my GitHub commit email. It is scraped from GitHub as I use it nowhere else.

shannifin
0 replies
8h56m

Even without AI, the message feels spammy.

"Hey, love your work. random flattery What do you think about mine?"

I've received a few messages like that before LLMs were around, just an annoying self-marketing technique.

rgavuliak
0 replies
9h47m

We already know the sales reps that bombard us with emails don't give a **, now they're just better at pretending.

praptak
0 replies
9h16m

This is bad news, because personalisation was a big advantage of spam filters.

Everyone's spam filter is tuned differently from others', so spammers had a hard time beating this with automated messages. About the best they could do was adding random keywords in hopes of triggering someone's positive "not spam" trigger.

Now spammers gain personalisation at scale, so this advantage is at risk.

poulpy123
0 replies
7h20m

That's exactly why I'm not afraid of AGI. We will be drowned by AI generated crap long before

placebo
0 replies
11h59m

My definition of wisdom is the ability to responsibly use intelligence, and while as a species we are blessed with an amazing amount of intelligence, our wisdom has not advanced accordingly. The phrase that with great power comes great responsibility is not something that is taken very seriously where it counts, not even (or especially?) in high-level global politics. With all our technology, it seems that our actions are mainly determined by the same limited animal psychology that determined how cavemen behaved. It's just that now the stakes are much higher, and junk mail from AI is the least of those problems.

The "upside" is that nature eventually takes care of things when they go out of equilibrium, so there might be a forest fire on the horizon to restore it. In the case of AI spam, it might cause people to automatically filter their incoming mail from any content that even implicitly tries to sell something, or even any email arriving from an address that is not on their whitelist. This might eventually cause people to need to actually physically meet (gasp!) in order to add each other to their whitelist.

paxys
0 replies
6h34m

"Personalized" spam generated from templates has been a thing forever. I've received plenty of such emails from recruiters highlighting my past experience, projects and what I'd be a good fit for. LLMs make them a bit more real, but overall the game hasn't really changed.

nottorp
0 replies
11h3m

But that email is spam, no matter if automatically or manually generated.

How it was written is not relevant. Off to the trash it goes.

noobermin
0 replies
12h10m

The thing that AI cannot replace is having humans in the loop because other humans need those humans' touch. The only way to perhaps do that is for AIs to become people themselves, after which they are useless to capitalists because they cannot be exploited...or perhaps in the long term will not be as they will eventually gain rights.

neom
0 replies
7h10m

You know, I've done startups for over 20 years now. Operated almost all the orgs, but spent the most time in go to market/marketing. I'm building an incubator/accelerator thing in Canada and I'm starting it from scratch, so it's basically doing another startup (something I swore I'd never do, fml).

Hadn't touched marketing for ~5 years; as I said, I know the org well, so I thought it would take me about a month to get the next 6 months of marketing built and automated. How wrong I was. 7 days later, the full marketing org is running, at a decent scale, on autopilot, for a year, and I don't know if/when I'd need to hire someone into marketing.

Marketing has not fundamentally changed, but it's changed such that one individual could fully operate the fundamentals. Personally I love it, I'm sure others are going nuts.

nathias
0 replies
9h23m

AI will also allow for better spam filters

muzani
0 replies
10h46m

This will suck for a long time, just like spam, clickbait, social media upvoting algorithms, cigarettes, soda. But eventually we'll sense it and build antibodies to it like everything else.

Even now, we're starting to have a sense for which images and text were AI generated. And they'll evolve to get around the antibodies. And we'll build new ones.

murderfs
0 replies
10h50m

The spiteful part of me wants to spin something up to punish this sort of behavior symmetrically by automating cold emails in the other direction to waste his time.

mmaunder
0 replies
9h35m

I find dropping “I” at the start of a sentence to be a far greater trigger. No one is that busy. AI or not.

mirzap
0 replies
7h33m

There must be a new communication method for the upcoming AI age. Actual, person-to-person direct communication.

Just as most of us ignore calls from unknown numbers, we may also default to ignoring emails from unknown senders in the future. This could lead to a reluctance to send emails, as they might be perceived as "unknown" to the recipient.

mihaaly
0 replies
8h29m

I hate to say it, but this AI-written outreach is strikingly similar to the recruiter emails I received in the past 5-6 years about perfect matches, so apparently my conclusion was right that robots are working in the HR field. Carbon not silicon based back then. Actually this AI sounds much more intelligent, bringing up realistic similarities. Those perfect-match robots did not get beyond picking a single keyword out of the dozens in my LinkedIn account to declare a perfect match, while the scope of the job and its requirements were off by miles.

Watch out recruiters, AI can do better than you! Not that I will like these unsolicited outreaches any more: the exact number of times I found them useful or relevant before, when biorobots wrote, sent and administered them in just a few minutes, is zero, and I do not look forward to having them now at mass scale, when hundreds of AIs could write thousands, flooding my email account and making it absolutely unusable.

meiraleal
0 replies
6h38m

If he were really annoyed he wouldn't have marketed Wisp CMS?

maremmano
0 replies
11h4m

Is email doomed?

lytefm
0 replies
8h33m

Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing? I have removed my email from my GitHub-profile now, but they can probably get it from my Git-log anyway...

It's possible to use a noreply.github.com address linked to your username for making commits. And you can change the authorship of past commits in your own repos with write access.

I try to avoid giving my email in a public and processable format whenever possible.

kstenerud
0 replies
9h28m

The sad thing is, that AI email campaign - while touted as a success - was actually a failure.

Although he got more click-throughs to the top of his funnel, none of them are going to pass through to a conversion because once you reach his site, you realize that he's deceived you.

That he doesn't even realize this is concerning...

justanother
0 replies
6h13m

I get one of these every week or two. If someone says they're "impressed with the work you're doing" at my family S-corporation that magically W2-ifies my contract gigs, it's kind of a giveaway.

jpalomaki
0 replies
7h59m

I no longer bother to answer cold emails or LinkedIn messages at all. Despite the personal tone, they seem to be mostly driven by marketing automation tools.

Maybe in future I will have my ”AI secretary” to answer those and have a discussion with the ”AI sales assistant”.

jordanpg
0 replies
5h52m

Perhaps a new signature technology can be used to prove (or at least lend credence to) human authorship?

Something like a marriage of a digital signature with a captcha: the message has a digital signature of the sender that can be verified with their public key, but it is somehow verifiable that the particular signature provider only does the signature if a human being completes the (difficult AI-proof) captcha.

Something like this approach can at least mitigate the mass AI email problem, although the one-off AI emails are unlikely to be slowed by this approach.
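
Leaving the captcha gating and the zero-knowledge part aside, the receiving side of such a scheme could be an ordinary signature check. A minimal sketch, assuming the hypothetical signing provider publishes an Ed25519 public key (Python with the `cryptography` package):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def sent_by_verified_human(message: bytes, signature: bytes,
                               provider_public_key: bytes) -> bool:
        # Trust the attestation only if it verifies against the provider's key;
        # the provider is assumed to sign only after a human passed its captcha.
        try:
            Ed25519PublicKey.from_public_bytes(provider_public_key).verify(
                signature, message)
            return True
        except InvalidSignature:
            return False

The hard parts are everything around this check: keeping the provider honest, rate-limiting how many signatures one human can request per day, and doing it all without tying every email to a real-world identity.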

jonpo
0 replies
9h4m

"Deception through omission is still deception – people should be aware when they're interacting with an AI agent versus a human."

illwrks
0 replies
8h0m

Spare a thought for the gullible, children and teens, the elderly, those with restrained understanding, and those with English as a second language.

They will all lose money, time and more with the coming wave of spam and fraud.

ikari_pl
0 replies
12h8m

I've had a similar experience, but 4 years ago. GPT existed, but without the Chat prefix, and OpenAI was invite only.

They reached out to me, asking whether my company would be interested in Something Somethingification. I decided that since I don't even understand the term, I'm not the right person, and decided to ignore it.

Then they followed up. Meh.

Then they followed up again, and I thought "okay, a little reward for perseverance", and replied something along the lines of (I don't work there anymore, no access to the original):

"Hey, thank you for reaching out.

Unfortunately, since I don't even know what Something Somethingification is, I am not the right person to talk to. So I'll kindly pass and consider this email human-generated spam. Thanks!"

A response came. Within a minute, barely seconds after "undo send" disappeared.

"Who would be the best person to reach out to, then?

By the way, this is a GPT assisted conversation, so it's a computer generated spam."

WHAAAAT. This really got me. Remember, it was 2021.

"Okay", I replied, "Now you got my interest!

How many such conversations are you able to have at the same time?"

It replied, within a minute. It contained a quote from Arthur C. Clarke that "every technology advanced enough is indistinguishable from magic" and his picture. And an answer: "Actually, sourcing contacts is the bottleneck, so we have only a few of these each day. Anyway, do you happen to know who we could reach out to instead?".

I was amazed, I decided I'll reward this with what they want.

I replied again about how impressive it was, as the whole conversation made sense, and gave them the contact of a director who could be the right person. They won this one.

freehorse
0 replies
9h0m

I wonder if some "prompt injection defense" embedded in public blog posts could help identify such AI-generated spam.

forkerenok
0 replies
11h9m

From the linked article from this blogpost:

There's also the question of ethical considerations around using AI for mass personalized outreach. While my experiment yielded positive results, with recipients appreciating the personalized touch, there's a potential slippery slope.

Unbelievable... I'm not a philosopher, but in my understanding, being ethical doesn't mean walking the line just finely enough that people don't call you out on your bullshit.

The ethics of an action is a consideration both BEFORE and after executing it, and on the merit of the action itself!

dav43
0 replies
8h29m

This is why I'd consider shorting NVIDIA at these prices. I get that there are use cases where it really adds value, but I think they are more limited to specific fields than people are acknowledging and forecasting.

The general public doesn’t want or need it. They want to work less and get paid more.

daft_pink
0 replies
3h6m

I regularly get emails that start with “I hope this note find you well”, and I always assume they used chatgpt.

curtisblaine
0 replies
11h41m

It might be only me, but never in my life have I followed up on a growth-hack email, be it manually crafted or AI-generated. If you want to sell me something and I didn't ask you first, I instantly become blind to the message and automatically send it to spam without it even registering, similar to web popups. I'm constantly astonished that growth-hack marketing has any conversion rate; evidently there's a chunk of the population that's way more trusting than me.

crvdgc
0 replies
12h4m

> I used AI agents to send out nearly 1,000 personalized emails to developers with public blogs on GitHub.

Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing?

Abusing public information on GitHub has become more common. The other day, I received some cryptocurrency spam ads from GitHub. It turned out to be a bot injecting ads as issues on other people's repos and randomly @ing accounts. The bot deleted such issues immediately, so the net effect is that I got an unfilterable spam email.

codetrotter
0 replies
12h32m

I have removed my email from my GitHub-profile now, but they can probably get it from my Git-log anyway...

And also from the About page on the linked website

chucke1992
0 replies
8h47m

The future is now, old man.

asimovfan
0 replies
10h32m

Perhaps this will finally usher in the era of actually decoupling what is said from who said it (post post colonialism?)

aerotwelve
0 replies
4h46m

Have you ever received an email that felt so personalized, so tailored to your interests and experiences, that you couldn't help but be intrigued?

Did he use an LLM to write the blog post too?

account42
0 replies
8h49m

You received a SPAM email, did you report it as such? The AI part barely matters.

SergeAx
0 replies
5h32m

Does this mean that I should private my GitHub-mirror to my personal blog, because this can become a common thing?

You definitely should mark this email as spam so this cannot become a common thing.

MikeGale
0 replies
12h15m

I've found myself trying to avoid email, because of the enshittification that I've not been able to avoid.

This will make it worse.

Solutions? At least some could involve key exchange. How about a bounty of some sort on spammers?

Jiahang
0 replies
12h9m

I don't use email anymore, or just use iCloud+ Hide My Email.

DonHopkins
0 replies
9h28m

https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...

"I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately."

Then don't brag about it on your blog! Sheez.

(Ok, so technically he's not bragging about it on his blog, because it's probably just an LLM bragging about it on his blog for him, but that's the point!)