The author links to the somewhat dystopian blog where the email sender is quite proud of their work. Their words (or perhaps those of an LLM):
Could an AI agent craft compelling emails that would capture people's attention and drive engagement, all while maintaining a level of personalization that feels human? I decided to find out.
The real hurdle was ensuring the emails seemed genuinely personalized and not spammy. I knew that if recipients detected even a whiff of a generic, mass-produced message, they'd tune out immediately.
Incredibly, not a single recipient seemed to detect that the emails were AI-generated.
https://www.wisp.blog/blog/how-i-use-ai-agents-to-send-1000-...
The technical part surprised me: they string together multiple LLMs which do all the work. It's a shame the author's passions are directed towards AI slop-email spam, all for capturing attention and driving engagement.
How much of our societal progress and collective thought and innovation has gone to capturing attention and driving up engagement, I wonder.
I remember seeing a talk from Jonathan Blow where he made a comparison: in the 1960s top engineers worked for NASA and put a man on the moon in a decade, basically doing computations by hand. Today, we have super advanced computers and tech companies enjoy 100× more of the top engineers than NASA ever had, and they are all working toward making you click on ads more.
Which do you think is more important? Putting a man on the moon or ecommerce? I reckon you've been able to get on a device, see a biscuit ad, order one from foo.com, and have it shipped to you. Think of how much tech it takes for that to happen; that is more tech than NASA built to send men to the moon: the internet, packet switching, routing, fiber optics, distributed systems, web servers, web browsers, ads, cryptography, online banking, and so on and so forth. We love to trivialize what is common, but clicking on an ad is not an easy problem. Clicking on ads has generated enormous wealth in the world, which is now bootstrapping AGI.
Clicking on ads helped get us to the AI of today. Showing you the right ad and beating those trying to game it is machine-learning heavy. When was the first time we started seeing spelling correction and next-word suggestions? It was in the Google search bar. To serve the correct ads and deal with spam? Heavy NLP algorithms. If you stop and think about it, we can draw a through line from the current state of LLMs to these ad clicks you are talking about.
That last line kind of makes the point. Is any of that actually inspiring to a young child?
It sure as heck is inspiring to a critical-thinking adult. There's been enormous value added for all the world's citizens.
Interesting. In my experience, advertisement and the incentives around it have led to the most devastatingly widespread removal of value in human culture and social connections that we've seen in this generation. Huge amounts of effort wasted on harvesting attention, manipulating money away from people, isolating and fostering extremism, building a massive political divide. And centralizing wealth more and more. The amount of human effort wasted on advertisement is staggering and shocking.
I don't think your average adult is inspired by the idea of AI generated advertisements. Probably a small bubble of people including timeshare salesmen. If advertisements were opt-in, I expect a single digit percentage of people would ever elect to see them. I don't understand how anybody can consider something like that a net good for the world.
How does non-consensually harassing people into spending money on things they don't need add value for all the world's citizens?
I mean, you'd see the same thing if paying for your groceries were opt-in. Is that also a net bad for the world? Ads do enable the costless (or cost-reduced) provision of services that people would otherwise have to pay for.
Is that seriously the comparison you want to make here? Most of us think the world would be better if you didn't have to pay for food, yes.
Ads are not charity. There is clearly a cost, otherwise they would lose money. They do not generate money out of thin air. "Generate" and "extract" aren't synonyms.
They do not enable any costless anything at all. They obfuscate extraction of money to make it look costless, but actually end up extracting significant amounts of money from people. Ad folks whitewash it to make it sound good, but extracting money in roundabout ways is not creating value.
"Adding value" and "generating wealth" are always the vague euphemisms that these guys fall back on when they try to justify much of today's economic activity. Adding value for whom? Generating whose wealth? The answer is usually "people who are already wealthy." Of course, they'll downplay the massive funneling of wealth to these people, and instead point to the X number of people "lifted out of poverty in the 20th century" as if capitalism and commerce were the sole lifting force.
I wish some of these people would think about how they'd explain to their 5-year-old in an inspiring way what they do for a living. And not just "I take JSON data from one layer in the API and convert it to protobufs in another layer of the API" but the economic output of their jobs: "Millions of wealthy companies give us money because we can divert 1 billion people's attention from their families and loved ones for about 500 milliseconds, 500 times a day. We take that money and give some of it to other wealthy companies and pocket the rest."
I think this is a rationalization of an enormous waste of work. The wealth-generating effects are indirect. In that regard you could argue that betting generates wealth too. Advertising is like a hamster wheel people have to jump onto if they want their place in the market.
A similar amount of wealth would be generated if every advertised product were represented by a plain text description, but instead we have a race to the bottom.
There is advertising and advertising, of course, but most advertising is incredibly toxic, and I would argue that by capturing attention it is a huge economic drain as well.
Of course an AI would also be quite apt at removing unwanted ads, which I believe will become a reality quite soon.
I fear statements like this go too far. I can't agree with the first part of this sentence.
I feel this about both marketing and finance:
They are valuable fields. There are huge amounts of activity in these fields that offer value to everyone. Removing friction on commerce and the activities that parties take in self-interest to produce a market or financial system are essential to the verdant world we live in.
And yet, they're arms races that can go seemingly-infinitely far. Beyond any generation of societal value. Beyond needless consumption of intellect and resources. All the way to actual negative impacts that clutter the financial world or the ability to communicate effectively in the market.
This is quite a statement to make.
Please elaborate on what enormous value spam ads and marketing emails have added for _world_ citizens?
Unless of course by “world” you mean Silicon Valley venture capitalists..
I think the answer is pretty clear in the fact that so many of them, bluntly speaking, just don’t give a shit any more. I absolutely don’t blame them.
I keep hearing the phrase "generate wealth" in regards to advertisement and from the mouths of startup founders, but in almost no other context. I'm not familiar with the economic concept of "wealth generation" or its cousin "creating value".
Is the idea that any and all movement of money is virtuous? That all economic activity is good, and therefore anything that leads to more economic activity is also good? Or is it what it sounds like, and it just means "making some specific people very wealthy"? Wouldn't the more accurate wording be that it "concentrates wealth"? I don't see a huge difference in the economic output of advertisement from most other scams. A Ponzi scheme also uses psychological tricks to move money from a large number of people to a small number of people. Something getting people to spend money isn't inherently a good thing.
Maybe this was your point, but this is built into one of the definitions of GDP, isn't it? Money supply times velocity of money?
I’m no economist though I’m sure there are folks on here who are. But this seems like an unfortunate fact that’s built into our system- that as laypeople we tend to assume that ‘economic growth’ means an increase in the material aspects of our life. Which in itself is a debatable goal, but our GDP perspective means even this is questionable.
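The identity the comment above is likely gesturing at is the equation of exchange, M·V = P·Y, where the right-hand side (price level times real output) is nominal GDP. A toy calculation, with entirely made-up round numbers, just to show the arithmetic:

```python
# Equation of exchange: M * V = P * Y, where P * Y is nominal GDP.
# Both figures below are made-up, purely illustrative values.
money_supply = 2.0e13  # M: dollars in circulation
velocity = 1.3         # V: average turnovers per dollar per year

nominal_gdp = money_supply * velocity  # equals P * Y by the identity
print(f"{nominal_gdp:.1e}")  # prints 2.6e+13
```

Note this is an accounting identity, not a causal claim: more money changing hands shows up as more GDP regardless of whether anyone is materially better off, which is exactly the point the following scenarios illustrate.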
For example, take a family of five living out in a relatively rural area. In scenario one, both parents work good-paying remote tech jobs, and meals, childcare, maintenance of land and housing, etc. are all outsourced. This scenario contributes a lot according to our economic definitions of GDP. And it provides many opportunities for government to tax and companies to earn a share of these money flows.
Then take scenario 2, you take the same family but they’re living off of the grid as much as possible, raising or growing nearly all their own food, parents are providing whatever education there is, etc. In this scenario, the measurable economic activity is close to zero- even if the material situation could be quite similar. Not to mention quality of life might be rated far higher by many.
What rating an economy by the flow of its money does do, and I'm not sure if this is at all intentional, is paint a picture of what money flows are potentially capturable, either by government taxation or by companies trying to grab some percentage as revenue. It's a lot harder to get a share of money that isn't there and/or isn't moving around.
Perhaps my take on economics is off base but, for me, seeing this made me realize just how far off our system is from what it could and should be.
GDP is a measure. I'm very much not an economist, but I am extremely skeptical that the health of an economy can be reduced to any single number. Goodhart's law and all.
I concede that GDP is a good indicator, but I think you can have things that help GDP while simultaneously hurting the economy. Otherwise any scam or con would be considered beneficial, and it would make sense to mandate minimum individual spending to ensure economic activity. A low GDP inherently shows poor economic health, but a high GDP does not guarantee good health.
In my mind (noting, again, that I'm no economist), economic health is defined by the effectiveness of allocating resources to things that are beneficial to the members of that economy. Any amount of GDP can be "waste", resources flowing to places where they do not benefit the public. As Robert Kennedy famously pointed out, GDP includes money spent on addictive and harmful drugs, polluting industries, and many other ventures that are actively harmful.[0]
[0]: https://youtube.com/watch?v=3FAmr1la6w0
Going back to the previous poster's monetary-velocity statement: if you have a trillion-dollar GDP, but it's just two AIs bouncing money back and forth at high speed while all the humans starve in the street, your economy is "great" and totally awful at the same time. The one number has to be referenced against others, like wealth inequality.
"Generate wealth" means "make somebody's number go up" i.e. allocating real resources/capital somewhere, with the assumption that 1. allocating that capital creates a net boon for society and 2. those who have "generated wealth" are wise and competent investors/leaders and their investments will create a net boon elsewhere. The first point is probably not especially true very often in contemporary tech (other than 'job creation') and is arguably not true for advertisement. The second point is not really a given at all and seems to be pretty consistently shown otherwise.
In the grand scheme, what you’re talking about is very zero-sum, while stuff like making rockets is not. Uber vs Waymo is a good example of how adtech can only go so far in actually creating wealth.
"Putting man on the moon or ecommerce"
The comparison here is between the moon landing and advertisement. So I choose the moon, obviously.
Ecommerce can work just the same without LLM-augmented personalized ads, or without advertisement at all. If a law banned all commercial advertisement, people would still need to buy things. But who would miss the ads?
They are clearly talking about one aspect of the industry which is the marketing part related to maximising engagement. It is not meant to be conflated with the e-commerce industry as a whole.
It took way too long to convince myself this wasn't satire. I still wish it wasn't.
It made me realize that I think many computing people need more of a fundamental education in "hard" physics (statics, mechanics, thermodynamics, materials science) in order to better understand the staggering paradigm shift that occurred in our understanding of the world in the early 20th century. Maybe then they would appreciate how much of the world's resources have now been directed by the major capital players towards sucking the collective attention span of humanity into a small rectangular screen, and the potential impact of doing so.
Marketing manipulation and spam is less important.
Just wait. Enough of us will get pissed off that we will develop AI agents that sit between us and the internet.
A sufficiently advanced personal assistant AI would use multimodal capabilities to classify spam in all of its forms:
- Marketing emails
- YouTube sponsorship clips
- Banner ads
- Google search ads
- Actual human salespeople
- ...
It would identify and remove all instances of this from our daily lives.
Furthermore, we could probably use it to remove most of the worst parts of the internet too:
- Clickbait
- Trolling
- Rage content
I'm actually really looking forward to this. As long as we can get this agent into all of the panes of glass (Google will fight to prevent this), we will win. We just need it to sit between us and everything else.
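A minimal sketch of what such a between-us-and-the-internet filter could look like. Everything here is hypothetical: a real agent would hand each item (email body, video transcript, rendered page region) to a multimodal model, while the keyword heuristic below is only a stand-in for that model call.

```python
from dataclasses import dataclass

@dataclass
class Item:
    source: str  # e.g. "email", "youtube", "banner", "search"
    text: str    # extracted text; a real agent would also pass audio/frames

# Stand-in for a multimodal model call: a crude marker list.
# A real classifier would be learned, not hard-coded.
SPAM_MARKERS = ("limited time offer", "buy now", "sponsored by", "act fast")

def looks_like_spam(item: Item) -> bool:
    body = item.text.lower()
    return any(marker in body for marker in SPAM_MARKERS)

def filter_feed(items: list[Item]) -> list[Item]:
    """Return only the items the agent would let through to the user."""
    return [item for item in items if not looks_like_spam(item)]
```

The hard part isn't this loop; it's getting such an agent positioned between the user and every pane of glass, which is exactly where platform owners will resist.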
Until _that_ company gets overrun by profit-driven MBAs, and then they start injecting ads into the results.
It will come in the vein of "we are personalizing the output and improving responses by linking you with vendors that will solve your problems".
Found companies with people that share your values. Hire people that share your values. Reject the vampires. Build things for people.
Unfortunately it turns out that at the end of the day one of the most common values is the love of massive piles of money. Vampires don't catch on fire in sunlight like storybook villains, they will invite themselves in, sidle up beside you, and be your best friend. Then in the moment you are weak they will plunge their fangs in.
Competing with bad actors is very, very hard. They will be fat with investor money, they will give their services away, and commonly they are not afraid to do things like DDOS to raise your costs of operations.
There will be a uBlock Origin for that.
I get what you are saying, but what is the end result when someone is so shielded from the outside world, blocking everything that irks them, that they end up stuck in an echo chamber?
What if the user is a conservative voter and considers any counterpoint to their worldview the worst part of the internet, and removes all instances of it from their daily life? Not to say that isn't already happening, but there they are consciously making the choice, not some AI bot. I can see something like this making the country even more polarized.
Same as it ever was.
Growing up as a southern evangelical before the internet, I can promise you that there has never been a modern world without filter bubbles.
The concept of "fake news" is not new, either. There has been general distrust of opposing ideas and institutions for as long as I've been alive.
And there's an entire publishing and media ecosystem for every single ideology you can imagine: 700 Club, Abeka, etc. Again, this all predates the internet. It's not going anywhere.
The danger isn't strictly censorship or filter bubbles. It's not having a choice or control over your own destiny. These decisions need to be first class and conscious.
Also, a surefire way to rile up the "other team" is to say you're going to soften, limit, or block their worldview. The brain has so many defenses against this. It's not the way to change minds.
If you want to win people over, you have to do the hard, almost individual work, of respecting them and sharing how you feel. That's a hard, uphill battle because you're attempting to create a new slope in a steep gradient to get them to see your perspective. Angering, making fun, or disrespecting is just flying headfirst into that mountain. It might make you feel good, but it undoes any progress anyone else has made.
This was present in the book Fall; or, Dodge in Hell (published in 2019; takes place in the near future). Everyone had a personal AI assistant, as you describe, to curate the internet. A big part of the motivation was to filter the spam. A secondary effect was that the internet was even further divided into echo chambers.
Someone decided that marketing is now a tech problem. Artists have been replaced by software engineers. The net result is creepy AI emails.
I fell for old-school marketing yesterday. I'm moving into a new apartment in a couple of months. The local ISP who runs fiber in my new building cold-called me. I agreed over the phone to set up the service. That was proper targeted marketing. The person who called knew the situation and identified me as a very likely customer with a need for service (the building has a relationship with the ISP). I would never have responded to an email or any whiff of an AI chatbot. They only made the sale because of expensive human effort.
It's the tech that put you in a queue to be called.
There was no tech here. My new landlord contacted the local ISP, the one they like to work with, to say they had a new tenant arriving soon. I'd bet that my connection will have been set up long before I arrive, at a time convenient to the landlord and the local provider. A landlord recommending a favored local vendor to a tenant, or a tenant to a vendor, is the sort of human relationship that predates electricity.
Sales people were never artists. Cold calling is not art.
And yet someone is building all those super advanced computers and AI models. Someone is launching reusable rockets into space. Someone is building mRNA vaccines and F1 cars and humanoid robots and more efficient solar panels.
The "smart people are all working in advertising" trope is idiotic. Just an excuse for people to justify their own laziness. There is an infinite number of opportunities out there to make the world better. If you are ignoring them, that's on you.
Which is true. But clearly far fewer people work doing that than in advertising or some other seemingly meaningless grunt work. And I'm including the technological plumbing work which many on this site, myself included, have depended upon to support themselves and/or a family.
Which at best is effectively doing minor lubrication of a large and hard to comprehend system that doesn’t seem to have put society as a whole in a particularly great place.
It’s as if real issues like climate change aren’t a thing that needs solving…
Financial incentives, huh?
I keep seeing these posts on HN and thinking, man, these are some smart people. Training LLMs, doing all this amazing AI stuff like this guy with the email agents and the other guy with the dropping of hats, and then I open the posts and it's just some guy making API requests to OpenAI or some similar disappointment.
Nowadays, an "AI Expert" is someone who knows how to download an AI client lib and prompt the AI to perform tasks. These are people who are not even technical and have no idea how all this works, but they can at least follow a YouTube tutorial to get a basic website working.
As someone who actually has a university degree in Artificial Intelligence, I feel like this is always how it's been. Before, an "AI Expert" was someone who knew how to use Tensorflow, PyTorch or Keras. Before that, an "AI Expert" was someone who knew how to write a Monte Carlo simulation, etc etc.
You could of course say the same for frontend or backend engineers. How many frontend engineers are simply importing Tailwind, React, etc.? How many backend engineers are simply importing Apache packages?
Where do you draw the line? Can you only be an AI expert if you stick to non-LLM solutions? Or are AI experts the people who have access to hundreds of millions of USD to train their own LLMs? Who are the real AI experts?
I would liken it to cars. There is a difference between engineers, mechanics, and mechanics that know a certain car so well that they fabricate parts that improve upon the original design.
Good comparison. Engineers who build cars and understand their intricacies oftentimes just work on one small thing at a time, even in teams. Like a team just working on brakes. The mechanics can piece the stuff together and keep it working in a real-world setting. But nowadays a self-declared "AI Expert" in that metaphor might be just some person who knows how to drive a car.
I used to work on breaks, but then I realized I was more productive when I actually stopped and walked around a bit.
I draw the line at people claiming to be experts in something they have only done for a year.
Also in most cases they were a "crypto expert" just two months ago.
And they were a leadgen/SEO expert a few years ago. These technogrifters just move from one hot topic to the next trying to make whatever buck they can smooth talk people into giving them.
Business as usual, in other words. Used to be scrum masters, then JavaScript "experts", then crypto bros.
Snake oil salesmen we called em back in my day ;-)
Someone who can get a website working is actually technical.
It’s more about being at the front of the hype train and being endlessly positive, versus competence.
I can't see this working long term though. Being endlessly positive and ignoring your actual competence sounds like a recipe to eventually bite off more than you can chew.
Oftentimes this fervor is channeled into personal brand building, which rarely has any sort of feedback mechanism tied to actual competence.
It's a calculated move on their part.
Brand building actually sounds good and productive to me, as long as it doesn’t approach fraud.
If your audience likes your brand and doesn’t distinguish between your services and services done by more competent providers, then you’ve found your niche. So: snake oil is not fine; but Supreme branded brick sounds ok to me, even if I wouldn’t buy it myself.
I guess the author will find followers who enjoy that approach to software and product growth. If spamming wasn’t part of it, I’d be ok.
When “altcoins” took off I spent a while racking my brain trying to figure out what special tech I could offer, how I could build my own blockchain, incentivize miners…
When I realized it was just dudes copy-pasting a “smart contract” and then doing super shady marketing, it was already illegal in my jurisdiction.
Things I wish would become taboo: admitting to using AI content.
Everyone is so comfortable doing shit like this.
I much prefer admission to hiding it. It lets you easily see who doesn’t deserve your time
While that might work great on the individual level for a little while, it's unfortunately not how normalized taboos seem to work long-term. You're just going to see more and more people who don't deserve your time until you're wanting for anyone who actually does.
This has been mentioned before, but I can see the benefit in having curated webrings and similar listings. Where people can verify the content is not LLM generated.
As soon as that becomes effective, you'll have dozens of SEO sites and experts giving seminars on "How to get your LLM-generated website into curated webrings." An entire cottage industry will spring up for the purposes of corrupting legitimate webrings and/or creating fake astroturf webrings that claim to be curated.
Oh, what about the petty fights between different webrings accusing each other of using AI generated content....
Reminds me of the early days of the web.
I can see it, perhaps positively, investing far less importance and effort into online things. With admittedly a lot of optimism, I could see it leading to a resurgent arts and crafts movement, or a renewed importance put on hand-made things. People say "touch grass"; maybe AI will make people "touch crafts" (bad joke, I know).
That's boasting, not admission
It's already like this for creative communities in things like illustration and writing. You will (rightly) get ostracized and blocked by your peers for using AI. It's a signal for poor quality for most people in those spaces.
Definitely interesting to see the different culture in tech and programming since programmers are so used to sharing code with things like open source. I think programmers should be more skeptical about this bullshit, but one could make the argument that having a more flexible view of intellectual property is more computer native since computers are really just copying machines. Imo, we need to have a conversation about skills development because while art and writing accept that doing the work is how you get better, duplicating knowledge by hand in programming can be seen as a waste of time. We should really push back on that attitude though or we'll end up with a glut of people in the industry who don't understand what's under all the abstractions.
I think it depends on the context. I think there's artistic cases for it, for example I've played around with using AI tools to extract speech from its background music for use in further (non AI-based) music which I don't think is an unethical thing to do.
News at 11, spammers use sophisticated techniques to increase the profitability of spam. This is absolutely shocking and never before seen, what is the world coming to.
In all seriousness, manipulation and bullshit generation are emerging as the single major real-world use of AI. It's not good enough yet to solve the big problems of the world: medical diagnostics, auto accidents, hunger. Maybe just a somewhat better search tool, maybe a better conversational e-learning tool, barely a better IntelliSense.
But, by God, is it fantastic at creating Reddit and X bots that amplify the current line of Chinese and Russian propaganda, upvote and argue among themselves on absurd topics to shittify any real discussion and so on.
I don't think this is unique to this specific technology.
People can be both wonderful and despicable, regardless of era or mechanism.
Sure, but I'm talking about the good:bad ratio of some creations. I really have strong hope for AI, and that we won't regard it in retrospect like the multi-stage thermonuclear device, the landmine or tetraethyl lead additives.
Not to dismiss any of the negative aspects of "AI", but it seems utterly foolish to compare it to those 3 things.
In May reports emerged of this suicide by a young man in Australia - not AI related. https://www.barefootinvestor.com/articles/this-is-the-hardes...
The following month, reports emerged of 50 girls in one Australian school being exploited in very similar ways by nothing more than a kid with a prompter.
https://www.abc.net.au/news/2024-06-25/explicit-ai-deepfakes...
Scaling this type of exploitation of children online is trivial for anyone with basic programming skills.
The Techno-Optimist Manifesto is what appears utterly foolish to me when you notice that there is not one mention of accountability for downside consequences.
I hope you're right. I'm less optimistic.
Do you think those countries are the only ones doing this? Just the other day there was a scandal about one of the biggest Swedish parties, one that's in the government coalition, doing exactly this. And that's just one that got caught. In countries like India and Brazil online disinformation has become an enormous problem, and I think that in the USA and Europe, as the old Soviet joke went: "Their propaganda is so good their people even believe they don't have any".
This process should not require a human in the loop.
Consider:
* spammers have access to large amounts of compute via their botnets
* the effectiveness of any particular spam message can easily be measured - it is simply the quantity of funds arriving at the cryptocurrency wallet tied to that message within some time window
So, just complete the cycle: one LLM to generate prompts, another to generate messages, send out a large batch, wait some hours, kick off a round of training based on the feedback signals received; lather, rinse, repeat, entirely unattended.
This is how we /really/ get AI foom: not in a FAANG lab but in a spammer's basement.
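The closed loop described above can be sketched end to end. To be clear, this is a simulation under stated assumptions: `generate_messages`, `send_batch`, and `wallet_balance` are hypothetical stubs standing in for an LLM call, a botnet mail sender, and a blockchain lookup, and the "training" step is a placeholder update, not real fine-tuning.

```python
import random

def generate_messages(weights, n):
    # Stub: a real pipeline would prompt an LLM conditioned on `weights`.
    return [f"msg-{i}-{sum(weights):.2f}" for i in range(n)]

def send_batch(messages):
    pass  # stub: deliver the batch (SMTP, botnet, ...)

def wallet_balance(message):
    # Stub: funds that arrived at the wallet tied to this message
    # within the measurement window. Simulated here as random.
    return random.random()

def training_step(weights, rewards):
    # Placeholder update: nudge each weight toward the mean reward.
    lr, mean = 0.1, sum(rewards) / len(rewards)
    return [w + lr * (mean - w) for w in weights]

def unattended_loop(rounds=3, batch=10):
    weights = [0.0] * 4  # stand-in for model parameters
    for _ in range(rounds):
        msgs = generate_messages(weights, batch)
        send_batch(msgs)
        rewards = [wallet_balance(m) for m in msgs]  # feedback signal
        weights = training_step(weights, rewards)
    return weights
```

The uncomfortable point is that the reward signal (money received) is cheap to measure and fully automatable, which is what makes the loop run unattended.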
At least one sci-fi novel iirc had an AI spam filter achieve sentience, because the task basically amounted to contrastive-learning a model of humanity.
That’s one of Peter Watts's Rifters trilogy, I think maybe the second one? It's been a few years since I read them. I think it's a biological neural net, not an AI per se. Lots of big ideas in those books, but not a lot of optimism, and some rough stuff.
Having worked in the field, I think you’re more likely to achieve AGI by intelligently watering tomatoes in a hothouse.
Very well could be. Seconded. After all, it could very well become one of the largest vehicles for "mass training", ever ...
PS. However, see comments downthread about "survivorship bias". Not everybody will reply, so biases will exist.
This is sort of why I feel somewhat pessimistic about AI: the inevitable most profitable use cases are so bad in aggregate for a society with almost no bounds or values other than profit. It will never be easier to waste people's attention.
This is not a problem with AI but with a system in which there are no other values other than "make the most money fast".
"No other values"? When and how is such Doomer Hyperbole getting into HN articles?
This is half of major reddit subs now and I fear the same low quality comments will take over HN.
People need to go out and touch some grass.
or maybe you need to touch some grass.
I need some grass.
Funny how they're so self-assured that no one sniffed out their AI bullshit. This is survivorship bias; he's looking only at the planes that came back to base. The people who did detect it just didn't reply. He can't prompt them.
When you spend $200 on spamming people you need to believe it was effective
The planes coming back from the bombing raids in WWII come to mind.
The guy writes a post about how to send spam effectively, and then offers the subscription link in the end with "Promise we won't spam you". Yes, I totally trust you...
It sounds like extortion.
"I'm sending spam that sneaks past your spam filter. Sign up to make it stop."
Of the people who replied. I bet plenty figured it out, but didn't bother to reply.
Expect to see someone else write a blog post on How I Used AI to fool an AI Spammer
...of course they'd probably get an LLM to write the article too.
Also from that blog post:
While this is similar to what other founders are doing, the automation, scale and the email focus puts it closer to spam in my book.
I do believe that commodified attention is the most logical currency of a post-scarcity society, so best case... quite a lot.
Note my 'best case' scenario for the near future is pretty upsetting.
Facebook plus Instagram is a $100B+ business; so are YouTube and ads.
The average human now spends ~3 hours per day on their screens, most of it on social media.
We are dopamine-driven beings. Capturing attention and driving up engagement is one of the biggest parts of our economy.
It is not only that too much is wasted on superficial nothing instead of making something with essence, something beneficial for society; it is also sucking away the minds that could be engaged in really useful things.
Now do Google.
In defence of that guy, he's only doing it because he knows it's what pays the bills.
If we want things to change, we need to fix the system so that genuine social advancement is what's rewarded, not spam and scams.
Not an easy task, unfortunately.
I’m unsurprised to see a lot of very shallow usage of AI. Most users don’t have a real use case for the tool.