
How to write stuff no one else can

yamrzou
36 replies
1d10h

This sentence caught my attention:

You can’t just Google these kinds of things. That’s why they’re so interesting.

So what Google did is dilute the information advantage. The more interconnected we are, the harder it becomes to maintain that advantage.

ssivark
25 replies
20h50m

Google, ChatGPT, Gemini, Perplexity, etc are all retrieval services (with some rudimentary remixing capabilities thrown in, for the latter few). Some human has to actually have interesting insights and write them. Retrieval services help with distribution & discovery. The less we’re connected, the harder it is to find the right audience who value your (niche) insights.

Of course attribution and linking is important.

wordpad25
21 replies
20h16m

AI is so much more than retrieval, since it knows how to correctly apply existing concepts to previously unexplored areas.

You can ask ChatGPT how one might implement and enforce laws in a world where magic exists. Or time travel. Or whatever. Because it knows laws and it knows common fiction tropes and can reason well enough to surmise how to apply them together.

oldandtired
10 replies
19h53m

You make the following comment regarding artificial stupidity systems (AI)

since it knows how to correctly apply existing concepts to previously unexplored areas

and yet these systems know nothing at all. Far too many people (including the developers of such systems) have failed to understand that none of these systems can go beyond the programming that humans have incorporated into them. Yes, they appear to exhibit certain abilities, but no more than any other essentially mechanical device with the limited capabilities we have been able to design into them.

You can certainly pose questions, and these systems (within the constraints of the programming involved in them) can retrieve and correlate data that is accessible. But the insights drawn will require human involvement.

Over decades, I have built tools to help in the analysis of all sorts of data sets, and it has taken a human asking the right questions to get appropriate outcomes.

We do not understand our own intelligence, let alone how to build any artificially intelligent system that can operate on its own.

What does amaze me though is that we create natural intelligence systems all the time and they are called children. I have been avidly watching the development of my youngest grandchild and she makes an absolute mockery of any artificial system we have built anywhere.

In a very real sense, every artificial stupidity system (AI) that we have built is as complicated as a hammer or a knife.

It is what we (human beings) do with these simple tools that determine the outcomes.

snoman
8 replies
18h27m

I get what you’re saying, and will echo that these systems are still just tools that enhance what we, as humans, are capable of doing. But I think you’re failing to consider that even comparatively simple systems can exhibit emergent behavior, well beyond their programming. You’re being unreasonably reductive and dismissive.

ethanwillis
4 replies
18h20m

The specific behavior you get is a result of the programming. What does "beyond the programming" mean? I think it's actually a meaningless statement.

I would love to see an actual example where a program does something not encoded in itself already.

encomiast
2 replies
18h1m

In what sense is a novel, never-before-seen image that comes out of DALL·E 3 "encoded in" the system? The image may be, in a sense, encoded in the latent space of images, but so is every image (of the same dimensions) that ever has been or will be.

oldandtired
1 replies
14h14m

The images generated are based on two things:

1). the code within the systems

2). the data entered into the system.

Those images are the result of how the data is processed by the code. Even if you put in a random number generator, the code processes those numbers based on its internal code constraints.

Now a well designed and built code base may be able to generate every image of the same dimensions. You can certainly write code that can exhaustively generate every possible image, but the question here is whether this exhaustive generation is intelligent.

nl
0 replies
10h50m

This isn't really true except in the most reductive sense.

If you ask for a picture of Shakespeare dressed as Spiderman reading a McDonald's advert on the moon, no one has programmed that and the fact it knows these individual concepts is no different to the fact a human has seen them too.

ben_w
0 replies
18h1m

The specific behavior you get is a result of the programming. What does "beyond the programming" mean? I think it's actually a meaningless statement.

The laws of physics encode every motion and interaction of every particle.

None of those fundamental laws directly says "water is wet".

I would love to see an actual example where a program does something not encoded in itself already.

You see it every time you use a machine learning system. Even the simplest models, say a linear regression line in a spreadsheet, "learn" from the data, not from the code.
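
As a minimal sketch of that last point (assuming numpy, and made-up data): the fitting routine below contains no slope or intercept anywhere in its code; those numbers are recovered entirely from whatever data you hand it.

    import numpy as np

    def fit_line(x, y):
        # Ordinary least squares for y ~ slope * x + intercept.
        A = np.column_stack([x, np.ones_like(x)])
        (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
        return slope, intercept

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.0, 3.1, 4.9, 7.2])   # roughly y = 2x + 1
    print(fit_line(x, y))                # ~ (2.04, 0.99), learned from the data, not the code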

oldandtired
2 replies
14h7m

Emergent behaviour can occur, not a problem. But if you study such systems, I think you will find that the emergent behaviour is based on the programming involved and is not "beyond the programming".

When it comes to intelligence, this is not something that we can say is actually emergent.

There are currently a number of projects that are looking into intelligence and free-will. There are researchers on the same teams who hold quite different opinions - the results for these projects are not at all conclusive.

scooke
1 replies
10h33m

I admire you for pressing home your point that others are missing. I practice a visual art form (which I won't name; many other smaller cultures around the world have their own too) which will never "emerge" from AI _unless_ it is programmed in, or trained on the visual art itself. Even then, I don't see how it could ever figure out the intricate detailed meanings without it being programmed. The people trying to counter you are thinking only within the culture within which these AIs have been created, and thus it does seem to them that anything AI creates is emergent, because it seemingly created something they haven't, didn't, couldn't, wouldn't. Without the programming (never mind the electricity), AI is still a blunt tool.

ronald_raygun
0 replies
3h20m

It is shocking to me how many people miss the fact that these big prediction machines, trained on lots of data, are fundamentally historical and bounded by that data.

philipswood
0 replies
19h26m

and yet these systems know nothing at all. Far too many people (including the developers of such systems) have failed to understand that none of these systems can go beyond the programming that humans have incorporated into them.

within the constraints of the programming involved in them

I think the point of machine learning is that programming isn't "incorporated into them".

These systems may not be intelligent yet, but we certainly didn't program them - the majority of their features and abilities arise from the training data they were exposed to.

cycomanic
4 replies
17h36m

Well, in my experience, it's more that it knows how to incorrectly apply existing concepts to incompatible areas, and it exerts great confidence while doing it.

I have just two recent examples where I thought it would be faster to use chatgpt instead of reading the manual.

1. How to asynchronously (e.g. with asyncio) record from the microphone in Python. ChatGPT gave me code which looked elegant, but didn't work. It turns out pyaudio isn't compatible with asyncio. (A sketch of one workaround follows after this list.)

2. Trying to install a Docker image of some server (I think it was wallabag?) with SSO using OpenID. Again I got an elaborate docker compose file and instructions for the server configuration. After trying to implement it for a while, I realised that those configuration options don't exist. ChatGPT just invented them.
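
For what it's worth, here is a minimal sketch of one workaround for the first case (not what ChatGPT produced): since pyaudio only offers blocking reads, push them onto a worker thread with run_in_executor so the asyncio event loop stays responsive. The device parameters are illustrative.

    import asyncio
    import pyaudio

    CHUNK = 1024

    async def record(seconds=3.0, rate=16000):
        # pyaudio is blocking, so run its reads in a thread executor
        # instead of expecting native asyncio support.
        loop = asyncio.get_running_loop()
        pa = pyaudio.PyAudio()
        stream = pa.open(format=pyaudio.paInt16, channels=1, rate=rate,
                         input=True, frames_per_buffer=CHUNK)
        frames = []
        try:
            for _ in range(int(rate / CHUNK * seconds)):
                # Other coroutines keep running while we wait on the mic.
                frames.append(await loop.run_in_executor(None, stream.read, CHUNK))
        finally:
            stream.stop_stream()
            stream.close()
            pa.terminate()
        return b"".join(frames)

    audio = asyncio.run(record())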

My conclusion from these is that one really shouldn't ask AI questions unless one already knows the answer/correct way and just wants to have something create the boilerplate. I shudder at the thought that people are proposing to use AI for medical purposes without supervision by a professional.

p1esk
2 replies
14h40m

When you say ChatGPT, do you mean 4 or 3.5?

thejohnconway
1 replies
10h44m

4 does this, but I don’t think it's the show-stopper some people seem to think it is. You can ask it to correct itself by feeding it the error messages. It’s like having a very knowledgeable, extremely fast programmer on call. Sometimes their guesses aren’t right first try — so what?

p1esk
0 replies
10h19m

Maybe. I just don’t like to confuse the two; it’s like talking to a middle schooler vs talking to a graduate student. In both cases you can say the experience sucked, but you should specify which it was.

tzs
0 replies
17h8m

I've seen similar, although in my cases the wrong answers turned out to be useful. They used commands or functions or APIs that I had not known about, and when I looked into those, I found that they could solve my problem when used correctly.

wordpad25
3 replies
20h12m

Wanted to give it a try. I didn't even consider many of these, and I doubt I would've come up with such a good list even if I had a lot of time. It took much less than a second to generate the response.

In a world where magic exists, laws and the legislative system would likely undergo significant adaptations to address the unique capabilities and risks presented by magic. Here are several ways in which these systems might be different, based on various assumptions about the nature and scope of magic:

1. *Regulation of Magical Practices*: Just as with professions and practices in our world, there would be regulations governing who can practice magic, how it can be learned, and where it can be used. Licenses or permits might be required for practicing certain types of magic, similar to medical licenses or driving licenses.

2. *Magic-Related Offenses*: New categories of crimes would exist, such as unauthorized enchantments, illegal curses, magical fraud (using illusions or mind control for deceit), and unlicensed necromancy. Laws would specify what constitutes a magical assault versus a non-magical one, with potentially different penalties.

3. *Enforcement and Jurisdiction*: Special law enforcement units trained in counter-magic or magical defense would be necessary to handle crimes involving magic. The jurisdiction might depend on the nature of the magic used; for example, cross-dimensional or time-altering magic could involve a higher authority or international magical council.

4. *Evidence and Investigation*: The collection of evidence might include magical means, such as truth spells, scrying (viewing distant or future events), or reviewing memories directly. However, there would also need to be laws protecting individuals against invasive magical investigations.

5. *Rights and Protections*: There would be laws to protect non-magical beings from discrimination or harm by magical individuals or entities. Similarly, sentient magical creatures or beings might have their own rights, leading to more complex legal considerations regarding interspecies interactions.

6. *Ethical Use of Magic*: Legislation would likely address the ethical implications of spellcasting, summoning, and other magical practices, including the use of magic on unwilling subjects or creating sentient beings. Regulations might prevent or restrict the use of magic that can alter someone's will, memories, or identity.

7. *Magical Contracts and Agreements*: The legal system would need to address the binding nature of magical contracts, which could have irrevocable consequences. Laws might specify how such contracts can be fairly negotiated and what remedies exist for breaking a magical contract.

8. *Intellectual Property*: In a world where spells, potions, and magical items are created, there would be intellectual property laws protecting the creators of magical inventions. This could include patents on spell formulas, copyright on magical texts, or trademarks on magical item designs.

9. *International Relations and Security*: On a larger scale, there might be international laws governing the use of magic, especially in conflicts or espionage. Treaties could limit the use of particularly destructive spells or ban the creation of magical weapons of mass destruction.

10. *Education and Public Safety*: Laws would govern the education and training of individuals with magical abilities, possibly requiring mandatory schooling to ensure that magic is used safely and responsibly. Public safety regulations would address the storage of magical materials and the construction of magically protected buildings.

In this magical world, the legal and legislative system would need to be flexible and innovative, capable of addressing the ever-evolving nature of magic and its impact on society. It would be a blend of traditional legal principles and new rules designed to manage the unique challenges posed by the existence of magic.

pcc
0 replies
19h51m

Fun little exercise: while reading above, see what happens when mentally substituting “magic” with “AI”, words like “enchantments” with “algorithms” and so on.

bena
0 replies
20h7m

All of these are basically "what we have now, but with magic".

I'm sorry but how magic and the legal system interact is a subject explored in fiction. Hell, even Harry Potter touched on the subject. And that's not going to even touch authors like Jim Butcher and Piers Anthony who have entire series about "this, but with magic".

aspenmayer
0 replies
18h32m

You basically just described the anime Witch Hunter Robin.

https://en.wikipedia.org/wiki/Witch_Hunter_Robin

ben_w
0 replies
18h9m

To an extent, but it's only about 85-95% correct even in the good domains, which is fantastic for a few steps… and then it makes a fundamental error and half the time doesn't understand corrections given by domain experts that would get it back on track.

This limitation is the only reason why all of us still have jobs, so do plan for your economic future on the assumption the AI gets past this flaw with no warning — how many steps it can do before falling over isn't something I can predict, despite regular use of even the OpenAI playground before ChatGPT came out.

QuantumGood
2 replies
19h52m

The longer I think about something, the dumber AI gets.

I have given our internal knowledgebase documents to ChatGPT and asked for summaries, and it adds in things from the general internet that range from wrong to very wrong, contradicting our KB (we are very far off the beaten path of conventional audio studio design testing).

skybrian
0 replies
17h55m

I think this effect is because we learn and the AI doesn't. (In the short term, for any given release of the AI.) Start asking about a topic that's new to you, and it will "know" things you don't. Then you learn those things, and it keeps making the same mistakes.

Over the longer term, new releases have improvements, but the feedback loop is currently pretty slow. It will likely get faster.

iamwil
0 replies
17h37m

It's like the Gell-Mann Amnesia for AI.

hawski
5 replies
21h25m

That's also something I often encounter with ChatGPT. It often is very wrong about things I ask, because I often wonder and try things off the beaten path. That's our advantage against LLMs.

adtac
2 replies
19h30m

Purely probabilistically, trying things off the beaten path is just a matter of higher LLM temperature. Turning up GPT-4's temperature is basically an expensive /dev/urandom pipe today, but I don't see any fundamental reason why LLMs can't catch up. Maybe all it takes is tinkering with how temperature is calculated.

jwilber
0 replies
11h18m

No - temperature is a sampling parameter for a distribution. If you adjust temperature, you still ultimately pull from the same distribution.

Parent is saying the advantage is in pulling ideas from outside that encoded distribution.
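
To illustrate the distinction with toy numbers (nothing here comes from a real model): temperature just rescales the logits before the softmax, so it sharpens or flattens the same distribution; it never adds probability mass to options the model effectively rules out.

    import numpy as np

    def sample_with_temperature(logits, temperature, rng=np.random.default_rng(0)):
        # Rescale logits, then softmax: same support, different spread.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs), probs

    logits = [4.0, 2.0, 0.1]              # toy next-token scores
    for t in (0.2, 1.0, 5.0):
        _, p = sample_with_temperature(logits, t)
        print(t, np.round(p, 3))
    # Low temperature: almost always the top token; high temperature: near-uniform.
    # Either way, the probabilities are spread over the same three options.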

ben_w
0 replies
17h53m

You've missed the metaphor, I think. Higher temperatures will make it "more creative" for lack of a better term, but there's a lot of specialist knowledge it doesn't have and which you can't give it just by twiddling a dial.

It has a massive speed advantage that lets it read the whole internet, but it's dumb enough that it also needs to, and when even that doesn't give it enough examples, it's like asking me to improvise Latin based on what I recognise of the loanwords in English.

from-nibly
1 replies
19h37m

No the advantage is that you have context. That context is HUGE. That's why you can't and won't be able to have chat gpt actually do your job for you. You still need to ask it questions. And to even know what questions you need to ask you need to have context.

Jensson
0 replies
19h33m

You don't just need a context, you need to finetune a model around that context since working memory isn't enough. That is what human workers do, they spend the first months finetuning their model around the company context before they can start to provide valuable code.

Once you can pay to get your own fine tuned version of the big LLMs maybe we can start to do real work with these.

cjohnson318
2 replies
21h23m

The fact is that we're not that interconnected though. Finding people that have useful, actionable information, and want to share it with you, is extremely difficult.

fxtentacle
0 replies
14h13m

Or maybe we have just conditioned people to not share useful information publicly anymore.

Amazon selling what used to be open source projects has certainly had a chilling effect on people publishing source code under a permissive licence.

Jensson
0 replies
20h27m

That is a search issue not an interconnectivity issue.

chubot
0 replies
18h16m

If you want to maintain an information advantage, you should definitely not rely solely on the web. Simply reading a book is often 10x better. Comment from a few months ago:

https://news.ycombinator.com/item?id=38369583

One thing I've found comically underestimated is books. It's not whether the information is physical or electronic, but what actual corpus of information is available

...

So yeah I'd say 3 main repos of knowledge are: the open Internet, printed books, and Scihub, and many people today only use the first one

Though I'd also put "talking to the right people" and by extension "working at the right place" pretty high up there in getting novel information.

I think for a certain generation, Google probably diluted their ability to access other knowledge bases, and act upon that info. And for the generation currently learning how to learn, LLMs may do the same thing.

Aurornis
21 replies
17h40m

For example, on LinkedIn, someone may say they were CMO of a company for just three months, and if you take that data at face value, you may think something very bad must have happened for a company to hire and lose such an important role in so short a time.

But when you look closer, you see that this person was a sophomore in college at the time, that the company was just a month old, and that actually, it wasn’t so much an executive hire as it was kids trying to get something off the ground.

Although not the main point of the article, this is much better advice for finding high-potential employees who are under the radar than the article that was on the front page a few days ago: ("How to hire low experience, high potential people" https://news.ycombinator.com/item?id=39288669 )

When the last article hit the front page I couldn't believe HN was upvoting an article that advocated for deep-diving into people's personal early life history as a hiring technique. That's like reading the tea leaves for interviewing candidates.

Instead, this article reveals the real secret: You have to be prepared to do some legwork and research, as well as look beyond the obvious. You could go hire a lot of smiling ex-FAANG people with perfectly curated LinkedIn profiles and professional headshots by throwing large salaries at them, but you're not going to get the best results that way. It's also one of the least efficient uses of your headcount budget and those people are almost guaranteed to ghost you in 18 months when someone else offers them a fancier title.

There is a lot of gold waiting to be discovered among people who aren't curating perfect LinkedIn profiles or collecting famous company names for their resume. It takes some work to find it, but it's out there.

edmundsauto
8 replies
15h1m

This is funny because Facebook was famous for adopting this attitude early on. Hire people with potential instead of credentials.

If you have a good perf review system and don’t mind false positives it can work. Maybe it worked better before boot camps.

febeling
7 replies
10h17m

What changed with boot camps?

ildjarn
6 replies
9h19m

It’s the shift of perception from technology being “geek stuff” to a lucrative field on par with finance, medicine and law. Before, someone who knew a bit of programming was likely to be passionate and with high potential. Now they may still be, but they may also just see it as a way to get rich quick.

vik0
5 replies
9h12m

But the vast majority of people who go to programming bootcamps don't get rich quickly, or rich at all. Same with most people in the world who practice medicine or law

Or did you mean to convey that people who go to boot camps think it's a way to get rich quick? Your wording is a bit confusing toward the end there

shakow
2 replies
8h53m

I think he means that "guy who knows how to code" used to be a statistically strong marker of "tinkering guy with potential", which is less true now that IT is widely perceived as a high-reward career, hence one where people may go either because they like it, or just because they want to make stacks.

kasey_junk
1 replies
2h57m

The problem with this theory is people have been saying it for 25 years, at least as long as I’ve been in the profession. And it isn’t historically accurate either: programming was a clerical job for much of its history.

ozim
0 replies
50m

Have to agree, the 'dot com bust' was around 25 years ago, and the bubble started somewhere around 1998-1999.

rstephenson2
0 replies
1h50m

Boot camps take the “seems to be passionate about coding in their free time” signals and help their students try to fit those signals by encouraging them to build personal projects on GitHub etc. This somewhat dilutes the ability of recruiters to check off “has some GitHub projects” as a heuristic, however useful that was to begin with.

bryanrasmussen
0 replies
3h42m

But the vast majority of people who go to programming bootcamps don't get rich quickly, or rich at all. Same with most people in the world who practice medicine or law

but they do get a reasonable shot at a middle class lifestyle with potential to do really well at some point.

That's better than a lot of career paths give you nowadays.

intelVISA
7 replies
3h30m

A highly curated LinkedIn is a red flag - nobody producing value has time to tend that walled garden.

ronald_raygun
2 replies
3h8m

Really? Is it a walled garden when like 90%+ of recruiters are on there?

Would you say “a good resume is a red flag, nobody producing value has time to tend that”.

The big irony is that making things look nice, be readable, be sellable is creating value. In the case of making a LinkedIn or a resume, the value you're creating is in terms of your very own capability to get a job and have a high take home pay!

jacurtis
0 replies
2m

I have a highly optimized LinkedIn, yet I don't really spend a ton of time on there. I log in once or twice a month, answer messages, and do a few things. Maybe make a post once every 2 months or so. Nothing crazy, yet my profile is very well optimized.

And this is why: Everyone looks at it. Far more people than you would expect.

I can't tell you how many meetings I have gone into where I walk in and see that everyone at the conference table has my LinkedIn profile up on their laptops. They knew they were meeting with me, had no idea who I was, so they looked me up on LinkedIn right before. It's my introduction and sets the tone for a meeting or presentation. This experience is not only in the professional world either. I recently (in my mid-30s) decided to go back to academia to get a PhD and now I am encountering people in academics who use my LinkedIn to learn about me when I reach out for academic or research assistance. When applying for stuff, presenting at conferences, all of it uses LinkedIn as the central way to communicate your identity. Having a nice profile helps me set a positive first impression that helps me gain more responses.

Also, I can't fail to mention that recruiters use LinkedIn as a search tool. Not every recruiter is worth your time, but the right ones certainly are worth it and LinkedIn helps connect you with more of them. They also use your LinkedIn when communicating with their clients. Almost every job interview you do, you can assume the hiring manager and interviewers have looked at your LinkedIn profile before meeting you. Most have probably looked more closely at your LinkedIn profile than your resume.

Curating a LinkedIn profile is not that much work. Maybe 30-60 mins per month is all it takes if you have a lot happening, and maybe 1 hour per quarter for everyone else. It's not a huge commitment, and yet the gains in my opinion are far greater than the small amount of time spent on it.

I don't care if people want to hate on LinkedIn and say it's stupid. If anything that benefits me, because it helps optimized profiles stand out. Maybe there are a few people (like the parent comment) who have an aversion to LinkedIn for some reason and will look down on someone with an optimized profile. But I truly believe it offers far more value than it detracts. The people that look down on it are usually people that are not in any position of power anyway, or they are a micro-minority.

intelVISA
0 replies
2h39m

That is true, I suppose...

gessha
2 replies
3h9m

A sizable chunk of engineers from the most competitive companies got fired - people used to optimizing for everything - resume readability, LeetCode hards, system interviews, etc.

You think they won’t optimize their profiles for their next job?

samatman
0 replies
1h56m

There's a line somewhere between "optimized" and "highly curated" where I consider LinkedIn a negative signal. Those are both utterly subjective phrases, but nonetheless.

My LinkedIn is utterly moribund, which has come up in interviews a few times. Either they say "you don't really use LinkedIn, huh" or I explain the same. As far as I know this has cost me nothing.

Hayatabad
0 replies
2h9m

He has a point. It depends on how optimized it is.

jayd16
0 replies
10m

How curated are we talking here? How much effort does it take? You mean stuff like weekly social posts about synergy and new paradigms?

ilrwbwrkhv
2 replies
13h22m

What's the impetus for headhunters to do all this legwork though? Just like no one got fired for buying IBM, the headhunters at large FAANG almost always go for the shiny, smiling LinkedIn profiles.

ozim
1 replies
11h32m

For headhunters, none; for you, if you are running your small company - huge. A great hire can pull you out of trouble, a bad hire can run the company into the ground.

jmathai
0 replies
2h1m

I am trying to build a hardware prototype which expands beyond my experience in software.

There’s no way I would try and hire someone with lots of experience and credentials. There’s too much risk.

Instead, I found a recent college grad who I believe is capable of doing what’s needed at this stage.

I face lots of risks and the last thing I want is to add an expensive hire to it.

dataflow
0 replies
1m

What about that example do people find to be high-potential? I'm a bit confused because to me 3-month "CMO" of 1-month-old company as a college sophomore sounds at best like a slightly positive signal, and at worst a strongly negative signal, since it could mean anything from "they had enough ambition to work and study at the same time for 3 months and learn stuff on the job" to "they couldn't focus on classes and/or wanted to do something that quickly would put a big-sounding title on their resume".

austin-cheney
20 replies
16h32m

The article misses something super critical: why? More specifically, why a key few hires can change the trajectory of a company. The answer almost always comes down to one word and applies to companies of any size: leadership.

Leadership is a concept largely foreign to the software industry for two reasons.

* Most software developers are never exposed to strong personalities and have no idea what strong leadership looks like. An excellent software developer tends to score high in agreeability, but a strong leader knows how to turn that down to 0 for maximum confrontation and/or defiance.

* Strong leaders know when to not follow trends with extreme criticality. This is highly paradoxical since so many software companies are funded by advertising which is a business of growing trends. Real leaders set new trends and take share from existing players setting their brand reputation in the process. Most people in software are deathly afraid to abandon conventions of comfort whether in business or in product/process innovation.

munificent
9 replies
15h52m

I'm sure all software developers have been exposed to strong personalities. Often in the form of bullies, primadonnas, class clowns, big men on campus, pointy haired bosses, etc.

The problem is that—as a generalization—strong personality tends to correlate inversely with competence, and software engineers esteem the latter above all else (because a machine is not swayed by charisma, after all).

Good leadership is really the combination of strong personality and competence. That combination is so rare that for most software engineers, the safest bet is to simply try to avoid employers with strong personalities entirely.

Madmallard
4 replies
15h35m

"strong personality tends to correlate inversely with competence"

citation needed

nottorp
1 replies
3h39m

"if you feel a need to be aggressive, it means you've done no deeds that will speak for you"

How about that?

Madmallard
0 replies
2h18m

also seems wrong

edmundsauto
1 replies
14h59m

Not to support the generalization but this describes my bias. Just as the flashy person with diamonds and a nice car does not indicate class.

It also kinda makes sense if you think it through. Strong personality types receive less feedback. Feedback helps us grow.

zmgsabst
0 replies
13h4m

Aggressive personalities aren’t the same as strong personalities, in precisely the way status trinkets aren’t class.

I’d say strong personalities are essential for consensus:

When it’s clear who is holding court, there’s much less talking over each other and bickering, precisely because the leader has made it clear what the expectation is — and everyone knows it will be enforced.

“Let’s quiet down for a moment, I really want to hear what @edmundsauto is saying.”

austin-cheney
3 replies
13h38m

Competence at what?

Avoiding strong personalities purely for the sake of easing hostility sounds a lot like cowardice. A better course of action is conflict resolution.

ephemeral-life
2 replies
10h21m

Competence at what?

There is a class of people who believe leadership is something you can learn on its own. These are the people who join all the useless committees for resume building purposes. Leadership only has value when paired with significant knowledge in some domain.

Avoiding strong personalities purely for the sake easing hostility sounds a lot like cowardice.

Cowardice is when you avoid something, because of fear. GP is talking about avoiding people who will annoy you.

austin-cheney
1 replies
6h5m

Cowardice is when you avoid something, because of fear. GP is talking about avoiding people who will annoy you.

That sounds like autism. Avoidance for whatever imaginary reason your creative mind can conjure is still cowardice. The excuses only fool yourself. Other people will see it as they choose.

ephemeral-life
0 replies
3h26m

ah yes. the internet autism diagnoses.

Aurornis
6 replies
16h27m

Leadership is a concept largely foreign to the software industry

Maybe in the companies you’ve worked for, but I haven’t found this to be true at all

An excellent software developer tends to score high in agreeability, but a strong leader knows how to turn that down to 0 for maximum confrontation and/or defiance

Hard disagree. “Maximum confrontation and/or defiance” has never been a goal of good leaders who are trying to build a team that works together.

Encouraging people to speak their mind is good. Encouraging “maximum confrontation” is just going to create chaos. The goal is to work together to ship, not to argue and defy all the time. I can’t think of anyone who would want to work on a team where everyone had agreeableness dialed down to 0 where leaders encouraged confrontation all the time, except maybe for people who just like to argue a lot.

Most people in software are deathly afraid to abandon conventions of comfort whether in business or in product/process innovation.

Another strong generalization that I can’t agree with. Most people I’ve worked with in software have been so aggressive about bucking trends and trying new things that we’ve had to dial it back a notch. A lot of the debates I’ve had with teams have been about choosing boring, stable technologies over the newest cutting edge technology that’s popular on Twitter. Same goes for business strategies, where I’ve had to deal with everyone from product managers to sales people trying to do things their own creative way when the standard, boring practices are what finally got the job done.

vegetablepotpie
1 replies
8h9m

I can see where you’re coming from and I have some more context to add, which can be helpful.

This phrase:

a strong leader knows how to turn that down to 0 for maximum confrontation and/or defiance

Feels like something uttered by a cocaine snorting MBA who gets high off creating hostile work environments and enjoys firing people.

I’ve been in environments where everyone is agreeable because they have to be agreeable. They are agreeable to a fault. These environments have a banal toxicity that is hard to pin down, but it shows when there is disagreement. These orgs value agreement above all else, above performance, above achieving goals. They suppress valuable insight because it is uncomfortable to them. FUD is a great rhetorical strategy that I’ve seen used to squelch conversation; thinly veiled appeals to authority are another. This breeds complacency and destroys value.

We have a management class in the west that believes that management can be a mechanistic exercise of gathering metrics, assessing performance, and assigning corrective actions. That it can both be systematized and abstracted away from the work is a core assumption of western management. This is a paint by numbers approach, similar to Searle's Chinese room, and works to take the leadership out of management.

You’re right that most of the time the best choice for your business and customers is to choose the boring, stable technology that works. Avoid the rewrite, don’t use the hottest new frameworks, or languages that are in vogue.

Experience shows generally to dismiss developer desires for novelty. But sometimes you do need a novel solution and your developers are the ones who are going to tell you. You will not figure this out by following the rules taught in management school. Metrics like CPI and SPI are only going to tell you that you’ve made the wrong choice on your project when it’s too late. You need to make a decision, which means seeking out information and making the best decision based on the information you can get. This takes leadership.

The best programmers I know have strong opinions. They will tell you that you are wrong. They do this to learn, both to test themselves and the people around them. If they are wrong, they withdraw, then move on. It’s the most healthy thing I’ve seen and is actually what builds success.

Disagreement is healthy and good and should be encouraged, when the goal is knowledge gathering. A leader who is turning agreement to zero is a leader who wants to be decisive.

austin-cheney
0 replies
7h13m

Feels like

Stop the madness. It does not matter how things feel. It matters how things are measured, which includes employee retention and delivery and product quality. Most of the comments here loudly scream none of these people have been in management.

The one big difference between a leader and a contributor is ownership. Everyone here has danced around the idea of ownership without addressing it. Unless you have owned liability in a managerial capacity it doesn’t matter how things feel, because your perspective is too narrowly construed.

Disagreement, when voiced, is a form of confrontation. It is healthy. Healthy teams are brutally honest. Most of software absolutely abhors confrontation.

austin-cheney
1 replies
16h20m

Leaders do more than build teams. Team building is inward looking. Real leaders are outward looking and have to be willing to defend their team(s) at personal cost.

DavidPiper
0 replies
10h7m

This reads as a criticism of team building (maybe that wasn't your intent, but it was my first reading.)

Strong teams need a balance of transformational leadership, servant leadership, some laissez-faire management and some micromanagement (etc, etc).

I.e. situational leadership.

In my experience, given how many ANDs you need, the most successful teams have a combination of leadership styles and roles from different people at different times. Balancing leadership and responsibilities across people also allows the personal cost of such activities (whether it's emotional labour, political backlash, etc) to be distributed as well.

This, I also believe, is why co-foundership appears so often in stories of successful companies. You always end up with "the leader that is good at X despite Y", and there is always someone who can come in to provide "Y without necessarily X" in a way that makes their joint leadership work in ways that individually they would fail.

whstl
0 replies
6h3m

> Encouraging “maximum confrontation” is just going to create chaos. The goal is to work together to ship, not to argue and defy all the time. I can’t think of anyone who would want to work on a team where everyone had agreeableness dialed down to 0 where leaders encouraged confrontation all the time, except maybe for people who just like to argue a lot.

To "turn that down to 0" doesn't mean operating 100% of the time in non-agreeable mode. It also doesn't mean doing that only with your team. It also doesn't mean encouraging people to do it. It's just that sometimes you gotta put your foot down.

You jumped to another extreme, but the whole point is just that both extremes are problematic. An always 100%-agreeable leader needs a lot of luck to succeed.

wgj
0 replies
14h20m

tends to score high in agreeability,

> Hard disagree. “Maximum confrontation and/or defiance” has never been a goal of good leaders

willsmith72
1 replies
15h57m

what you call the "software industry" aligns with my experience in the "enterprise software industry"

outside of that, it couldn't be further from my experience. many many defiant and confrontational founders/PMs, who often actively avoid the accepted "best practice" and rethink what they're doing if they find themselves in the majority

Aurornis
0 replies
15h27m

many many defiant and confrontational founders/PMs, who often actively avoid the accepted "best practice" and rethink what they're doing if they find themselves in the majority

My experience as well. Big enterprise software companies reward keeping your head down and toeing the line, but "confrontational and defiant" personalities trying to reinvent the wheel are a common feature at every other type of software company.

Sai_
0 replies
16h19m

You may have missed the point of this article. Here, we’re being exposed to the "how". The "how" is unrelated to the "why". If you have a "why", then there is a "how" to follow.

leetrout
7 replies
17h2m

I have a quip I have repeated for years that aligns with this:

The people talking about the stuff largely aren't the people doing the stuff, because they're too busy doing the stuff.

Where, in my case, in our tech circle, that is generally design and development. Of course there are outliers.

baxtr
6 replies
5h11m

Absolutely. At the same time it’s a bloody shame they don’t.

Great people who actually do great stuff need to become people who talk about it too.

Otherwise only those that master talking will get exposure.

enriquto
5 replies
4h53m

Otherwise only those that master talking will get exposure.

Or maybe we shouldn't organize our society so that "exposure" is such an overwhelmingly positive thing.

hiatus
2 replies
2h45m

Exposure, as I understand it, is becoming known to more people. I can't see how society, which is predicated on interpersonal relationships, could be organized to dissuade increasing your renown.

trealira
1 replies
2h27m

Perhaps with some sort of score that rewarded accomplishing things more than just talking about them. Then you could look at someone's score and trust that they've accomplished more. Seems meritocratic, but also somewhat dystopian, honestly.

jmathai
0 replies
1h58m

I’ve trended in the opposite direction. I don’t particularly enjoy talking about any successes I may have had, and I don’t care to make sure others (including in the company I work at) are aware.

It’s not great for career progression. But I actually don’t care about that. So instead of playing that game I play one I actually care about which looks very different than one where exposure matters.

bruce343434
1 replies
2h39m

There's only so much you can do. If someone isn't advertising themselves, they can not be expected to be found in a sea of people who do.

If I solve the navier stokes equations or whatever, but never talk about it... Yeah. Who would know. How could they know?

Tree, woods, sound.

enriquto
0 replies
1h44m

If you solve the Navier Stokes equations, you'll give a couple of interviews to some local media and then keep working on other problems. The director of your lab will not make a career for himself talking about your achievement.

Some exposure is of course necessary, but when it becomes the main driving force it turns ridiculous.

danbruc
6 replies
5h54m

Tangent. I understand the appeal of a four hour work week but how can people take that seriously? You will either have to have a pretty unappealing standard of living or you will have to exploit other people. If you only work four hours per week, then you can only consume as much as can be produced in four hours each week. But I guess people mostly hope to make the exploitation strategy work, manage to only produce something for four hours per week but consume stuff that took others forty hours to produce.

dinkleberg
5 replies
3h26m

It has been years since I read it, but IIRC the main point was to build businesses that, once they are up and running, don't require much work to keep bringing in money.

I suspect achieving what he was aiming for is much harder now than it was when he wrote the book, as the barrier to entry to starting businesses goes down and the competition continues to rise.

But I think there is still something to be said for the goal of building a "lifestyle" business rather than trying to build a hyper-growth startup. If you are strategizing through the lens of "How can I make enough money to survive and thrive with the minimal amount of time spent?" vs "How can I grow my business to $100m" you'll come up with very different plans.

danbruc
4 replies
2h38m

Also known as selling something at an inflated price and hoping that no one will undercut you.

dinkleberg
3 replies
1h58m

How so? If I build a simple app that does its job and does it well, I don't need to work on it for 40 hours a week for the rest of my life. After the initial effort, you can put in almost no work, just periodic maintenance, and you can keep bringing in sales.

There are lots of opportunities like this. There is nothing morally or ethically questionable about this. It doesn't require you to over-inflate your prices. That is the power of software. The VC hyper-growth world is heavily focused on continuously changing your software, but that isn't a requirement (and in fact, is often not desirable to your customers).

danbruc
2 replies
1h22m

If the market works as intended, someone will replicate your app and sell it cheaper, the price should eventually converge to the costs. And you might not agree, but I find it morally questionable [1] to run a business that pays good money for little work.

[1] There is of course more nuance to this than expressed here.

dinkleberg
1 replies
55m

How does that make any sense? If you’ve built something that provides good value to people, more than they are paying for it, how can you believe that it is a morally questionable arrangement?

danbruc
0 replies
37m

Imagine a tiny economy with two people. There is one guy that produces everything humans need and he can produce one day worth of supplies in four hours. But to keep his business running he depends on a piece of software that the other guy wrote, without it he could not produce anything. The other guy happily licenses his software for one day of supplies per day. Now the only thing he has to do besides enjoying all his supplies is to now and then spend an hour of work to maintain the software.

ustad
4 replies
10h2m

“I plug names of lesser-known key employees into Google and Spotify” ?

Whats with spotify?

subarctic
1 replies
9h49m

Maybe for podcasts? That's the only thing I can think of.

dinkleberg
0 replies
3h25m

You wouldn't want to interview someone with bad taste in music.

hailt
0 replies
9h43m

For featuring in a podcast episode, I guess.

causal
0 replies
2h3m

Spotify has a really good search engine, may uncover some relevant audio work.

tirthvora
4 replies
11h49m

To double down on one of the points a bit, it's well observed and accepted that the CEO or senior authorities get all the attention, even in politics; but there's always a less-known team behind them that did most of the work. They don't get the credit because they couldn't take the RISKS that the senior folks did. Senior positions come with their associated risks and stakes, and not everyone has the guts to tackle that kind of exposure.

travisjungroth
1 replies
9h7m

Senior positions come with their associated risks and stakes

Strong doubt on the risks part. Sure, for founders. But for executives at established companies, the more senior you are the less risk you have. Severances get larger, firing rates get lower, and you probably have more cash in the bank.

f1shy
0 replies
5h47m

AND if you get fired you go on to destroy the next company... They always have work.

mordae
0 replies
8h37m

Nobody asks them and I've seen attempts to prevent assistants of members of Parliament running as candidates. They have the know-how and the moat is pretty narrow so the acting politicians felt the need to widen it somewhat.

hnthrowaway0328
0 replies
4h55m

A lot of those risks are for other people, not themselves. They are there for a variety of reasons but accepting risks is probably not the primary one.

mattfrommars
4 replies
20h42m

I read the article, what mind opening tactic did the author explain to the public?

Appears to be blogspam.

tmountain
2 replies
20h25m

Going down a layer below Tim Ferriss to his staff… /s

teaearlgraycold
1 replies
18h6m

I agree this is kind of trivial, but doing anything more than the average person already puts you in the high percentiles.

My abstract strategy for making contact with the right people:

Try to imagine things from the perspective of the people you're trying to contact. What is their day-to-day, what are they sick and tired of? What will make your cold email stick out in that context? It's click-bait targeted at a specific person. Again, this is probably just one level above what the average person will do. But that already puts you ahead of most people. It gives you some chance of reaching the right person.

dinkleberg
0 replies
3h19m

The thing is, almost all of the "secrets" of successful people are trivial, it is just that few people are doing them consistently. Anybody could practice basketball as hard as Kobe Bryant did, but almost nobody is actually willing to put in the consistent work over years and years.

vrc
0 replies
19h58m

I think the extent to which writers have to get creative in finding sources for novel pieces is taken for granted. I did not realize how tactical and painstaking the process is to create the content that I casually skim and enjoy. So I strengthened my appreciation of the process

soneca
2 replies
17h57m

I was kind of curious in what the author had to say, but his sole motivation being ”getting ahead of competitors” really put me off.

Not really much thought or deep reflection on my part; I just feel that they are the type of person I don’t want to be or take advice from.

So I stopped reading.

Aurornis
1 replies
17h48m

The daily newsletter business is a crowded space, to say the least. The barrier to entry is as low as it could possibly get and the unit economics are brutal.

Getting ahead of competitors isn't some abstract, arbitrary goal. It's the only way to survive in a business like this.

If your goal is just to write for the sake of writing and you don't care if anyone reads it, this article probably isn't for you.

soneca
0 replies
15h51m

I think it’s almost there. I care if people will read me (I am an amateur fiction writer), but it’s not a competition for people to read me and not others, or to praise my writing more than they praise others’.

thenerdhead
1 replies
19h37m

What was the answer?

Find a human who can bypass robots.txt/limitations of ChatGPT and find less traveled sources given the internet fails to archive itself and important information can be wiped any minute? Isn't that just called good journalism?

gamepsys
0 replies
18h3m

The author primarily talks about interviewing interesting but un-interviewed people as a new source of interesting information to write about. The author talks about their process of finding new people to interview.

simne
0 replies
19h8m

And it is extremely hard to find a person who prefers the people who were deep in the trenches.

What I see most of the time is that managers buy the bright cover; very few bother to look deeper.

And it's easy to understand - as the saying goes, "No one ever got fired for buying IBM." While I agree that not many companies are like IBM, with a nearly cloudless reputation, it's just simpler and takes less energy not to try to swim against the flow.

nhggfu
0 replies
18h35m

excellent post.

neilv
0 replies
13h43m

For my back-burner startup, the reason it would work (in theory) was due to a particular thing we were doing that none of the other players in the market would do, even if they thought of it.

(Unfortunately, just as I got time to move it to front burner, it was probably killed by LLM consumer popularity.)

l2silver
0 replies
4h15m

I love articles like this. Short, concise, and filled with interesting and unique tips instead of the usual cliche drivel. I particularly loved the outlining of Zack's methods.

irrational
0 replies
20h27m

I wonder if Stephen Hanselman is being inundated with email right now and doesn’t know why?