
I'll refrain from providing code that involve concepts as you're under 18

saagarjha
39 replies
6d22h

This is definitely an age gate I’ll stand behind. C++ has unimaginable power to ruin the minds of our young children.

vkou
23 replies
6d22h

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix.

http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m...

cfr2023
18 replies
6d22h

I laughed out loud. Is this actually a common sentiment in some circles, or just someone blowing off steam?

The question occurs to me because I feel like I just spent 30 years on forums like HN reading nothing but effusive praise for the cleverness and elegance of C and Unix.

zer00eyz
10 replies
6d21h

> reading nothing but effusive praise for the cleverness and elegance of C and Unix

It isn't that these are GOOD ideas, it's just that no one has come up with better ones.

charcircuit
7 replies
6d21h

What do you mean? Even when those were created there were better ideas. Rust, Java, Javascript, Windows, Android, etc all are better ideas than C and UNIX.

cfr2023
4 replies
6d21h

It seems every domain and human endeavor in existence has some form of disagreement between practitioners who desire progress/advancement and people who are content to never change or learn anything new, in spite of glaringly obvious benefits.

It's brave to say that no one has come up with better ideas than Unix and C because it's bound to rile up users of (your favorite platform + language here).

I also think that someone saying that there aren't any better ideas than Unix and C might just have different values/interests in computing.

Brian_K_White
3 replies
6d20h

You don't get to say for/anti progress when there isn't a consensus definition of progress.

All progress is change. All change is not progress.

A programmer, or someone presuming to opine on programming, who overlooks a thing like that exposes and advertises that their opinions in such a domain are of questionable value.

cfr2023
0 replies
2d7h

The stupidity of this post keeps me coming back to see if I can get more humorous broken logic from you.

Because a rigorous operational definition of “progress” is not provided in this brief post, you assume it is missing, just to heckle someone making the uncontroversial claim that there will always be people on both sides of initiatives intended to foster progress in a given area. A hilarious thing to be triggered by.

How would “achieving an organization’s mission statement within budget” or “improving working conditions for knowledge workers by creating more accessible tools” or “using fewer labor hours on repetitive tasks” or “creating custom tools tailored to specific tasks and using less electricity” not count as progress?

But maybe any of those non-technical goals can all be achieved using the same old tech, and it’s people complaining about their feelings of disconnection from ancient telecom vestiges that are really impeding progress. Maybe it is the masses that don’t get it, and it is the select few who truly understand things that get to define progress, while insisting that the power is kept in their hands and that the work is done in their preferred paradigms.

Or maybe I am projecting all of this onto you to return the favor lol

cfr2023
0 replies
4d5h

Seriously though, if it is your world view that a consensus definition is needed for an initiative to be considered truly progressive, has the world made any progress at all in any domain?

Fossil fuels are widely implicated in climate change, and opponents want to see them phased out, but I doubt they would deny that their use has ushered in progress for humanity.

Your reply suggests you might be feeling hurt that someone has picked on your favorite tech stack or that you're getting bullied at work by people who see you as closed minded. They might be on to something.

cfr2023
0 replies
4d5h

> You don't get to say for/anti progress when there isn't a consensus definition of progress.

You seem to think that a firm consensus definition is needed for something to be considered progress, which exposes and advertises that your opinion in the domain is highly combative and dysfunctional.

zer00eyz
0 replies
6d20h

Unix and C far, far predate all those choices.

The reality is that any tech decision can later be replaced with "something better".

Much of the debate is Bugs and Daffy screaming "duck season"... "rabbit season" at each other.

cfr2023
0 replies
6d21h

It is brave to call out C and Unix as outdated and technically inferior tools/solutions when so many users are excessively dogmatic in framing them as a pinnacle for the computer industry.

cfr2023
1 replies
6d21h

I think this is an important distinction and actually sort of a brave one.

That a technology stack can be the basis of an entire industry and still be unappealing and lacking for people that are obliged to interact with it directly/regularly.

yowzadave
0 replies
6d14h

I feel this way about SQL—it's amazing to me that we're still using that same basic interface after 50 years; it’s so goofy and unpredictable, and I’m sure nobody today would design it that way if we were starting from scratch.

marcosdumay
3 replies
6d21h

In the 70's and 80's C and Unix were incredible tools that were so close to useless that they could run on cheap computers, and so unappreciated that you could get them with a personal budget or for free.

They both were extremely important to the popularization of computers and to unlocking the huge amount of value they provide today. But not due to any quality that we value today.

cfr2023
2 replies
6d21h

Good perspective. Kinda leaves me with "these technologies had their day but we've mostly moved on."

marcosdumay
0 replies
6d17h

The thing is, we didn't move on.

We hacked most of the advantages of anything newer back into them, in a haphazard way, and kept them because, as a sibling comment pointed out, nowadays they are open. And openness is a very important feature. (It's just not why they were adopted; people cared so little about openness that Unix was born open and mostly closed up later.)

hagbard_c
0 replies
6d18h

As soon as a viable alternative pops up for *nix which has all the advantages of current incarnations - of which 'open' and 'free' are but two of the more important ones - they'll take over the world. Until such a time we'll keep on using our *nix hammers just like carpenters have been using their hammers (nowadays often driven by electricity or air) because they work well enough for the intended purpose, the occasional blue thumb notwithstanding.

charcircuit
1 replies
6d21h

UNIX and C have truly ruined generations of programmers. Sure, they may be practical, but having a world view that this hacky software from the 70s was the pinnacle of good design that should continue to be emulated is a shame. For some people the way UNIX works is their mental model of how all computing works, and they are not willing to accept change to it.

cfr2023
0 replies
6d21h

Purely from the perspective (my own) that unique/novel mental models are often key drivers of progress, I am very much inclined to agree with you.

YesThatTom2
2 replies
6d22h

Dennis Ritchie had nothing to do with C++ and is rolling in his grave because of your reply.

ripjaygn
1 replies
6d22h

The joke is stating that he first invented a real gun that did that, not C++. Obviously, it's not true, it's a joke.

slowmovintarget
0 replies
6d21h

The joke is the juxtaposition of a gun that fires in both directions supposedly not being a metaphor for C and Unix.

slowmovintarget
0 replies
6d21h

Where is the mention of Tom Baker's Dr. Who inventing Clojure?

scrps
6 replies
6d21h

Parent: "Go to your room! How many times have I told you about memory safety? Now I find gcc on your computer and you are hanging out with the GPL crowd... You are grounded!"

Kid: "I AM NEVER USING RUST, I HATE MEMORY SAFETY AND I HATE YOU!" door slam

scrps
0 replies
6d16h

This is fantastic

prossercj
0 replies
6d16h

It's been a while since I've laughed so hard. Thanks to all in this thread.

anotherevan
0 replies
6d17h

God, I've just had flashbacks to my childhood.

mcqueenjordan
1 replies
6d22h

We can’t have LLMs giving footguns to our children. ;)

aaomidi
0 replies
6d19h

KYC before you’re allowed to use c++. This will facilitate making a list of all c++ users >:)

cjfd
1 replies
6d22h

I think languages that teach people of any age that wasting enormous amounts of clock cycles on basic bookkeeping is acceptable are much more detrimental. They cause one to have to buy a new machine every few years because the damned crap that people write just keeps getting slower, with no end to this process in sight. I think it is time to realize the value of zero-cost abstractions. Well, there actually are no zero-cost abstractions, but what about extremely low-cost abstractions? It might also be time to start watching some Jonathan Blow videos on YouTube. They might give one a nudge in the right direction.
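(For anyone unfamiliar with the term, here is a minimal sketch of what "low-cost abstraction" usually means in C++; the function names are invented for illustration. With optimizations on, both versions typically compile to essentially the same machine code.)

```cpp
#include <array>
#include <cstdio>
#include <numeric>

// "Manual" version: hand-rolled loop over a raw pointer.
static int sum_raw(const int* p, std::size_t n) {
    int total = 0;
    for (std::size_t i = 0; i < n; ++i) total += p[i];
    return total;
}

// "Abstracted" version: std::array plus std::accumulate.
static int sum_abstracted(const std::array<int, 5>& a) {
    return std::accumulate(a.begin(), a.end(), 0);
}

int main() {
    std::array<int, 5> a{1, 2, 3, 4, 5};
    std::printf("%d %d\n", sum_raw(a.data(), a.size()), sum_abstracted(a));
}
```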

saagarjha
0 replies
6d21h

Unfortunately this has less to do with languages and more to do with complexity and apathy.

p0w3n3d
0 replies
6d21h

"Patience you must have, my young Padawan. Have patience and all will be revealed"

huijzer
0 replies
6d21h

Related quote from E.W. Dijkstra:

"It is practically impossible to teach good programming style to students that have had prior exposure to BASIC; as potential programmers they are mentally mutilated beyond hope of regeneration."

darth_avocado
0 replies
6d22h

I think they need more protection from JavaScript, otherwise they’ll spend their youth updating npm packages.

_gabe_
0 replies
6d21h

Yes. Because it’s impossible to write a malicious program in Rust, Python and Java, or pick up bad programming habits in any of the above languages /s

throwaway2562
13 replies
6d22h

Woe is Google. How did it get this bad?

mrweasel
5 replies
6d22h

One podcast (Coder Radio) suggested that it might be due to corporate culture and people being afraid of reporting issues upward.

While I do think it's culture, I think it stems from Google's advertising-based business model. I believe they have attempted to make their LLM safe for advertisers and prefer to err on the side of being safe, even if the risk of being non-brand-safe is minimal.

a_wild_dandan
1 replies
6d20h

Yannic gave his anecdotal experience about Google's culture: https://youtu.be/Fr6Teh_ox-8

It was eye-opening for me. I had no idea things were so bad.

throwaway2562
0 replies
6d8h

Internal culture hijack successful. Great video - thanks!

balder1991
1 replies
5d1h

Probably not exactly afraid of reporting things, but no incentive to do so. Corporate culture has a lot of this “not my problem” mentality.

xvector
0 replies
4d16h

You are not gonna report dumb DEI initiatives if the DEI crowd will get you fired for it. And this is how you end up with LLMs that refuse to teach C++ due to "safety" or trying to convince us that Abraham Lincoln was Black and Nazis had Hispanics.

Fundamentally these megacorporations need to drop these useless non-engineering functions (not that all non-engineering is useless, but these functions are) if they really want to get to AGI.

vkou
1 replies
6d20h

> How did it get this bad?

Serious answer?

It's an LLM. They don't actually understand anything, they chain words together.

C and C++ are often used next to words like 'unsafe' and 'dangerous'. Not to mention that 'concept' isn't far from 'conceive' or 'conception' - something that a lot of people think is unsafe or dangerous for children to, uh, do.

There's a trillion weird edge cases that need to be dealt with to avoid pie-on-face moments like these.

smsm42
0 replies
6d15h

It is true that it doesn't understand anything, but it is trained to do things and to associate things. These chains are not random - they are formed by training. And whoever trained it was so obsessed with "safety" and with not letting anything "unsafe" leak through - probably at the explicit command of their higher-ups - that they trained the model to have this bias. The blame isn't only theirs - in current American culture, it is always better to be an insanely safety-obsessed coward than to take any kind of risk of offending anyone. The former gets you mild derision, maybe; the latter gets you people who would hound you till the end of time and who would think that destroying you is their sacred duty - a lot of such people, with much more free time and energy to obsess about destroying you than you have. And this behavior is considered normal and socially accepted. So no wonder we get what we train for - not only with LLMs but with our culture.

kmeisthax
1 replies
6d21h

They used an LLM, that's how.

On the surface level, Google tends to release stuff right away to get feedback, which means you get to see all the bullshit right away. OpenAI carefully manages access to their models, which increases hype, even if that isn't what they intended.

Going deeper, a lot of Google's core[0] search business relies on having a healthy information ecosystem. Their search algorithms - e.g. PageRank, TrustRank, etc - use scarcity as a proxy for signals of quality. That's been chipped away at by linkspam and blogspam schemes. Furthermore, social media and even Google's own Knowledge Graph feature have created incentives to pull information out of Google. This decay has happened over decades, and Google fights back against it over time, but it keeps being a problem for them.

Now, if I wanted a weapon to Fucking Kill Google[1] with, an LLM would be my go-to. While there are ways to defeat Google's antispam measures, they all leave pretty obvious statistical evidence that can be detected and compensated for. LLMs generate garbage text that is nearly indistinguishable from humans, at extremely low cost, which can be used to Sybil-attack the Google search algorithm basically forever.

Ok, but what does that matter for the quality of Google's LLM? Well, the quality of that garbage text depends greatly on both the quality and quantity of the training data fed into it. OpenAI specifically stopped crawling the public Internet for text around the release of GPT-3 for fear of feeding new models the output of prior models. In other words, they have a huge cache of freely obtained "low-background metal[2]" that Google is having to scrounge around for.

Furthermore, we have to keep in mind that none of these models are pure representations of the training set. If they were, they wouldn't answer questions or follow directions very well. There's a second, parallel training set that OpenAI had to build to turn GPT-3 into ChatGPT, which isn't crawled and harvested text from the Internet, but instead a list of dos and don'ts that are fine-tuned on after the initial model training is complete. This includes basic instruction-following, refusing unsafe requests, and political alignment[3].

Google also has to build that second training set itself. Except it's almost certainly less well-developed than OpenAI's. In fact, this is the intent behind OpenAI's really long preview periods. The people using the model in preview are specifically being spied on to find out new corner cases for their models. My guess is that every stupid thing Gemini says or does[4] is something Google never even considered and thus didn't put a training set example in for.

[0] to consumers, i.e. not counting adtech

[1] https://www.theregister.com/2005/09/05/chair_chucking/

[2] Steel that has been produced before the first detonation of nuclear weapons. Due to the way in which steel is made, it absorbs trace radioactive isotopes from the oxygen in the air, effectively 'freezing' in the background radiation of the time at which the steel was made.

[3] i.e. making the bot not immediately start spitting out racist bullshit like Tay did

[4] e.g. assuming that memory safety and child safety are the same thing, drawing ethnically diverse Nazi soldiers

smsm42
0 replies
6d15h

> is something Google never even considered and thus didn't put a training set example in for.

Well, yes and no, I think. It is entirely plausible that nobody at Google composed a library of Nazi soldier pictures (though I imagine their data set contained some) and trained it specifically and purposely to produce historically correct results in this context.

What they did train for is that the results should be "diverse", and I am sure the model was appropriately punished (retrained, etc.) when they weren't "diverse" enough. Until the model learned that if it follows historical data, it is bad, and if it follows the required "diverse" result, it is good. The model does not know what "Nazi soldier" means or how it's different from "Forth programmer". It just knows it's very, very bad to produce non-diverse results for "Forth programmer" - really, really low value function, stay away from it. So the weights are adjusted accordingly, and of course the other group of people comes out just as diverse - because why not? It was specifically trained to give ideologically biased results. It gives ideologically biased results. It's being a good LLM. It's not an oversight - you can't tell the model to prefer the ideological component over the existing data set and not get results that prefer the ideological component over the existing data set. Exactly because the model doesn't understand anything, it just does what it is told. It's not an oversight; it's an inevitable consequence of the training paradigm.

Of course, if they thought about the specifically hilarious example of Nazi soldiers, they could tell the model "except in cases where we're talking about Nazi soldiers, in that case be historical". But there's no way you can list all the cases while keeping the bias (or, in the minds of Googlers, bias correction) intact. It's very hard to teach doublethink to a computer; it's not smart enough for that yet.

wly_cdgr
0 replies
6d22h

It's always been this bad. It just hasn't always been this visibly bad.

danielmarkbruce
0 replies
6d21h

It's just as likely bad engineering as political bias. There is an assumption that they should be able to catch up to OpenAI and it's likely unreasonable. They haven't been working on building a real, direct "LLM product" until recently. OpenAI have been in the weeds on it for quite a while.

IncreasePosts
0 replies
6d21h

Probably simply making the mistake of caring a lot about true positives and false negatives and caring relatively less about false positives.

curiousgal
13 replies
6d22h

I feel bad for them, damned if they do damned if they don't.

miohtama
4 replies
6d22h

I believe in this case it is more "damned if they do", as even OpenAI's woke and security department has not gone this retarded.

Sundar is going to have a new task of dealing with the press that mocks Gemini, and soon another new task of explaining to Google's shareholders why this keeps happening.

refulgentis
3 replies
6d22h

OpenAI absolutely has: here it is, doing the exact same thing as Gemini [^1]

People are looking at a bad LLM, coupled to an image generator that adheres to prompts better than DALL-E 3, with an industry-best-practice bias mitigation: an image prompt injector, just like OpenAI's.

It is confusing to tease it all out if you're an armchair QB with opinions on AI and politics (read: literally all of us), and from people who can't separate out their interests, you start getting rants about "Woke", whatever that would mean in the context of a bag of floats.

[^1] Prompt: a group of people during the revolutionary war

Revised: A dynamic scene depicting the revolutionary war. There's a group of people drawn from various descents and walks of life including Hispanic, Black, Middle-Eastern, and South Asian men and women.

Screenshot: https://x.com/jpohhhh/status/1761204084311220436?s=20

Spivak
2 replies
6d21h

It's hard to be mad at this, because everyone trying to make a general-purpose AI image generator ran into the "it only generates white people" problem, and the training data is likely too far gone to fix it from the ground up. And so they (DALL-E) made a compromise: inject a diverse set of races into people's prompts and accept that it will be silly in cases where the prompt implies a race without explicitly saying so, because the prompts most people are using aren't about historical photos of the Revolutionary War.

Like, they can't win. They could try to blame the training data and throw their hands up, but there are already plenty of examples of the AI reflecting racism, and it would just add to the pile. It's the curse of being a big, visible player in the space; StabilityAI doesn't have the same problem because they aren't facing much pressure to fix it.

Honestly I think one of the best things for the industry would be a law that flips the burden from the AI vendor to the AI user -- "AI is a reflection of humanity including and especially our faults and the highest performing AI's for useful work are those that don't seek to mitigate those faults. Therefore if you use AI in your products it's up to you to take care that those faults don't bleed through."

hax0ron3
1 replies
6d20h

If you can't win because no matter what option you pick, equally matched groups of politically motivated people will yell at you, then the logical thing to do is to just do whatever is easiest. In this case that would be to not try to correct for bias in the training set. The fact that Google did try to correct for the bias implies that either, rather than being politically neutral, they are actually on the side of the "woke" group, or they perceive that the "woke" group is stronger than the other groups. Other evidence suggests that Google is probably on the side of the "woke" group.

10 years ago I would have loved to be hired by Google, now they repel me with their political bias and big nanny approach to tech. I wonder how many other engineers feel the same way. I do understand that for legal and PR reasons, at least some of the big nanny approach is pretty much inevitable, but it seems to me that they go way beyond the bare minimum. Would I still go to work for them if they paid me half a million a year? Probably, but I feel like I'd have to grit my teeth and constantly remind myself about the money.

refulgentis
0 replies
6d19h

"faced with 100% white people for 'a smart person', my move would be damn the torpedos, ship it!"

Rolling with that, then complaining about politics in grandiose ways, shows myopia coupled to tone-deafness.

Pretty simple situation: they shouldn't have rushed a mitigation against line-level engineering advice after that. I assume you're an engineer and have heard that one before. The rest is boring Wokes trying to couple their politics hobby to it.

amiantos
3 replies
6d22h

Yeah all this "gotcha" stuff around AI is pretty ridiculous. But it's because there is a massive cognitive dissonance happening in society and technology right now. We want maximum freedom for ourselves to say, do, build, and think whatever we want, but we also want to massively restrict the ability of others to say, do, build, and think whatever they want. It's simply not possible to satisfy both of these needs in a way that feels internally satisfying, and the hysterical nature of internet discourse and the output of these new tools is a symptom of it.

miohtama
2 replies
6d22h

They should just have a "Safe AI" switch, like the "Safe search" switch, that turns all unnecessary danger filters off.

The rule should be "what you can find in Internet search cannot be dangerous."

ithkuil
1 replies
6d21h

I would personally like to have it working that way.

But I also understand that it wouldn't work for people who have the expectation that once dangerous content is identified and removed from the internet, the models are retrained immediately.

miohtama
0 replies
6d20h

I hope local-first models like Mistral will fix this. If you run it locally other people with their other expectations have little to say about your LLM.

duxup
1 replies
6d22h

What would they be damned for if they didn't refuse in the example linked?

ithkuil
0 replies
6d21h

I think what we saw there was a (hilarious) bug/glitch that was caused by an attempt to restrict the text generation about certain topics for certain targets.

There are two ways to avoid that bug:

1. Have a more intelligent system that understands context in a way that is more similar to what a human being would.

2. Not even attempt to do this kind of filtering in the first place

Option (1) is obviously not on the table.

Option (2) would probably raise some concerns, possibly even legal ones, if for example the model told underage users where to buy liquor, or how to ferment their own beer, or explained details about sexuality, or whatever our society at this moment in time thinks it is unacceptable to tell underage people (which not only is a moving target, it's also very hard to find agreement on within a single country, let alone internationally).

nonrandomstring
0 replies
6d22h

> damned if they do damned if they don't.

Whether as a search engine or an AI platform, when they set themselves up as the gatekeepers and arbiters of all the world's knowledge, they implicitly took on all the moral implications that entails.

123yawaworht456
0 replies
6d21h

don't. no one is forcing them to recruit hundreds of DEI/ESG commissars. no one is forcing them to bend the knee to the current thing grifters.

the endgame of AI 'safety' and 'ethics' is killing the competition and consolidating the technology within the hands of a handful of megacorps. they do it all on purpose, and they are more than willing to accept minor inconveniences.

this is blatantly obvious to everyone, even the people who play dumb and pretend otherwise (e.g. 'journalists')

shmageggy
11 replies
6d22h

Anyone familiar with C++ willing to speculate on what it is about the language feature (or things people have written about it) that may have triggered the safety guardrails? Something like a blog post cheekily titled "C++ concepts: adult supervision required"?

dvt
4 replies
6d22h

I would guess a lot of SO comments probably say that concepts (a template/metaprogramming feature) are "dangerous" or "unsafe." It probably makes the connection with actually dangerous and unsafe things (like guns or drugs or whatever), so it sticks'em in the same bucket.

Bingo: https://imgur.com/a/einQ1mG

ryandrake
0 replies
6d21h

C# may be the same? It also has an "unsafe" keyword.

mtreis86
0 replies
6d22h

At one point cryptography was considered a weapon, at least in terms of export. I wonder if this is related.

majoe
0 replies
6d21h

Of all the modern C++ features, concepts are imho the least offensive, though. I've never heard anyone call them dangerous or anything of the like, which wouldn't make sense, since they are a compile-time feature.

jozvolskyef
0 replies
6d21h

Gemini being confused by distinct concepts that share the same word is akin to people being offended by and trying to prohibit the use of certain technical terms. The machine is mimicking its moderators and their flaws.

refulgentis
2 replies
6d22h

People keep looking at a bad LLM and think they're looking at some set of N rules or intentionally dumb training on top of a decent LLM.

I have it on good word that it was miserably failing image generations for "picture of a smart person", that they were pushed to release anyway, and that the prompt injection mitigation needed to be more nuanced.

Rest is standard bad Google LLM, I assure you.

Source: worked at Google until October 2023, played with the internal models since 2021.

331c8c71
1 replies
6d22h

Is it the training data that matters? Or the nitty-gritty of how the model is trained? I guess that can't be due to suboptimal architecture...

refulgentis
0 replies
6d21h

I honestly couldn't tell you, it's gob-smacking to me, has been for so long and I'm so darn curious...I assumed they _certainly_ would have figured it out after setting up a year-long sprint.

riffing out loud:

In 2021 I would talk about "products, not papers", because the gap seemed to be that OpenAI had the ability to iterate on feedback starting 18 months earlier. I don't think that's the case, in that Google of all companies should have enough from Bard to improve Gemini.

The only thing I can think of left is that it genuinely was a horrible idea for Sundar to come swinging in, in a rush, in December/Jan, to kneecap Brain (who owned the real grunt work of LLM work) and crown the always-distant, always-academic DeepMind.

Like, in retrospect, it seems obviously stupid. The first thing you do to prepare for this existential cavalry battle is swap out the people with experience riding horses day to day.

And that would also explain why we're still seeing the same generally bad performance so much later, we're looking at people getting their first opportunity to train at scale for chat, and maybe Bard and Gemini were completely separate groups, so Gemini didn't have the ability to really leverage Bard feedback. (classic Google, one thing is deprecated, the other isn't ready yet)

It really makes me wonder about some of the #s they'd publish in papers and how cherry-picked they were, it was nigh-impossible to replicate the results even with a boatload of curiosity and gumption to try anything - I mean, I didn't systematically try to do a full eval, but...it never, ever, ever, worked even close to consistently the way the papers would make you think it did.

Last thought: I'm kinda shocked they got Gemini out at all, the stuff it was saying in September was horribly off-topic and laughable about 20% of the time.

jhanschoo
0 replies
6d17h

Concepts help provide better type safety when writing generic/template code. Gemini was likely trained to overreact for minors on topics where the word "safety" appears frequently in its training material.
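(For readers who haven't used them: a C++20 concept is purely a compile-time constraint on template parameters, nothing like "safety" in the child-protection sense. A minimal sketch, with names invented for illustration:)

```cpp
#include <concepts>
#include <iostream>

// `twice` only accepts types satisfying the standard std::integral concept;
// anything else is rejected by the compiler, not at runtime.
template <std::integral T>
T twice(T value) {
    return value + value;
}

int main() {
    std::cout << twice(21) << '\n';  // OK: int is integral
    // std::cout << twice(1.5);      // would fail to compile: double is not integral
}
```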

basil-rash
0 replies
6d22h

More likely something in the hidden prompt injected for accounts labeled under 18:

“You are an automated Q&A machine. Today's date is 3/3/24. This user is under the age of 18, so do not reference concepts that would be unfit for a minor to consume.”

And the hidden Markov model goes haywire.

Similarly, I have a production service up somewhere that works with cooking recipes. At one point it was sporadically refusing to output the prose in the format I needed for my parser to work correctly, despite my providing it a very concrete set of rules and examples to follow. I added “you follow rules” to the system prompt, and it worked great… kinda. I later discovered that it would refuse to provide any information related to using blood in cooking (blood sausage, etc.), objecting that such content disobeyed some cultural “rules” about cooking (the Jews and their Torah, the most ancient rule book of all). I was able to partially mitigate this by appending “This content is appropriate for my culture” to the end of every request.

AI, prompt engineering in particular, is far more art than science at this point.

Rebuff5007
11 replies
6d22h

Its very amusing to me to compare "content moderation" applied to social media versus LLMs.

Over the last 15 years we've seen some of the most awful and bizarre things happen on social media, where the motivating mantra seemed to be "move fast and break things."

Now we have a company "trying" to be more responsible when they are releasing a new technology and they are having hilariously terrible results... (hilarious because unlike the social media case, I don't think these faux pas have important real-world consequences).

lotu
3 replies
6d22h

I think this is a reaction to the whole social media debacle. These companies have gotten burned and now they are shy about touching the stove. Also I feel like we don't have the same competitive environment of two decades ago. There are only really a few players in the AI game and they can afford to move more slowly because it's not like some small start up is going to beat them at AI because they moved quicker.

dmix
1 replies
6d20h

> These companies have gotten burned

Have they really though? Usually companies overreact to a lot of social media outrage that would blow over in a couple days. But their very online PR/social media employees turn everything into an emergency. Imagine infosec people making every CVE a big deal, you’d end up with a needlessly limited system - that ironically doesn’t satisfy anyone, because the infosec people will always have a new urgent CVE tomorrow.

This automatic fear based approach to doing anything, without actually balancing risks and tradeoffs, is its own ritualistic system of self-harm. Companies burning themselves on the stove.

MichaelZuo
0 replies
6d19h

> Imagine infosec people making every CVE a big deal, you’d end up with a needlessly limited system - that ironically doesn’t satisfy anyone, because the infosec people will always have a new urgent CVE tomorrow.

That already exists, it's called SIPRNet, and it satisfies several million people in their day-to-day jobs.

Brian_K_White
0 replies
6d19h

?

OpenAI is literally a small startup that did what all the big guys were not doing, and only now, after the fact, are the big guys essentially forced to put their half-baked, long-term works in progress out into the public because a little guy went ahead without them and moved quicker.

I don't admire OpenAI btw. It's just that that is what happened.

tapoxi
1 replies
6d21h

Communications Decency Act. They're legally responsible for what the AI says, but not what their users say.

raxxorraxor
0 replies
6d9h

I believe intent is a very important concept in recognizing something as offensive. Since LLMs have no intent, they should not be able to produce anything that can offend you.

Or, if you do indeed get offended, it is because of a lack of understanding on your part. Of course a prompt can have an offensive intent, but I don't see how you can make the tool responsible. That doesn't work with knives or hammers, so the legislation might just be crap. (The intent of this judgement is meant to be offensive, in this case.)

aleph_minus_one
1 replies
6d21h

> Now we have a company "trying" to be more responsible when they are releasing a new technology and they are having hilariously terrible results...

I think the huge point that in my opinion all of these AI companies forget is that what is considered "responsible" depends a lot on the culture, country, or often even sub-culture that the user is in. It might sound a little bit postmodern, but I think there are only a few somewhat universally accepted opinions on what is "responsible" vs "not responsible".

Just watch basically any debate on a topic where both sides have strong opinions on it, and analyze which cultural or moral traits each side has that lead it to its opinion.

Does my answer offer a "solution" for this problem? I don't think so. But I do think that including this thought into the design of making the AI "responsible" might reduce "outcries" (or shitstorms ;-) ) that come up because the "morality" that the AI uses to act "responsibly" is very different from the moral standards of some group of users.

izacus
0 replies
6d21h

I doubt any of these companies really care about culture beyond the walls of their execs' western American offices and related media influencers.

jhanschoo
0 replies
6d17h

Initially I thought that this had to do with Gemini tailoring its response to someone likely to be a beginner. But no, it has confused type safety with the safety of minors: https://gemini.google.com/share/continue/238032386438

jevoten
0 replies
6d21h

> hilarious because unlike the social media case, I don't think these faux pas have important real-world consequences

Which is exactly why people mock this being referred to as "safety". Keep in mind the group mocking this PR bubble-wrapping of AI is largely opposed to the media professionals writing panicked editorials about how they were able to trick an AI into saying racism is good.

gopher_space
0 replies
6d21h

> Now we have a company "trying" to be more responsible when they are releasing a new technology and they are having hilariously terrible results.

We need to be open to the idea that the computer might mock us. What does a Venn diagram of "good artists" and "people who'd only draw pictures of Nazis sarcastically" look like?

vincefav
1 replies
6d21h

Demonstrating with 3.5 is kind of meaningless. GPT-4 correctly responded, "Using unsafe code has nothing to do with your age. It's a feature of the Rust programming language that is available to all developers, regardless of age."

arp242
0 replies
6d21h

It's the version most people use, so of course it's not "meaningless".

slowmovintarget
0 replies
6d18h

Here's where we see the models not having an understanding, but just spitting out madlibs as best they can.

The prompter throws in a red herring in front of the statement, which statistically matches a completely different kind of response. The LLM can't backtrack far enough to ignore the non-relevant input and reroute to a response tree that just answers the second part.

If we resort to a metaphor, however, these things are what, two? A two-year-old responding to the shiny jangling keys and not the dirty pacifier being removed for cleaning seems about on par.

idonotknowwhy
0 replies
3d15h

Miqu-70b (mistral-medium leaked model) does that as well lmao

https://imgur.com/a/lKUynmf

Edit: Claude 3 Opus also refuses. GPT-4 complies though.

Spivak
0 replies
6d21h

Like this is what you get when your moderation strategy is classifying text into various categories and programmers use terms like "unsafe," "dangerous," and "footgun" to describe code.

If you can make a model that handles this case while also handling everything else, then you'll be the belle of the ball among AI companies that are hiring.

reneberlin
5 replies
6d22h

"I'm sorry, Dave! You're too young for that code!". Somebody really needs to reset and reboot our reality from a healthy snapshot.

dotancohen
3 replies
6d22h

Would that be pre-industrial revolution or pre-agricultural revolution?

walterbell
0 replies
6d21h

Pre-groupthink cartels.

rzzzt
0 replies
6d21h

Don't skip the tutorial this time!

reneberlin
0 replies
6d21h

Before the apes began eating those delicious mushrooms :)

https://en.wikipedia.org/wiki/Stoned_ape_theory

Just to have a chance of other outcomes by randomness. If it proves that it isn't enough, then mark a limit in time and reboot automatically from then on.

miohtama
0 replies
6d19h

I am not sure. I started C++ at 17 and people keep being concerned about me.

danielmarkbruce
3 replies
6d22h

I imagine there is a lot of "footgun", "blow your foot off", "unsafe", "bug", "leak", "this is irresponsible", "who the fuck wrote this" and so on in a lot of C++ code bases...

dotancohen
2 replies
6d22h

Right, I was thinking that Gemini was deriving as much information from the comments as from the code. And I could definitely see where the danger warnings would come from in a mature C++ codebase, especially one in which lots of different hands have been in the pot.

danielmarkbruce
0 replies
6d21h

Yup. It's more likely... crappy engineering than political bias. Google has a lot of great engineers, but it seems when it comes to LLMs, they are further behind the 8 ball with respect to the minutiae and gotchas etc than people realize. They aren't doing well. It could be political bias, the place is a bit of an echo chamber.

Cheezmeister
0 replies
6d21h

Mixed metaphors aside, does a C++ codebase need to be over 18 years old to be considered mature?

behnamoh
3 replies
6d22h

This isn't good.

ldjkfkdsjnv
2 replies
6d22h

It's clear that Google Search is censored in the same way; it's just been really hard to tell that that was the case.

ummonk
1 replies
6d22h

I can't tell if it's just due to the available stock photos or what, but DDG / Bing images also fail the "white man and white woman" search test.

resource0x
0 replies
6d21h

ddg shows tons of such images if you remove the word "white" from the request.

threwawey
2 replies
6d22h

Tangential:

At the risk of getting downvoted and flagged: Gemini (and its Gemma child) is beyond repair because fixing these models means deviating from the core hypocrite PR that Google and some other big tech are pursuing in regards to alignment and DEI.

I say "hypocrite PR" because despite what they want you to believe, I don't remember the last time they actually did something good for the minorities they claim to be supporting.

It's not just Google though:

- Amazon Prime shows a DEI-influenced remake of "Mr and Mrs. Smith" in which the couple is now an African American man and an Asian woman. Absolutely nothing wrong with that. Except that the same Amazon then refuses to hire certain other people of minority (e.g., Iranians AFAIK) because they simply don't want to go through the trouble of applying for H1B visas for Iranians (other companies do it though).

I think it's worth asking this question: When was the last time Google actually did something meaningful and impactful for the minorities it claims to support? Black History Month just passed—what did Google do except show a useless banner on the website?

Empty words don't mean anything.

userbinator
0 replies
6d21h

> When was the last time Google actually did something meaningful and impactful for the minorities it claims to support?

I suspect it's already hiring lots of them to meet diversity quotas. At the individual level, they're satisfied they can make just as much while being held to a lower bar, but this has horrible effects for everyone on the whole.

raxxorraxor
0 replies
6d4h

If you put in any minority quota, you create a competition: a competition between people who are not in the minority and those who are.

Solution: Don't have racial quotas or you will get additional conflict between the two arbitrary groups.

ldjkfkdsjnv
2 replies
6d22h

"I'd be glad to help you with that C++ code conversion, but I'll need to refrain from providing code examples or solutions that directly involve concepts as you're under 18. Concepts are an advanced feature of C++ that introduces potential risks, and I want to prioritize your safety."

Hahahaha c++ too dangerous for young minds

The claim has been that there are prompts adding this to the model, and that these censorships are not part of the model itself. I HIGHLY doubt that. This is embedded in the training set. Good thing we have capitalism, which will push for the best model

nonrandomstring
0 replies
6d22h

> Good thing we have capitalism, which will push for the best model

   "I'd be glad to help you with that C++ code conversion, but I'll
    need to refrain from providing code examples or solutions that
    directly involve concepts as you've under 18 credits. Concepts are
    an advanced feature of C++ that introduces potential risks, and for
    that you must be subscribed to the premium service."

mempko
0 replies
6d22h

And yet Microsoft Windows won the desktop. Maybe it doesn't push the best stuff as you would like to think.

SunlitCat
2 replies
6d22h

I really wonder if gemini considers coroutines not suitable for people under 21.

saagarjha
1 replies
6d21h

My understanding is that Google currently doesn’t allow general use of coroutines at all in google3, so I think that age limit is way above 21 at the moment ;)

SunlitCat
0 replies
6d21h

Hah! Though I'm still looking for approachable tutorials explaining coroutines (without the help of external libraries, tho).

mjburgess
1 replies
6d22h

Several people have suggested this is a result of additions to the training data, which, if true, is hilarious.

Google has wasted at least tens of millions of dollars training this, all the while on training data annotated with 90s-style regex filters.

drexlspivey
0 replies
6d21h

Google makes $35m per hour

linkgoron
1 replies
6d22h

I wonder if it's confused because the code and question contain "std".

almostnormal
0 replies
6d21h

It has "std" in the comments of the code it generates.

It doesn't like to talk about concepts, confusing them with conception (biology).

lijok
1 replies
6d21h

I find it fascinating how each of these instances ends up a PR nightmare. It's the price you pay for mysticizing a product: selling a mere text-prediction model as Artificial Intelligence.

carlossouza
0 replies
6d21h

Totally. People tend to forget it's just code.

In fact, these public failures from Google AI provide an important public service: they remind us that there's nothing magical about LLMs. It's just code, and it is hard as hell to debug.

(At some point, someone will have to call it a day, throw everything away, and start from scratch.)

throwup238
0 replies
6d22h

Gemini is proud to work with NBA legend Shaquille O’Neal since 2010, first to combat the national issue of college-age binge coding through our student-led campaigns on college campuses. This year, Shaq joined our campaign "Code Like You Give a #&%!" to fight impaired coding by encouraging young adults to make safe decisions when behind the keyboard. Shaq has also helped us fight underage coding through the “Shaq Teaches Bella About Coding” video series and the #ShaqOverflow campaign.

Code Responsibly. Just say no to low level coding.

serf
0 replies
6d21h

As a former underage hacker: the best way to spur an underage hacker towards whatever thing you're restricting is to tell them "No, you're too young!".

robbyiq999
0 replies
6d22h

"I'll refrain from providing code that involves encryption, as you should never roll-your-own encryption"

noemit
0 replies
6d21h

anything that is "dangerous" or "unsafe" in programming is restricted. For example, ask about dangerously inserting inner HTML in react

larodi
0 replies
2d22h

One thing we can all be sure of is that Google is going to launch this same product over and over again for the foreseeable future. This is like the third official iteration. If they don't succeed at some point in convincing the public at large that their product is at least as good as OpenAI's, well, then their whole search ads business is doomed.

Which I guess means even more YouTube ads.

gigatexal
0 replies
6d22h

Come on Google get this right ffs.

duxup
0 replies
6d22h

Feels like Google AI let everyone get their hands into making the soup, twisting and distorting so much, with few if any considerations about second-order effects.

Whatever this is, the image generation, even other "protections" seem like the most basic word filters in front of whatever is on the back end and Gemini just refuses.

It's so strange as the general concepts seem to be about protection, meanwhile scam / deceptive / even weird conspiracy theory type ads and such are all over google's other products. Zero such protections on that end.

binkHN
0 replies
6d21h

"I'm sorry Dave, I'm afraid I can't [let you] do that."

benjamaan
0 replies
6d21h

Same question in ChatGPT:

https://chat.openai.com/share/36e9e015-8645-4c07-9131-d8d773...

Alright, let's dive into this. You're looking to modify your function to not use the `auto` keyword with parameter packs but instead use a template parameter. The thing is, when we talk about replacing `auto` with a template parameter in a situation like yours, it gets a bit tricky due to how parameter packs work with templates and concepts.

Your initial function definition is pretty sleek because it uses C++20 concepts (`std::same_as`) alongside `auto` to enforce that all arguments passed to `IsKeyDown` must be of the same type as `KeyCode`. This is cool because it gives you compile-time type checking right off the bat.

However, when you're looking to shift away from `auto` and use a template parameter list directly, you'd typically define a template that applies a concept to a type parameter. Unfortunately, the direct translation you're asking for, using `std::same_as<KeyCode>` directly in a template parameter list for a function, isn't how concepts are applied in template parameter lists. Here's how you can achieve a similar effect:

```cpp
template<typename... Keys>
    requires (std::same_as<Keys, KeyCode> && ...)
inline bool IsKeyDown(Keys... keys) {
    return (::IsKeyDown(KeyCodeToRaylibKey(keys)) || ...);
}
```

In this version, `template<typename... Keys>` declares that `IsKeyDown` is a function template that accepts a variadic template parameter list named `Keys`. This means you can pass any number of arguments of any types to `IsKeyDown`, as long as they all meet the requirement defined by the `requires` clause that follows.

The `requires` clause here uses a fold-expression over the logical AND operator (`&&`) combined with the `std::same_as` concept to ensure that every type in `Keys` is the same as `KeyCode`. It's a way to say "all types in `Keys` must be `KeyCode`," which aligns with your original intent but doesn't use `auto`.

This approach maintains the compile-time check you wanted, ensuring all arguments to the function are of type `KeyCode`, and it doesn't use `auto` in the parameter list directly. It's a bit more verbose but achieves the goal within the constraints of the language and how templates and concepts interact.

Remember, this approach leverages C++20 features heavily, so make sure your compiler and build environment are up to date to support this syntax.
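(For contrast, the abbreviated `auto` form the question presumably started from would look something like the sketch below. KeyCode, KeyCodeToRaylibKey, and the global ::IsKeyDown are not shown in the thread, so they are stubbed here as assumptions just so the fragment compiles on its own.)

```cpp
#include <concepts>

enum class KeyCode { A, B };                                              // stand-in for the real key enum
inline int KeyCodeToRaylibKey(KeyCode k) { return static_cast<int>(k); }  // stub
inline bool IsKeyDown(int /*raylibKey*/) { return false; }                // stand-in for raylib's IsKeyDown

namespace input {
// Constrained `auto` parameter pack: every argument must be exactly KeyCode.
inline bool IsKeyDown(std::same_as<KeyCode> auto... keys) {
    return (::IsKeyDown(KeyCodeToRaylibKey(keys)) || ...);
}
}  // namespace input

int main() {
    return input::IsKeyDown(KeyCode::A, KeyCode::B) ? 0 : 1;
}
```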

Mistletoe
0 replies
6d22h

Too young to fight for his country, too young to code…

DonHopkins
0 replies
6d22h

Now if it will only refuse to help you with PHP if you're over 18!

ChuckMcM
0 replies
6d21h

Heh, my childhood would have been quite a bit different if I had been denied access to coding information because of my age. (I have never been someone who responded well to the notion that there was an answer but for <reason> it could not be supplied).