Leetcode style interviews probably serve two functions:
1) A way to suppress wages and job mobility for SWEs. Who wants to switch jobs when it means studying for a month or two? Also, if you get unlucky and some try-hard drops an atomic LC hard bomb on you, now there's an entire company you can no longer apply to for a year.
2) A way to mask bias in the process while claiming that it’s a fair process because everyone has a clear/similar objective.
Meet someone who went to your Alma mater? Same gender? Same race? Give them the same question as everyone else, but hint them through it, ignore some syntax errors, and give them a strong hire for “communication” when they didn’t even implement the optimal approach…
Or is it someone you don’t like for X reason? Drop a leetcode hard on them and send them packing and just remain silent the entire interview.
To the company this is acceptable noise, but to the individual, this is costing us 100s of thousands of dollars, because there’s only a handful of companies that pay well and they all have the same interview process. Failing 3 interviews probably means you’re now out $200-300k of additional compensation from the top paying companies.
I’ve interviewed for and at FAANGs. I can’t believe how low the bar has been for some of the people we’ve hired, while simultaneously seeing ridiculous quad tree/number theory type questions that have caused other great engineers to miss out on good opportunities.
Someone will reply to me “if you know how to problem solve you will always pass.” Ok, come interview with me and I will ask you verbatim one of those quad tree/number theory/inclusion exclusion principle questions and I’d love to see you squirm, meanwhile another candidate is asked a basic hash map question.
I'm sure anyone determined to do so can act unfairly regardless of what process is in place, but the fact that there is a standardized test in my mind does the opposite and makes the process much fairer. Assuming a fair-minded interviewer, the process gives a chance to a candidate whose resume may have less vaunted names on it to demonstrate their skill. I'm quite sure that I'd never have had some of the opportunities I have, not having a CS degree, if it weren't for whiteboarding interviews. I can't imagine any possible process that would thwart interviewers intentionally subverting it to hire their friends.
The problem isn't standardized tests, it's that leetcode questions are about having the time to have learnt the answer beforehand, rather than raw ability for problem solving.
That’s not true. In any discipline when confronted with a test there are two strategies: brute memorization of the question/answers, or developing the skills to tackle the problem dynamically. You cannot categorically claim that LC tests are largely memorization tests rather than raw problem-solving skills. That is just the approach you are capable of taking. Not being able to see up the mountain doesn’t imply there are no climbers above you.
If that were the case then a normal, well accomplished software engineer shouldn't need to "grind" leetcode to pass an interview.
It's just a cargo cult. Getting someone to do a code review is a much, much better test of skill:
Do they ask questions?
Are they kind in their assertions?
At what point do they go "I don't know"?
Do they concentrate on style or substance?
Do they raise an issue with a lack of comments?
Do they ask why the description of the PR is so vague?
When they get pushback, are they aggressive?
All of those are much better tests than "rewrite a thing that a library would do for you already"
No, what this shows is that the skill range for accomplished professional software devs is absolutely massive. What these companies want is to find the tail end of this very wide distribution. Leetcode interviews do a decent job at this. If you have been coding for a decade and can't do leetcode mediums with almost no prep, and hards with a moderate refresher on data structures, then you're simply not in the right tail of the skill distribution and they don't want you. This is what so many in our industry can't accept: you're just not talented enough to earn a FAANG job.
I mean it doesn't, because I'm at a FAANG. At a FAANG you are infantilised from the very start: sure, you passed a very difficult interview where you had to balance a binary tree as efficiently as possible, but you're going to use none of those skills here.
What you actually end up doing is copy/pasting some random code you found using internal code search, because the sensible way of doing it can't happen, as that would involve porting a third-party library and doing all the procedural work that follows.
So you hack some shit together, ship it out and hope that it doesn't break. You then decommission the nasty hack you shipped last year and claim credit for innovating. Is your product not hitting the right metrics? Losing users? Doesn't matter; so long as the super boss is happy that you've hammered in the REST API for the stupid AI interface, you're not going to get fired.
In a startup/small company, if you fuck up, the whole place is going under. Need metrics? You'll need to find a small, cheap and effective system.
Here, we just record the entire world and then throw hundreds of thousands of machines at it to make a vaguely SQL-ish interface. Don't worry about normalising your data or packing it efficiently; just make a 72-column table and fill it full of random JSON shit. Need to alter a metric? Just add a new column.
In short, don't praise or assume that FAANGs are any good at anything other than making money. They are effectively a high-budget Marvel movie: sure, they have a big set, but most of it is held together with tape and labour. Look round the side and you'll see it's all wood, glue and gaffer tape.
FAANGs want the top 0.1% of developers; they don't necessarily need them for most roles. But the point is to hire developers that you could put into any role in the company within reason and have them be successful. 99% of development work at a FAANG is pretty unexceptional and doesn't require exceptional developers. They hire for that exceptional 1%.
FAANGS want a load of loyal, naive people who are willing to work loads of overtime and not ask too many questions. Who better than posh kids from great universities who haven't quite figured out that life isn't a meritocracy yet!?
Sure they also want the top 0.1%, but they have a different interview track. Do you think all those OpenAI engineers that were going to follow Altman were asked to do leetcode?
I don't know about OpenAI specifically, but I have heard that interviews for other top ML research positions were partly based on leetcode problems.
FWIW I sort of agree with you.
Background:
I'm in a FAANG-type company now, a YC company, 3000+ engineers. I'm a Staff SWE with 20+ years of experience (ECE degree) and make $600k+ per year. I've gone through the promo cycle here (it sucks).
I can't do any leet hards and can do leet mediums after studying. Some easies take me a couple of tries. I usually do very poorly in interview coding exercises.
** Throwaway; main account is 10+ years old.
Curious, would you say FAANG offers the right challenges to stay in the 0.1% or 1% if one started out there? Are they actually in the right place to grow?
No, these engineers at FAANG companies could not solve those problems cold without having been taught how. I have worked at two, which is how I know. I have never seen a question in an interview I haven’t seen before. Many of these questions went unsolved for decades in the industry, so no, these engineers, who mostly aren’t DSA experts but distributed computing experts, could not solve them cold. I also saw how interviewers used questions to reinforce their own biases around university, gender, or home country in these interviews.
Sure I can. By the time you get to Leetcode hard, these aren't just "can you derive the answer". The questions by design take 45+ minutes and have some weird quirk in it that is nominally related to the core concept being tested. These aren't necessarily meant to be done on the fly during an interview period.
A better analogy is that you're on a road and you see a freeway above you. The people above you aren't "better"; they are simply on another road, to another destination. But they aren't necessarily worse either. They could be on their way to a dead-end job or could be a billionaire CEO.
That is to say, it's useless comparing yourself to other people you don't know. Everyone has their story.
Thank you. You just proved my point that LC is not categorically a memorization test, by reinforcing that only in specific cases, at some specific levels, do you need some specific domain knowledge.
That's exactly true.
LC tests typically copy problems from the university the interviewer graduated from. College programs differ, so this is really a case of what you were introduced to.
There's a fairly popular online LC test company in my corner of the world which was formed by graduates and lecturers from a certain university and they started out by just giving the problems from the curriculum. Result was heavy bias in favour of students and alumni of that university.
This sounds like you want to penalize students who studied for the exams. Or at least not reward them.
Like all interview formats, it’s a proxy for understanding if the prospect would be able to get the work done and be a good fit with the rest of the team. I’d say it’s a pretty good proxy for work ethic at crunch time as well.
If your complaint is that a normal person wouldn’t have the time to study these things in detail, why would a company want to hire someone who has external obligations?
External obligations like full time employment and a family?
Yes exactly. All things being equal, I’d rather hire someone who’s going to dedicate his entire life to the soul-crushing work he will be assigned.
Wow, look guys I've found the fool.
I don't disagree; but why does the work need to be soul-crushing?
Why should a job be an exam? As someone who has worked at both FAANGs and startups, I've never found a job that remotely matches a leetcode problem.
Most companies are building products anyway.
The only value in leetcode is that you should be able to solve a couple in a short time and thus prove you at least know something about writing code. We use them as an interview prescreen because once in a while someone seems like a good person we would like to work with, but we have no clue after the interview if they can really code.
We had one person who worked on [censored] 20 years ago, then was manager of [non-programmers, rest censored] - now wants to get back into coding - can this person still code? If so I want them, but if they have forgotten everything... I of course censored details for privacy reasons.
s/exams/interview/g
Interviews are like exams, but you don't have any clue what topics are on the test. If Leetcode were some standardized approach to getting a license that verifies my ability to code myself out of a paper bag, I'd hate it, but I'd grit my teeth and study it. Exams can be studied for.
But it isn't, so I can be studying leetcode questions and be hit by a dozen other topics. I don't have time to study everything, and the market right now isn't worth pinpointing specific companies unless you have a stellar reference.
I couldn't disagree more. Someone so prepared for interviews has the least skin in the game. Layoffs come and they didn't work > 40 hours but were otherwise excellent? Oh well, they'll get another job in a month, because interviews are a breeze for someone who breathes DS&A.
I'd love to know the answer here as well. Why do companies internally penalize workers who were laid off, but then try to "steal" currently happy employees but make them jump through these hoops? The logic seems backwards; interviews should be hard and depress wages for the laid off, desperate workers so you can get a desperate unicorn. But you want someone who lacks the time to study because of current job obligations to get to an offer faster. Their proof of work ethic is being employed to begin with.
It's a good measure of whether they will sacrifice their home life and spend unpaid time doing extra work for you.
Some of us derive the answer on the spot...
For the hashmap problem, sure. Any SWE worth their salt should be able to figure those out.
I highly doubt you are sight-reading atomic bomb LC hards.
Some people can't do it whether or not they had the time to study, which is the point of the test.
Any advanced topic requires struggling through a big set of "special cases" to master.
This is how I learned mathematics for example.
After you have learnt enough of the special cases (trees) you start to be able to connect them and see their relation (the forest).
So are we just going with a base assumption that interviewers can NEVER be trusted with anti-bias training and learning how to evaluate people fairly? The examples mentioned in this comment section are all blatantly intentional biases that people are choosing to use. The amazing part is that all the “standard test eliminates bias” people seem to be the most ignorant of where bias helps THEM. Forcing people to study for two months is blatantly discriminatory to age and family status, at a systemic level. While “this white straight guy might explicitly choose to give the other white straight guy an easy question” is very subjective and intentional on the individual level. Like, employees can always choose to do bad things, in any situation. That’s why we have at-will employment…
Is the culture just so broken at these companies that it’s hopeless to expect people NOT to blatantly exploit the system for their friends? Why don’t people get fired for doing that?
How is this any less discriminatory than any other assessment based interview where you need to prepare? Non-assessment based interviews end up being vibes based which is much more discriminatory.
You’ve set up a false dichotomy here; for any given position, there is typically a way to conduct an assessment-based interview that doesn’t require too much preparation for qualified candidates.
For example, at my current job, we hire web developers with Rails experience. Our technical interview process consists of either a pair programming session or an async/take-home task (candidate’s choice) which requires the candidate to implement a small feature in a Rails codebase. We do have some room to improve on objective evaluation of the candidate’s performance, but there is a test suite and a rubric which we use to evaluate their work. None of this should require that the candidate study, unless they’re coming in to the interview without Rails experience.
This may work out great if you happen to have worked with Rails at your last job. However, I doubt that everyone you interview is actually that familiar with Rails; many are just pursuing any sort of opportunity they can get. In that case, they would actually need more time to brush up on multiple different tech stacks than simply on algorithms.
In that case, the interview question is working as intended. For most of our roles, we want several years of Rails experience, and we are clear about that fact in the job listing. If someone applies without Rails experience, they either didn't read the job listing, or are desperate to find any job. While I empathize with folks in the latter situation, our positions really do require the experience, and the job market isn't so bad right now that a smart candidate should be going a long time without finding something.
If you happen to have Rails experience but it was several jobs ago, the task we give you is basic enough that you should be able to Google what you need during the task to refresh your memory fairly quickly. In fact, I did this when I applied, having not worked with Rails in a number of years.
Edit: My main point is, even if you technically do have to "study" (really, just Google a few basic Rails concepts) if you're rusty, everything you do is preparing you for the actual job. Studying how to implement a hashmap, or computing the longest palindrome from a string of characters, or whatever other harebrained problem FAANG etc want to ask, is 99% of the time not really helping you prepare for those jobs.
Because most other assessment-based interviews are based on what you do on the job, likely stuff you've done at previous jobs as well. Less prep time when you've already spent thousands of hours on it over your career.
Yes. Because interviewing is
1. hard, but no company has proper full time proctors. So "expert interviewers" are a rarity
2. not standardized in the slightest. So your performance varies entirely by the interviewer, their style, and their mood that day.
3. some weird blind test where you hope you studied the right topics. You're not getting the best out of a candidate, you're basically hoping they read your mind and know exactly what you want them to say.
Sure, but that's a universal problem. "It's who you know, not what you know" is advice that has spanned centuries. I can't even blame the modern tech industry for that one.
That + the above issues with interviewing mean you're always going to go with a referral over a random applicant.
That isn't always true. When I interview, HR gives me the exact questions I'm allowed to ask. These are vetted both to prevent me from asking something illegal, and also by research to get the type of things useful for interviews. Sometimes it is annoying - you can easily finish the interview with a great score but I have no clue if you can write code or not. However we are carefully trained on how to ask the questions and how to grade them.
That makes sense for soft questions. But I doubt it's HR developing a dynamic programming problem and writing a rubric for how good a score to give based on a response.
I was mostly referring to technical tests, but I understand there are definitely some set of questions you need to ask no matter what. I don't really knock recruiters too much for repeating the usual "are you authorized to work in the US" kinda stuff even though it is the first question on their job application.
1. I’ve worked at multiple companies that absolutely have expert interviewers who design the interview questions and then teach mid-level engineers how to proctor them correctly. It’s like one 30 minute meeting, it’s not that big a deal.
2. All tech companies I’ve worked at since around 2015 have completely standardized interview questions, sometimes also hosted in a GitHub repo where any employee is free to comment or even open a PR to request a change. Every candidate gets the same questions. This thing where “faang” interviewers just pull a random LC question out of their ass is complete insanity to me. An organized set of questions takes a senior engineer like a week to organize and commit to… And if your questions can be memorized and recited by rote memory and the candidate can do well on it without the proctor knowing, then your questions are BAD or your proctor is a moron.
3. I’ve never worked anywhere where I would describe the interview like this. Even the startup where I was the first engineer and there was no formal “test”, it was an hour long chat with the technical cofounder where he grilled me about coding skills and past experience/accomplishments. I won’t take a job if the interview isn’t asking me things that focus on my existing experience and skills, it’s a red flag about the company culture in general.
As for your “universal problem,” I disagree with fatalistic takes where you just throw your hands up and say “whatever shall we do” all the while YOU are the one benefitting from the system that cannot be changed. This is how simple-minded people think about the world.
A leet-code test would be much more standardized if candidates could solve it at home. Just send me a link to the quiz and let me solve it within a specified time frame.
I've done tests like this for some companies. It felt a lot fairer and more closely resembling the actual work environment than live leet-code interviews, with biased interviewer(s) and a stress factor that's not a part of the actual job.
As a hiring manager I HATE leet-code tests, and they do nothing to differentiate candidates, but a take home in the era where people run chatGPT beside the interview window, or have someone else do the interview for them? Not a chance. You are 100% correct that it is way more representative, but the prevalence of cheating is ridiculous.
Take home is fine if you discuss it later in the interview. But also there should be some pre-screening to keep the number of interviewees reasonable.
I totally understand you, but want to offer a different perspective.
They will also be able to use ChatGPT on the job. And StackOverflow. And Google. If they know how to use tools available to solve a problem, that will benefit them on the job.
If you're testing them for what ChatGPT can already solve, then are the skills being tested worth anything, in this day and age?
Take-home LeetCode, even with cheating will still filter out a good chunk of candidates. Those who are not motivated enough or those who don't even know how to use the available help. You'll still be able to rank those who solved the task. You'll still see the produced code and be able to judge it.
Like other commenter points out, you can always follow up the take-home LeetCode. Usually, it becomes apparent really quickly if a candidate solved it on their own.
This does seem like a vexing problem, especially when interviews are conducted remotely.
I wonder if either of the following could be cost-effective:
(a) Fly the candidate to a company office, where their computer usage could be casually monitored by an employee.
(b) Use high-quality proctoring services near the candidate. E.g., give them 1-2 days in a coworking space, and hire a proctor to verify that they're not egregiously using tools like ChatGPT.
Or alternatively, would it suffice to just have a long conversation with the candidate about their solution? E.g. what design trade-offs they considered, or how might they adapt their solution to certain changes in the requirements.
This is what privilege looks like. The inability to see barriers that affect others in a worse position.
Standards do not imply fairness, only consistency.
You got a test that filters out people who lack the time to train for these tests. Basically, devs with a life (see family)
IIUC, in your view using standard tests is a damned-if-you-do, damned-if-you-don't scenario.
Is there a solution that you'd recommend?
It can make the process fairer but it's not a given. You can do the classic "no dogs, no blacks, no Irish," and that was a standard across pubs in England. It certainly wasn't fair, unless you were racist.
If you're committed to fairness, then a standard will help. It gives you a clear point to fix and improve things and something you can use to measure if you're achieving your stated goals. That's definitely something you can't do if you're just making things up as you go along.
And yes, I made a deliberately provocative statement. I'm obviously not saying whiteboard tests are the equivalent of segregation.
...a standardized test? No. There are tests. They sure as heck aren't standardized.
Maybe they should be, since everyone seems to be doing the same thing.
I don't think the problem is the format, i.e. a 30-50 minute interview on simple coding with DS&A problems, but the escalation.
The reality is, fizz buzz got us 75% of the way there. It turns out when pressed, a lot of people can't write code. Yes, there are false positives, but there are also people brute forcing their way through via copy & paste.
This doesn't manifest as a person who can't do any task, just as a person that's slow, delivers weird abstractions, and would take a lot of your time to get anything useful from.
But those people are also making those arguments, because, as you said, there's hundreds of thousands of dollars in it.
I've also used FizzBuzz at several companies, and the insane amount of people it filters out continues to boggle my mind.
Do you tell them what the "mod" operator is before giving it?
The failure rate of FizzBuzz has always struck me as depending on the idea that you can do a lot of programming and just never need that operator.
Yeah I know how to implement FizzBuzz since it's such a meme, but I've basically never used the mod operator in real code. Maybe it comes up in more math-y code I suppose, but for most backend/frontend/SQL code I've never reached for it.
I’ve used it for coloring alternating lines differently in UI code, and as a lazy way to log only every so many times some loop runs.
I only know it well because it was covered near the beginning of one of the first programming books I picked up (on Perl 5) and it stuck with me because it seemed wild to me that they had an operator for that.
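Both of those uses boil down to something like this (a toy Python sketch, not the actual code I wrote):

    rows = ["alice", "bob", "carol", "dave", "erin"]
    for i, row in enumerate(rows):
        # Alternate row colours: even rows get one shade, odd rows the other.
        colour = "grey" if i % 2 == 0 else "white"
        # Lazy periodic logging: only every 3rd pass through the loop.
        if i % 3 == 0:
            print(f"row {i} ({row}) gets colour {colour}")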
More mathy code like checking if a number is even or splitting a total number of seconds into minutes and seconds?
Depends on the application. I've written accounting software that makes use of it, along with heavy use of floor() and ceil(), including in SQL.
No but I permit internet access as long as they don't search for the solution (I trust them not to do that, and don't monitor what they do)
You're too trusting.
Whilst I agree, his point still stands that either they do it fast or never.
I once worked with a guy who was an incredibly good developer and I was surprised when he didn't see anything special about the number 64 (i.e. a power of two) - turns out that he'd never done any bit fiddling type work so he hadn't had to think in those terms. It wouldn't surprise me if a lot of people hadn't heard of "mod" either....
A huge majority of programming work is basically just CRUD stuff and other data shuffling. It’s not surprising that someone wouldn’t have needed to work with bit shifting (or modulus) in that case.
He was an expert in complex multi-organisation enterprise integration and was the go to guy to work out why horrific distributed transactions were failing... He also did a lot of cool stuff as side projects in his own time - just none of them happened to involve worrying about powers of 2.
Unless you specifically want a compiling and running version of FizzBuzz you don't actually need to use or know about the mod operator.
At least for me it would be sufficient if the person used a function like IsMultipleOf(x, m), or Remainder(x, n). This would at least make it clear what the function did even if they didn't get the exact operator.
The other thing to note is that the mod operator works differently on different languages and platforms.
Not with positive inputs, which is the domain of FizzBuzz.
Even if you don't know what "mod" means, if you have no idea what a remainder is, can't see that the problem calls for it, and can't derive the mod operator using integer addition, subtraction, multiplication, and division, then your math and problem solving skills are pretty weak, which is what FizzBuzz tests.
I stopped using fizz buzz a long time ago. Without first filtering the candidates, 90% can't define a 2D array in their chosen language; with filtering, you get to about 50%.
Even if that's so, modulus (or at least the concept of remainders) are elementary school math and any competent programmer could bang together an (inefficient) modulus operator in a few minutes.
So even in a language w/o a mod operator, it's not a hard problem if you understand how to solve problems with code.
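Something like this would be plenty (a rough Python sketch, assuming a non-negative a and positive b):

    def slow_mod(a, b):
        # Repeated subtraction: keep removing b until less than b remains.
        while a >= b:
            a -= b
        return a

    print(slow_mod(17, 5))  # 2, same as 17 % 5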
You don't actually need mod to do fizzbuzz, even if that's the most obvious way for people who know what mod is.
But without any "real" math at all you can do it with, eg, two counters and some if statements. Or if you recognise that there's a repeating pattern you can work out that pattern manually and just write code to emit it over and over.
I failed FizzBuzz the first time someone gave it to me in an interview...
The specific failure was that I first attempted to solve by using repeated subtraction. The guy kept asking me to "solve it a different way", or saying "there is a better way to solve this". I tried using arithmetic tables, I tried using results about base10 remainders and I even tried using one of the corollaries to Fermat's little theorem to speed it up for larger inputs... every time I was told I was getting it wrong because "there was a better solution". In the end he pointed out that the only solution he would accept was use of the mod operator.
Since then I have actively kept a tally: I naturally use the mod operator an average of twice a calendar year, it has always been in personal-life code when dealing with complicated geometry problems, and the bit of code containing it almost always fails on some edge case because at the point of using mod it is convoluted.
FizzBuzz is a highly artificial problem. It makes sense that people who are not familiar with it will assume that there is an elegant solution. But in the end the right approach is to be very boring and to notice that you need to check for divisibility with 15 before you check for divisibility with 3 and 5.
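In code, the boring version is just something like this (minimal Python sketch):

    for i in range(1, 101):
        if i % 15 == 0:          # the 15 check must come before the 3 and 5 checks
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)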
FizzBuzz is a problem that doesn't have an elegant solution. That is the point: to see how you approach the problem. (There are 3 possible solutions, each inelegant in its own way.)
I don't like FizzBuzz because it over-weights the interviewees knowledge of the relatively obscure modulo operator. Yes, there are other ways to do it, but the expectation of FizzBuzz is that the candidate has that "Eureka" moment and remembers modulo.
If I need a "Non-Programmer Weed Out" question, I'd rather give a problem that is 1. as easy as FizzBuzz, but 2. is just 'if' statements and loops and cannot be solved by knowledge of a particular operator (or bit twiddling tricks).
Honestly sounds like a bad interviewer. Repeated subtraction is a good first step, and I would try to push for more if that was the first implementation. But if you could derive a base-10 remainder, you know conceptually what problem the mod operator is trying to solve.
a % b = a - (b * (a / b)), assuming a sane language with integer division; otherwise cast a/b to int.
Figuring out the above operation (or getting close) is when you should more or less pass, and that's a good point to show the interviewee what the operator is. That should be the point of an interview problem: to show the thought process, not that you know fancy tricks.
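Spelled out as runnable code, that formula is roughly this (Python sketch, assuming non-negative inputs):

    def my_mod(a, b):
        # Remainder via integer division: a = b * (a // b) + r
        return a - b * (a // b)

    assert my_mod(17, 5) == 17 % 5  # both give 2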
But alas, I was shown an XOR swap in an interview last week and spent 3 minutes deriving it on paper instead of 3 seconds saying "oh yeah, a => b and b => a" to a trick that I haven't seen since college some decade ago. The current market loves tricksters, I suppose.
And yes, the actual real world use of modulo is surprisingly sparse, despite easily imagining situations where you could use it.
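(For reference, the XOR swap I should have remembered is just three XORs; a quick sketch:

    a, b = 12, 7
    a ^= b   # a now holds a XOR b
    b ^= a   # b ^ (a ^ b) == original a
    a ^= b   # (a ^ b) ^ original a == original b
    print(a, b)  # 7 12
)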
I once froze in an interview when asked a simple technical question - I'd been giving a presentation for an hour on how to launch a new product and I was asked by the CEO how to do something technically trivial - my brain could not do it. So he probably thought I was some marketeer pretending to be technical - which isn't really true.
I suspect quite a lot of people who are labelled as "can't code" are freezing like I did.
When I ask people to code FizzBuzz I:
I see this usually. Really? They get 30 minutes + internet and they couldn't google "javascript how to get divisible by 3"? That's just bad research at that point.
Yes that sounds pretty sensible.
Do you ever have a conversation with these people as to why they think they couldn't do it?
I suspect some do, and "some" adds up to a lot when you bunch them all together.
However, counter point, I've had people forced on me through much of my career who just can't code for the most part, and despite being pretty reasonable about it ( it's part of the nature of the work I do ), I'm very rarely surprised by competency.
Same experience: used FizzBuzz at many places and always got surprised by the number of people it can filter. The best interview process I've ever run at a company consisted of a basic FizzBuzz for about 15-30 min followed by a pairing session no longer than 1h30m on a problem that could get as tricky as we wanted to assess their skill level.
We would both test the basics as well as go through with the candidate on how they think, how they collaborate, help them out if we felt nervousness was impacting them showing their skills, and in the end got a much better grasp on how skilled they were than if we were looking at Github repos or giving DS&A trivias to solve.
Some years ago I was remotely interviewing at Google, and I was asked to code up the reversal of a linked list. But my brain just froze.
This is something I can easily solve in 5-10 minutes, correctly handling every plausible corner-case.
I'm curious how common this is, statistically speaking. I'm also curious how it correlates with other things about the interviewee.
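(For what it's worth, outside the interview the whole thing is only a few lines; roughly this, as an untested sketch:

    class Node:
        def __init__(self, val, next=None):
            self.val, self.next = val, next

    def reverse(head):
        # Iteratively re-point each node at its predecessor.
        prev = None
        while head:
            head.next, prev, head = prev, head, head.next
        return prev
)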
A lot of people couldn’t even read this thread if someone was watching closely. They’d recognize words and idioms, but couldn’t make sense of these or think. It’s called anxiety and rarely has anything to do with actual performance. Anxiety is a frequent guest in a smart guy’s home.
As an early-2000s integrator/analyst I learned to write code while someone watches, whether out of interest or straight time pressure (still taxing sometimes). But most developers that I know intellectually curl up in such a situation, regardless of their skill or performance levels. We worked with one very math-y, low-level-skilled guy whom our clients described as “literally freezes for an hour without moving, should we pay for it?” when he did field work. He was a very strong developer otherwise.
Not that a company must hire or want these people, but the idea of writing code under uncanny pressure all day makes as much sense as that swordfish scene.
I have empathy for the false negatives. I have been them, or maybe at times I'm simply a false positive. The problem is those people are likely to freeze no matter the interview.
Further still, I get pushback from folks citing self-diagnosed medical conditions, but the same people generally display the same behaviours during the working day.
So other than contract to hire, which typically limits you to people with a job, I don't personally have a better way.
I can't speak to the statistical claim that "those people are more likely to freeze no matter the interview."
But just my personal anecdote: I do fine on take-home coding tests, but I freeze up on live-coding tests.
This is one reason I'm vexed by the allegedly common cheating in take-home coding tests. It makes employers suspicious of the testing style that I'm best at.
I've actually had people cheat on live coding sessions.
For north of $2m over your career, cheating probably is the smart thing to do, especially for a borderline candidate, as there's a fair amount of evidence that the prestige on your CV will make things easier going forward.
However, my problem with take homes was never that the candidate would cheat, but rather they'll probably spend way more time than the 2 hours allocated.
I'm actually less worried about the candidate doing that, than I'm worried that the interviewer bakes in a bunch of assumptions like having a machine setup to do the task, having the specific domain knowledge and experience, and then accidentally trolls the candidate with little to no avenue for feedback.
What tipped you off?
Also, I'm curious: do you think having them discuss their solution in depth would have been a good countermeasure?
I mean, it was pretty obvious with their eyes darting, then several lines of code appearing in the code editor.
Of course, this doesn't mean I've discovered 100% of the cheaters, just the obvious ones.
To some degree. But not everyone communicates as well as they code or vice versa, and then it comes to what you're trying to qualify.
The problem you are describing is interview variance and hiring bias, not leetcode. This happens irrespective of interview style.
Many companies have question banks that are specially designed to be fair/have some contextual relevance (ideally) to some "realistic" problem. Or at least, many of the companies I've interviewed at follow this model. I consider these coding questions to be "leetcode" style because at the end of the day they are an isolated coding problem, even though they may not be a problem from leetcode verbatim.
Companies that execute on that style of interview well are generally fairly pleasant interviews, at least in my experience. Good companies/interviewers will gauge more than just the final code to determine a hire or no hire. And a large portion of companies also have hiring ratings on a scale to make it less binary.
Question banks that are too big: huge variance, and OP's point stands.
Question banks that are too small: leaked on eastern forums immediately, candidates show up reading answers out to you (some of the guides include guidance on when to pretend to think, I am not kidding).
The idealized version of "question banks" might work. The real one does not; you'd require employees constantly scouring forums in every language known to mankind, immediately removing anything that gets leaked. On top of that you'd probably require a competent committee overseeing all questions in the bank constantly and ensuring the lack of variance in difficulty.
Source: I interviewed at and for Goog and Pltr.
> leaked on eastern forums immediately
What exactly are "Eastern forums"? "Eastern" what? Europe? Asia? The world?
The most common example would be 1point3acres.
I'd prefer to not be more specific; I chose the word Eastern on purpose.
For DEI committees reading this: I am both eastern european and asian, so I hope to be exempt from any scrutiny.
So you wanted to say "Chinese forums" but couldn't say it out of fear?
no.
I wanted to say Eastern, as I have examples from both Asia and East Europe.
I wished to not be more specific so as to not derail the discussion into "user kidintech implied that nation X has a tendency to cheat".
Out of curiosity, can you share some examples from eastern Europe? Where does one find such websites?
Also, IMHO, blanket generalizations like "Asia" and "Eastern Europe" in such contexts can actually be more offensive than just mentioning the one country where the thing happens since you're basically painting with tar a whole sub-continent with dozens of different countries, just for the things happening in one country.
What I mean is, if by Eastern Europe, you actually mean some dodgy Russian forums, I think a lot of Eastern Europeans from Bulgaria all the way to Poland and the Baltics might feel offended of being included since we are not the same.
Yeah, there's also the implication here that Americans rarely cheat. They aren't as public because English is under a microscope, but there are definitely answer banks if you know the right person and can fork over the cash.
If it's anything like high school/college, the sad part is these kinds of people could probably do well in interviews regardless. These answer banks are simply the difference between an A and an A+. And sadly the current market seems to only want A+ candidates. Whose fault is it really?
> And sadly the current market seems to only want A+ candidates.
You mean the current FAANG market paying top dollar. I know plenty of unknown companies taking B candidates because they aren't paying top dollar (in Europe at least)
>Whose fault is it really?
The governments and central banks, for devaluing the currency post-2008 with their zero interest rates, causing the value of savings and wages to plummet and the value of assets, housing and stonks to skyrocket, which pushed people to chase get-rich-quick schemes on the stock market and on the jobs market. Coupled with VCs promoting unsustainable growth (more like pump and dump) of so-called "tech companies" whose products were not economically viable from the get-go and just survived on zero-interest money and fake promises, artificially boosting the demand for SW workers and causing many young people to go into tech just to chase money, money that's now gone and so is the demand for coders.
Without this artificial demand for devs caused by zero rates and overhype in the tech industry of financially unsustainable products that were banking on skirting local laws (AirBnB, Uber, etc), those people chasing money would have gone into finance or investment banking to chase money there, instead of causing a huge backlog of candidates in tech. Just my $0.02.
in my experience: not these days. And I work in games, so "top dollar" was never in question.
That's a fair take. Tech in some ways was indeed a necessity with a huge reach, so I don't think the overhype was the difference between being a trillion dollar industry and a million dollar one. But tech would probably still be a billion dollar industry without all the factors you mentioned.
Games... well, stability was never a factor, and I knew that going in. It's a shame they are doing the exact same pump-and-dump schemes tech has fallen into. And it's not like layoffs were uncommon after a project finished; it's just that they are doing it purely for a better-looking earnings call instead of "we cannot keep funding studios anymore".
No, because I feel like mentioning previous employers AND mentioning the languages I speak would get quite specific.
You are interested in Eastern Europe, so you should be familiar with olympiads; ask around any circle of ex-olympiad participants and you are bound to find something.
I am not sorry if you took offense; you're either from one of these countries and are clueless to the circles that exist next to you, or you are not from one of these countries and are trying to be offended on behalf of others.
>ask around any circle of ex-olympiad participants and you are bound to find something.
Obviously I'm not a golfer.
So New York, Boston, Washington, Baltimore, Philadelphia, Washington DC, Maryland, Virginia, Delaware, New Jersey, Maine, Connecticut, Rhode Island, etc?
Or by "cheating" are you specifically referring to the lying cheating treasonous fraud from New York who was just found guilty of 34 felonies to cover up cheating on his wife with a porn star to influence the election?
This only works if because I'm western and white I must ALWAYS be scrutinized.
I agree with you, but I don't really see how this invalidates the style of interviews where you're presented with some timeboxed coding problem (of reasonably scaled difficulty) and are asked to solve it.
There will be bad actors regardless of the interview style; that's why companies have multiple interview types/styles/rounds to suss out a candidate, as you probably know.
If they BSed their way through a leetcode interview, then they probably won't make it past a behavioral interview where they have to go in depth on some past project. And if they BSed that as well as every other round, then hey, maybe they are crafty enough to succeed at the actual job.
I think this is where our different opinions come from, while we agree on the other aspects.
In my personal experience, I have never felt that the hire/no-hire decision relied exclusively on my ability of solving the presented problem; I have passed interviews where I did not solve the LC-style problem optimally but I communicated clearly, picked up on hints, was aware of when I hit "walls" and provided working but less than ideal alternatives when I could not figure out the neat tricks.
Reading through the thread it seems that my experience is not universal, and the majority here have had less pleasant interviews, so I understand where you are coming from.
It changes immensely based on the job market. I've definitely tanked some interviews hard, stumbling on softball questions that should have been a bullet point, but still got pretty far or even gotten offers.
The last 12-18 months though? I've had interviews that felt like a dream but got zero follow-up on. I've been ghosted after seemingly final rounds where I spent 5+ hours on technical tests. It's not even enough to "understand the problem and communicate steps". You gotta be flawless, and you still might be cut, compared to 3 years ago when a "C" performance could still land multiple offers as long as your experience made up for the quiz questions.
I dodged the .com bust because I worked for the U.S. DoD at the time.
But I got laid off for the first time during last year's "15% bloodbath".
If I compare my current job search vs. all of my job searches in the past:
(1) As parent comment said, the bar seems to be much higher. I've thought that I did really well on some interviews, only to not get an offer.
(2) Some interview processes are way more rigorous. For a DevTech role within nVidia, I had 12 interviews + 2 take-home problems. (BTW, the take home problems were incredibly fun. Well done nVidia!)
(3) I've finally accepted a job offer from a large, established tech company, and the pre-onboarding process is amazingly slow. I accepted the offer a month ago and still don't have a start date. In a better job market either (a) they'd probably work harder to be good about this stuff, or (b) I'd just take a different job because of the delay.
That's ridiculous, tho.
Did they really not have enough information on the 11th interview to know whether or not they wanted to hire you?
I forgot to mention another:
(4) Ghosting candidates seems common now. I'd never experienced it before now.
I have had all possible experiences. Sometimes I feel like a genius and ace some leetcode with an almost novel solution. Sometimes I misunderstand the question/scope and dig myself into the hole of despair.
I have been rejected for one mediocre interview among many good ones. Or the other way around: accepted even though I didn't perform well.
Sometimes the interviewer works with me. Sometimes against me. Sometimes a war story impresses positively, sometimes it raises suspicions.
At this point it feels like gambling.
I have also run almost 400 interviews on the other side over the years, and to me it seems quite clear when somebody cannot write code at all. I like to think I am not biased. But who knows.
I take issue with it because Leetcode gives tech people a fake feel good, “yeah but at least it’s fair” illusion, when really it’s probably just as biased as any other hiring method.
This is ironic considering these companies have forced mandatory DEI seminars (which I have no problem with btw), inclusive language, #EveryoneCanCode, and so on.
But despite all this, you end up with teams and organizations that are 99% of the same X somehow. Replace X with race, school, even state from home country sometimes.
You know there are websites where people share all the interview questions to these hard interviews / referrals, exclusively in their language and behind a pay- or rep-wall? And there are Telegram groups where international students leak the questions or do interviews in place of one another.
It’s inevitable these types of issues arise when there’s so much at stake, ex: in just 5 years, a $200k TC advantage at a top company, becomes $1m or more with appreciation.
I just really dislike the veneer of “fairness” when there are so many problems with the process, even beyond the questions that have nothing to do with the job.
I really do wonder what those sanctimonious sermons are meant to accomplish. People who are already ideologically aligned with them won't learn anything new and may just resent it, while people who aren't aligned won't become aligned as a result of that "training".
But you're talking about it as though you expect it to have a constructive impact, resulting in irony when it doesn't. I don't see the irony because I don't expect any benefit from those struggle sessions in the first place.
Struggle sessions usually involve public torture or executions. It's laughable to compare "being mildly inconvenienced" with that.
Struggle sessions in their basic form used social coercion to extract confessions of guilt against some collective cause. This describes the DEI training sessions I've been in well. Admit that you have bias (effectively confessing guilt to a "crime" that gives your employer leverage over you) or get dogpiled and have your refusal to admit guilt cast as evidence of your guilt anyway.
They're supposed to absolve the corporation and its leadership of responsibility and liability. By giving that training they get to claim they did all they could.
I think that's precisely right.
You'd be surprised how much self-perception and reality can differ. Lots of types think they're the best allies ever and then show a total lack of empathy (especially if someone in their immediate family is different) and would need a reality check.
And usually the intersection of people running DEI initiatives and those people is a large set. Assimilated gay people love talking shit about how trans people make "us" look bad.
It's also a great way to filter out anyone who has any kind of commitment like kids, aging parents to take care of, or health issues of their own that make it hard to cram leetcode non-stop on top of a busy career.
For companies this is very rational: you want non-distracted employees who can work overtime.
It's really not that rational unless you want someone only familiar with being the top code monkey.
In all my years of sw dev the number of times that would have helped vs being able to communicate and manage expectations across a swath of people is like 1:1000.
To be fair, you need at least one person on the team to do the actual work, while everyone else is “communicating and managing expectations”.
That statement is not untrue, but I think it over-estimates how much of the "actual work" is banging out code, vs. making sure the correct code is being written.
Yea, I've worked in a place where it was just "coders coding" and nobody was communicating and managing expectations, and that was its own unique form of awful. You need both.
It’s actually possible for members of the team to be capable of doing “actual work” in addition to being good communicators.
Naive questions ahead: if hires weren’t made scarce by some absurd filter, why would they pay $200-300k extra? Feels like the whole idea of stellar salaries must be based on something stellar. AFAIR, before Google(?) made it normal, developers were sort of dirt cheap. Weren’t developers in abundance at all times?
The reason they pay $200-300k extra is to attract the best they can. Say you got the same salary working at an ethical company as at a FAANG: would you go for the FAANG?
The absurd filter is just some kind of lottery. They could have a different one: at the end of the day, it's only when the person starts working that you can actually see what they are worth.
The thing with a filter like this one is that it filters out a whole category of people who may actually be good. And it reinforces itself: you hire people who are good at leetcode, who will themselves possibly be good at hiring people who are good at leetcode. Does a company of leetcoders perform better than a company made of a diversity of good engineers? Not clear.
Very much a tangent, but what do you mean by a company being "ethical"?
I have a few concerns about how that term is often used in these discussions:
(1) it's treated as a binary quality, rather than some kind of continuum, and
(2) there's rarely a recognition that different persons have different conceptions of what's ethical, let alone a justification for why the writer's preferred definition is uniquely superior.
The beauty of it is that my question is still valid :-). Let me rephrase it:
Say you got the same salary working at a company that you found more ethical: would you go for the FAANG?
Everyone is different, but it's amazing how many excuses we can find to justify doubling/tripling our salary :-).
I agree with this. But the root commenter talks about it as something these people are worth, when it’s just that - a lottery. A company that has so much money that they don’t know which barrier to put there to stop the flood. You jump through absurd hoops and join the club. Kinda like being accepted into a cool kids league, but activities are the same.
If this was just FAANG I wouldn't even mind it. Bigger compensation means bigger hoops to jump through, artificial or otherwise. At least those bigger companies come with interview guides.
But I've had Leetcode mediums dropped on me for some Unity game dev jobs barely paying $40/hr with minimal benefits. Game interviews already have an absurd number of topics you have to guess whether you'll be quizzed on (my last interview wanted a low-level C memory/bit manipulation question... the one before that did Unreal Engine trivia... the one before that decided to give me vector math and ray intersection questions). Adding in random string manipulation questions, when you will barely use more than concatenation on the job, doesn't help with studying.
If you are applying for game dev jobs, C memory management, the gnarly parts of Unreal engine, vector math, and ray intersection are all things you could be doing on the job.
Game dev is lower paying, less structured, and more involved than typical web dev jobs.
I’d really only want to get into game dev early in my career or as an independent creator. It’s a touch exploitative.
It sure is. I could also be questioned on:
- CPU/GPU architecture
- graphics/shader programming ( I do enjoy graphics programming, but I have been asked these questions despite not applying for graphics roles)
- GPGPU
- traditional software engineering (programming patterns, architecture)
- netcode (I never applied to a network engineering position. I in fact actively avoid that part of the stack)
- general industry questions (what kinds of bugs can appear on shipping build that may not appear on debug builds?)
- a myriad of gameplay programming paradigms
- coding tests in Unity, despite applying for an Unreal Engine position (this has in fact happened twice now. I know Unity, but come on: can you imagine getting a JavaScript test for a C++ position?).
I've been asked all of these in some capacity over the years. At what point is it on me for not being some sort of omni net/graphics/engine gamedev that can answer any question on the fly, and not on the company for being paranoid about professionals studying for the skills they are clearly seeking? If someone can "study for your interview", why wouldn't you want that person? That's what they do on the job after all.
I'm already 8 years in, so a bit late for that. My plan is eventually to go indie, but I need a bit more time in my career before making that jump.
Despite the common advice c. 2022, getting a boring cushy tech job right now isn't very viable, so may as well stick to what I'm actually getting interviews for.
Fair enough man. I've done hobbyist game development, independent game development, and applied to some shops myself early on. But eventually, you get a job and your path diverges, etc, etc. Like, I'm not anywhere I'd imagine I'd be when I was in school.
I think at least half of all programmers got into it because of video games. They know it's desirable, so they can filter almost as hard as they want.
Like, the industry is just way more jacked up than general software development. I understand your pain.
Although, if you're currently between jobs right now, you could crank out a prototype of something and see where it takes you while job hunting. Because you will always feel like you could use just a bit more time. Sometimes you just gotta do it.
I interviewed at a company known for consistently asking one of the same four questions in a specific interview round. These questions were widely shared on forums like Blind, Leetcode, and Glassdoor. The recruiters also provided strong guidance on the type of problems to expect.
I prepared thoroughly for all four main questions and any other plausible ones I could think of. I practiced writing solutions to ensure I was fast enough for the interview. Additionally, I pre-prepared ideal answers for each question in case I got stuck.
When the interview came, I got a total curveball: a question that was significantly harder than the usual ones. It didn't fit the round's theme (it was a DSA question, but I'd already aced the DSA round), was obscure enough not to be on LeetCode, and required writing a solver for a hard variant of a known algorithm. I panicked, copied the prompt into ChatGPT (despite being instructed not to use it), transcribed the result, and pretended I had recently studied the relevant algorithm.
I passed the round, nailed the other interviews, got the offer, and accepted. Later, I found out that interviewers are instructed to pick one of four specific questions for that round, and the one I got wasn't in the list.
I'm left wondering if the interviewer was trying to sink me or was just bored with the usual questions. The whole experience raised several questions for me:
Is it cheating if I already had pre-prepared answers for the questions they were supposed to ask? What's the difference between using pre-prepared answers and using Google or ChatGPT during the interview?
If the interview had gone according to plan, what was I actually demonstrating? My ability to use Google?
When the interviewer asked an impossibly difficult question, I would have failed if I'd answered it legitimately, even though I'm a good engineer. Failing such an unfair interview round doesn't serve the company's interests.
What is this interview process meant to demonstrate? My true value as an engineer lies in my ability to communicate clearly, think outside the box, identify and address technical tradeoffs, mentor juniors, and propose technical solutions that meet requirements while minimizing risks. Yet, I'm expected to solve a hard variant of the Traveling Salesman Problem in 45 minutes or I don't get the job? Why?
The whole process seems broken, but I'm not sure how to fix it.
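Not the actual question, obviously, but as a yardstick for what 45 minutes buys you: even a crude nearest-neighbour heuristic for plain TSP, never mind a "hard variant", is a fair chunk of code to write cold. A hypothetical sketch with made-up cities, no optimality claims:

    // Greedy nearest-neighbour TSP tour: repeatedly visit the closest unvisited city.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point { double x, y; };

    double dist(const Point& a, const Point& b) {
        return std::hypot(a.x - b.x, a.y - b.y);
    }

    std::vector<int> nearestNeighbourTour(const std::vector<Point>& cities) {
        const int n = static_cast<int>(cities.size());
        std::vector<bool> visited(n, false);
        std::vector<int> tour;
        int current = 0;
        visited[0] = true;
        tour.push_back(0);
        for (int step = 1; step < n; ++step) {
            int best = -1;
            double bestD = 0.0;
            for (int j = 0; j < n; ++j) {
                if (visited[j]) continue;
                double d = dist(cities[current], cities[j]);
                if (best == -1 || d < bestD) { best = j; bestD = d; }
            }
            visited[best] = true;
            tour.push_back(best);
            current = best;
        }
        return tour;
    }

    int main() {
        // Four corners of a rectangle; the greedy tour visits them in order 0 1 2 3.
        std::vector<Point> cities = {{0, 0}, {3, 0}, {3, 4}, {0, 4}};
        for (int c : nearestNeighbourTour(cities)) std::printf("%d ", c);
        std::printf("\n");
    }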
How did you do it without them noticing? Were you not sharing your screen?
IIUC, the interview deviated from the company's interview policies.
Could the answer simply be that the company had no particular intent behind the aberrant interview process that you experienced?
With all due respect, I believe this is about as far from the truth as it gets. I believe the issue is that people think they have to work at FAANGs in order to be paid REALLY well, which is just nuts.
I know many people making (very) high 6 figures working at places most have never heard of. Instead of looking for FAANGs, people should be looking at companies that have been in business for 20+ years (for a publicly-traded company the easiest "measure" would be whether you would invest your own dough into that company) and then make yourself indispensable there (this is much easier than it sounds), to the point where you are more valuable to the company than the company is to you. This is un-achievable at FAANG, but it should be everyone's career goal. Once you are worth more to a company than the company is to you, compensation-wise the sky is the limit :)
You know people making ~800-900k at places most people have never heard of? That is what "high 6 figures" means. I have to assume you mean more like ~$180-190k?
Not op, but FWIW, I know people who make high six figures (explicitly, >$750k) at companies that aren't household names or FAANG(ish) companies (or AI companies for that matter), though I also know my share of people at companies most people (or maybe just people here) have heard of who also make >$750k. Some of them aren't even software engineers! Most (all?) of them aren't paid that much on their W2, though, and have lucked out with stock options/RSUs/related compensation (again, at non-FAANG companies, some private).
Not trying to brag that I have well-paid friends, though it might come across that way; just corroborating the fact that it's possible for late-career professionals to make that kind of money at non-FAANG companies.
What about a type of blind interview where neither of the two knows the problem or solution in advance? Afterwards, their solution is blindly reviewed/graded by a third party.
- Input from the interviewer reflects communication ability, cultural fit, and so on of the candidate.
- Input from the blind grader reflects their ability as a team.
- Low grades count against the interviewer as well.
Bingo; now both are stakeholders in success.
This is very much like the "real world" situation. OTOH, I think this leaves the interviewee even more vulnerable to the unfortunate situation where they correctly solve the problem but the interviewer is convinced that a different, wrong solution is correct. Live long enough and you'll have that happen to you.
OTOH, seeing how the interviewer reacts to that situation will tell you a lot about whether you really want to work there.
Wait, can Lina Khan at the FTC attack this too?
Nah, this will be an EEOC bogeyman.
This is a good assessment. 1 and 2 are why the system won't change, but I don't think they were intentionally designed with that in mind. I think it's a hangover from academia, bearing in mind how many of the top engineers at FAANG are PhDs.
Well, that and the fact that almost nobody who successfully finds employment after "grinding" leetcode wants to remove the barriers to entry.
I think Leetcoders can't envisage a better way to assess someone than by subjecting them to the same kind of hoop-jumping you get made to do in university. They don't know how to interview you for a job, as there's no module on interviewing candidates in the CS curriculum, and they don't have much professional experience outside of academia or software engineering. They're simulating a dissertation defence, because that's how they were assessed for their competence.
That's my charitable interpretation. If I'm being cynical, it's elitism - a way of making sure you're "one of us" (read: obnoxiously academic, Type-A personality, "logic over feelings").
Not that many, but it's interesting how many are from "elite" universities. I'm in a research org for a FAANG and we often get to see all the handwringing about how we can't recruit more people of type X.
Well, if you only hire from MIT, Stanford, Oxford etc, then they are all going to look the same.
For those outside the research org, it's a bit better, but it's still the most uneven place I've worked.
If you want to earn top money, you gotta play the game. This is true in every profession; do you think lawyers who want to work at top law firms have it any easier?
There's a lot of valid criticism with respect to today's tech hiring practices (especially because it doesn't only affect top paying companies), but I don't feel like "I can't get ridiculously wealthy without selling my soul" is a particularly compelling one.
Although I find it disappointing, this is the truth. It is a function of supply and demand. There is always a gating function when you have an abundance of supply; sometimes it is leetcode, sometimes it is school, take home work, resume formatting, a mixture, all of the above, etc.
Everything is a proxy because, in most scenarios, there is no way to understand the effectiveness of an employee until they have worked for you for a while.
Of course there are other ways, but they all carry their own risks.
That’s why referrals, imperfect as they are, carry so much sway. In theory they reduce uncertainty and require some expenditure of political capital.
I don't think this is an issue of bad luck; it's quite probable that you'll get difficult interviews, especially if you're applying to the popular names.
The proof of some of this is that these companies could come up with a standardized test, maybe with smaller-scale refreshers every few years (this is normal for some professional certs), and save a ton of money if it were really about ensuring a base level of competence with these sorts of questions as the yardstick.
But that would dramatically increase worker mobility, for one thing. So they don’t do that.
(“No, it’s because they don’t want you to be able to test prep” - that makes no sense, because 99% of the way to prep for these is… exactly like test prep.)
Why don't you list them here to illustrate your point?
If what Google search tells me is true, the average SWE tenure is only a couple of years or so.
I might need a month or two of study (or more!) to be able to handle the harder Leetcode problems, but if I was expecting to have to do this every couple of years or so I'd consider taking some steps to maintain that knowledge between jobs.
That would probably be considerably less overall effort than forgetting it and having to cram for months every couple of years.
That depends very much on your world view. If you get the job, it would imply you've just cost a lot of other people $100ks. That simply can't be true for everyone, because there's only a small number of such jobs.
The only way this can possibly hold is if you believe that only the "best" candidate should get the job, and by "best" you mean the one that gets most of the leetcode questions right. But then there's no "us".
Edit: I'm not in favor of leetcode interviews, and I do understand that there's a bias in interviews (which won't go away when you drop the leetcode).
There is no grand conspiracy. People try to develop what they feel is a fair interview with good signals. It simply turns out that many people are not good at that and have biases that they may not even be aware of. Tack on some cargo culting and "best practices" and you have your typical broken process.
For those wondering what the alternative is to leetcode: a work sample test. Take a real problem your team solved recently, distill it down to remove internal-specific knowledge, and give candidates 3x the time it took an internal dev to solve it. Bonus points for having a rubric guide the scoring.
I totally agree with 1 and 2, and have further suspected they use leetcode as a means to support H1B candidates' employment prospects (for those who do pass).
Part of the problem is saturation in the job market.
Higher education, even job experience, is no longer considered a sufficient barrier to entry.
Reminds me of when the guy who wrote Homebrew failed the Google interview, if I recall correctly.
Ironic, because they use brew so much internally.
This reads very similarly to consulting case interview questions. 1) It requires a lot of free time beforehand to study the frameworks, so it filters for intent but also filters out those who need to work, and 2) it seems objective on the surface, but you can essentially pick winners by choosing the difficulty and nudging candidates along.
I think you're overthinking it. Hanlon's razor applies - the industry as a whole is incompetent at interviewing. And to be clear, I'm not being smug here - every time I've run recruitment rounds the interview process was a mess. It's an extremely difficult equation to balance, and as an industry we have never made the commitment to figure out a gold standard for how to do it.
Yeah, I agree that the "if you know how to problem solve you will pass" statement is a joke. You absolutely need to memorize most of these problems, as you'll never encounter them out in the real world. I think we need to get better at the behavioral side of interviewing - this should be the juice of getting at whether or not an engineer is good. And if they're really good at lying... CEO material? lol.
3) It's also a form of hazing to make the engineers conducting the interview feel better about their current station.
See, this is true eventually. But it's _definitely_ not true in 45 minutes.
Engineers typically optimize for their own learning, and the false negative rate can say more about how aggressively the interviewing panel wants a specific kind of peer than about how much they actually want the job done well.
Suppose there was a SWE union and the union has to create their own reference compensation bands. Would the union again lean towards leetcode or some other standardized test? Or just use years of experience and maybe past projects?
When weighing the "effectiveness" of leetcode interviews, it's important to remember that thus far SWEs have failed to effectively unionize, despite e.g. the past no-poach class actions clearly showing that C-suites should be paying engineers a larger share.
We’re hiring for an entry level position where I work. So entry level, that their first month is going to be dedicated to simply learning the framework we use.
However, we do require a base level of competency. We give out a 40-ish minute assessment. Two multiple choice and three coding problems.
And they're easy. One of them is essentially "John has a 5 gallon bucket and a 3 gallon bucket, how many buckets does he have?". All you have to do is rewrite the clearly labeled circuit diagram as a Boolean expression.
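For a sense of the level being tested, here is a made-up stand-in for that kind of problem: a labelled diagram of an OR gate feeding an AND gate alongside an inverter, which the candidate just has to write down as the expression it computes. The specific circuit is hypothetical, not the one from our assessment.

    // Hypothetical circuit: Output = (A OR B) AND (NOT C).
    #include <cstdio>

    bool circuit(bool a, bool b, bool c) {
        return (a || b) && !c;
    }

    int main() {
        // Print the truth table, which is roughly how such an answer gets checked.
        for (int a = 0; a <= 1; ++a)
            for (int b = 0; b <= 1; ++b)
                for (int c = 0; c <= 1; ++c)
                    std::printf("%d %d %d -> %d\n", a, b, c, circuit(a, b, c));
    }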
Not a single person this round has passed. Several have failed the simple ones.
So while grilling people on the minutiae of a language or asking them to solve the traveling salesman is not beneficial, neither is nothing.
We should be testing floors, not ceilings.
I'm not sure there's so much agency in any of this. Most of it is probably cargo-culting. Every software shop wants to be Google, so they do what Google does (or what they think Google is doing). It doesn't matter if they do it well, or understand (or agree with) the ultimate purpose of the process.
If Google engineers all wore pink robes and top hats while interviewing then we'd see that everywhere.
I do not know how hiring is done at megacorps; but on the few occasions I participated in tech interviews, we were looking for candidates to join the team that I was on. So, what interested me about the candidates was how well they were prepared for the kind of work our team does, and how much I would like to work with them. I am sure this second question introduces all sorts of subjective biases; but then, hey, aren't you going to spend countless future hours with that person? At least the first concern incentivised us to make sure candidates were sufficiently competent. BTW, we didn't ask leetcode-type questions.
I am not sure I am following the argument. If there is only a handful of companies that pay well, and if you only consider companies on that list for employment, isn't it reasonable to predict that there will be lots of other candidates who want the same thing, far more than the number of available positions at those companies? How, then, should those companies select from such a pool of candidates?
I don't buy the analysis on 1). Leetcode interviews restrict the pool of SWEs. You have a lot of SWEs who no longer qualify, but the few that do are more coveted and get to command higher wages.
Leetcode feels meritocratic. I suspect many people (engineers perhaps more so) don't trust their ability to "size someone up" from 45 minutes of casual conversation alone. Or worse, they worry that they'll have to defend their hire/donthire based on what amounts to a casual conversation whereby they were trying to judge character, affability, confidence, drive, motivation....
Leetcode is lazy, easy.
And you think without leetcode questions the despicable biased interviewer from your example would not be able to hire someone from their Alma mater over someone else?