The Engineering Interview is Broken
Gearloose Jones
Posted on June 28, 2017
Yes, another one of these posts. I've lost count of how many have crossed this site, and I'm sure you have too. Meanwhile, I've counted several sites built specifically to help job applicants skip all or part of this process. If you read between the lines, there's a compelling argument that something needs to change: not just for the sake of eager people who'd love to work for a particular company, but also for managers who are stuck with an empty desk instead of someone making potentially great contributions to the codebase.
Coding assessments, often the first filter for coding jobs of any stripe, usually come in two flavors.
- Build something for us based on an (often loose) set of requirements
- Log into a shared screen and solve a coding puzzle
The most common defense of either of these approaches is roughly the same, with small variations here and there, and it reads "How else are we supposed to filter out people from the overwhelming number of applicants?"
And you know what? On its face, that's fair. But in practice, it's often just a really bad filter.
Some of the CoderPad exercises I've been asked to complete while on a Google Hangout/Skype session include, but are not limited to:
- Deduce if a string is a palindrome
- Parse an (overly nested) object and only count the nested objects
- Build a Roman Numeral translator
- Build a new object based on relationship data that lives in two other objects
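To give a sense of the scale of these puzzles, here's a sketch of what a first pass at the palindrome exercise might look like. The function name, normalization rules, and two-pointer approach are my own choices, not any company's spec:

```javascript
// Hypothetical first pass at the "is this string a palindrome?" exercise.
// Normalizes the input, then compares characters from both ends inward.
function isPalindrome(str) {
  // Keep only alphanumeric characters and ignore case, so that
  // phrases like "A man, a plan..." still count.
  const s = str.toLowerCase().replace(/[^a-z0-9]/g, "");
  let left = 0;
  let right = s.length - 1;
  while (left < right) {
    if (s[left] !== s[right]) return false;
    left++;
    right--;
  }
  return true;
}

console.log(isPalindrome("A man, a plan, a canal: Panama")); // true
console.log(isPalindrome("hello")); // false
```

Which is exactly the point: the problem itself is easy. Whether a grader wants the regex, the two-pointer loop, or a one-liner with `split().reverse().join()` is the part you can't know going in.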
Some of the time-limited, take-home tests include:
- Build a lightbox that can scale its size based on scroll position (~3 days given)
- Build a working game of Minesweeper (~3 hours given)
- Add functionality to an app built in HackerRank (~90 minutes)
None of these are terribly difficult, and with some exceptions, I've completed them. The problem is the inherent disadvantage that comes with being the interviewee: you don't know the company's expectations about coding standards, and questions about them are often answered vaguely, perhaps intentionally.
And that's the problem.
If I can vent for a brief moment (you knew this was coming), the worst part of both of these assessment types is that there's never feedback when you don't make it to the next round. The closest I came to "cracking" the code was with a first-round screener whom, in some extra time at the end, I asked quite honestly what the expectations were and how my submission would be graded.
He effectively said that it's almost arbitrary; that it comes down to the person "grading" and whether they happen to like the style you chose in the moment. "Some came back way, way over-engineered," he said of some hiring he did last month. "Others leaned too much on libraries, and even though we didn't prohibit their use, we passed on that candidate."
What is someone supposed to do? A one-and-done approach can't possibly assess the breadth of someone's talent, and it needlessly punishes a candidate if the particular random question or assignment they get doesn't play to their strengths. It keeps that open seat I mentioned before empty far longer than it should be. To be frank: it doesn't help anyone.
At this point, you're probably asking "So what's the answer?" Especially if you're a hiring manager trying to figure out what to do with a pile of resumes sitting on your desk.
What's worked for me - and I'm speaking from experience on both sides of the interview table - is to put the coding test away and put the whiteboard marker down. Probe the candidate for general knowledge about the language in question. As they demonstrate more knowledge, dig one level deeper until you hit their "wall." If you must use the whiteboard, break out the trivia and ask about the differences between certain reserved keywords, variable scope, inheritance, and so on. Walk through a problem and see how they think it out. Just speaking out loud, how would they architect a first-pass solution?
A candidate with a strong foundation in the core rules of a language and who can think logically is probably a really safe bet - I haven't recommended a bad one yet with this approach - and no one's time is wasted building functions that solve problems that don't exist anywhere but in an assessment packet.