Hiring is broken! Let's fix it with empathy.

David Whitney

Posted on September 30, 2021

Hiring technical people is difficult – doubly so if you want to get people who are a good fit for you and the teams you're working with – yet as an industry we repeatedly seem to get it awfully wrong.

The tropes are real – and we're now in our second iteration of "hiring terribly". Where the 80s and early 90s were characterised by mystery-puzzle hiring ("how would you work out how many cars you can fit into three cruise ships?"), the 2010s are defined by the tired trope of the interview that is orders of magnitude more difficult to pass and bears increasingly little resemblance to the job you do once you get the role.

Over fifteen years of hiring people for coding jobs, a few things still seem to hold:

  1. The ability to talk fluently about what you like and don't like about code for an hour or so is the most reliable indicator of a good fit.
  2. It's a bad idea to hire someone if you have never seen code they have written.
  3. Interview processes are stressful, unnatural, and frequently don't get the best from people.

We're faced with a quandary – how do we find people, from a pool of unknowns, who will quickly be able to contribute, work in relative harmony, and enjoy being part of the team?

The kind of person who will fit best in your organisation inevitably varies – it's driven by the qualities you desire in your teams – but personally, I value kind people who are clear communicators and a pleasure to work with. Those are not everyone's values, but I want to speak to how I've tried to cultivate those kinds of teams.

You're going to need to know how to write an excellent job spec, construct a good interview process, evaluate technical performance, and give meaningful feedback. Let's cover each of those topics in turn.

How to construct a kind interview process

A good interview process respects everyone's time.

Set amongst the hellscape of FAANG multi-stage interview processes with one hundred asinine divisional directors, it's simple to put together an interview process that isn't hell on earth for everyone involved.

  1. Write a job spec that captures your cultural values.
  2. Have an hour-long conversation with them about themselves, their experiences, and their opinions.
  3. See some code they've written.
  4. Have the team they would join, or someone else representative, talk to them about code.

There's no reason for this process to take longer than three hours end-to-end, and ideally it shouldn't be a chore for anybody involved.

The first part is all on you, the interviewer. It's important that a job spec contains concrete information on the work the role involves, that the only skills listed as mandatory are skills used in the actual role, and that you are clear about constraints and salary conditions.

The conversation is what most people are used to as an interview. Be kind. Understand that people are human and might be nervous, and make sure they know that the best outcome is that you both "win" – don't be there to get a rise out of someone.

How to be a good interviewer

The first and most important thing about being a good interviewer is that you're not there to trip people up or catch people out. If that's what you feel an interview should be, I implore you to pass on interviewing.

Interviews are not meant to be hostile environments, and as a candidate, if you encounter one, do not under any circumstances take the job.

You're in an interview to verify someone's experience, understand their communication style, and discuss the expectations of the role you're hiring for.

You're there to sell the position, hopefully stimulating enthusiasm in the candidate, and to set expectations of what the job is like, day-to-day, so that neither you nor the candidate is surprised if you both choose to work together.

You need to be honest – both about the problem space and the work. You need to be clear about where you need to grow as a team or organisation. There is nothing worse, as a candidate, than being sold a lie. Far better to articulate your challenges up front than to ruin your own reputation.

You need to ask clear and relevant questions – learn from the mistakes of a thousand poor "balance a binary tree" style interview questions and leave that stuff at home.

Ask candidates questions about their relevant experience. Ask them how they would solve problems that you have already solved in the course of your work, or how they would approach them. Don't ask meaningless brain teasers.

You need to give them space to talk about broad topics – I love asking candidates what they think makes good code. I love to ask the question because everyone will say "readable" or "maintainable" and then we get to have a conversation on what they think satisfies those qualities in a codebase.

As an interviewer, I don't care that you learnt to say "it follows the SOLID principles"; I'd much rather the candidate had the floor to talk about how code makes them feel and why. Nice big broad questions are good at opening the floor to a discussion once you've talked about experience.

Take notes. Don't interrupt the candidate.
Give them time to speak, and actively listen.

Seeing some code

You're going to want to see some code for technical roles – this is an absolute minefield, but the approach I've settled on after trying all sorts of techniques is to offer candidates a choice.

My standard process here is to offer candidates any of the following:

  • Bring me some code you have written that you're comfortable talking about
  • Do a well-known kata, in your own time, and send it across
  • Set up a one-hour session and I will pair program the kata with you

I ask the candidates to "please pick whichever is least stressful for you".

People perform differently under different types of assessment, and qualitatively, I get the same outcome from a candidate regardless of the path they pick. I like to hope that this opens the door for more neurodiversity in applicants and protects me from only hiring people that share my exact mental model. Choice is good, it doesn't hurt to be kind, it costs nothing.

Each approach has subtle pros and cons – a candidate's own arbitrary code might not give me quite the same high-quality signal, but it's a great way for people who are unquestionably competent to avoid wasting their own time. The take-home kata is a nice happy medium, though it can accidentally leave a candidate thrashing around trying to complete something that doesn't need to be completed. The pairing session requires a little more of the interviewer's time and is probably the highest-stress option, as people sometimes don't perform well when they feel they're being actively evaluated – but you know precisely how someone works in those conditions.

Technical tests are intimidating to all but the most confident of candidates; this choice lets them wrestle a little confidence and control back, so they at least feel they're not being ambushed by something they cannot reckon with.

It's the right thing to do.

How to set a good technical test

I've been involved in setting a lot of technical tests over the years – and I'm extremely sensitive to the ire that tech tests often cause in people. I've seen so many borderline abusive practices masquerading as technical tests that I'm not even remotely surprised.

The commandments of good tech tests:

  • A test should take no longer than one hour
  • It should be completable by everyone from a junior to the most senior, senior
  • It should not be in your problem domain
  • It should not be unpaid work
  • The answer should be provided in the question

There are a couple of potentially controversial points here.

A tech test should respect the candidate's time.

You are not the only place they are applying, and the candidate does not owe you their time. Anything more than thirty minutes to an hour can act as implicit discrimination against people who don't have unlimited time because of family or other commitments.

Using the same test for your most junior developers to your most senior allows you to understand the comparative skill of candidates who are applying, on a level playing field. You might not expect the same level of assessment or scrutiny between submissions, but that baseline is a powerful way of removing the vast discrepancies between titles and pay and focusing on a candidate's capability.

The test should be synthetic, not part of your domain. For years I believed the opposite and was a fan of making tests look like "real work", but this often fails because it requires the candidate to understand a whole set of new concepts that don't help you assess their capability for the job.

And finally, providing the answer in the question deliberately reinforces that it's not a "puzzle", but an interview aid.

If a tech test contains the answer, and isn't domain specific, then what is it really for?

A tech test exists to verify, at the most basic level, that a candidate can code at all. The number of people I have interviewed who couldn't so much as add a new class to an application is extremely non-zero, and it's why FizzBuzz is a good traditional screening question – it does little more than "test" whether you can write an if-statement.
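
For the avoidance of doubt, FizzBuzz really is that small – a minimal sketch (Python here, purely for illustration; any language will do):

```python
# FizzBuzz: the classic screening exercise. Multiples of 3 print "Fizz",
# multiples of 5 print "Buzz", multiples of both print "FizzBuzz",
# and everything else prints the number itself.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```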

Once you've established a candidate can code, you're looking to see how they approach problem solving.

Do they write tests?
Do they write code that is stylistically similar to your team's preferences?
Can they clearly articulate why they made the choices they made, however small?

A technical test isn't there to see if a candidate can complete a problem under exam conditions, it's just an indicator as to the way they approach a problem.

A good technical test is the quickest shortcut to these signals. I've come to value well-known code katas as recruitment tests, as they tend to fulfil most of these criteria trivially, without having to be something of my own invention.

I tend to use the Diamond Kata –

Given a character from the alphabet, print a diamond with that character as the midpoint of the diamond. Write appropriate tests.

[Image: example of the Diamond Kata – find it on GitHub under davidwhitney / Code Katas]
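
To make the shape of the exercise concrete, here is one possible solution – a minimal sketch in Python, purely for illustration (the kata is language-agnostic, and candidates' approaches will legitimately vary):

```python
# Diamond Kata sketch. diamond("C") returns:
#
#   A
#  B B
# C   C
#  B B
#   A
def diamond(midpoint: str) -> str:
    width = ord(midpoint) - ord("A")
    top = []
    for i in range(width + 1):
        letter = chr(ord("A") + i)
        outer = " " * (width - i)       # leading padding for this row
        if i == 0:
            top.append(outer + letter)  # the single 'A' row
        else:
            inner = " " * (2 * i - 1)   # gap between the letter pair
            top.append(outer + letter + inner + letter)
    # Mirror the top half (minus the middle row) to complete the diamond.
    return "\n".join(top + top[-2::-1])

print(diamond("C"))
```

What you're looking for isn't this exact code – it's whether the candidate writes tests for it, names things well, and can explain their choices.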

Giving feedback

If a candidate has given you an hour of their time, it's only right to give them meaningful feedback as notes. It doesn't have to be much, and you don't need to write a full review – just a few hints as to what they could do in future to be more successful ("we didn't feel you had enough experience in Some Framework" or "we didn't feel confident in the tests you were writing") is absolutely fine.

Be kind. Hope they take the feedback away and think about it.

There are hundreds of examples of the "failed interview candidate later becomes the hiring manager" story out there – being nice to people even if they don't get the job sets a good precedent for when you inevitably meet them again in the future.

An unfortunate majority of employers won't contact unsuccessful candidates at all – and there is a balance to be struck. You're certainly not obligated to respond to everyone who applies through a CV-screening funnel, but anyone you actually talk to deserves the courtesy of feedback for the time they've spent.

Adapt to fit

The best interview processes accurately reflect your own personal values and set the stage for the experience your new team members are going to have when they join your organisation. Because of this, it's an absolute truth that no one way will work for everyone – it's impossible.

I hope that the pointers in here will stimulate a little bit of thought as to how you can re-tool your own interview process to be simpler, kinder, and much quicker.

Below is an appendix about marking technical recruitment tests that may be useful in this process.

Appendix: How to mark a technical test

Because I tend to use the same technical tests for people across the entire skill spectrum, I've come to use a standard marking sheet to understand where a particular candidate fits in the process. I expect less from candidates earlier on in their careers than more experienced individuals – this grading sheet isn't the be all and end all, but as you scale out your process and end up with different people reviewing technical tests and seeing candidates, it's important that people are assessing work they see through the same lens.

Feel free to use this if it helps you understand what good looks like.

Problem domain and understanding of question

  1. Submitter suggested irrelevant implementation / entirely misunderstood domain
  2. Submitter modelled single concept correctly
  3. Submitter modelled a few concepts in domain
  4. Submitter modelled most concepts in domain
  5. Submitter modelled all concepts in domain

Accuracy of solution

  1. Code does not compile
  2. Code does not function as intended, no features work
  3. Code builds and functions, but only some of the acceptance criteria are met
  4. ~90% of the acceptance criteria are met. Bugs outside of the scope of the acceptance criteria allowed
  5. All acceptance criteria met. Any "hidden" bugs found and solved.

Simplicity of solution

  1. Is hopeless spaghetti code, illegible, confusing, baffling
  2. An overdesigned mess, or nasty hacky code - use of large frameworks for simple problems, misusing DI containers, exceptions as flow control, needless repetition, copy-pasting of methods, lack of encapsulation, overuse of design patterns to show off, excess of repetitive comments, long methods
  3. Code is concise, size of solution fits the size of the problem, no surprises. Maybe a few needless comments, the odd design smell, but nothing serious
  4. Code is elegant, minimalist, and concise without being code-golf, no side effects, a good read. Methods and functions are descriptive and singular in purpose
  5. Perfect, simple solution. Absolutely no needless comments, descriptive method names. Trivial to read, easy to understand

Presentation of solution

  1. Ugly code, regions, huge comment blocks, inconsistent approach to naming or brace style, weird amounts of whitespace
  2. Average looking code. No regions, fewer odd comment blocks, no bizarre whitespace
  3. Nice respectable code. Good code organisation, no odd comment blocks or lines (no stuff like //======= etc), internally consistent approach to naming and brace style
  4. Utterly consistent, no nasty comment blocks, entirely consistent naming and brace style, effective use of syntactic sugar (modern language features in the given language etc)
  5. Beautiful code. Great naming, internally consistent style. Follows conventions of language of test. Skillful use of whitespace / stanzas in code to logically group lines of code and operations. Code flows well and is optimised for the reader.

Quality of unit tests

  1. No test coverage, tests that are broken, illegible, junk
  2. Tests that don't test the class that's supposed to be under test; some tests test some functionality. Vaguely descriptive naming. AAA pattern (Arrange-Act-Assert – see the sketch below) in unit tests.
  3. Descriptive, accurate names. AAA in unit tests. Use of test setup to DRY out tests if appropriate. Reasonable coverage.
  4. Complete test coverage to address all acceptance criteria, setup if appropriate, good descriptive names. BDD style tests with contexts are appreciated.
  5. Full coverage, all acceptance criteria covered, great naming that represents the user stories accurately, little to no repetition, no bloated repetitive tests, effective use of data driven tests if appropriate, or other framework features.
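
For clarity, "AAA" above means Arrange-Act-Assert – each test sets up its inputs, performs one action, and verifies the outcome. A minimal sketch of that shape, reusing the hypothetical diamond function from earlier (assumed to live in a diamond.py module):

```python
import unittest

from diamond import diamond  # the earlier sketch, assumed importable


class DiamondTests(unittest.TestCase):
    def test_b_produces_three_line_diamond(self):
        # Arrange
        midpoint = "B"

        # Act
        result = diamond(midpoint)

        # Assert
        self.assertEqual(" A\nB B\n A", result)


if __name__ == "__main__":
    unittest.main()
```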