Absolutely nothing changes about how I interview. I care whether you are “smart and gets things done”.
“Tell me about the project that you are most proud of.” And then dig in, asking them about their challenges and decision-making processes, and gauging the level of scope, impact, and ambiguity they know how to work at.
“I see you have been working for $x years. Knowing what you know now, what would you do differently?”
“Say you are in a meeting with myself, the CEO, and other senior developers who have been at the company for a while, and we all agree on an idea that in your experience you know is a bad idea. What would you do?”
Follow up question: “What would you do if after we listened to you, we decided to go in another direction?”
“Tell me about a time when you had unclear requirements. How did you handle it?” - gets back to ambiguity. There is a lot of that with startups.
We binned any form of coding questions or take-home tasks. Our preference now is to focus on architectural concepts and the ability to apply those to a problem.
So, as happened last week, if I’m interviewing for an Elixir dev I’m going to be interested in your knowledge of the BEAM and how its features can be used to solve common architectural problems.
> 1. Does testing a candidate's ability to "steer" and debug AI-generated code make more sense to you than traditional algorithms?
Testing the candidate's ability to "steer" agents seems like testing their ability to know the Java API or to recite SOLID by heart.
> 2. How are you currently preventing these "prompt-only" developers from slipping through your own interview loops?
We don't ask LeetCode anymore. We keep the usual systems design interview, in which usage of AI is not needed (or at least we don't allow it, because in this kind of interview we are more interested in seeing how the candidate thinks and so on).
We have a new stage in our job interview, though: generic Q/A about the fundamentals of software engineering/computer science. Again, we don't care anymore how candidates produce code. We care about what they know, and what they don't know. What's the scope of their knowledge, and when do they need to rely on AI to come up with an answer. Silly (non-real) example: "Can you write a program that detects if another program halts?". The people we want are the ones who would say something about the Halting Problem, but also perhaps be practical and ask more questions about such a program's requirements.
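As a sketch of the "practical" half of that answer: no program can decide halting in general, but the realistic engineering version of the question usually reduces to a timeout. Here's a hypothetical Python heuristic along those lines (the function name and timeout are my own illustration, not part of the original discussion):

```python
import subprocess
import sys
import tempfile

def probably_halts(source: str, timeout_s: float = 2.0) -> bool:
    """Heuristic only: run the program in a subprocess and treat
    exceeding the timeout as 'probably does not halt'. This does NOT
    solve the Halting Problem - it just answers the practical question."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        subprocess.run(
            [sys.executable, path],
            timeout=timeout_s,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return True  # finished within the budget
    except subprocess.TimeoutExpired:
        return False  # still running when the budget ran out

print(probably_halts("print('done')"))          # halts quickly -> True
print(probably_halts("while True:\n    pass"))  # spins forever -> False
```

The interesting signal is exactly the caveat in the docstring: a candidate who reaches for a timeout while naming why a complete solution is impossible is showing both theory and pragmatism.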
You get the point: we look for people with a good breadth of knowledge, who can communicate well and know their shit. Whether they can use tool x or y (including LLMs) is a given for such people.
This is a fantastic perspective, thank you. You hit the nail on the head: the ultimate goal is testing fundamental engineering breadth and systems thinking, not tool usage.
I should definitely clarify my use of the word steering — I completely agree that testing prompt engineering is just the new API memorization, which is useless.
By steering, I mean putting them in a situation where the AI generates a plausible but architecturally flawed solution, and seeing if they have the fundamental knowledge to spot the BS, understand the scope of the problem, and fix it.
Basically, an automated way to test the exact critical thinking you mentioned.
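To make the idea concrete, here's a hypothetical toy version of such an exercise (far smaller in scale than the architectural flaws described above, and entirely my own illustration): code that runs, looks plausible, and hides a defect the candidate should catch.

```python
# "AI-generated" snippet presented to the candidate. It works on the
# happy path, but the mutable default argument is created once and
# shared across calls, so state leaks between unrelated invocations.

def add_tag(tag, tags=[]):          # BUG: default list shared across calls
    tags.append(tag)
    return tags

print(add_tag("a"))   # ['a']
print(add_tag("b"))   # ['a', 'b']  <- previous call's state leaks in

# The fix a candidate should reach for:

def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []                   # fresh list per call
    tags.append(tag)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']
```

The same format scales up: swap the mutable-default bug for an N+1 query, a missing idempotency key, or an unsupervised process, and you're testing whether the candidate can spot the flaw, explain its blast radius, and fix it.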
I love your approach of dropping LeetCode for fundamentals Q/A and Systems Design. But out of curiosity, how do you scale that at the top of the funnel? Doing deep, manual 1-on-1 assessments gives the best signal by far, but doesn't that burn a massive amount of your senior engineers' time?