Because LLMs are useless unless you already know the answer you're looking for. At least if you're trying to be a serious, competent programmer.
And if you're using an LLM to do your job for you in an interview, then you almost certainly are not smart enough to notice its constant, inevitable mistakes.
The same argument goes for letting an LLM do my job for me at the actual job. Plenty of smart people use LLMs to help code at work, and they fix the mistakes. Using an LLM doesn't mean accepting its output without verifying that it works.
I'm doing the rounds of interviews this week, and I've noticed that companies haven't adapted to the way agentic coding has rewritten the software engineer's job description.
> data structure & algorithms questions
My understanding is that these questions are a proxy for intelligence, and they were asked because Google and the like want smart employees.
If that's why you're asking these questions, then testing agent use in the interview process doesn't make sense, because:
1. If you're hiring smart people, you can probably teach them how to use agents. (training someone to use agents isn't very hard)
2. If someone is smarter, you'd generally expect them to be more capable of finding bugs in AI code anyway.
Thanks for reading, and for the thoughtful reply!
> proxy for intelligence
You're right, but it's too easy to game, or to cheat on.
> can probably teach them how to use agents
I've met smart people who were against agentic coding. Being smart doesn't translate directly into using agents.
> finding bugs in AI code
I think this would be a better proxy for job performance than DS&A.
You still need programmers and software engineers to remove the bugs the AI introduced but couldn't detect itself.
Then removing those bugs should be the skill being assessed.
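To make that concrete, here's a minimal sketch (my own illustration, not from any actual interview) of the kind of plausible-looking mistake an agent will happily write, where the candidate's job is to spot it:

```python
# A small sketch of a typical LLM-style bug: code that reads fine
# but fails on an edge case. This is a made-up example, not a claim
# about any specific model or interview.

def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Merge overlapping closed intervals, e.g. [(1, 10), (2, 3)] -> [(1, 10)]."""
    merged: list[tuple[int, int]] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # An LLM draft will often write `merged[-1] = (merged[-1][0], end)`
            # here, which silently shrinks the interval when the new `end` is
            # smaller; the max() is exactly the fix a reviewer has to catch.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# The edge case the buggy variant gets wrong:
assert merge_intervals([(1, 10), (2, 3)]) == [(1, 10)]
```

An exercise like this tests exactly the skill the job needs: reading plausible code skeptically and finding the edge case it breaks on.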
I learned debugging back in college. I am a Super Debugger and can find bugs in someone else's code, my code, or AI code.
Will there be a market for artisanal, hand-crafted code?
For low-level or custom hardware interfaces, hand-crafted code won't go away.