The core argument is right. Domain expertise is irreplaceable.
But there is a separate, orthogonal point: prompt structure affects output quality independently of domain knowledge. A typed-block prompt (role, objective, constraints, output format as distinct units) gets different results from the same model than prose asking for the same thing. Not because of expertise, but because the model parses labeled structure differently than it parses narrative.
I built github.com/Nyrok/flompt around this, a visual canvas that decomposes prompts into 12 semantic blocks and compiles to Claude-optimized XML. Structure at the input layer is a real variable, separate from whether you understand the domain you're working in.
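The typed-block idea can be sketched as a tiny compiler from labeled blocks to XML sections. Note this is my own illustration: the block names and the `compile_blocks` function are hypothetical, not flompt's actual schema, API, or output format.

```python
# Hypothetical sketch of compiling typed prompt blocks to labeled XML.
# Block names and function are illustrative, not flompt's real schema.
from xml.sax.saxutils import escape

def compile_blocks(blocks: dict[str, str]) -> str:
    """Turn each (tag, body) pair into a labeled XML section."""
    parts = []
    for tag, body in blocks.items():
        parts.append(f"<{tag}>\n{escape(body.strip())}\n</{tag}>")
    return "\n".join(parts)

prompt = compile_blocks({
    "role": "Senior backend engineer",
    "objective": "Review this migration script for data-loss risks",
    "constraints": "Flag destructive statements; do not rewrite the script",
    "output_format": "A numbered list of findings, most severe first",
})
print(prompt)
```

The point of the structure is that the model sees explicitly labeled sections rather than having to infer role, goal, and constraints from narrative prose.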
You are right. If you don't understand the subject, you will fail.
Giving someone AI and convincing them that they are now an engineer is just asking for a huge disaster.
As an experienced engineer and architect, I know that building something that works is not just about writing code. It is about understanding the problem, knowing which decisions will cause long-term damage, and choosing a design that keeps the solution flexible, resistant to errors, and open to further development.
I tell the AI exactly what to do, because otherwise it does something else, and never quite what I want. Ask someone to paint something for you and they will paint it, but it will not be what you wanted or imagined.