Hi HN,
I’m an AI Security Engineer, and I've been working with the team at DevSecAI on a new VS Code/Cursor extension called Arko. We wanted to share it here to get your feedback.
With the recent shift towards AI-assisted development ("vibe coding" in Cursor/Windsurf), development velocity has obviously skyrocketed. But we found that traditional security tooling hasn't adapted.
Recently, both Anthropic and OpenAI announced agentic security scanners (Claude Code Security and Codex Security). They are genuinely impressive at the repository level, but they run asynchronously. Waiting hours for a pipeline to finish or a PR to be generated completely breaks the flow state. By the time the scanner flags an exposed API key or a prompt injection flaw, you are already five files deep into something else.
We built Arko to bring that context into the editor, evaluating the architecture synchronously as you type.
Here is how it works under the hood:
Contextual Stack Mapping: Instead of just running generic regex rules, Arko actively maps your framework and integrations (e.g., spotting React, Supabase, and the Vercel AI SDK). It calculates a real-time "Hackable Score" (0-100%) based on open vulnerabilities, implemented security controls, and data sensitivity (like PII handled under GDPR).
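To make the scoring idea concrete, here is a minimal sketch of how a weighted risk score like this could be combined from those three inputs. The factor names, weights, and curve are our illustrative assumptions, not Arko's actual model:

```typescript
// Hypothetical sketch of a 0-100 risk score; weights and factors are
// illustrative assumptions, not Arko's real scoring model.

interface RiskFactors {
  openVulns: number;        // count of open vulnerability findings
  controlsCoverage: number; // 0..1, fraction of expected security controls in place
  dataSensitivity: number;  // 0..1, e.g. higher when GDPR-scoped PII is handled
}

function hackableScore(f: RiskFactors): number {
  // Open vulns raise risk with diminishing returns, saturating toward 1.
  const vulnRisk = 1 - Math.exp(-f.openVulns / 5);
  // Missing controls raise risk linearly.
  const controlRisk = 1 - f.controlsCoverage;
  // Weighted blend of the three factors, scaled to 0-100.
  const raw = 0.4 * vulnRisk + 0.35 * controlRisk + 0.25 * f.dataSensitivity;
  return Math.round(raw * 100);
}
```

A saturating curve for vulnerability count (rather than a raw sum) keeps one noisy repo from pinning the score at 100 and drowning out the control-coverage signal.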
Dynamic Threat Modelling: It evaluates the code for logic flaws specific to modern AI architectures. For example, it will flag "Prompt Injection via Assessment Inputs" or "AI Model Poisoning" while you are building the endpoint, rather than waiting for a retroactive scan.
In-IDE Remediation: When it spots a missing control (like missing output sanitisation for an LLM response), it provides context-aware fixes directly in the sidebar so you don't have to context-switch to a browser.
We are trying to democratise this level of DevSecOps so solo builders and small teams don't have to choose between development speed and application security.
You can check out a deeper technical breakdown of how we are shifting security natively into the IDE here: [Link to your Medium or LinkedIn article]
The extension is available in the VSX marketplace. We would genuinely love for the HN community to try it out on your current AI projects. We want to know where it breaks, what false positives you encounter, and how we can improve the threat modelling logic.
Happy to answer any questions about the architecture or how we are handling the local context parsing!