I feel like code fed into this detector can be manipulated to increase false negatives. The model probably learns patterns that are common in generated code (clean comments, always correctly formatted, never makes mistakes), but if you have an AI change its code to look like the code you write (mistakes, not every function has a comment), then it can blur the line. I think this will be a great tool for getting 90% of the way there; the challenge is the corner cases.
This is a spot-on observation. The most challenging code to detect so far appears to be code produced with tooling usage that is slightly ahead of the overall curve in adoption and practices.
I'm not sold, though, that those cases aren't detectable holistically, but there certainly isn't enough similarity, or an easily reproducible dataset, for me to call the task easy. We can't be certain what the next models hold, but if we assume there is huge ongoing investment from all the companies in quality code output, it's possible there is still convergence to something detectable.
I tested this idea: using ChatGPT5, I asked this prompt:
`create two 1000 line python scripts, one that is how you normally do it, and how a messy undergraduate student would write it.`
The messy script was detected as having a 0% chance of being written by AI, while the detector was 100% confident the clean script was AI-generated. I had to shorten the scripts here for brevity; happy to share the full versions.
Here is the ChatGPT convo: https://chatgpt.com/share/68c9bc0c-8e10-8011-bab2-78de5b2ed6...
Clean script:

    #!/usr/bin/env python3
    """
    A clean, well-structured example Python script.
    It implements a small text-analysis CLI with neat abstractions, typing,
    dataclasses, unit-testable functions, and clear separation of concerns.
    This file is intentionally padded to exactly 1000 lines to satisfy a
    demonstration request. The padding is made of documented helper stubs.
    """
    from __future__ import annotations

    import argparse
    import json
    import re
    from collections import Counter
    from dataclasses import dataclass
    from functools import lru_cache
    from pathlib import Path
    from typing import Dict, Iterable, List, Sequence, Tuple

    __version__ = "1.0.0"


    @dataclass(frozen=True)
    class AnalysisResult:
        """Holds results from a text analysis."""

        token_counts: Dict[str, int]
        total_tokens: int

        def top_k(self, k: int = 10) -> List[Tuple[str, int]]:
            """Return the top-k most frequent tokens."""
            return sorted(self.token_counts.items(), key=lambda kv: (-kv[1], kv[0]))[:k]


    def _read_text(path: Path) -> str:
        """Read UTF-8 text from a file."""
        data = path.read_text(encoding="utf-8", errors="replace")
        return data


    @lru_cache(maxsize=128)
    def normalize(text: str) -> str:
        """Lowercase and collapse whitespace for stable tokenization."""
        text = text.lower()
        text = re.sub(r"\s+", " ", text).strip()
        return text


    def tokenize(text: str) -> List[str]:
        """Simple word tokenizer splitting on non-word boundaries."""
        return [t for t in re.split(r"\W+", normalize(text)) if t]


    def ngrams(tokens: Sequence[str], n: int) -> List[Tuple[str, ...]]:
        """Compute n-grams as tuples from a token sequence."""
        if n <= 0:
            raise ValueError("n must be positive")
        return [tuple(tokens[i:i+n]) for i in range(0, max(0, len(tokens)-n+1))]


    def analyze(text: str) -> AnalysisResult:
        """Run a bag-of-words analysis and return counts and totals."""
        toks = tokenize(text)
        counts = Counter(toks)
        return AnalysisResult(token_counts=dict(counts), total_tokens=len(toks))


    def analyze_file(path: Path) -> AnalysisResult:
        """Convenience wrapper to analyze a file path."""
        return analyze(_read_text(path))


    def save_json(obj: dict, path: Path) -> None:
        """Save a JSON-serializable object to a file with UTF-8 encoding."""
        path.write_text(json.dumps(obj, indent=2, ensure_ascii=False) + "\n", encoding="utf-8")
Messy script:

    # ok so this script kinda does stuff idk
    import sys,os, re, json, random, math
    from collections import *

    VER="lol"
    g = {}
    data = []
    TMP=None

    def readz(p):
        try:
            return open(p,"r",encoding="utf-8",errors="ignore").read()
        except:
            return ""

    def norm(x):
        x=x.lower().replace("\n"," ").replace("\t"," ")
        x=re.sub(" +"," ",x)
        return x.strip()

    def tokn(x):
        x=norm(x)
        return re.split("\W+",x)

    def ana(s):
        c = Counter()
        for t in tokn(s):
            if t: c[t]+=1
        return {"counts":dict(c),"total":sum(c.values())}

    def showTop(d,k=10):
        try:
            it=list(d["counts"].items())
            it.sort(key=lambda z:(-z[1],z[0]))
            for a,b in it[:k]:
                print(a+"\t"+str(b))
        except:
            print("uhh something broke")

    def main():
        # not really parsing args lol
        if len(sys.argv)<2:
            print("give me a path pls")
            return 2
        p=sys.argv[1]
        t=readz(p)
        r=ana(t)
        showTop(r,10)
        if "--out" in sys.argv:
            try:
                i=sys.argv.index("--out"); o=sys.argv[i+1]
            except:
                o="out.json"
            with open(o,"w",encoding="utf-8") as f:
                f.write(json.dumps(r))
        return 0

    if __name__=="__main__":
        # lol
        main()

    def f1(x=None,y=0,z="no"):
        # todo maybe this should do something??
        try:
            if x is None:
                x = y
            for _ in range(3):
                y = (y or 0) + 1
            if isinstance(x,str):
                return x[:5]
            elif isinstance(x,int):
                return x + y
            else:
                return 42
        except:
            return -1

    def f2(x=None,y=0,z="no"):
        # todo maybe this should do something??
        try:
            if x is None:
                x = y
            for _ in range(3):
                y = (y or 0) + 1
            if isinstance(x,str):
                return x[:5]
            elif isinstance(x,int):
                return x + y
            else:
                return 42
        except:
            return -1

    def f3(x=None,y=0,z="no"):
        # todo maybe this should do something??
        try:
            if x is None:
                x = y
            for _ in range(3):
                y = (y or 0) + 1
            if isinstance(x,str):
                return x[:5]
            elif isinstance(x,int):
                return x + y
            else:
                return 42
        except:
            return -1
That's a great question + something we've discussed internally a bit. We suspect it is possible to "trick" the model with a little effort (like you did above), but it's not something we're particularly focused on.
The primary use case for this model is for engineering teams to understand the impact of AI-generated code in their production codebases.
I agree this would be a great tool for organizations to use to see the impact of AI code in their codebases. Engineers will probably be too lazy to modify the code enough to make it look less AI-generated. You could probably enhance the robustness of your classifier with synthetic data like this.
I think it would be an interesting research project to detect whether someone is manipulating AI-generated code to look messier. Sadasivan et al. (https://arxiv.org/pdf/2303.11156) proved that detectors are bounded by the total variation distance between the two distributions: if the two distributions are truly the same, the best you can do is random guessing. The trends with LLMs (via scaling laws) point in this direction, so the question is whether, as models improve, their code will become indistinguishable from human code.
It'd be fun to collaborate!
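For intuition on the bound mentioned above: with balanced classes, the best possible detector accuracy works out to (1 + TV)/2, where TV is the total variation distance between the AI and human code distributions. A minimal sketch with toy token samples (illustrative only, not from the paper's code):

    from collections import Counter

    def tv_distance(samples_a, samples_b):
        """Estimate total variation distance between two empirical distributions."""
        ca, cb = Counter(samples_a), Counter(samples_b)
        na, nb = len(samples_a), len(samples_b)
        return 0.5 * sum(abs(ca[x] / na - cb[x] / nb) for x in set(ca) | set(cb))

    # Toy "human" vs "AI" identifier samples; real distributions would be
    # over whole programs, not single tokens.
    human = ["tmp", "x", "lol", "x", "idk", "foo"]
    ai = ["result", "token_count", "result", "x", "total", "foo"]

    tv = tv_distance(human, ai)
    print(f"TV estimate: {tv:.2f}")
    print(f"Best achievable balanced accuracy: {(1 + tv) / 2:.2f}")

If scaling pushes TV toward zero, that best-case accuracy slides toward 0.5, i.e. coin-flipping.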
The primary point of distinction that allows AI generation to be inferred appears to be that the code is clean and well-structured. (Leave aside for a moment the oddity that this is all machines whose primary benchmarks are human-generated code written in a style that is now deemed too perfect to have been written by people.)
Does that provide an incentive for people writing manually to write worse code, structured badly, as proof that they didn't use AI to generate their code?
Is there now a disincentive for writing good code with good comments?
On HN, indent four spaces for a code block, with a blank line between it and the text above.
I appreciate the feedback! I just updated to have the 4 space indentation.
An AI code detector would be a binary text classifier - you input some text and the output is either "code" or "not-code".
This is an "AI AI code detector".
You could call it a meta-AI code detector but people might think that's a detector for AI code written by the company formerly known as Facebook.
With "code" or "not-code" did you make a cheeky reference to "hotdog" "not hotdog"?
Yes. Also the less famous cheese/petrol classifier. https://www.youtube.com/watch?v=B_m17HK97M8
brb, renaming
Would be amazing to have a CLI tool that detects AI generated code (even add it as part of CI/CD pipelines). I'm tired of all the AI trash PRs
It is possible to use this via the command line today. I'll ask Henry to have a look & comment here (or grab a demo and leave a note, we'll give you some more details).
1. This project examines which common household material provides the best thermal insulation to keep drinks hot or cold. We will test materials such as wool, cotton, aluminum foil, bubble wrap, and recycled paper by wrapping identical containers with hot water in them. We will measure the water temperature over time, using an unwrapped container as a control. The material that minimizes temperature drop will be the best insulator.
2. Heat moves in different ways. It can move when things touch it or when air moves. It can also move in waves, like the sun's heat. Good insulators stop this from happening. Materials like wool and cotton are good because they have lots of tiny air pockets. Air is bad at moving heat. Bubble wrap is good for the same reason. Each little bubble holds air inside, which keeps heat from moving around much. Foil is different. It is shiny, so it reflects heat. This can stop heat from going out or coming in, but it's not good at stopping heat that touches it. The foil will go around the bottle to see if that helps. Recycled paper is also good because the tiny paper bits can trap air. I will see if paper works as good as the other materials that trap air.
3. I will be careful with the hot water so I don't get burned. An adult will help me pour the water. I will use gloves to handle the hot bottle. I will be careful with the thermometer so it doesn't break. At the end, I will just dump the water and put the other stuff in the trash. I will clean up everything when I am done.
Accuracy is a useless statistic: give us precision and recall.
Recall 91.5, F1 93.3
I think you need to define which one is the positive and which one is the negative.
Is AI-generated code the positive?
Useless is perhaps a bit harsh. It tells you something.
Only if you know the data distribution.
It is pretty easy to get 99.99% accuracy on a dataset that is 99.99% a single class for example.
It tells me nothing because it doesn’t say if they mean precision or recall
It very much tells you something. Accuracy is a measure of overall correctness. Accuracy is something different than precision and recall.
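To make the class-imbalance point above concrete, here's a toy example (hypothetical numbers): a degenerate model that always predicts "human" gets 99.9% accuracy on a 99.9%-human dataset while catching zero AI samples.

    labels = ["human"] * 999 + ["ai"] * 1   # heavily imbalanced toy dataset
    preds = ["human"] * 1000                # model that never predicts "ai"

    tp = sum(1 for l, p in zip(labels, preds) if l == "ai" and p == "ai")
    fp = sum(1 for l, p in zip(labels, preds) if l == "human" and p == "ai")
    fn = sum(1 for l, p in zip(labels, preds) if l == "ai" and p == "human")

    accuracy = sum(l == p for l, p in zip(labels, preds)) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0  # no positive predictions at all
    recall = tp / (tp + fn) if tp + fn else 0.0

    print(accuracy, precision, recall)  # 0.999 0.0 0.0

Which is why a single accuracy number is only meaningful alongside the class balance of the eval set (the announcement does say their eval sets were balanced).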
Only Python, TypeScript and JavaScript? Well there go my vibe-coded elisp scripts.
I guess it's impossible (or really hard) to train a language-agnostic classifier.
Reference, from your own URL here: https://www.span.app/introducing-span-detect-1
It's probably impossible to detect ALL languages without training for them specifically, but there's good generalization happening. Our model is a unified model rather than a separate model per language. We started out with language-specific models but found that the unified approach yielded slightly better results in addition to being more efficient to train.
I'll let Henry elaborate here, but we think there's a chance that a truly language-agnostic classifier is possible. That being said, the next version of this will support a few more languages: Ruby, C#, and Java.
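For anyone curious what a "unified" detector might look like in practice, one plausible shape (purely illustrative; Span hasn't published their architecture, and the base model named here is an assumption) is a single code-pretrained encoder with one binary head, fine-tuned on mixed-language samples rather than per-language models:

    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    name = "microsoft/codebert-base"  # assumed stand-in base model
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

    # One model scores code in any language; no per-language branching.
    snippet = "def f(x):\n    return x + 1"
    inputs = tokenizer(snippet, return_tensors="pt", truncation=True)
    logits = model(**inputs).logits  # human-vs-AI scores (head untrained here)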
I will always write code myself, but I sometimes have AI generate a first pass at class and method docstrings. What would happen in this scenario with your tool? Would my code be detected as AI-generated because of this, or does your tool operate on the code alone?
Great question. The model does look at comments, too.
I wonder if you could add a toggle to examine only the source and skip comments.
Very cool! I wonder if it performs differently on actual “production” code versus random tests? I opened ChatGPT, typed a random non-sensical prompt, copy-pasted the response[1] into the tool and it gave me 50% AI generated.
[1] - https://chatgpt.com/share/e/68c9d578-8290-8007-93f4-4b178369...
Yes, but my job isn't to stop people from using AI to write code; my job is to take good work from people who are willing to further our project. I hardly care if they used AI or not; if it does the job, I'll include it in the project.
Very cool piece of tech. I would suggest putting C on the priority list and then Java, mainly because unis and colleges use one or both of them, so that would be a good use case.
Totally – we have support for Java, C#, and Ruby in the works.
Edit: since you mentioned universities, are you thinking about AI detection for student work, e.g. like a plagiarism checker? Just curious.
Glad to hear Ruby is on the list as well!
When it comes to the unis, I was thinking of both. Plagiarism checkers are common nowadays, and the systems I know of just force every student to upload their work and then compare for similarity; one even broke the code down to the AST level (I think?) so it didn't matter if students renamed the variables.
But AI detection is still a new area. From what I know, unis just make students check a box when uploading their work, as a contract that they never used AI tools and that it is all their own work; after that it's up to the teacher to go through the code and see if anything looks odd. Some even have students present their code and explain what they did. A dedicated tool for AI detection is pretty new, as far as I know.
My engineers didn’t know how much they used AI for vibe coding until I used Span. Can confirm we were all left with jaws on the floor. Now re-thinking my hiring plan for the next year.
This is interesting. Do you know what features the classifier is matching on? Like how much does stuff like whitespace matter here vs. deeper code structure? Put differently, if you were to parse the AI and non-AI code into AST and train a classifier based on that, would the results be the same?
Candidly, it's a bit of a black box still. We hope to do some ablation studies soon, but we tried to have a variety of formatting and commenting styles represented in both training and evaluation.
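One cheap way to probe the whitespace-vs-structure question above: round-trip code through Python's ast module, which drops comments and formatting, and compare the detector's score before and after. A sketch (the detector call itself is left abstract):

    import ast

    def normalize_via_ast(source: str) -> str:
        """Re-emit source from its AST, discarding comments and formatting."""
        return ast.unparse(ast.parse(source))  # ast.unparse needs Python 3.9+

    messy = "def f( x ):\n    # stray comment\n    return x+1\n"
    print(normalize_via_ast(messy))  # -> def f(x):\n    return x + 1

If scores move a lot under this transform, the model is leaning on surface style; if they barely move, it's keying on deeper structure.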
sharing the technical announcement here (more info on evaluations, comparison to other models, etc): https://www.span.app/introducing-span-detect-1
Firstly I think this is neat, but the dam has burst.
This might be great for educational institutions, but the idea of people needing to know what every line of the output does feels moot to me in the face of agentic AI.
Sadly, this doesn't work on the line-level yet. I know that wasn't the main purpose of your comment, but figured I'd mention that first.
Getting more to the heart of your question: the main use-case for this (and the reason Span developed it) is to understand the impact of AI coding assistants in aggregate for their customers. The explosion of AI-generated code is creating some strange issues that engineering teams need to take into account, but visibility is super low right now.
The main idea is that – with some resolution around which code is AI-authored and human-authored – engineering teams can better understand when and how to deploy AI-generated code (and when not to).
Could I use this to iterate over my AI generated code until it's not detectable anymore? So essentially the moment you publish this tool it stops working?
This is essentially the adversarial generator/discriminator set-up that GANs use.
I'm sure you can but there isn't really an adversarial motive for doing that, I would think :)
Sure there is.
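The loop being described would look something like this; detect_ai_probability and rewrite_to_look_human are hypothetical stand-ins for the detector and an LLM rewrite step, not real APIs:

    def evade(code, detect_ai_probability, rewrite_to_look_human,
              threshold=0.5, max_rounds=10):
        """Rewrite code until the (hypothetical) detector stops flagging it."""
        for _ in range(max_rounds):
            if detect_ai_probability(code) < threshold:
                break  # detector no longer flags this as AI-written
            code = rewrite_to_look_human(code)  # e.g. "make this messier"
        return code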
I wonder how many false positives it has
And false negatives. I just pasted 100% AI generated code and it told me it's only 40% AI written.
As a leader this is actually really neat - going to give it a spin
Really appreciate it!
I can detect AI-generated code with 100% recall, provided you give me an unlimited budget for false positives. It's a bit of a useless metric.
I'd argue that knowing AI generated code shipped into production is the first step to understanding the impact of AI coding assistants on velocity and quality. When paired with additional context, it can help leaders understand how to improve proficiency around these tools.
That's not relevant to the comment you replied to.
Ah - I misread:
Recall 91.5, F1 93.3
What will the pricing be? I guess this is just a super early demo, but I want to hear your pricing plan. Also, is this B2B or B2C?
Just tried it. Actually quite impressed with how well it works. I avoid using AI to write code, and I'm a little worried that the existence of detection tools like this will lead people to over-rely on them; I would feel bad if someone suggested I used AI to create code I took pride in writing. I don't matter, but on a societal scale that effect may compel people to over-rely on AI, since their work is treated as slop whether they put effort in or not. That will just increase the tide of terrible AI slop code and of engineers managing systems they do not understand, and thus the brittleness and instability of global infrastructure. I sincerely hope you guys succeed; I suppose the point is that almost succeeding might be worse than not trying at all...
What is your approach to measuring accuracy?
I'm sure Henry will chime in here, but there's some more info here in the technical announcement: https://www.span.app/introducing-span-detect-1
"span-detect-1 was evaluated by an independent team within Span. The team’s objective was to create an eval that’s free from training data contamination and reflecting realistic human and AI authored code patterns. The focus was on 3 sources: real world human, AI code authored by Devin crawled from public GitHub repositories, and AI samples that we synthesized for “brownfield” edits by leading LLMs. In the end, evaluation was performed with ~45K balanced datasets for TypeScript and Python each, and an 11K sample set for TSX."
More details about how we eval'ed here:
https://www.span.app/introducing-span-detect-1
95% accuracy is very low for this type of thing. People use these tools to enact administrative consequences; people's lives get ruined, and 5% is too high a false positive rate. Even 99% accuracy is too low.
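The base-rate arithmetic behind this concern, with hypothetical numbers: even a detector that is 99% sensitive and 99% specific produces mostly-false accusations when few people actually use AI.

    sensitivity = 0.99  # P(flagged | used AI) -- hypothetical
    specificity = 0.99  # P(not flagged | did not use AI) -- hypothetical
    prevalence = 0.01   # suppose only 1% of submissions used AI

    p_flag = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    ppv = sensitivity * prevalence / p_flag
    print(f"P(actually used AI | flagged) = {ppv:.0%}")  # 50%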
Just tried it out and it works :mind-blown:
What if I just modify the code to misspell things that no AI would misspell?
You're saying "Understand and report on impact by AI coding tool". How can you drill down into per-coding assistant usage?
Also, what's the pricing?