As someone who just set up mitmproxy to do something very similar, I wish this would've been a plugin/add-on instead of a standalone thing.
I know and trust mitmproxy. I'm warier and less likely to use a new, unknown tool that has such broad security/privacy implications. Especially these days with so many vibe-coded projects being released (no idea if that's the case here, but it's a concern I have nonetheless).
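For what it's worth, the capture half of a tool like this maps pretty naturally onto a mitmproxy add-on. A minimal sketch (the file name, host filter, and output path here are illustrative, not taken from this project):

    # sherlock_addon.py -- run with: mitmdump -s sherlock_addon.py
    import json
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Only capture traffic bound for the Anthropic API
        if "api.anthropic.com" not in flow.request.pretty_host:
            return
        try:
            body = json.loads(flow.request.get_text() or "")
        except ValueError:
            return
        # Append each prompt payload to a local JSONL log
        with open("claude_requests.jsonl", "a") as f:
            f.write(json.dumps(body) + "\n")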
Agree! This was a fun project that I built because it is so hard to understand what is "really" in your context window... What do you mean by plugin/add-on? Add-on to what? Thinking of what to add to it next... Maybe security would be a good direction, or at least visibility into what is happening to the proxy's traffic.
You don't need to mess with certificates - you can point CC at an HTTP endpoint and it'll happily play along.
If you build a DIY proxy you can also mess with the prompt on the wire: cut out portions of the system prompt, or redirect requests to a different endpoint based on specific conditions.
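A minimal sketch of that kind of rewriting proxy, in Python (assuming Claude Code honors ANTHROPIC_BASE_URL; the port, file name, and the drop-the-system-prompt rule are all illustrative, and streaming/SSE responses and error paths are not handled):

    # diy_proxy.py -- illustrative rewriting proxy, not production-ready
    # Usage: ANTHROPIC_BASE_URL=http://localhost:8082 claude
    import json
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    UPSTREAM = "https://api.anthropic.com"

    class RewriteProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
            body.pop("system", None)  # example rewrite: drop the system prompt
            data = json.dumps(body).encode()
            headers = {k: v for k, v in self.headers.items()
                       if k.lower() not in ("host", "content-length", "accept-encoding")}
            req = urllib.request.Request(UPSTREAM + self.path, data=data, headers=headers)
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()  # buffers the whole response: no SSE streaming
                self.send_response(resp.status)
                self.send_header("Content-Type", resp.headers.get("Content-Type", "application/json"))
                self.send_header("Content-Length", str(len(payload)))
                self.end_headers()
                self.wfile.write(payload)

    HTTPServer(("localhost", 8082), RewriteProxy).serve_forever()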
Have you tried this with Gemini? Or Codex?
I've tried it with both gemini-cli and claude-code and it works; honestly, it should work with most if not all CLI clients.
This is great.
When I work with AI on large, tricky code bases I try to set up a collaboration where it hands off to me the things that may result in a large number of tokens (excess tool calls, imprecise searches, verbose output, reading large files without a range specified, etc.).
This will help narrow down exactly which of those to still handle manually to best stay within token budgets.
Note: "yourusername" in install git clone instructions should be replaced.
I've been trying to get token usage down by instructing Claude to stop being so verbose (saying what it's going to do beforehand, saying what it just did, spitting out pointless file trees) but it ignores my instructions. It could be that the model is just hard to steer away from doing that... or Anthropic want it to waste tokens so you burn through your usage quickly.
Would you mind sharing more details about how you do this? What do you add to your AI prompts to make it hand those tasks off to you?
Hahahah just fixed it, thank you so much!!!! Thinking of extending this to a prompt admin; I'm sure there is a lot of trash that the system sends on every query, and I think we can improve this.
This is fantastic. Claude doesn't make it easy to inspect what it's sending - which would actually be really useful for refining the project-specific prompts.
Love that you like it!! Let me know any ideas to improve it... I was thinking in the direction of a file system and protocol for the md files, or dynamic context building. But I would love to hear what you think.
Nice work! I'm sure the data gleaned here is illuminating for many users.
I'm surprised that there isn't a stronger demand for enterprise-wide tools like this. Yes, there are a few solutions, but when you contrast the new standard of "give everyone at the company agentic AI capabilities" with the prior paradigm of strong data governance (at least at larger orgs), it's a stark difference.
I think we're not far from the pendulum swinging back a bit. Not just because AI can't be used for everything, but because the governance on widespread AI use (without severely limiting what tools can actually do) is a difficult and ongoing problem.
I had to vibe code a proxy to hide tokens from agents (https://github.com/vladimirkras/prxlocal) because I haven’t found any good solution either. I planned to add genai otel stuff that could be piped into some tool to view dialogues and tool calls and so on, but I haven’t found any good setup that doesn’t require lots of manual coding yet. It’s really weird that there are no solutions in that space.
Dang how will Tailscale make any money on its latest vibe coded feature [0] when others can vibe code it themselves? I guess your SaaS really is someone's weekend vibe prompt.
[0] https://news.ycombinator.com/item?id=46782091
That's what LLMs enabled. Faster prototyping. Also lots of exposed servers and apps. It's never been more fun to be a cyber security researcher.
I think it has just been more fun being into computers overall!
It's interesting because if you're into computers it's more accessible than ever and there are more things you can mess with more cheaply than ever. I mean we have some real science fiction stuff going on. At the same time it's probably different for the newer generations. Computers were magical to me and a lot of that was because they were rare. Now they are everywhere, they are just a backdrop to everything else going on.
I agree, I remember when feed-forward NNs were the shit! And now LLMs are owning. I think this adoption pattern will start pulling a lot of innovation into other computer science fields. Networking, for example. And the ability to have that pair programmer next to you makes it so much more fun to build: where before you had to spend a whole day debugging something, Claude now just helps you out and gives you time to build. Feels like long road trips with cruise control and lane-keeping assist!
So is it just a wrapper around mitmproxy?
> So is it just a wrapper around mitmproxy?
Yes.
I created something similar months ago [*] but using Envoy Proxy [1], mkcert [2], my own Go (golang) server, and Little Snitch [3]. It works quite well. I was the first person to notice that Codex CLI now sends telemetry to ab.chatgpt.com and other curiosities like that, but I never bothered to open-source my implementation because I know that anyone genuinely interested could easily replicate it in an afternoon with their favourite Agent CLI.
[1] https://www.envoyproxy.io/
[2] https://github.com/FiloSottile/mkcert
[3] https://www.obdev.at/products/littlesnitch/
[*] In reality, I created this something like 6 years ago, before LLMs were popular, originally as a way to inspect all outgoing HTTP(s) traffic from all the apps installed in my macOS system. Then, a few months ago, when I started using Codex CLI, I made some modifications to inspect Agent CLI calls too.
Curious to see how you can get Gemini fully intercepted.
I've been intercepting its HTTP requests by running it inside a docker container with:
-e HTTP_PROXY=http://host.docker.internal:8080 -e HTTPS_PROXY=http://host.docker.internal:8080 -e NO_PROXY=localhost,127.0.0.1
(both variables need to point at the host where the proxy listens; inside the container, 127.0.0.1 is the container itself)
It was working with mitmproxy for a very brief period, then the TLS handshake started failing and it kept requesting re-authentication when proxied.
You can get the whole auth flow and the initial conversation starters using Burp Suite and its certificate, but the Gemini chat responses fail in the CLI, which I understand is due to how Burp handles HTTP/2 (you can see the valid responses inside Burp Suite).
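If the breakage really is HTTP/2 handling, one thing that might be worth trying (I haven't verified this against Gemini specifically) is forcing the proxy to downgrade everything to HTTP/1.1; mitmproxy exposes this as a boolean option, e.g. passing --no-http2 (or --set http2=false) to mitmdump.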
Gemini CLI is open source. You don't need to intercept at the network level when you can just add an inspectGeminiApiRequest() in the source. (I suggest it because I've been maintaining a personal branch with exactly that :)
Tried with gemini and it gave more headaches than anything else; would love it if you could help me add it to sherlock... I use claude and gemini, claude mainly for coding, so I wanted to set it up first. With gemini, I ran into the same problem that you did...
Kind of, yes... But with a nice CLI so that you don't have to set it up: just run "sherlock claude" and "sherlock start" in two terminals, and everything that claude sends in that session will be stored. So no proxy setup or anything, just simple terminal commands. :)
Nice work! Do I need to update the Claude Code config after starting this proxy service?
Nope... You just run "sherlock claude" and that sets up the proxy for you, so you don't have to think about it... Just use claude normally, and every prompt you send in that session will be stored in the files.
What about SSL/certificates?
I didn't understand the question, I am sorry.
I also assumed Claude Code would need some kind of cert nudging to accept a proxy.
But it's in the README:
> Prompt you to install it in your system trust store
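(For the curious: mitmproxy generates its CA on first run under ~/.mitmproxy, with the cert at ~/.mitmproxy/mitmproxy-ca-cert.pem; presumably that is the certificate the README is prompting you to trust.)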