I have an old Django site I'm maintaining for a long-time customer of mine. They often want to make small changes - things that are only a few lines of code, but would take an hour to just spin up the system, remind myself how it works, commit, push, update the server and all that.
Last week I moved the whole infrastructure to Railway and taught the customer to use Jules. They make their own PRs now, and Railway spins up an environment with the changes so the customer can check it themselves. It works about 75% of the time, and when it doesn't, the customer sees that it doesn't before it even reaches me. Only when they're happy with the changes do I step in to review the code and press merge. It's been such a huge time saver so far.
Do they still pay you the same amount?
How expensive are the API charges? Seems like it might be a bit too easy for a customer to rack up a big bill testing out minor changes if things weren't configured correctly.
Literally free. No API - the reason I went with Jules instead of Claude Code or Gemini CLI, for example, is specifically its relatively polished web interface, which I assumed my customer would appreciate. They're using their own Google account, and the free daily task limit seems to be more than enough for them.
There is a free plan with 15 tasks/sessions. It doesn't count tokens, AFAIK. There's surely some sort of runtime limit, but it's not the same as the API-keys-and-tokens situation.
The free tier is 15 tasks per day (of gemini-2.5-pro) which is EXTREMELY generous. I've had plenty of tasks run for 1-2 hours. I do think that after 1 or 2 hours it's told it needs to wrap up and just present what it's done; I couldn't get it to keep going longer than 2 hours. But Jules is very slow as it seems to be batch processing on spare capacity, so 15+ hours a day is not quite as absurd as it sounds.
I haven't tried Jules in a couple of weeks, but the UI/UX had a lot of issues, such as going very long stretches without any progress updates. The worst thing was not being able to see what it was doing and correct it: you only see the state of the files (without a usable diff viewer, WTF) as of the last point the agent decided to show you anything (the last time it completed a todo-list item, I think), and I couldn't get it to update that state when asked, though it will send a PR if you ask. gemini-2.5-pro can also try really stupid things while it debugs, though I've been impressed by its debugging abilities a number of times too.
Still, I found Jules far more usable than Gemini CLI (free tier), where Gemini just constantly stops for no reason and needs to be told to continue, and I exhausted the usage limit in minutes.
Aside from the unlimited free tier, probably the best part of Jules is its automated code reviews. Once, I was writing up some extensive comments on its code when, unexpectedly, a code review dropped into the conversation giving exactly the same feedback I was writing. Unfortunately, if it never reaches the point of submitting for review, it doesn't get an automated review - so I probably needed to prompt better. It does often ask for feedback before it's done, which is nice.
I hope they don't store any user data in their app. Trusting LLMs blindly is a bad idea.
There is a human being (GP) reviewing the proposed code before merging. I wouldn't describe that as trusting the LLM blindly.
No, there is not
Jules has access to the codebase, not the database. It doesn't see any user data.
Do people trust these kinds of things to effectively work async and unsupervised?
My experience with coding agents leads me to believe using something like this will end up being more noise and work than ROI
I suppose it could be effectively the same loop I use in VS Code, but then why would I want an external tool over an integration?
> My experience with coding agents leads me to believe using something like this will end up being more noise and work than ROI
I think that depends on how far out your horizon is. If you're only looking one task out, or maybe a few weeks out, then it's not worth investing the time yet. On the other hand, if you're looking at how your engineering team will work in three years' time, it's definitely worth starting to look at it now.
An example that comes to mind: a bot that automatically spins up an environment when a library is updated, runs the tests, identifies why the codebase doesn't work with the update, fixes it, and then opens a PR that passes all the tests for humans to review. That would be incredibly useful.
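To make that concrete, here's a rough sketch of such a bot's control loop - everything in it is a stand-in (pytest for your test runner, the GitHub CLI for opening the PR, and a hypothetical `agent-fix` command for whatever agent does the repair):

```python
# Hypothetical sketch of the dependency-bump bot described above.
import subprocess

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True)

def try_dependency_bump(package: str) -> None:
    run(["git", "checkout", "-b", f"bump-{package}"])
    run(["pip", "install", "--upgrade", package])

    tests = run(["pytest", "-x"])
    if tests.returncode != 0:
        # Hand the failing output to an agent and let it attempt a fix.
        run(["agent-fix", "--context", tests.stdout])  # hypothetical CLI
        if run(["pytest", "-x"]).returncode != 0:
            return  # still broken: bail out and leave it to a human

    run(["git", "commit", "-am", f"Bump {package}"])
    run(["gh", "pr", "create", "--fill"])  # humans take it from here
```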
The LLMs are a crapshoot, and probably always will be, for reliable automatic fixing of anything. They save me time 50% of the time. The other 50% they just can’t put enough together to grok what the existing code does, but damn if their code doesn’t look like it should work.
Yeah, in my experience you have to babysit them
VS Code is not a coding agent as much as it is code generation and completion
I was able to build a personal MCP server that connects to the Jules API, letting me dispatch tasks to Jules from Copilot Chat in VS Code.
Video here: https://www.youtube.com/watch?v=RIjz9w77h1Q
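The glue is surprisingly thin. A simplified sketch using the MCP Python SDK's FastMCP - the Jules endpoint and payload shape below are from memory, so treat them as assumptions and check the API docs:

```python
# Sketch of an MCP server that forwards tasks to Jules.
import os
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jules-dispatch")

@mcp.tool()
def dispatch_to_jules(repo: str, prompt: str) -> str:
    """Create a Jules session for `repo` with the given task prompt."""
    resp = httpx.post(
        "https://jules.googleapis.com/v1alpha/sessions",  # assumed endpoint
        headers={"X-Goog-Api-Key": os.environ["JULES_API_KEY"]},
        json={"prompt": prompt, "sourceContext": {"source": repo}},  # assumed shape
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("name", "(no session id returned)")

if __name__ == "__main__":
    mcp.run()  # stdio transport; register it as an MCP server in VS Code
```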
Would anyone at Google be willing to tell me how many people are working on this project? I’ve been building something functionally similar for my employer, but it’s a nights and weekends project with only one contributor (me).
Is there any price comparison between Jules and Claude Code?
Recently I moved from repl.it to Claude Max to save costs.
It’s a shame Google picked the wrong system design for Jules. Claude Code’s system design is clearly superior at this point.
Jules is simply going to be another vendor-locked, walled-garden play.
I think they are doing both (in true Google fashion): there is an open-source Gemini CLI with a generous free tier that more directly competes with Claude Code. https://github.com/google-gemini/gemini-cli
It was pretty rough at launch but has gotten a lot better. So has Claude Code, though, so I've never really switched over.
I've been using AI coding agents since the very early days of Aider and I think this is not quite true. There's a place for async agents. There's a place for collaborative agents. Collaborative agents may even soon be delegating off to multiple async agents and picking best results. There's so much complexity here and we haven't even begun to explore a corner of the possible design space. We're still trying to plug AIs into human-shaped holes instead of building around their interesting/weird capabilities.
Would you be willing to point me to a primer of how I can get started with building agents?
This week I experimented with building a simple planner/reviewer “agentic” iterative pipeline to automate an analysis workflow.
It was effectively me dipping my toes into this field, and I am so floored to learn more. But I’m unsure of where to start, since everything seems so fast paced.
I’m also unsure of how to experiment, since APIs rack up fees pretty quickly. Maybe local models?
Personally I found the mini SWE-agent to be a very approachable introduction to building agents: https://github.com/SWE-agent/mini-swe-agent
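The core idea fits in a page: ask the model for one shell command per turn, run it, and feed the output back. A stripped-down sketch of that loop (not the project's actual code; the model name and prompts are placeholders):

```python
# Minimal SWE-agent-style loop: the model replies with one bash command
# per turn inside a ```bash block; we execute it and return the output.
import re
import subprocess
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint works
messages = [
    {"role": "system", "content": "Solve the task. Reply with exactly one "
     "bash command per turn in a ```bash block. Say DONE when finished."},
    {"role": "user", "content": "Fix the failing test in this repo."},
]

for _ in range(30):  # hard cap on turns
    reply = client.chat.completions.create(
        model="gpt-4.1-mini", messages=messages,  # placeholder model
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    if "DONE" in reply:
        break
    match = re.search(r"```bash\n(.*?)```", reply, re.DOTALL)
    if not match:
        messages.append({"role": "user", "content": "No command found."})
        continue
    result = subprocess.run(match.group(1), shell=True,
                            capture_output=True, text=True, timeout=120)
    # Feeding the observation back in is the entire "agent" trick.
    messages.append({"role": "user", "content": result.stdout + result.stderr})
```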
There are a number of free and cheap LLM options to experiment with. Google offers a decent free plan for Gemini (get some extra Google accounts). Groq has a free tier including some good open-weight models. There are also free endpoints on OpenRouter that are limited but might be useful for long-running background agents. DeepSeek v3.2, Qwen3, Kimi K2, and GLM 4.6 are all good choices for cheap and capable models.
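For example, OpenRouter's free endpoints work with the standard OpenAI client pointed at their base URL (the exact `:free` model id is an assumption - they rotate):

```python
# Calling a free OpenRouter model with the standard OpenAI client.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # your OpenRouter key
)
resp = client.chat.completions.create(
    model="deepseek/deepseek-chat-v3-0324:free",  # assumed model id
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```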
Local models are generally not a shortcut to cheap and effective AI. It's a fun thing to explore though.
You can directly use Claude Code via its scriptable interface (things like --verbose, --output-format json, --input-format json, --include-partial-messages) and keep using your existing Anthropic plan. Otherwise, yeah, you'll have to start using API tokens:
https://www.anthropic.com/engineering/building-agents-with-t...
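Something like this drives it headlessly from a script (a sketch; I'm assuming the JSON output puts the final message in a `result` field, so check the docs):

```python
# Minimal headless driver for Claude Code.
import json
import subprocess

proc = subprocess.run(
    ["claude", "-p", "Summarize the failing tests",
     "--output-format", "json"],
    capture_output=True, text=True, check=True,
)
reply = json.loads(proc.stdout)
print(reply["result"])  # final assistant message (assumed field name)
```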
I fail to see how comparing Jules to Claude Code is relevant. They’re completely different.
A good Jules comparison would be OpenAI Codex.
For a Claude Code Google equivalent there’s Gemini Code Assist CLI
Exactly. As the sibling comments point out, async and collaborative are different ways of working. Both have their place.
I am so sick of these anthropomorphized names that have nothing to do with anything that we’re all supposed to remember now. Why are we giving products first names? The worst offender is probably Amazon Rufus. It’s all so dumb and I hate it. At least attempt to be clever and name it something that relates to the product itself. Even Google Wave, despite its shortcomings, made sense as a product name.
URL with the anchor in case it moves: https://jules.google/docs/changelog/#introducing-the-jules-a...
Jules can add all it wants and I still won't use it, simply because it's a Google product, and Google hasn't known how to make products for the past 20 years.
Also, why the heck are Google's offerings so fragmented?! We have `gemini`, we have `jules`, and we have two different sets of Gemini APIs (one more limited than the other) - and no API is entirely OpenAI-compatible.
Come on Google...
I really hope Google discontinues this project soon (that’s kind of their specialty). I find it frustrating when chatbots/LLMs adopt real names as their brand identities.