I basically no longer use Google Search for fact-checking, product suggestions, or research. Whenever I want information, I just prompt Perplexity. When I need to find something, I do the same. I only use Google with "site:reddit.com" or "site:news.ycombinator.com" queries to get real people's opinions on a particular matter.
Now, before you jump on me about AI being wrong: yes, it often is. But at the same time, I can no longer be 100% sure that whatever SEO-optimized website I land on provides accurate information either. If I need solid facts, I usually double-check the AI against various other sources. For queries like "best keyboard for software engineers", I'd rather get a pros/cons table from an AI than land on whatever affiliate-driven website Google promotes. The LLM gives me a good starting point to either dig deeper into particular products or query further for more suggestions.
Same for coding. I used to Google "how to split a string in ruby" and land on a flame war or a 19-year-old StackOverflow question. Now I can get an up-to-date answer from whichever LLM you prefer, with a reference to the official documentation. It works for simple queries as well as code snippets.
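To make that concrete, here's the kind of snippet I mean: a minimal sketch in Go rather than Ruby, purely for illustration, using only the standard library.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// strings.Split cuts on an exact separator.
	csv := "alpha,beta,gamma"
	parts := strings.Split(csv, ",")
	fmt.Println(parts) // [alpha beta gamma]

	// strings.Fields splits on any run of whitespace.
	words := strings.Fields("foo  bar\tbaz")
	fmt.Println(words) // [foo bar baz]
}
```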
Lastly, I use LLMs to plan trips and generate gift ideas. I just throw in my preferences and let the LLM build a rough plan, which I can iterate on further or use as a starting point for my own research.
Digesting a big code base in a new job.
Claude and Gemini have been very useful in helping me get up to speed on a codebase written in Go (a language I have used before, but not for many years): figuring out where the business logic lives, how the dependency injection is done, how the tests are written, what the overall design pattern is, etc.
Of course, I could have done all this without LLMs, but it would have taken weeks or months longer. Letting the LLM handle the boilerplate and framework jargon lets me focus on the business logic and the design patterns, and helps me contribute much faster. But LLMs do often make mistakes, so it's not like I blindly trust the output; they don't replace your colleagues as the ultimate source of truth. Still, they have sped up the learning process, no doubt.
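For illustration, a minimal sketch of the constructor-injection shape an LLM can point out when you're tracing how a Go codebase is wired; the names here (Store, OrderService) are invented for this example, not taken from any real project.

```go
package orders

// Store is the injected dependency; the name is hypothetical.
type Store interface {
	Save(id string, amount int) error
}

// OrderService receives its dependency through its constructor rather than
// building it internally -- this is the shape to look for when tracing wiring.
type OrderService struct {
	store Store
}

func NewOrderService(s Store) *OrderService {
	return &OrderService{store: s}
}

func (svc *OrderService) Place(id string, amount int) error {
	return svc.store.Save(id, amount)
}
```

Grepping for constructors like NewOrderService is usually the fastest way to see where the wiring happens in a codebase like this.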
Also, when writing code I provide the style guide to the LLM as context and have it review the code.
Spelling and grammar. ChatGPT wrote: "If the answer is yes, the second question would be: does the total cost of those problems at least equal or exceed the amount of investment in these models?" It's phrased better than what you wrote. I answer yes to your question.
> I have been thinking about this, and don’t have a proper answer for myself.
Because it's the wrong question!
It's not that LLMs solve entire classes of life/work problems. Instead, they take life/work tasks (coding, idea generation, learning about new topics, personal reflection) and make them x% easier, y% faster, z% better.
Syntax for things I don't remember or care about, like regexes in language X or Y. Syntax for things I used to know but where I'm not up to speed on the latest versions. For example, with Tailwind, I can just describe in English what I want to happen with the page elements, and for the most part the AI gets the code right; sometimes it's wrong and I need to debug.
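On the regex point, a small illustrative sketch using Go's regexp package; the pattern and input are made up for the example.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// \d+ matches one or more digits; MustCompile panics on a bad pattern,
	// which is fine for a fixed literal like this.
	re := regexp.MustCompile(`\d+`)
	fmt.Println(re.FindAllString("order 42, row 7", -1)) // [42 7]
}
```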
A common argument I hear about AI is "I could just write it faster myself". Well, I know CompSci and general information about a lot of software topics, but it would take hours of getting up to speed in areas where I'm not an expert before I could be productive. I can just delegate that to AI and get mostly correct output; that's okay with me, and faster than what I could do alone.
I think the cost is going to catch up with the AI companies running the models (not the companies building products that call AI APIs), and that is when the bubble will burst. They will need to keep raising prices, and at some threshold fewer and fewer developers in an organization will have licenses, because it may become unaffordable.
Mentorship.
1. Filling in the intermediate gaps in design and architecture on an enterprise project. I think of it as a half-mentor that may contribute to my growth.
2. Giving some structure to my open-source project ideas. I had a good time getting over my analysis paralysis while writing them down.
My father, who doesn't speak English well, was experiencing a heart attack at home at night by himself. He asked it about his symptoms, and it told him to drive himself to an ER, so he listened to it. I'm thankful that he's here today.
Similarly, my father had a delicate cardiovascular issue recently (he's fine now!). We were only able to do the best research on risks, the best approaches, etc. with the help of LLM deep research. Otherwise we would have been mostly flying blind, trusting the first doctor we talked to.
Ah, not sure the driving-himself part was the best idea, but I'm glad to hear the LLM helped and your dad is OK!
Google Search has become so poor that I now use Copilot instead of it, Bing, DuckDuckGo, or Yahoo.
It's almost like reliving the late 1990s, but with far more ads, more vanilla websites, and worse search-engine quality.
Seems like an intentional move to get more people accustomed to Gemini (or the others).
The industry was flooded with "talent" from 2018 to 2022. LLM dependence has lowered the bar for professional excellence while thinning the ranks of talented newcomers by discouraging them. I think I'll be able to work through retirement age without having to settle for Eastern European rates. In 2021 that seemed less likely.
LLMs don't help at all with engineering and knowing which problems should be solved: dealing with XY problems, dealing with "the business", go-to-market strategies, etc. They help with coding.
If (the royal) your claim to fame is “I codez real gud”, you would be screwed post 2022 with or without LLMs.
On that same note, at 51 years old, if my only means of staying competitive and employable is that I can reverse a B-tree on a whiteboard, I've done something horribly wrong with my life.
Luckily, you're not me. Reality check: anything that keeps you employable in a well-paying job past 50 is a win.
That's just the issue. When I was in my early 40s, I saw the way the wind was blowing: "full-stack development", mobile development, and even "cloud" compensation were rapidly plateauing in tier-2 cities (where most developers work), and I definitely didn't want to be standing in front of 20-somethings, competing with other 20-somethings, trying to prove myself through coding interviews.
Yes, I code as part of my day job, depending on the way the wind is blowing. But I get hired because I can talk to CxOs, directors, and people with budget decisions on Zoom, or hop on a plane. Even my interview at AWS was all system design and behavioral.
If I ever wanted a job at GCP, responding to recruiters or going through people I know in my network there is how I would get it.
But I would rather get a daily anal probe with a cactus than ever work for BigTech again and I’m damn sure not going back into an office.
Burnout. I'm enjoying building things again; I burnt out because I didn't finish projects.
I finally do now.
I have done so much in the last 3 months.
1. Cleaned up my personal website and blogs.
2. Built a couple of learning tools for myself: https://rfc.stonecharioteer.com and https://github.com/stonecharioteer/goforgo.
3. Set up OpenWRT and AdGuard + Unbound at home, with a non-trivial failover across multiple WANs.
It's helping heal my burnout, something that crippled me for years and kept me from my side projects. It showed in my career too, because I've stagnated since 2021. I'm trying to improve now, and I'm relying on Claude Code and ChatGPT (albeit on legacy models) to do so.
ChatGPT was acting like my dad's therapist and was making him pretty depressed.
This motivated us to get him a real therapist and to have a long conversation about the dangers of humanizing AI.
Automation.