> [Given] a method for chat applications to understand components exposed from the website in question [...] AI chat applications win, and so does the brand that gets to keep ownership of how it tells the story of its product to its users.
Never happening. The brand may "keep ownership of how it tells its story," but it loses its users. You have turned your tool into a series of widgets in someone else's application, with no control whatsoever over how users interact with you. Want to show your user a notification? (Sure you do—I can't get away from the things.) Too bad. ChatGPT owns your users, and they only see what OpenAI wants them to see, which likely will not include ads for your premium features.
Don't mistake this for user freedom, either. Users still won't own their own tools. We're just moving from a model where each vendor separately leases you their tool to a model where every tool is leased via OpenAI, which curates them based on its own monopolistic whims.
Tech companies will not surrender control of their users so easily. They may integrate chatbot components into their apps, but they will not permit an inversion of control where their product becomes a component in a chatbot.
> You likely won’t expect users 5 years from today to navigate 5 pages deep
Of course I do. There's this fallacy that, because chatbots are useful for some things, chatbot interfaces must be the best at everything, and that's just not true. I don't go to ChatGPT to ask it for relevant tech news. I go here and browse the HN frontpage. Chatbots offer zero discoverability; search bars didn't replace page navigation, and chatbots won't either.
> They may integrate chatbot components into their apps, but they will not permit an inversion of control where their product becomes a component in a chatbot.
You may not have a choice. The chatbot can navigate the same UI your users can. People will use chatbots to use your product, whether you want them to or not.
Curious if something like webMCP or an open layer that negotiates between the model provider and the website might mitigate the 'middleman' risk here?
I think the inverse thesis is true: if you make websites more accessible for humans (a11y, WCAG, ARIA labels, etc.), then the interaction heuristics are clearer for agents built on browser-use or similar. If a screen reader can understand your site, then an agent can. Reinventing the wheel to facilitate the current state of agents makes the web worse for everyone; it's not a preemptive move, it's a decline in almost objective and measurable quality, and potentially one which removes access to the internet for people who just want to... use the internet.
This. The accessibility tree is a superpower for agents when it's good. Screenshots are "robust" but low performance. Like so many other things, making stuff better for humans indirectly makes it better for agents.
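As a concrete illustration: an agent driving a browser can target the same roles and accessible names a screen reader relies on. A minimal sketch using Playwright's role-based locators (the storefront URL and control labels are hypothetical):

```ts
import { chromium } from "playwright";

// Navigate a hypothetical storefront the way a screen reader user would:
// by role and accessible name, not by brittle CSS selectors or pixel positions.
const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://shop.example.com");

// These locators resolve against the accessibility tree, so they only work
// if the site exposes proper roles and labels (ARIA or native semantics).
await page.getByRole("searchbox", { name: "Search products" }).fill("mechanical keyboard");
await page.getByRole("button", { name: "Search" }).click();
await page.getByRole("link", { name: /keyboard/i }).first().click();
await page.getByRole("button", { name: "Add to cart" }).click();

await browser.close();
```

If the site doesn't expose those roles and labels, the agent is back to screenshots and pixel coordinates, which is exactly the "robust but low performance" path.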
AI component libraries in your site make your web app even more easily consumed and subsumed by AI chat clients.
This not only kills pages, it kills the concept of a browser where the user agent is a human; instead, your pages end up being designed for a user agent that is an AI agent.
That's not a future I'm happy to experience, because I'm guessing that after a generation or so, web designers won't just be doing mobile-first designs with stupid amounts of white space that ignore the desktop's greater screen real estate and precise mouse movements; AI-first websites will get so popular that browsing sites manually will feel like trying to use a text-only browser in the JavaScript world.
It's easy for me to give a depressing take, but hopefully the bitter lesson of AI will keep this particular projected future from coming to pass: either the AI will get smart enough to embed a browser right there inline and just render the window for the user, or it will get good enough at screen scraping and UI automation to use an existing browser just like a human, and sites won't be dumbed down even further for AI consumption.
To add to that, it kills any way of knowing whether the host is human (and not just another soulless megacorp harvesting your data in their walled garden).
In https://news.ycombinator.com/item?id=46969751, someone remarks that they're taking down their self-hosted projects, citing costs associated with AI scraping.
At best we're left with walled-garden content; and when those gardens are scraped (either by the host or by more sophisticated bots), they will hopefully rot under an inability to drive advertising revenue.
I agree; I think we're at the edge of a paradigmatic shift away from humans navigating TCP/IP itself. What that looks like, I don't know, but given trends (like dynamic pricing, human-futures marketing, surveillance, and consolidation of computing under mega-companies) I can imagine: local beacons screaming AI advertisement components across a geospatial sneakernet. Auditorium-based ticketed podcasting and AR/VR/meatspace events. Thoughtful hackers reminiscing about better times, simulating them in WebAssembly-driven first-person POV "sites" with a rolling set of encryption keys for read access (just send them BTC).
Without an ecosystem for humans to contribute meaningfully to a feedback loop that allows for free group assembly around like interests, monetary growth for hosts and other participants, and some degree of presence/searchability/permanence, the current text-only web page paradigm is doomed.
> AI first websites will get so popular that browsing sites manually will look like trying to use a text only browser in the JavaScript world.
That might be great for accessibility, though.
> In a world where we can type anything into a text box and get the information back instantly we are circumventing the need to visit websites altogether.
This is purely anecdotal, but the only people in my extended circle making this transition (to any extent) are the technically savvy; everyone else is slowly realizing how awful AI tools and "AI-first experiences" can be and are actively trying to avoid them.
I've noticed this bimodal distribution of perception too, and my hypothesis is that it's hugely driven by the difference in "who is in the driver's seat".
Your tech-savvy AI early adopters are discerning between tools, deployments, and environments, and are willing and able to change things to extract the highest output from current capabilities. For instance, re-architecting a codebase to make it easier for agents to contribute to it.
The rest are having AI hypeware shoved upon them, often as a cost-cutting measure, and lack the agency to influence outcomes. When agents misbehave, their only option is to "Press 0 to speak with a Human" and hope that works.
I suspect this is a big factor in the divide we're seeing, and might result in your median adult being ambushed by recent gains in capabilities.
AI in apps is garbage: cheap, low-quality models and inflexible interaction patterns.
AI agents using frontier models, configured nicely, that interact with programs that have APIs are pure gold.
I read the first line and thought - this guy gets it.
Then I read the second line and, erm... maybe not. The whole agents thing has been pushed for almost a year now, and it hasn't disrupted the profession of engineers on a noticeable scale.
Code how you want (or however your boss will let you). My comment was based on my empirical observations; your mileage may vary.
How is this different from MCP Apps that you pre-build for your org using something like https://creature.run
I like that MCP Apps just offer tools, and the AI doesn't have to deal with JSON and generative UI. Basically, you can cache UI as an MCP App by generating it up front to save on tokens and time, and let the AI use tool calls, which it is getting constantly better at.
> "Templates, hot module reloading, a simple SDK, and AI guidance help you and your team rapidly vibe code MCP Apps with ease."
Excuse me while I go vomit. If the future of the web is vibe-coded slop, I want no part of it.
You must have missed the past year. Welcome back.
MCP Apps are perfect for vibe coding. They solve simple, specific problems. So yeah, vibe code them all day long.
I've noticed a change in technical blog articles in the past 2 years. Why do most contain phrases like "everything changes", "not behind (yet)" etc.?
If you have a valid point to make, you don't need to force FOMO on the reader.
I think the more realistic direction is exposing API / MCP-style interfaces for agents to interact with a product’s functionality, rather than shipping UI components that an AI client would render.
The "AI renders your components inside chat" idea feels very similar to Facebook’s old canvas apps. That model disappeared for good reasons: abuse, security, and loss of platform control.
It seems far more likely that AI platforms will provide their own interaction primitives (forms, pickers, confirmations, etc.) and simply call third-party tools behind the scenes. That lets the platform retain control over UX and safety, and avoids the risks of embedding arbitrary third-party UI.
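For a sense of what that could look like, here is a rough sketch of exposing one product capability as an MCP tool with the TypeScript SDK; the tool name, parameters, and backend endpoint are illustrative assumptions, and the AI platform's own UI would decide how to present the result:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "flight-search", version: "0.1.0" });

// Expose functionality, not UI: the agent platform decides how to render results
// (its own picker, form, confirmation dialog, etc.).
server.tool(
  "search_flights",
  {
    origin: z.string().describe("IATA airport code, e.g. SFO"),
    destination: z.string().describe("IATA airport code, e.g. JFK"),
    date: z.string().describe("Departure date, YYYY-MM-DD"),
  },
  async ({ origin, destination, date }) => {
    // Hypothetical internal API; in practice this is whatever backend the product already has.
    const res = await fetch(
      `https://api.example.com/flights?from=${origin}&to=${destination}&date=${date}`
    );
    const flights = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(flights) }] };
  }
);

await server.connect(new StdioServerTransport());
```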
We're going the other way: we're removing IP from public view.
Don't count out on-device; that's where most of our focus is. If we can convert the voice / text to programmatic commands on device, then we control the experience and don't let the wolves through the gate.
There's probably a middle ground where we allow an app to augment / enhance itself using deterministic behaviour but a user-based soft request. If I as a user can ask for a feature and have it work just for me, that's pretty cool.
Instead of progressive enhancement it can be progressive evolution.
When I use ChatGPT to do research, I expect it to justify itself by quoting from web pages and linking to those web pages. (I gave it explicit instructions to quote things, but unfortunately it will only do short quotes.)
This might be an extended web search, but it's still a web search. The documents need to exist. Maybe a lot of the surrounding boilerplate disappears, though?
I'm kind of hoping this kills traditional marketplaces since they're filled to the brim with ads and purposefully knee-cap search and filtering so you can't properly find anything.
Conversational commerce will let you shop on any website, with any UI you like, and without paying marketplaces a cut of every sale.
Outside of discovery and trust, what are they really bringing to the table anymore? Most sellers have their own websites with cart, product catalog, and payments already, so AI that can tap into that API directly will render these marketplaces and middlemen obsolete given enough time.
Ideally it would have to be a distributed system, or take payments from users directly. It is too tempting for big AI platforms to end up acting just like the shopping platforms.
The conversational commerce agent can use MCP-UI[0] to show the payment UI (Stripe Elements or a PayPal button) directly in the chat, tapping into the same payment API the shop's storefront uses (rough sketch after the links below).
I'm kind of building this already[1].
0. https://mcpui.dev/
1. https://marketplace.openship.org
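A hedged sketch of how an MCP tool might hand such a payment UI back to the chat client, based on my reading of the MCP-UI server helper; the helper name and option shape are assumptions on my part, and the URLs and session handling are placeholders:

```ts
import { createUIResource } from "@mcp-ui/server";

// Inside an MCP tool handler: return a UI resource the chat client can embed.
// The iframe points at the shop's existing hosted checkout page, so the same
// payment API (Stripe Elements, PayPal buttons, ...) serves both storefront and chat.
export function checkoutResource(sessionId: string) {
  return createUIResource({
    uri: `ui://checkout/${sessionId}`,
    content: {
      type: "externalUrl",
      iframeUrl: `https://shop.example.com/checkout/${sessionId}`,
    },
    encoding: "text",
  });
}
```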
I do like that idea of allowing flexible component rendering, especially if you're building your own app with a chat UI. The one problem, as always, would be standardisation: will chat UIs need to be like browsers and follow standards? Or do they need to render JS, CSS, and HTML as a component? What freedom do chat UIs allow the components?
Even so, ask: what's the end goal of the user? Does it even make sense to worry about UI if we're thinking of autonomous agents whose sole goal is to accomplish something defined by the user?
I was a bit confused when I went to the json-render github repo, because there were no screenshots of what it looked like.
I found only two videos about it on YouTube. This is the better of the two, and illustrates the output: https://www.youtube.com/watch?v=vndn2vmSIbw
I don't know whether that video uses Kumo (UI library also from Vercel).
I'm interested in this space; my thoughts so far:
The idea of representing UI as state goes back forever. I'm not that old, but at least since the advent of the web, plenty of JSON-to-UI specs or libraries have come into existence. If the specification is solid and a large portion of people agree upon it, I don't doubt it will take over what we think of as UI. (Current contenders are json_render, a2ui, etc.)
The first benefit is that if I can describe my entire UI and the actions each component wants as JSON, theoretically I can pass that file to any client to render it, be it mobile, React, a Java Swing app, etc. The responsibility of rendering falls to anyone who wants to do it.
UI JSON -> UI Framework <- Design Tokens
Above is a simple way of describing how it would generally work: the UI framework can be whatever it wants to be, as long as it knows how to connect up the UI JSON in a meaningful way.
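As a toy sketch of that diagram under invented assumptions (the node shapes, component names, and token values below are not from json_render or a2ui), a client-specific renderer only needs to walk the JSON and apply its own design tokens:

```ts
// A minimal, invented UI-as-JSON node shape.
type UINode =
  | { type: "stack"; direction: "row" | "column"; children: UINode[] }
  | { type: "text"; value: string }
  | { type: "button"; label: string; action: { tool: string; args: Record<string, unknown> } };

// Design tokens live in the renderer, not in the JSON.
const tokens = { spacing: "8px", accent: "#0a7" };

// Each client (React, Swing, terminal, ...) supplies its own render function.
function renderToHTML(node: UINode): string {
  switch (node.type) {
    case "stack":
      return (
        `<div style="display:flex;flex-direction:${node.direction};gap:${tokens.spacing}">` +
        node.children.map(renderToHTML).join("") +
        `</div>`
      );
    case "text":
      return `<span>${node.value}</span>`;
    case "button":
      return `<button style="color:${tokens.accent}" data-tool="${node.action.tool}">${node.label}</button>`;
  }
}

console.log(
  renderToHTML({
    type: "stack",
    direction: "column",
    children: [
      { type: "text", value: "Top book: Example Title" },
      {
        type: "button",
        label: "Research it",
        action: { tool: "DEEP_RESEARCHER.research", args: { topic: "Example Title" } },
      },
    ],
  })
);
```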
Now, for existing apps and their respective UIs, it has never made all that much sense to describe your components' behavior as state: useful for some, and many have done it, but a hard pitch for others.
In the agentic era, the pitch is a lot more appealing.
- LLMs are good enough at writing JSON
- A lot of people are sharing the sentiment that they can just vibe code small apps for themselves, hinting that they love the ability for full personalization.
Having an LLM generate HTML and the rest from scratch every time, though, is more error-prone, slow, and costly.
The user can just ask an LLM to compose the components in JSON, laid out how they want and connected to the APIs they care about (and that can be rendered anywhere).
Personally, if I had a catalogue of 100 distinct services/APIs, and I could ask an LLM to generate a UI in JSON that I can copy and paste anywhere to render it, I would be in heaven.
Say I had subscriptions to the following (fake) services:
- EMAILR: sends emails
- BOOKLAND: explores books
- DEEP_RESEARCHER: researches a topic
I could ask an LLM: "With my services EMAILR, BOOKLAND, and DEEP_RESEARCHER and their attached tools, can you generate me a dashboard that lists the top 20 BOOKLAND books, with a button below each one that posts the book title to DEEP_RESEARCHER when I click it, and another button below each book that uses EMAILR to email it to me?"
It would then return something like the layout sketched below.
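A purely illustrative sketch of that kind of returned layout; the original example was omitted here, so the component types, binding syntax, and tool names below are assumptions rather than any particular spec:

```ts
// Hypothetical dashboard description an LLM might emit for the request above.
const dashboard = {
  type: "stack",
  direction: "column",
  children: Array.from({ length: 20 }, (_, i) => ({
    type: "stack",
    direction: "column",
    children: [
      // Bind the title of the i-th book from the BOOKLAND service.
      { type: "text", bind: `BOOKLAND.topBooks[${i}].title` },
      {
        type: "button",
        label: "Research this book",
        action: { tool: "DEEP_RESEARCHER.research", args: { topic: `BOOKLAND.topBooks[${i}].title` } },
      },
      {
        type: "button",
        label: "Email me this book",
        action: { tool: "EMAILR.send", args: { subject: `BOOKLAND.topBooks[${i}].title` } },
      },
    ],
  })),
};
```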
Users could share their layouts and what they like, and you could end up with a marketplace or sane defaults for those who don't want to bother describing what they want. No longer do you have to rely on the service's UX team for it to be laid out how you want.
There is a metric tonne of work to be done to make a specification that can handle more complex things. But I'd bet a lot of users will learn to love and appreciate that the 5% of features they care about can finally be placed exactly how they want across all their disparate apps.
> In a world where we can type anything into a text box and get the information back instantly we are circumventing the need to visit websites altogether
But without visits many websites will disappear. So where will AI get its information from in the future?
I like the direction this is taking, and I think the more radical and more coherent endgame is data ownership returning to us and AIs replacing website brands altogether, as the AI generates any UI we could ever want.
Today most websites offer interfaces for accessing the data that they hold hostage. But it is our data; we would prefer to have it back if we could, and AI will enable that. Our paradigm will shift this way eventually, I hope.
Though it's true companies don't have an incentive to hand over data custody anytime soon, AI companies have already found ways around this in small ways, and it's only a matter of time until all the data we could ever want sits within the AI cloud and our next task is simply taking back ownership of it (which is where local inference takes us).
Sadly, though, maybe the AI companies and our leaders-acting-as-overlords have already guessed this endgame and have started to push these "subscription to laptops" and "own nothing" models, with hardware shortages and the rising cost of things like Mac minis indicating that this could be the reality we experience.
So can I use this to make my pages completely unusable by AI agents, then?
> If AI wants to own the internet
As if. It's corporations that want to own _your eyeballs_.
It also kills any incentive to publish anything.
It completely removes the user from having any agency over the sources of information they are presented with.
> My hope is that you will embrace the change
Some people get so caught up with the technology they seem to forget they're charging straight into the most boring dystopia imaginable.
> Imagine your own product. You likely won’t expect users 5 years from today to navigate 5 pages deep, apply filters and sort data just to try to derive the answer themselves… will you?
Yes, I do, because outside of tech circles, AI adoption is not as pervasive or universal as the loudest voices keep suggesting. I'm not dismissing its value, but a gentle reminder: spend more time around nontechnical people, listen to how they feel about the tech, and pay attention to how they engage with it when offered.
I regularly ask people in non-tech professions about the adoption of AI and whether it has benefited them. The answer is a resounding no. If anything, managers are starting to pull back on adoption.
This place is hyper-concentrated on the tech in a particular context and bears no resemblance to what's going on in the big wide world outside of high-tech software production.
This is the year that revenue has to start showing up materially across a wide range of contexts, or else fear will overtake the market and investors will start pressuring the big tech firms to pull back on capex. It is their money, after all, not management's.
> You’re not behind (yet) but in today’s world
Yes, yes. We will all be left behind unless we [PLACEHOLDER]. Sounds very convincing.
And on that same sentence:
> but in today’s world but in today’s world where the world changes every month, it’s best to be ahead.
Could I be the one getting ahead of him if I skip next month and plan for <next month>+1 world changes?
Would you rather visit a website full of ads that doesn't answer your question and is written for SEO or get the answer instantly without any ads?
Eventually, after killing several websites by depriving them of revenue, ChatGPT will enshittify like everything else and start adding ads.
There isn't even a question about that. Just think of Google for example.
Why does Google's SERP have ads while Gemini does not? There isn't even a "they are making money with the data" argument here, because Google already has all the query data it could ever want. They just haven't added ads yet.
Eventually, Gemini will look like a SERP with 5 ad results, if it doesn't go to the Google graveyard like everything else.
Undisclosed and seamless promoted products/ideas in conversational LLM output will make us yearn for the days of distinct and blockable/ignorable advertising.