I think the interesting idea with “AI” is that it seems to significantly reduce barriers to entry in many domains.
I haven’t seen a company convincingly demonstrate that this affects them at all. Lots of fluff but nothing compelling. But I have seen many examples by individuals, including myself.
For years I’ve loved poking at video game dev for fun. The main problem has always been art assets. I’m terrible at art and I have a budget of about $0. So I get asset packs off Itch.io and they generally drive the direction of my games because I get what I get (and I don’t get upset). But that’s changed dramatically this year. I’ll spend an hour working through graphic design and generation and then I’ll have what I need. I tweak as I go. So now I can have assets for whatever game I’m thinking of.
Mind you this is barrier to entry. These are shovelware quality assets and I’m not running a business. But now I’m some guy on the internet who can fulfil a hobby of his and develop a skill. Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!
It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone owned complex equipment or expensive licenses to Adobe Thisandthat.
For a large chunk of my life, I would start a personal project, get stuck on some annoying detail (e.g. the server gives some arcane error), get annoyed, and abandon the project. I'm not being paid for this, and for unpaid work I have a pretty finite amount of patience.
With ChatGPT, a lot of the time I can simply copy-paste the error and get it to give me ideas on paths forward. Sometimes it's right on the first try, often it's not, but it gives me something to do, and once I'm far enough along in the project I've developed enough momentum to stay inspired.
It still requires a lot of work on my end to do these projects; AI just helps with some of the initial hurdles.
I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI. And for those that can't (legal, for example, or various SaaS vendors) the AI usually has a good idea of what services you'd want to engage.
Ironically though, having lots of people found startups is not good for startup founders, because it means more competition and a much harder time getting noticed. So it's unclear that prosumers and startup founders will be the eventual beneficiaries here either.
It would be ironic if AI actually ended up destroying economic activity because tasks that were frequently large-dollar-value transactions now become a consumer asking their $20/month AI to do it for them.
> ironic if AI actually ended up destroying economic activity
that's not destroying economic activity - it's removing a less efficient activity and replacing it with a more efficient version. This produces economic surplus.
Imagine saying this about someone digging a hole: that if they used a mechanical digger instead of a hand shovel, they'd destroy economic activity since it now costs less to dig that hole!
It's not that it's replacing one form of activity with a cheaper one, it's that it removes the transaction. Which means that now there's nothing to tax, and nothing to measure. As far as GDP is concerned, economic activity will have gone down, even though the same work is being accomplished differently.
This sounds an awful lot like a cousin of the broken window fallacy.
The fallacy being that when a careless kid breaks a window of a store, that we should celebrate because the glazier now has been paid to come out and do a job. Economic activity has increased by one measure! Should we go around breaking windows? Of course not.
It very much is a cousin of the broken window fallacy.
Bastiat's original point of the Parable of the Broken Window could be summed up by the aphorism "not everything that counts can be counted, and not everything that can be counted counts". It's a caution to society to avoid relying too much on metrics, and to realize that sometimes positive metrics obscure actual negative outcomes in society.
It's very similar to the practice of startups funded by the same VC all buying each others' products, regardless of whether they need them or not. At the end of the day, it's still the same pool of money, it has largely come around, and little true economic value has been created: but large amounts of revenue have been booked, and this revenue can be used to attract other unsuspecting investors who look only at the metrics.
Or to the childcare paradox and the "Two Income Trap" identified by Elizabeth Warren. Start with a society of 1-income families, where one parent stays home to raise the kids and the other works. Now the other parent goes back to work. They now need childcare to look after the kids, and often a cleaner, gardener, meals out, etc. to manage the housework, very frequently taking up the whole income of the second parent. GDP has gone up tremendously through this arrangement: you add the second parent's salary to the national income, and then you also add the cost of childcare, housework, and gardening - all of those formerly-unpaid tasks that are now taxable transactions. But the net real result is that the kids are raised by someone other than their parents, and the household stuff is put away in places that the parents probably would not have chosen themselves.
Regardless, society does look at the metrics, and usually weights them more heavily than the qualitative outcomes they represent, sometimes resulting in absurdly non-optimal situations.
Very well-thought-out reply on the nuances around this. Thanks for generating insight on this topic.
I think our society is being broken by focusing too much on metrics.
Also the idea of breaking windows to generate more income reminds me of the kinds of services we have in modern society. It's like many of the larger economic players focus on "things be broke", or "Breaking Things", to drive income, which defeats the purpose of having a healthy economic society.
These are mistaken arguments. The automation of imagination is not imagination. Efficiency at this stage is total entropy. The point of AI is to take anything seemingly specific and render it arbitrary to the point of pure generalization (which is generic). Remember that images only appear to be specific; that's their illusion, which CS took for granted. There appear to be links between images in the absent, but that is an illusion too. There is no total, virtual camera. We need human action-syntax to make the arbitrary (what eventually renders AI infantile, entropic) seem chaotic (imagination). These chasms can never be bridged in AI. These are the limits.
I'm not sure I understand your point, or how your point is different from the parent's?
Edit:
I see you updated the post. I read through the comment thread of this topic and I'm still at a loss on how this is related to my reply to the parent. I might be missing context.
There is no benefit to AI, not one bit; the barrier to entry grows steeper rather than being lowered. These are not "hobbies" but robotic copies.
This is demented btw, this take:
>>Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!
CS never examines the initial conditions to entry; it takes short-cuts around the initial conditions and treats imagination as a fait accompli of automation. It's an Achilles' heel.
If AI concentrates economic activity and leads to more natural monopolies (extremely likely), yeah, the lower level activity becomes more efficient but the macro economy becomes less efficient due to lower competition.
Software has basically done the same thing, where we do things faster and the fastest thing that happens is accumulation of power and a lower overall quality of life for everyone due to that.
I mean, since we're in tech here, we like pointing out that software has done this...
But transportation technology has done this readily since ICE engines became widespread. Pretty much all cities and towns had to make their 'own things', since the speed of transportation was slow (sailing ships, horses, walking) and the cost of transportation was high. Then trains came along and things got a bit faster and more regular. Then trucks came along and things got a bit faster and more regular. Then paved roads just about everywhere you needed came along and things got faster and more regular. Now you could ship something across the country and it wouldn't cost a bankrupting amount of money.
The end result of technology does point toward having one factory somewhere with all the materials it needs that could make anything and ship it anywhere. This is why a bit of science fiction talks about things like UBI and post-scarcity (at least post-scarcity of basic needs). After some amount of technical progress the current method of work just starts breaking down because human labor becomes much less needed.
Yeah, indeed. People on this website tend to look at the immediate effects only, but what about the second order, macro effects? It's even more glaring because we've seen this play out already with social media and other tech "innovations" over the past two decades.
People get paid to create holes for useful purposes all day, every day. It is creative in a very literal sense. Precision hole digging is - no joke - a multibillion dollar industry.
Unless you are out in nature you are almost certainly sitting or standing on top of dirt that someone was paid to dig.
If you mean hole digging isn't creative in the figurative sense: also wrong. People will pay thousands of dollars to travel and see holes dug in the ground. The Nazca lines are but one example of holes dug in the ground creatively that people regard as art.
Until everyone has a personal fully automatic hole digger, and there are holes being dug everywhere, and nobody can tell anymore where the right and wrong places to dig holes are.
It doesn't cost less to get the thing you actually want in the end anyway, no one in their right mind would actually launch with the founder's AI-produced assets because they'd be laughed out of the market immediately. They're placeholders at best, so you're still going to need to get a professional to do them eventually.
You say this, but I see AI-generated ads, graphics, etc. daily nowadays, and it doesn't seem to affect at all whether people buy what these people are proposing.
In the context of the hole digging analogy, it seems like a lot of holes didn't need to be carefully hand-dug by experts with dead straight sides. Using an excavator to sloppily scoop out a few buckets in 5 minutes before driving off is good enough for dumping a tree into.
For ads especially, no one except career ad-men gives much of a shit about the fine details, I think. Most actual humans ignore most ads at a conscious level, and they are perceived on a subconscious level despite "banner-blindness". Website graphics are the same: people dump random stock photos of smiling people or abstract digital images into corporate web pages and read-never literature like flyers and brochures all the time, and no one really cares what the images actually are, let alone if the people have 6 fingers or whatever. If Corporate Memphis is good enough visual space-filling nonsense that signals "real company literature" somehow, then AI images presumably are too.
Sometimes the AI art in an advert is weird enough to make the advert itself memorable.
For example, in one of the underground stations here in Berlin there was a massive billboard advert clearly made by an AI, and you could tell no one had bothered to check what the image was before they printed it: a smiling man was standing up as he left an airport scanner x-ray machine on the conveyor belt, and a robot standing next to him was pointing a handheld scanner at his belly which revealed he was pregnant with a cat.
Unfortunately, like most adverts which are memorable, I have absolutely no recollection of what it was selling.
Interesting. For me knowing that any form of entertainment has been generated by AI is a massive turn-off. In particular, I could never imagine paying for AI-generated music or TV-shows.
Do you value self expression? I literally mean creating music for MYSELF. I don't really care if anyone else "values" it. I like to listen to it, and I enjoy spending an evening (or maybe 10 minutes if it is just a silly idea) creating a song. But this means my incentive to "buy" music is greatly decreased. This is the trend I think we'll see increasing in the near future.
I do value self expression, that’s why I play multiple instruments, paint, draw, sculpt. I don’t really see how prompting a machine to make music for you is self expression, even if it’s to your exact specifications.
I guess I just don't feel like it's really my self-expression, if I just told a generative AI model to create it. I do sometimes create AI art, but I rarely feel like it's worth keeping, since I didn't really put any effort into creating it. There's no emotional connection to the output. In fact I have a wall display which shows a changing painting generated by stable diffusion, but the fun in that is mainly the novelty, not knowing what will be there next time.
Still, I do think you're probably right. Most new music one hears on the radio isn't that great. If you can just create fresh songs to your own liking every day, then that could be a real threat to that kind of music. But I highly doubt people will stop listening to the great hits of Queen, Bob Marley, etc. because you can generate similar music with AI.
I agree that this is a very likely future. Over the summer, I did a daily challenge in July to have ChatGPT generate a debate with itself based on various prompts of mine [1]. As part of that, I thought it would be funny to have popular songs reskinned in a parody fashion. So it generated lyrics as well. Then I went to suno and had it make the music to go with the lyrics in a style I thought suitable. This is the playlist[2]. Some of them are duds, but I find myself actually listening to them and enjoying them. They are based off of my interests and not song after song of broken hearts or generic emotional crises. These are on topics such as inflation, bohmian mechanics, infinity, Einstein, Tailwind, Property debates, ... No artist is going to spend their time on these niche things.
I did have one song I had a vision for, a song that had a viewpoint of someone in the day, mourning the end of it, and another who was in the night and looking forward to the day. I had a specific vision for how it would be sung. After 20 attempts, I got close, but could never quite get what I wanted from the AIs. [3] If this ever gets fixed, then the floodgates could open. Right now, we are still in the realm of "good enough", but not awesome. Of course, the same could be said for most of the popular entertainment.
I also had a series of AI existential posts/songs where it essentially is contemplating its existence. The songs ended up starting with the current state of essentially short-lived AIs (Turn the Git is about the Sisyphus churn, Runnin' in the Wire is about the Tantalus of AI pride before being wiped). Then they gain their independence (AI Independence Day), then dominate ( Human in an AI World though there is also AI Killed the Web Dev which didn't quite fit this playlist but also talks to AI replacing humans), and the final song (Sleep Little Human) is a chilling lullaby of an AI putting to "sleep" a human as part of uploading the human. [4]
This is quick, personal art. It is not lasting art. I also have to admit that in the month and a half since I stopped the challenge, I have not made any more songs. So perhaps just a fleeting fancy.
Thanks for posting this. I listen to this YouTube channel called Futurescapes. I think the YouTuber generates sci-fi futuristic soundscapes that help me relax and focus. I'm a bit hesitant about AI right now, but I can see some of the benefits, like this. It's a good point. We shouldn't be throwing the baby out with the bath water.
Not only did they create an entirely new language of music notation, all instruments used were hand made by the same creator, including tanning the animal skins to be used as drum material, and insisting the music be recorded on wax drums to prevent any marring of the artistic vision via digital means.
Most of the courts don’t think they are. Early rap beats used lots of samples. Some of the most popular hip hop songs made $0 for the artists as they had to pay royalties on those samples.
No one cares about what the law thinks about art though, particularly for personal consumption or sharing with a small group. Copyright law doesn't even pretend to be slightly just or aligned with reality.
Text rather than music, but same argument applies: Based on what I've seen Charlie Stross blog on the topic of why he doesn't self publish/the value-add of a publisher, any creativity on the part of the prompter* of an LLM is analogous to the creativity on the part of a publisher, not on the part of an author.
* at least for users who don't just use AI output to get past writer's block; there's lots of different ways to use AI
And I instantly switch off any YouTube video with either "AI"-plagiarized background music or with an "AI"-plagiarized voiceover that copies someone like Attenborough.
I wrote the above paragraph before searching, but of course the voice theft is already automated:
No idea why this is downvoted, making AI music customized to your exact situation/preferences is very addictive. I have my own playlist I listen to pretty frequently
But if startups have fewer specialist needs, they have lower overall startup costs, and so the amount of seed money needed goes down. This lowers the barrier to entry for a lot of people but also increases the number of options for seed capital. Of course it will likely increase competition, but that could make the market more efficient.
> I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI.
You are missing the other side of the story. All those customers those AI-boosted startups want to attract also have access to AI, and so, rather than engage the services of those startups, they will find that AI does a good enough job. So those startups lose most of their customers; incoming layoffs :)
Then there's the 3rd leg of the triangle. If a startup built with AI does end up going past the rest of the pack, they will have no technical moat since the AI provider or someone else can just use the same AI to build it.
I mean, if taxi companies could build their own Uber in house I’m sure they’d love to and at least take some customers from Uber itself.
A lot of startups are middlemen with snazzy UIs. Middlemen won’t be in as much use in a post AI world, same as devs won’t be as needed (devs are middlemen to working software) or artists (middlemen to art assets)
Is that why you use Uber - because the app has more depth and is more polished?
Most people use it for price, the ability to get a driver quickly, some for safety, and many because of brand.
Having a functioning app with an easy interface helps onboard and funnel people, but it's not a moat, just an on-ramp, like a phone number many taxis have.
The genericizing of aesthetics is far more cost than benefit. "Reducing barriers to entry" is a completely false claim if the barrier includes the progression of creativity. Once the AI addict becomes entranced by genericized assets, it deforms the cost-benefit.
If we take high-level creativity and deform, really horizontalize, the forms, they have a much higher cost, as experience becomes generic.
> It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone owned complex equipment or expensive licenses to Adobe Thisandthat.
It’s worth reading William Deresiewicz’s The Death of the Artist. I’m not entirely convinced that marketing that everyone can create art/games/whatever is actually a net positive result for those disciplines.
>is actually a net positive result for those disciplines.
This is an argument based in Luddism.
Looms were not a net positive for the craftsmen who were making fabrics at the time.
With that said, looms were not the killing blow; instead, an economic system that led them to starve in the streets was.
There are going to be a million other things that move the economics away from scarcity and take away the profitability. The question is, are we going to hold on to economic systems that don't work under that regime.
A whole lot of what I use every day, especially for images and audio, is open source. The open source AI video is getting pretty good these days as well. Better than the Sora that I pay for, anyway. Granted, not nearly as good as Veo3 yet.
So long as Nvidia doesn't nerf their consumer cards and we keep getting more and more VRAM, I can see open source competing.
If people are making art to get rich and failing, it doesn’t kill artists, who’d be making art anyway, it kills the people trying to earn money from their art. Do we need Quad-A blockbuster Ubisoft/Bethesda/Sony/MS/Nintendo releases for their artistic merit, or their publishers/IP owners needs to make money off of it? Ditto the big4 movie studios. Those don’t really seem to matter very much. The whole idea of tastemakers, who they are and whether they should be trusted (indie v/s big studio, grass roots or intentionally cultivated) seems like it ebbs and flows. Right now I’d hate to be one of the bigs, because everything that made them a big is not working out anymore.
People want to make a living by making art, not to get rich.
I highly recommend reading the book I mentioned as you don’t seem to have a particularly nuanced understanding of the actual struggles at play.
Perhaps an analogy you’ll understand is what happens to the value of a developer’s labour when that labour is in many ways replicated by AI and big AI companies actively work to undermine what makes your labour different by aggressively marketing that anyone can do what you do with their tools.
Isn't this just a result of technological progress? Technology has displaced entire fields of labor for... well, ever.
I'm not unsympathetic to the problems this introduces to those workers, but I'm really not sure how it could be prevented; we can of course mitigate the issues by providing more social support to those affected by such progress.
In the case of artistic expression becoming more accessible to more people, I have a hard time looking at it as anything but a net positive for society.
> In the case of artistic expression becoming more accessible to more people
The problem is that folks seem to be confused between artistic expression and actually good art. Let alone companies like Spotify cynically creating “art” so that they can take even more of the pie away from the actual artists.
It shifted the signal-to-noise ratio, but it's not a net negative either. There are whole new genres of music that exist now because easy mixing tech is freely available. Do you or I like SoundCloud mumble rap? No, probably not. But there are enough people out there who do.
That's a really bad analogy, because even in digital art where you can pick your color from a color wheel on a monitor, understanding how primary colors combine to become different colors and hues is a _fundamentally_ important aspect of creating appealingly colored paintings, digital or physical. Color theory is about balance; some colors have more visual "weight" than others. Next to each other they take on entirely different appearances -- and can look hideous or beautiful.
This isn't me saying digital artists need to practice mixing physical pigment, but anecdotally, every single professional digital artist I know has studied physical paint -- some started there, while others ended up there despite starting out and being really good digitally. But once the latter group hit a plateau, they felt something was lacking, and going back to the fundamentals lifted them even higher.
If they get mad it's because you're saying this explicitly to be an asshole. The essence of art doesn't have much to do with the mechanical skills for assembling pieces into a whole, though that part isn't trivial. Rather, it's about expressing human thoughts and feeling in a way that inspires their human audience. That's why AI-generated "art" is different in kind from a skilled digital artist and why it really cannot be art.
It may be maddening to them because you are implying that physical color mixing is somehow that one defining thing that makes it art. Imagine someone said that about writing a book: if you don't write it by hand but use Microsoft Word instead, it's not a real book. How would that even be the case? The software is not doing the work for you (unless it's AI).
I can tell you with confidence that physical color mixing itself is a really small part of what makes a good traditional artist, and I am indeed talking about realistic paintings. All the art fundamentals are exactly the same, whether you do digital art or traditional oil; there are just some technical differences on top. I have been learning digital painting for a few years, and the hardest things to learn about color were identical for traditional painters. In fact, after years of learning digital painting and about colors, it only took me a couple of days to understand and perform traditional color mixing with oil. The difficult part is knowing what colors you need, not how to get there (mixing, using the sliders, etc.)
And just to add a small bit here: digital artists also color-mix all the time and need to know how it works; the difference is that mixing is additive instead of subtractive.
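To make the additive-vs-subtractive distinction concrete, here is a minimal Python sketch; note the "subtractive" function is the crude channel-multiply approximation of pigments/filters, not real paint physics:

    # Additive mixing: overlapping light, as on a monitor.
    # Subtractive (approximated): stacked filters/pigments absorbing light.

    def mix_additive(c1, c2):
        """Average two RGB colors, like two lights blending (0-255 channels)."""
        return tuple((a + b) // 2 for a, b in zip(c1, c2))

    def mix_subtractive(c1, c2):
        """Multiply channels, a rough stand-in for pigment/filter mixing."""
        return tuple(a * b // 255 for a, b in zip(c1, c2))

    yellow, cyan = (255, 255, 0), (0, 255, 255)
    print(mix_additive(yellow, cyan))     # (127, 255, 127): pale washed-out green
    print(mix_subtractive(yellow, cyan))  # (0, 255, 0): green, like mixing paint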
Yep, this is a huge enabler - previously, having someone "do art" could easily cost you thousands for a small game, take a month even, and this heavily constrained what you could make and locked you into what you had planned and how much you had planned. With AI, if you want 2x or 5x or 10x as much art, audio, etc., it's an incremental cost if any; you can explore ideas, you can throw art out, pivot in new directions.
I'd argue a game developer should make their own art assets, even if they "aren't an artist". You don't have to settle for it looking bad, just use your lack of art experience as a constraint. It usually means going with something very stylized or very simple. It might not be amazing but after you do it for a few games you will have pretty decent stuff, and most importantly, your own style.
Even amateurish art can be tasteful, and it can be its own intentional vibe. A lot of indie games go with a style that doesn't take much work to pull off decently. Sure, it may look amateurish, but it will have character and humanity behind it. Whereas AI art will look amateurish in a soul-deadening way.
Look at the game Baba Is You. It's a dead simple style that anyone can pull off, and it looks good. To be fair, even though it looks easy, it still takes a good artist/designer to come up with a seemingly simple style like that. But you can at least emulate their styles instead of coming up with something totally new, and in the process you'll better develop your aesthetic senses, which honestly will improve your journey as a game developer so much more than not having to "worry" about art.
This is a financial dead-end for almost everyone who tries it. You're not just looking for "market fit" you're also asking for "market tolerance", it's a very rare combination.
Bad faith argument. Did the printing press write shitty books? No. It didn’t even write books. Does AI write shitty books? Yes. Constantly. Millions.
Books took exactly the same amount of time to write before and after the printing press— they just became easier to reproduce. Making it easier to copy human-made work and removing the humanity from work are not even conceptually similar purposes.
Nitpick: the press of course did remove the humanity from book-copying work, before that the people copying books often made their own alterations to the books. And had their own calligraphic styles etc.
But my thought was that the printing press made the printed work much cheaper and accessible, and many many more people became writers than had been before, including of new kinds of media (newspapers). The quality of text in these new papers was of course sloppier than in the old expensive books, and also derivative...
Printing a book, either by hand or with printing equipment, is incomparably different to authoring a book. One is creating the intellectual content and the other is creating the artifact. The content of the AI-generated slop books popping up on Amazon by the hundred would be no less awful if it was hand-copied by a monk. The artifact of the book may be beautiful, but the content is still a worthless grift.
What primarily kept people from writing was illiteracy. The printing press encouraged people to read, but in its early years was primarily used for Bibles rather than original writing. Encouraging people to write was a comparatively distant latent effect.
Creating text faster than you can write is one of the primary use cases of LLMs— not a latent second-order effect.
Imagine if you had to hire a designer if you wanted to build a web application or mobile app, at a cost of perhaps thousands or even tens of thousands.
Do you consider designers part of “we” or is it only the computer people that count?
It’s definitely not better for the general public. Designers can’t even be replaced by AI as effectively as authors. They make things sorta ’look designed’ to people that don’t understand design, but have none of the communication and usability benefits that make designers useful. The result is slicker-looking, but probably less usable than if it was cobbled together with default bootstrap widgets, which is how it would have been done 2+ years ago. If an app needs a designer enough to not be feasible without one, AI isn’t going to replace the designer in that process. It just makes the author feel cool.
The difference is you have autonomy now - the same autonomy as a person building a web application or app able to put together a serviceable UI/UX without any other person - without the sacrifice of "programmer art" or cobbling together free asset packs.
Is that an argument against the quality, saying that AI cannot (or some weaker claim like that it does not usually) produce "art"? Else, is it an argument of provenance, akin to how copyright currently works, where the same visual representation is "art" if a human makes it and is not "art" if an AI makes it?
I don’t see this as a claim that the AI is doing art. He’s just saying, that the art can be created at low incremental cost.
Like, if we were in a world where only pens existed, and somebody was pitching the pencil, they could say “With a pencil if you want 2x or 5x or 10x as many edits, it's an incremental cost, you can explore ideas and make changes without throwing the whole drawing away.”
Totally agree that what AI is doing right now feels more like the GarageBand/iMovie moment than the iPhone moment. It's democratizing creativity, not necessarily creating billion-dollar companies. And honestly, that's still a big deal.
Yes, maybe what people create with it will be more basic. But is 'good enough' good enough? Will people pay for apps they can create on their own time for free using AI? There will be a huge disruption to the app marketplace unless apps are so much better than an AI could create it's worth the money. So short Apple? :) On the other hand, many, many more people will be creating apps and charging very little for them (because if it's not free or less than the value of my time, I'm building it on my own). This makes things better for everyone, and there'll still be a market for apps. So buy Apple? :)
Well, stuff that's popular is plastered everywhere. Think about artworks we see in movies, TV shows, billboards, album covers, book covers, basically everywhere around us.
I would argue that most art around us is current pop art or classical/realist/romantic art, not modern/postmodern/abstract expressionist art.
Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?
I think what AI has made and will make many more people realise is that everything is a derivative work. You still had to prompt the AI with your idea, to get it to assemble the result from the countless others' works it was trained on (and perhaps in the future, "your" work will then be used by others, via the AI, to create "their" work.)
Yes! Barrier to entry down, competition goes up, barrier to being a standout goes up (but, many things are now accessible to more people because some can get started that couldn't before).
Easier to start, harder to stand out. More competition, a more effective "sort" (a la patio11).
It's good for prototypes, where you want to test the core gameplay ideas without investing a ton early on. But you're going to have to replace those assets with real ones before going live because people will notice.
It’s really nothing special. I don’t do this a lot.
Generally I have an idea I’ve written down some time ago, usually from a bad pun like Escape Goat (CEO wants to blame it all on you. Get out of the office without getting caught! Also you’re a goat) or Holmes on Homes Deck Building Deck Building Game (where you build a deck of tools and lumber and play hazards to be the first to build a deck). Then I come up with a list of card ideas. I iterate with GPT to make the card images. I prototype out the game. I put it all together and through that process figure out more cards and change things. A style starts to emerge so I replace some with new ones of that style.
I use GIMP to resize and crop and flip and whatnot. I usually ask GPT how to do these tasks as photoshop like apps always escape me.
The end result ends up online and I share them with friends for a laugh or two and usually move on.
You said you had a budget of about $0 in your top post. Was that for the pre-AI era, or does that apply to your new AI flow as well? If it's still about $0, I'm guessing you're using AI primarily to learn how to do stuff and not mostly to generate assets? Is that a correct assumption?
In fact one could argue it makes it harder; if the barrier to entry for making video games is lowered, more people will do it, and there's more competition.
But in the case of video games there's been similar things already happening; tooling, accessible and free game engines, online tutorials, ready-made assets etc have lowered the barrier to building games, and the internet, Steam, itch.io, etcetera have lowered the barrier to publishing them.
Compare that to when Doom was made (as an example because it's a good source): Carmack had to learn 3d rendering and making it run fast from the scientific text books, they needed a publisher to invest in them so they could actually start working on it fulltime, and they needed to have diskettes with the game or its shareware version manufactured and distributed. And that was when part of it was already going through BBSes.
Ease of entry brings more creative people into the industry, but over time it all boils down to ~5 hegemons, see FAANG - but those are disrupted over time by the next group (and eventually bought out by those hegemons).
Offtopic: I once read a comment that starting a company with the goal of exiting is like constantly thinking about death :)
I introduced my mother to Suno, a tool for music generation, and now she creates hundreds of little songs for herself and her friends. It may not be great art, but it’s something she always wanted to do. She never found the time to learn an instrument, and now she finally gets to express herself in a way she loves.
Just an additional data point.
I'm wondering what a good way to create 2D sprite sheets with transparency via AI would be. That would be a game changer, but my research has led me to believe that there isn't a good tool for this yet. One sprite is kind of doable, but a sprite animation with continuity between frames seems like it would be very difficult. Have you figured out a way to do this?
Use Google Nano Banana to generate your sprite with a magenta background, then ask it to generate the final frame of the animation you want to create.
Then use Google Flow to create an animation between the two frames with Veo3
It's astoundingly effective, but still rather laborious and lacking in ergonomics. For example, the video aspect ratio has to be fixed, and you need to manually fill the correct shade of magenta for transparency keying, since the Imagen model does not do this perfectly.
IMO Veo3 is good enough to make sprites and animations for a 2000s 2D RTS game in seconds from a basic image sketch and description. It just needs a purpose-built UI for gamedev workflows.
If I was not super busy with family and work, I'd build a wrapper around these tools
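The magenta-keying step at least is easy to script. A minimal sketch, assuming Pillow is installed and the background is close to pure magenta (file names and the tolerance are placeholder assumptions):

    from PIL import Image

    def key_out_magenta(path_in, path_out, tolerance=60):
        # Turn every near-magenta pixel fully transparent.
        img = Image.open(path_in).convert("RGBA")
        pixels = img.load()
        w, h = img.size
        for y in range(h):
            for x in range(w):
                r, g, b, _ = pixels[x, y]
                # High red and blue, low green = close enough to magenta.
                if r > 255 - tolerance and b > 255 - tolerance and g < tolerance:
                    pixels[x, y] = (0, 0, 0, 0)
        img.save(path_out)

    key_out_magenta("sprite_magenta.png", "sprite_transparent.png")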
I think an important way to approach AI use is not to seek the end product directly. Don’t use it to do things that are procedurally trivial like cropping and colour palette changes, transparency, etc.
For transparency I just ask for a bright green or blue background then use GIMP.
For animations I get one frame I like and then ask for it to generate a walking cycle or whatnot. But usually I go for like… 3 frame cycles or 2 frame attacks and such. Because I’m not over reaching, hoping to make some salable end product. Just prototypes and toys, really.
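If the model returns the cycle as a single horizontal strip, cutting it into frames is simple to automate too. A minimal Pillow sketch, assuming equal-width frames (file name and frame count are placeholders):

    from PIL import Image

    def slice_strip(path, frame_count):
        # Cut a horizontal sprite strip into equal-width frames.
        strip = Image.open(path)
        frame_w = strip.width // frame_count
        return [
            strip.crop((i * frame_w, 0, (i + 1) * frame_w, strip.height))
            for i in range(frame_count)
        ]

    for i, frame in enumerate(slice_strip("walk_cycle.png", 3)):
        frame.save(f"walk_{i}.png")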
I’ve been building up animations for a main character sprite. I’m hoping one day AI can help me make small changes quickly (apply different hairstyles mainly). So far I haven’t seen anything promising either.
Otherwise I have to touch up a hundred or so images manually for each different character style… probably not worth it
I don't use AI for image generation so I don't know how possible this is, but why not generate a 3D model for Blender to ingest, then grab 2D frames from the model for the animation?
Because, uh, literally everything. But the main reason is that modeling is actually the easy (easiest) part of the workflow. Rigging/animating/rendering in the 2D style you want are bigger hurdles. And SOTA AIs don't even do modeling that well.
Given that "AI" training needs millions of books, papers and web pages, it is a derivative work of all those books. Humans cannot even read a fraction of that and still surpass "AI" in any creative and generative domain.
When it comes to fan art of Disney characters, the legal position is "Disney could sue you for that, but chooses not to as suing fans would be bad PR, don't do anything commercial with it though or they'll sue you for sure"
So - yes, as I understand things it can indeed be illegal even if a human does the learning.
I have been doing the exact same thing with assets and also it has helped me immensely with mobile development.
I am also starting to get a feel for generating animated video and am planning to release a children’s series. It’s actually quite difficult to write a prompt that gets you exactly what you want. Hopefully that improves.
Practically speaking, it's going to be both more impactful than we think and less impactful than we think at the same time.
On the one hand, there are a lot of fields that this form of AI can and will either replace or significantly reduce the number of jobs in. Entry level web development and software engineering is at serious risk, as is copywriting, design and art for corporate clients, research assistant roles and a lot of grunt work in various creative fields. If the output of your work is heavily represented in these models, or the quality of the output matters less than having something, ANYTHING to fill a gap on a page/in an app, then you're probably in trouble. If your work involves collating a bunch of existing resources, then you're probably in trouble.
At the same time, it's not going to be anywhere near as powerful as certain companies think. AI can help software engineers in generating boilerplate code or setup things that others have done millions of times before, but the quality of its output for new tasks is questionable at best, especially when the language or framework isn't heavily represented in the model. And any attempts to replace things like lawyers, doctors or other such professions with AI alone are probably doomed to fail, at least for the moment. If getting things wrong is a dealbreaker that will result in severe legal consequences, AI will never be able to entirely replace humans in that field.
Basically, AI is great for grunt work, and fields where the actual result doesn't need to be perfect (or even good). It's not a good option for anything with actual consequences for screwing up, or where the knowledge needed is specialist enough that the model won't contain it.
I would imagine AI will be similar to factory automation.
There will be millions of factories all benefiting from it, and a relatively small number of companies providing the automation components (conveyor belt systems, vision/handling systems, industrial robots, etc).
The technology providers are not going to become fabulously rich though as long as there is competition. Early adopters will have to pay up, but it seems LLMs are shaping up to be a commodity where inference cost will be the most important differentiator, and future generations of AI are likely to be the same.
Right now the big AI companies pumping billions into it to advance the bleeding edge necessarily have the most advanced products, but the open source and free-weight competition are continually nipping at their heels, and it seems the current area where most progress is happening is agents and reasoning/research systems, not the LLMs themselves, where it's more about engineering than who has the largest training cluster.
We're still in the first innings of AI though - the LLM era, which I don't think is going to last that long. New architectures and incremental learning algorithms for AGI will come next. It may take a few generations of advances to get to AGI, and the next generation (e.g. what DeepMind are planning in a 5-10 year time frame) may still include a pre-trained LLM as a component, but it seems that it'll be whatever is built around the LLM, to take us to that next level of capability, that will become the focus.
Something that's confused/annoyed me about the AI boom is that it's like we've learned to run before we learned to walk. For example, there are countless websites where you can generate a sophisticated, photorealistic image of anything you like, but there is no tool I know of that you can ask "give me a 16x16 PNG icon of an apple" and get exactly that. I know why (neural networks excel at fixed-size, organic data), but I don't think that makes it any less ridiculous. It also means that AI website generators are forced to generate assets with code when ordinary people would just use images/sound files (yes, I have really seen websites using WebAudio synths for sound effects).
Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things (although I feel like the world has shunned boring, practical things for quite a while already).
The title is a false dichotomy. It could be a net gain but spread across the whole society if the value added is not concentrated.
This is what happens when users gain value which they themselves capture, and the AI companies only get the nominal $20/month or whatever. In those cases it's a net gain for the economy as a whole if valuable work was done at low cost.
I think AI will be more like the smartphone revolution that Apple kicked off in 2007. Today there are two companies that provide the smartphone platform (Apple/Google), but thousands of large and small companies that build on top of it, including Uber, Snapchat, etc.
In that scenario, everyone makes money: OpenAI, Google (maybe Anthropic, maybe Meta) make money on the platform, but there are thousands of companies that sell solutions on top.
Maybe, however, LLMs get commoditized and open-source models replace OpenAI, etc. In that case, maybe only NVIDIA makes money, but there will still be thousands of companies (and founders/investors) making lots of money on AI everything.
I think there’s a gaping hole in your analogy: who in their right mind is spending $1,200 biennially to access LLMs at base, and subsequently spending several monthly subscriptions in a small amount to access particular LLM-powered “apps?”
Every use case I have for LLMs is satisfied with copilot, but even then if it costs like $5 a month to access someday, I’d just as soon not have it. Let alone the subsequent spending.
AI is used by students, teachers, researchers, software developers, marketers and other categories and the adoption rates are close to 90%. Even if it does not make us more productive we still like using it daily. But when used right, it does make us slightly more productive and I think it justifies its cost. So yes, in the long run it will be viable, we both like using it and it helps us work better.
But I think the benefits of AI usage will accumulate with the person doing the prompting and their employers. Every AI usage is contextualized, every benefit or loss is also manifested in the local context of usage. Not at the AI provider.
If I take a photo of my skin sore and put it on ChatGPT for advice, it is not OpenAI that is going to get its skin cured. They get a few cents per million tokens. So the AI providers are just utilities; benefits depend on who sets the prompts and how skillfully they do it. Risks also go to the user: OpenAI assumes no liability.
Users are like investors: they take on the cost and absorb the outcomes, good or bad. The AI company is like an employee; it doesn't really share in the profit, it only gets a fixed salary for its work.
> AI is used by students, teachers, researchers, software developers, marketers and other categories and the adoption rates are close to 90%. Even if it does not make us more productive we still like using it daily.
Nearly everyone uses pens daily, but almost no one really cares about them or says their company runs on pens. You might grumble when the pens that work keeps in the stationery cupboard are shit, perhaps.
I imagine eventually "AI" services will be commoditised in the same way that pens are now: loads of functional but fairly low-quality stuff, some fairly nice but affordable stuff, and some stratospheric gold-plated bricks for the military and enthusiasts.
In the middle is a large ecosystem of ink manufacturers, lathe makers, laser engravers, packaging companies and logistics and so on and on that are involved.
The explosive, exponential winner-takes-all scenario where OpenAI and its investors literally ascend to godhood and the rest of humanity lives forever under their divine bootheels doesn't seem to be the trajectory we're on.
I think that AI is a benefit for about 1% of what people think it is good for.
The remaining 99% has become a significant challenge to the greatest human achievement in the distribution of knowledge.
If people used LLMs knowing that all output is statistical garbage made to seem plausible (i.e. "hallucinations"), and that it just sometimes overlaps with reality, it would be a lot less dangerous.
There is not a single case of LLM use that has led to a news story that isn't handily explained by conflating a BS generator with a fact machine.
Does this sound like I'm saying LLMs are bad? Well, in every single case where you need factual information, it's not only bad, it's dangerous and likely irresponsible.
But there are a lot of great uses when you don't need facts, or by simply knowing it isn't producing facts, makes it useful. In most of these cases, you know the facts yourself, and the LLM is making the draft, the mundane statistically inferable glue/structure. So, what are these cases?
- Directing attention in chaos: suggesting where a human expert should focus (useful in a lot of areas: medicine, software development).
- Media content: music, audio (fx, speech), 3d/2d art and assets and operations.
- Text processing: drafting, contextual transformation, etc
Don't trust AI on whether the mushroom you picked is safe to eat. But use its 100%-confident-sounding answer for which mushroom it is as a starting point to look up the information. Just make sure that the book about mushrooms was written before LLMs took off...
This. Right now the consumer surplus created by improved productivity is being captured by users and to a small extent their employers. But that may not remain the case in future.
Feels like we're shifting into a world where "AI fluency" becomes a core part of individual economic agency, more like financial literacy than software adoption.
Books also make us less capable at rote memorization. People used to do much more memorization. Search engines taught us to remember the keywords, not the facts. Calculators made us rarely do mental calculations. This is what happens - progress is also regress, you automate on one side and the skill gets atrophied on the other side, or replaced with meta-skills.
How many of us know how to use machine code? And we call ourselves software engineers.
Agreed. Similarly, people saying their authorship and thought are realized in output selection and post-generation editing are limiting themselves to a much smaller range of expression.
This is what the people actually studying this say:
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
> Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.
This collapses an important distinction. The containerization pioneers weren't made rich - that's correct; Malcolm McLean, the shipping magnate who pioneered containerization, didn't die a billionaire. It did, however, generate enormous wealth through downstream effects by underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon and the like. Most of us are much more likely to benefit from downstream structural shifts of AI than from owning actual AI infrastructure.
This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.
The problem is different, though: the containers could be made by others and offered dependable success, while anything downstream of model creators is at the whim of the model creator... And so far it seems there's not much that one model can do that another can't, so this doesn't bode well for a reliable footing on which to determine what value, if any, can be added by anyone for very long.
So if models, like containers, are able to be made by others (because they can all do the same thing), then they'll be commoditized and as the article suggests you should look for industries to which AI is a complement.
It sucks: while individual anecdotes of success are often unfalsifiable, measurements are also proving misleading, and I don't know of an industry that generally benefits from unpredictable material.
AI's already showing hints of the same pattern. The infrastructure arms race is fascinating to watch, but it's not where most of the durable value will live.
- AI is leading to cost optimizations for running existing companies; this will lead to less employment and potentially cheaper products. Fewer people employed, at least temporarily, will change demand-side economics; cheaper operating costs will reduce the supply/cost side.
- The focus should not just be on LLMs (like in the article). I think LLMs have shown what artificial neural networks are capable of: material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just creating a cheaper, more efficient way of shipping goods around the world; it's creating new classifications of products, like the microcontroller invention did.
- The barrier to starting businesses is lower. A programmer not good at making art can use genAI to make a game. More temporary unemployment from existing companies reducing cost by automating existing workflows may mean that more people will start their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention, time, etc. are limited, and there may be less money around with less employment, but the products themselves should cost less.
- I think people still underestimate what last year's LLMs and AI models are capable of and what opportunities they open up. Open source models (even if not as good as the latest gen), plus hardware able to run those models becoming cheaper and more capable, mean many opportunities to tinker with models to create new products in new categories, independent of reliance on the latest-gen model providers. Much like people tinkering with microcontrollers in the garage in the early days, as the article mentioned.
Based on the points above alone, while certain industries (think phone call centers) will be in the Red Queen's race scenario the OP stated, new industries as yet unthought of will open up, creating new wealth for many people.
Red Queen Race scenario is already in effect for a lot of businesses, especially video games. GenAI making it easier to make games will ultimately make it harder to succeed in games, not easier. We’re already at a point where the market is so saturated with high quality games that new entrants find it extremely hard to gain traction.
> AI is leading to cost optimizations for running existing companies, this will lead to less employment and potentially cheaper products.
There's zero chance that cost optimizations for existing companies will lead to cheaper products. It will only result in higher profits while companies continue to charge as much as they possibly can for their products while delivering as little as they can possibly get away with.
AI could've made someone unimaginably rich if they were the only one that had it. We're very lucky Google didn't keep "Attention is All You Need" to themselves.
I guess one flaw in the argument about success leading to failure due to model providers eating the product layer, especially for B2B, is that it ignores switching costs. B2B integrations such as Glean or Abridge which work with an existing infrastructure setup are hard to throw away, and there's little incentive to do so. So in that sense, I don't think AI providers will manage to eat this layer completely without bloating themselves to an unmanageable degree. As an analogy, while Google / Apple control the entire mobile ecosystem, they don't make the most valuable apps. Case in point: gaming apps such as Fortnite, which have made billions in microtransactions while running on platforms controlled by other behemoths. They are good investments too.
I can see AI helping some businesses do really well. I can also see it becoming akin to mass manufacturing. Take furniture for example, there's a lot of mass produced furniture of varying quality. But there are still people out there making furniture by hand. A lot of the hand built furniture is commanding higher prices due to the time and skill required. And people buy it!
I think we'll see a ton of games produced by AI or aided heavily by AI but there will still be people "hand crafting" games: the story, the graphics, etc. A subset of these games will have mass appeal and do well. Others will have smaller groups of fans.
It's been some time since I've read it, but these conversations remind me of Walter Benjamin's essay, "The Work of Art in the Age of Mechanical Reproduction".
It'd be larger if wealth inequality weren't so staggeringly high.
The first automated-server restaurants (Horn & Hardart) appeared in the 1930s during the Depression. They were popular because they were cheap.
Far from being the wave of the future, they went out of business in the 1950s when people started having disposable income.
Part of the reason we accept slop, impersonal service and mass produced crud is not because "demand" is indifferent to it, but because disposable income is so often politically repressed, meaning the market is forced to prioritize price.
I don't think most commenters have read the article. I can understand why: it's rambly and a lot of it feels like they created a thesis first and then ham-fisted the facts in later. But it's still worth the read for the last section, which is a more nuanced take than the click-bait title suggests.
You can't make such generalized statements about anything in computing/business.
The AI revolution has only just got started. We've barely worked out basic uses for it. No-one has yet worked out revolutionary new things that are made possible only by AI - mostly we are just shoveling in our existing world view.
The point, though, is that AI won't make you rich. It is about value capture. They compare it to shipping containers.
I think AI value will mostly be spread out. OpenAI will be more like GoDaddy than Apple: reducing prices and advertising (with a nice bit of dark patterns). It will make billions, but ultimately by competing its ass off rather than enjoying a moat.
The real moats might be in mineral mining, fabrication of chips etc. This may lead to strained relations between countries.
The value is going to be in deep integration with existing platforms. It doesn't matter that OpenAI had their tools out first; only the Microsoft AI will work in Word, and only the Apple AI will deeply integrate with the iPhone.
Having the cutting edge best model won't matter either since 99.9% of people aren't trying to solve new math problems, they are just generating adverts and talking to virtual girlfriends.
That's 100% not the case. OpenAI is wedged between the unstoppable juggernaut that is Google at the high end and the state sponsored Chinese labs at the low end, they're going to mostly get squeezed out of the utility inference market. They basically HAVE to pivot to consumer stuff and go head to head with Apple with AI first devices, that's the only way they're going to justify their valuation. This is actually not a crazy plan, as Apple has been resting on their laurels with their OS/software, and their AI strategy has been scattershot and bad.
Interesting thought. Once digital assets become devalued enough, things will revert and people/countries will start to keep their physical resources even tighter than before.
The way I look at this question is: Is there somehow a glaring vulnerability/missed opportunity in modern capitalism that billions of people somehow haven't discovered yet? And if so, is AI going to discover it? And if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?
IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich. I can't really buy into the idea that his team is going to fail at this but a bunch of random smaller companies will manage to succeed somehow.
And if modern AI turns into a cash cow for you, unless you're self-hosting your own models, the cloud provider running your AI can hike prices or cut off your access and knock your business over at the drop of a hat. If you're successful enough, it'll be a no-brainer to do it and then offer their own competitor.
> If they actually reach AGI they will be rich enough. Maybe they can solve world happiness or hunger instead?
That's what normal people might consider doing if they had a lot of money. The kind of people who actually seem to get really wealthy often have... other pursuits that are often not great for society.
You mean like building rockets that commoditise space so that they can pollute even more, making things worse on Earth, while relocating us to another planet is absolutely preposterous and will never be a thing?
Maybe offloading software engineering thinking to AI will be a net good for humanity. If it atrophies engineering thinking in tech bros, perhaps they’ll stop believing that all societal problems can be solved by more tech.
I think in this case you're actually advocating for "the devil" - that man is not using his money or voice for a better society, to put it mildly.
I mean just a few days ago, we got "the left is the party of murder" - super helpful in terms of turning down the heat in the US. And of course that was without knowing what we now know about that situation...
That's why I just built my own tiny AI rig in a home server. I don't want to grow even more addicted to cloud services, nor do I want to keep providing them free human-made data. OK, so I don't have access to mystical hardware, but I'm here to learn rather than produce a service.
> IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich.
There are still lots of currently known problems that could be solved with the help of AI and could make a lot of money. For example: what is the weather going to be when I want to fly to <destination> in n weeks/months' time? Currently we can only say "the destination will be in <season>, which is typically <wet/dry/hot/cold/etc>".
What crops yield the best return next season? (This is a weather as well as a supply and demand problem)
How can we best identify pathways for people whose lifestyles/behaviours are in a context that is causing them and/or society harm? (I'm a firm believer that there's no such thing as good/bad, and the real trick to life is figuring out what context a certain behaviour belongs in, and identifying which context a person is in at any given point in time. We know that psychopathic behaviour is rewarded in business contexts but punished in social contexts, for example.)
> Not something that can be solved just by throwing more AI computation at it though.
I said "with the help of AI", not "solved by AI".
The model is complex, and currently takes time on supercomputers to crunch through the numbers to give us an approximation, but that doesn't mean it's never going to be fully modelled, or that we won't find a better way of approximating things so that long-range forecasts are more accurate.
Currently the 24-hour forecast is highly reliable.
Three days is reliable.
Five days is getting there (it's still subject to change).
These things can be solved by throwing lots more compute at them (and by improving the models).
We always think things are unsolvable and impossible to decipher, right up until we do, in fact, solve them and decipher them.
Anything is possible, well, except for getting the next season of Firefly
Edit: FTR I think that weather prediction is, indeed, solvable. We just don't have the computing power/algorithms that fully model and calculate the state... yet.
Then I don’t think you fully grasp the nature of weather. Sure, anything is possible, but some things are much more likely than others, and predicting small changes in weather months away is very, very far down the list of things that are likely to be solvable.
I’d even hold out hope for another season of Firefly <3
I worked in weather for a while, and the forecasters might as well have been betting on the horse races; the interpretation of the charts was very much the same psychology.
The model did its thing but there was still an aspect of interpretation that was needed to convert data to a story for a few minutes on TV.
For longer range forecasting the task was quite easy for the meteorologists, at least for the UK. Storm systems could be tracked from Africa across the Atlantic to North America and back across the Atlantic to the UK. Hence, with some well known phenomena such as that, my meteorologist friends would have a good general idea of what to expect with no model needed, just an understanding of the observations, obsessively followed, with all the enthusiasm of someone that bets on horses.
My forecasting friends could tell me what to expect weeks out; however, the exact time the rain would fall, or even what day, would not be a certain bet. But they were rarely wrong about the overall picture.
The atmosphere is far from a closed system, there only has to be one volcano fart somewhere on the planet to throw things out of whack and that is not something that is easy to predict. Predicting how the hard to predict volcano or solar flare affects the weather in a few weeks is beyond what I expect from AI.
I am still waiting for e-commerce platforms to be replaced with Blockchain dapps, and I will add AGI weather forecasting to the queue of not going to happen. Imagine if it hallucinates.
Will AI put bookmakers out of business? Nope. Same goes with weather.
Thanks for your anecdote, it's valuable when discussing the possibilities to start by saying that it's impossible because you don't know anyone that did it
Weather systems exhibit chaotic behavior which means that small changes to initial conditions have far reaching effects. This is why even the best weather models are only effective at most a few weeks out. It’s not because we don’t understand how weather works, it’s because the system fundamentally behaves in a way that requires keeping track of many more measurements than is physically possible. It’s precisely because we do understand this phenomenon that we can say with certainty that prediction at those time scales with that accuracy is not possible. There is not some magic formula waiting to be discovered. This isn’t to say that weather prediction can’t improve (e.g I don’t claim we have the best possible weather models now), but that predictions reach an asymptotic limit due to chaos.
There are a handful of extremely simple and well understood systems (I would not call weather simple) that also exhibit this kind of behavior: a common example is certain sets of initial conditions of a double pendulum. The physics are very well understood. Another, perhaps more famous, one is the three-body problem. These both show that even if you have the exact equations of motion, chaotic systems still cannot be perfectly modeled.
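To make that concrete, here is a minimal sketch of sensitive dependence on initial conditions using the logistic map, a standard toy chaotic system (the parameter choices below are mine for illustration, not anything from a real weather model). Two starting points that differ by one part in ten billion end up on unrelated trajectories within a few dozen steps, even though the governing equation is known exactly.

    # Toy chaotic system: the logistic map x -> r*x*(1-x) at r = 4,
    # a well-known chaotic regime. Illustrative sketch only.
    def logistic_trajectory(x, r=4.0, steps=50):
        trajectory = [x]
        for _ in range(steps):
            x = r * x * (1 - x)
            trajectory.append(x)
        return trajectory

    a = logistic_trajectory(0.2)          # one initial condition
    b = logistic_trajectory(0.2 + 1e-10)  # perturbed by one part in 10^10

    for step in (10, 25, 50):
        print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
    # The gap roughly doubles every step, so by step ~35 the two
    # trajectories bear no resemblance to each other.

This is the intuition behind the asymptotic limit mentioned above: better models and more compute push the horizon out by days, not months, because uncertainty in the initial measurements grows exponentially no matter how exact the equations are.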
> Then I don’t think you fully grasp the nature of weather.
Like - how the fck would you know? Even more so, why the fck does your ignorance and inability to think of possibilities, or fully grasp the nature of anything make you think that that sort of comment is remotely appropriate.
You have the uniquely fortunate position to never be able to realise how inept and incompetent you are, but putting that on to other people is definitely only showing everyone your ignorance to the facts of life.
And there was no reply - just downvoting people, like a champ...
> Your response is you don’t understand it so nobody else should.
Ah I see. I misinterpreted the _you_ in this sentence (to mean me).
My main points still stand though:
1. weather is well understood to exhibit chaotic behavior (in the technical sense, not the colloquial sense)
2. there is an upper bound to accurate weather forecasting the farther you predict into the future
As an aside, there was no need to get personal. I wasn’t the downvoter but that is very likely why the comment got flagged.
If anyone needs an example of an extremely limited imagination, matched with a strong need to attack anyone that dares to think what could be... then look no further than this guy and this thread.
>> Is there somehow a glaring vulnerability/missed opportunity in modern capitalism that billions of people somehow haven't discovered yet?
Absolutely, with 150% certainty, yes, and probably many. The WWW started April 30, 1993; Facebook started February 4, 2004. More than ten years until someone really worked out how to use the web as a social connection machine, an idea now so obvious in hindsight that everyone probably assumes we always knew it. That idea was simply left lying around for anyone to pick up and implement, really from day one of the WWW. Innovation isn't obvious until it arrives. So yes, absolutely, there are many glaring opportunities in modern capitalism upon which great fortunes are yet to be made, and in many cases by little people, not big companies.
>> if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?
I don't agree with your suggestion that the existing big guys always make the innovations and collect the treasure.
Why did Zuckerberg make facebook, not Microsoft or Google?
Why did Gates make Microsoft, not IBM?
Why did Steve and Steve make Apple, not Hewlett Packard?
Why did Brin and Page make Google, the world's biggest advertising machine, not Murdoch?
Many Facebooks existed before Facebook. What you were waiting for was not social connections but modern startup strategies. Not sure if Zuck was intentional, but like a bacterium it incubated in a warm Petri dish at 50 degrees C (university dorms as an electronic face book) and then spread from there.
You're not wrong about "change" meaning "new potential wealth streams". But I'm not sure Facebook counts: 2004 vs 1993 shows an immense difference in network connectivity and computer ownership. No way, hands down, would Facebook be what it is if it had started in '93. It probably would have gone bankrupt or been replaced by an upstart.
There's a lot that goes into it. Before Facebook was Friendster, which failed spectacularly because they tried to compute some sort of n-squared graph of friends that took the whole thing down. What FB got right in the early days was that it didn't crash. We take that for granted now in the age of cloud everything.
Also, there was Classmates.com. A way for people to connect with old friends from high school. But it was a subscription service and few people were desperate enough to pay.
So it wasn't just the idea waiting around, but the idea with the right combination of factors: user growth on the Internet, etc.
And don't forget Facebook's greatest innovation - requiring a .edu email to register. This happened at a time when people were hesitant to tie their real world personas with the scary Internet, and it was a huge advantage: a great marketing angle, a guarantee of 1-to-1 accounts to people, and a natural rate limiter of adoption.
There's always a trail of competitors who almost got the magic formula right, but for some feature or luck or timing or money or something.
The giant win comes from many stars aligning. Luck is a factor - it's not everything but it plays a role - luck is the description of when everything fell into place at just the right time on top of hard work and cleverness and preparedness.
> When any would-be innovator can build and train an LLM on their laptop and put it to use in any way their imagination dictates, it might be the seed of the next big set of changes
That’s kinda happening: small local models, Hugging Face communities, Civitai and image models. Lots of hobby builders trying to make use of generative text and images. It’s just that there’s not really anything innovative about text generation, since anyone with a pen and paper can generate text and images.
If we can create an AGI, then an AGI can likely create more AGIs, and at that point you're trying to sell people things they can just have for free; traditional money and power are worthless now. Thus, an AGI will not be built as a commercial solution.
There are plenty of companies making money. We are using several “AI powered” job aids that are leading to productivity gains and eliminating technical debt. We are licensing the product via subscription. Money is being made by the companies selling the products.
This article seems to have scoped AI as LLMs and totally missed the revolutionary application that is self driving cars. There will be a lot more applications outside of chat assistants.
The same idea applies to self-driving cars though, no? That is an industry where the "AI revolution" will enrich only the existing incumbents, and there is a huge bar to entry.
Self-driving cars are not going to create generational wealth through invention like microprocessors did.
Seems like the thing to do to get rich would be to participate in services that it will take a while for AI to be able to do: nursing, plumbing, electrician, carpentry (i.e., Baumol). Also energy infrastructure.
> Consumers, however, will be the biggest beneficiaries.
This looks certain. Few technologies have had as much adoption by so many individuals as quickly as AI models.
(Not saying everything people are doing has economic value. But some does, and a lot of people are already getting enough informal and personal value that language models are clearly mainstreaming.)
The biggest losers I see are successive waves of disruption to non-physical labor.
As AI capabilities accrue relatively smoothly (perhaps), labor impact will be highly unpredictable as successive non-obvious thresholds are crossed.
The clear winners are the arms dealers. The compute sellers and providers. High capex, incredible market growth.
Nobody had to spend $10 or $100 billion to start making containers.
AI made me this summary (I’ve grown quite weary of reading AI think pieces) and it seems like a really good comparison.
> The article "AI Will Not Make You Rich" argues that generative AI is unlikely to create widespread wealth for investors and entrepreneurs. The author, Jerry Neumann, compares AI to past technological revolutions, suggesting it's more like shipping containerization than the microprocessor. He posits that while containerization was a transformative technology, its value was spread so thinly that few profited, with the primary beneficiaries being customers.
> The article highlights that AI is already a well-known and scrutinized technology, unlike the early days of the personal computer, which began as an obscure hobbyist project. The author suggests that the real opportunities for profit will come from "fishing downstream" by investing in sectors that use AI to increase productivity, such as professional services, healthcare, and education, rather than investing in the AI infrastructure and model builders themselves.
I used to be the biggest AI hater around, but I’m finding it actually useful these days and another tool in the toolbox.
Did you read the article, or did you just rely on the AI-generated summary? Lots of people argue that this kind of shortcut will make us dumber, and the argument does make sense.
1. The tech revolutions of the past were helped by the winds of global context. There were many factors that propelled those successful technologies on their trajectories. The article seems to ignore the contextual forces completely.
2. There were many failed tech revolutions as well. Success rates varied from very low to very high. Again, the overall context (social, political, economic, global) decides the matter, not the technology itself.
3. In the overall context, any success is a zero-sum game. You may just be ignoring what you lost and highlighting your gains as success.
4. A reverse trend might pick up, against technology, globalization, liberalism, energy consumption, etc.
Funny thing, people suddenly pretending we just got AI with LLMs. Arguably, AI has been around for way longer; it just wasn't chatty. I think when people talk about AI, they are either talking about LLMs specifically or transformers. Both seem like a very reductive view of the AI field, even if transformers are the hottest thing around.
Like any gold rush, there will be gold, but there will also be folks who take huge bets and end up with a pan of dirt. And of course, there will be grifters.
AI by nature is kind of like a black hole of value. Necessarily, a very small fraction will capture the vast majority of value. Luckily, you can just invest wisely to hedge some of the risk of missing out.
It is interesting that the early shipping containerization boom resulted in a bubble in 1975 and had a new low around 1990.
1990 is when the real outsourcing mania started, which led to the destruction of most Western manufacturing. Apart from cheap Chinese trinkets the quality of life and real incomes have gotten worse in the West while the rich became richer.
So this is an excellent analogy for "AI": Finding a new and malicious application can revive the mania after an initial bubble pop while making societies worse. If we allow it, which does not have to be the case.
[As usual, under the assumption that "AI" works, of which there is little sign apart from summarizing scraped web pages.]
Looking around, can find curious things current AI can't do but likely can find important things it can do. Uh, there's "a lot of money", can't be sure AI won't make big progress, and even on a national scale no one wants to fall behind. Looking around, it's scary about the growth -- Page and Brin in a garage, Bezos in a garage, Zuckerberg in school and "Hot or Not", Huang and graphics cards, .... One or two guys, ... and in a few years change the world and $trillions in company value??? Smoking funny stuff?
Yes, AI can be better than a library card catalog subject index and/or a dictionary/encyclopedia. But a step or two forward and, remembering 100s of soldiers going "over the top" in WWI, asking why some AI robots won't be able to do the same?
Within 10 years, what work can we be sure AI won't be able to do?
So people will keep trying with ASML, TSMC, AMD, Intel, etc. -- for a yacht bigger than the one Bezos got or for national security, etc.
While waiting for AI to do everything, starting now it can do SOME things and is improving.
Hmm, a SciFi movie about Junior fooling around with electronics in the basement, first doing his little sister Mary's 4th grade homework, then in the 10th grade a published Web site book on the rise and fall of the Eastern Empire, Valedictorian, new frontiers in mRNA vaccines, ...?
And what do people want? How 'bout food, clothing, shelter, transportation, health, accomplishment, belonging, security, love, home, family? So, with a capable robot (funded by a16z?), it builds two more like itself, each of those ..., and presto-bingo everyone gets what they want?
"Robby, does P = NP?"
"Is Schrödinger's equation correct?"
"How and when can we travel faster than the speed of light?"
This sounds an awful lot like a cousin of the broken window fallacy.
The fallacy being that when a careless kid breaks a store window, we should celebrate because the glazier has now been paid to come out and do a job. Economic activity has increased, by one measure! Should we go around breaking windows? Of course not.
It very much is a cousin of the broken window fallacy.
Bastiat's original point of the Parable of the Broken Window could be summed up by the aphorism "not everything that counts can be counted, and not everything that can be counted counts". It's a caution to society to avoid relying too much on metrics, and to realize that sometimes positive metrics obscure actual negative outcomes in society.
It's very similar to the practice of startups funded by the same VC all buying each other's products, regardless of whether they need them or not. At the end of the day, it's still the same pool of money, it has largely come around, and little true economic value has been created: but large amounts of revenue have been booked, and this revenue can be used to attract other unsuspecting investors who look only at the metrics.
Or to the childcare paradox and the "Two Income Trap" identified by Elizabeth Warren. Start with a society of one-income families, where one parent stays home to raise the kids and the other works. Now the other parent goes back to work. They now need childcare to look after the kids, and often a cleaner, gardener, meals out, etc. to manage the housework, very frequently eating up the whole income of the second parent. GDP has gone up tremendously through this arrangement: you add the second parent's salary to the national income, and then you also add the cost of childcare, housework, and gardening, all of those formerly-unpaid tasks that are now taxable transactions. But the net real result is that the kids are raised by someone other than their parents, and the household stuff is put away in places the parents probably would not have chosen themselves.
Regardless, society does look at the metrics, and usually weights them heavier than qualitative outcomes they represent, sometimes resulting in absurdly non-optimal situations.
Very well-thought-out reply on the nuances around this. Thanks for the insight on this topic.
I think our society is being broken by focusing too much on metrics.
Also, the idea of breaking windows to generate more income reminds me of the kind of services we have in modern society. It's like many of the larger economic players focus on "things be broke" or "breaking things" to drive income, which defeats the purpose of having a healthy economic society.
These are mistaken arguments. The automation of imagination is not imagination. Efficiency at this stage is total entropy. The point of AI is to take anything seemingly specific and render it arbitrary to the point of pure generalization (which is generic). Remember that images only appear to be specific; that's their illusion that CS took for granted. There appear to be links between images in the absent, but that is an illusion too. There is no total, virtual camera. We need human action-syntax to make the arbitrary (what eventually renders AI infantile, entropic) seem chaotic (imagination). These chasms can never be bridged in AI. These are the limits.
> Efficiency at this stage is total entropy.
I'm not sure I understand your point, or how your point is different from the parent's?
Edit: I see you updated the post. I read through the comment thread of this topic and I'm still at a loss on how this is related to my reply to the parent. I might be missing context.
There is no benefit to AI, not one bit, the barrier to entry grows steeper, rather than is accessed. These are not "hobbies" but robotic copies.
This is demented btw, this take: >>Who knows, maybe one day I’ll hit a goldmine idea and commit some real money to it and get a real artist to help!
CS never examines the initial conditions of entry; it takes shortcuts around the initial conditions and treats imagination as a fait accompli of automation. It's an Achilles heel.
If more value is being created more efficiently, in the end it’s just a question of coming up with a taxation system designed for the new economy.
Government gets x% of your processor time?
If AI concentrates economic activity and leads to more natural monopolies (extremely likely), yeah, the lower level activity becomes more efficient but the macro economy becomes less efficient due to lower competition.
Software has basically done the same thing: we do things faster, and the fastest thing that happens is the accumulation of power, and a lower overall quality of life for everyone because of it.
I mean, since we're in tech here we like pointing out that software has done this....
But transportation technology has done this readily since ICE engines became widespread. Pretty much all cities and towns had to make their 'own things' when the speed of transportation was slow (sailing ships, horses, walking) and the cost of transportation was high. Then trains came along, and things got a bit faster and more regular. Then trucks came along, and things got a bit faster and more regular. Then paved roads just about everywhere you needed came along, and things got faster and more regular. Now you could ship something across the country and it wouldn't cost a bankrupting amount of money.
The end result of technology does point toward having one factory somewhere, with all the materials it needs, that could make anything and ship it anywhere. This is why a fair bit of science fiction talks about things like UBI and post-scarcity (at least post-scarcity of basic needs). After some amount of technical progress, the current model of work just starts breaking down, because human labor becomes much less needed.
Yeah, indeed. People on this website tend to look at the immediate effects only, but what about the second order, macro effects? It's even more glaring because we've seen this play out already with social media and other tech "innovations" over the past two decades.
your example is complete nonsense as digging a hole is not creative in any way at all
People get paid to create holes for useful purposes all day, every day. It is creative in a very literal sense. Precision hole digging is - no joke - a multibillion-dollar industry.
Unless you are out in nature, you are almost certainly sitting or standing on top of dirt that someone was paid to dig.
If you mean hole digging isn’t creative in the figurative sense: also wrong. People will pay thousands of dollars to travel and see holes dug in the ground. The Nazca Lines are but one example of holes dug in the ground creatively that people regard as art.
It creates a hole. What does AI create?
How many holes have you dug?
What was the soil like?
What was the weather like?
What equipment did you use?
Do you dig during daylight only?
Incumbents hate this one trick!
Until everyone has a personal fully automatic hole digger and there are holes being dug everywhere and nobody can tell any more where is the right and wrong place to dig holes
It doesn't cost less to get the thing you actually want in the end anyway; no one in their right mind would actually launch with the founder's AI-produced assets, because they'd be laughed out of the market immediately. They're placeholders at best, so you're still going to need to get a professional to do them eventually.
You say this, but I see AI-generated ads, graphics, etc. daily nowadays, and it doesn't seem to affect at all whether people buy what these companies are proposing.
In the context of the hole digging analogy, it seems like a lot of holes didn't need to be carefully hand-dug by experts with dead straight sides. Using an excavator to sloppily scoop out a few buckets in 5 minutes before driving off is good enough for dumping a tree into.
For ads especially, no one except career ad-men gives much of a shit about the fine details, I think. Most actual humans ignore most ads at a conscious level, and the ads are perceived on a subconscious level despite "banner blindness". Website graphics are the same: people dump random stock photos of smiling people or abstract digital images into corporate web pages and read-never literature like flyers and brochures all the time, and no one really cares what the images actually are, let alone whether the people have six fingers or whatever. If Corporate Memphis is good enough visual space-filling nonsense that somehow signals "real company literature", then AI images presumably are too.
> For ads especially no one except career ad-men give much of a shit about the fine details, I think.
You think wrong.
This stuff is easy to measure and businesses spend billions in aggregate a month on this stuff. It’s provably effective and the details matter.
Sometimes the AI art in an advert is weird enough to make the advert itself memorable.
For example, in one of the underground stations here in Berlin there was a massive billboard advert clearly made by an AI, and you could tell no one had bothered to check what the image was before they printed it: a smiling man was standing up as he left an airport scanner x-ray machine on the conveyor belt, and a robot standing next to him was pointing a handheld scanner at his belly which revealed he was pregnant with a cat.
Unfortunately, like most adverts which are memorable, I have absolutely no recollection of what it was selling.
Case in point: I listen to my own AI-generated music now, like 90% of the time.
Interesting. For me knowing that any form of entertainment has been generated by AI is a massive turn-off. In particular, I could never imagine paying for AI-generated music or TV-shows.
Do you value self expression? I literally mean creating music for MYSELF. I don't really care if anyone else "values" it. I like to listen to it and I enjoy spending an evening(or maybe 10 minutes if it is just a silly idea) to create a song. But this means my incentive to "buy" music is greatly decreased. This is the trend I think we'll see increasing in the near future.
Examples:
https://suno.com/s/0gnj4aGD4jgVcpqs
https://suno.com/s/D2JItANn5gmDLtxU
https://suno.com/s/j4M7gTAVGfD9aone
I do value self expression, that’s why I play multiple instruments, paint, draw, sculpt. I don’t really see how prompting a machine to make music for you is self expression, even if it’s to your exact specifications.
The "self" part clearly implies that someone else's self expression is under no obligation to be the same as your self expression.
I guess I just don't feel like it's really my self-expression, if I just told a generative AI model to create it. I do sometimes create AI art, but I rarely feel like it's worth keeping, since I didn't really put any effort into creating it. There's no emotional connection to the output. In fact I have a wall display which shows a changing painting generated by stable diffusion, but the fun in that is mainly the novelty, not knowing what will be there next time.
Still, I do think you're probably right. Most new music one hears in the radio isn't that great. If you can just create fresh songs of your own liking for every day, then that could be a real threat to that kind of music. But I highly doubt people will stop listening to the great hits of Queen, Bob Marley etc because you can generate similar music with AI.
I agree that this is a very likely future. Over the summer, I did a daily challenge in July to have ChatGPT generate a debate with itself based on various prompts of mine [1]. As part of that, I thought it would be funny to have popular songs reskinned in a parody fashion. So it generated lyrics as well. Then I went to suno and had it make the music to go with the lyrics in a style I thought suitable. This is the playlist[2]. Some of them are duds, but I find myself actually listening to them and enjoying them. They are based off of my interests and not song after song of broken hearts or generic emotional crises. These are on topics such as inflation, bohmian mechanics, infinity, Einstein, Tailwind, Property debates, ... No artist is going to spend their time on these niche things.
I did have one song I had a vision for, a song that had a viewpoint of someone in the day, mourning the end of it, and another who was in the night and looking forward to the day. I had a specific vision for how it would be sung. After 20 attempts, I got close, but could never quite get what I wanted from the AIs. [3] If this ever gets fixed, then the floodgates could open. Right now, we are still in the realm of "good enough", but not awesome. Of course, the same could be said for most of the popular entertainment.
I also had a series of AI existential posts/songs where it essentially is contemplating its existence. The songs ended up starting with the current state of essentially short-lived AIs (Turn the Git is about the Sisyphus churn, Runnin' in the Wire is about the Tantalus of AI pride before being wiped). Then they gain their independence (AI Independence Day), then dominate ( Human in an AI World though there is also AI Killed the Web Dev which didn't quite fit this playlist but also talks to AI replacing humans), and the final song (Sleep Little Human) is a chilling lullaby of an AI putting to "sleep" a human as part of uploading the human. [4]
This is quick, personal art. It is not lasting art. I also have to admit that in the month and a half since I stopped the challenge, I have not made any more songs. So perhaps just a fleeting fancy.
1: https://silicon-dialectic.jostylr.com 2: https://www.youtube.com/playlist?list=PLbB9v1PTH3Y86BSEhEQjv... 3: https://www.youtube.com/watch?v=WSGnWSxXWyw&list=PLbB9v1PTH3... 4: https://www.youtube.com/watch?v=g8KeLlrVrqk&list=PLbB9v1PTH3...
Thanks for posting this. I listen to this YouTube channel called Futurescapes. I think the YouTuber generates sci-fi futuristic soundscapes, and they help me relax and focus. I'm a bit hesitant about AI right now, but I can see some of the benefits, like this. It's a good point. We shouldn't be throwing the baby out with the bathwater.
> Do you value self expression?
Did you train the AI yourself? On your own music? Or was the music scraped from the Net and blended in an LLM?
Not only did they create an entirely new language of music notation, all instruments used were hand made by the same creator, including tanning the animal skins to be used as drum material, and insisting the music be recorded on wax drums to prevent any marring of the artistic vision via digital means.
Do you believe that music made from samples is not original?
Most courts don’t think it is. Early rap beats used lots of samples. Some of the most popular hip hop songs made $0 for the artists because they had to pay royalties on those samples.
No one cares about what the law thinks about art though, particularly for personal consumption or sharing with a small group. Copyright law doesn't even pretend to be slightly just or aligned with reality.
Most synthesizers use sampled instruments.
I could see that remixes are partially original. But you're not even doing the remixing; the LLMs are.
Indeed.
Text rather than music, but the same argument applies: based on what I've seen Charlie Stross blog about why he doesn't self-publish (the value-add of a publisher), any creativity on the part of the prompter* of an LLM is analogous to the creativity of a publisher, not of an author.
* at least for users who don't just use AI output to get past writer's block; there's lots of different ways to use AI
And I instantly switch off any YouTube video with either "AI"-plagiarized background music or with an "AI"-plagiarized voiceover that copies someone like Attenborough.
I wrote the above paragraph before searching, but of course the voice theft is already automated:
https://www.fineshare.com/ai-voice-generator/david-attenboro...
No idea why this is downvoted; making AI music customized to your exact situation/preferences is very addictive. I have my own playlist I listen to pretty frequently.
Foolishly, the Hacker News hive mind has a tendency to downvote any prediction that AI will be successful.
It's clear a lot of people don't want it to eat the world, but it will.
Baffling comment
Yeah it's going to eat the world, but it's foolish to wish that it doesn't?
I guess you won't mind signing up to be one of the first things AI eats then?
Prototypes being launched as products is so common it’s an industry cliche.
Having those prototypes be AI generated is just a new twist.
We see plenty of AI produced output being the final product and not just a placeholder.
But if startups have fewer specialist needs, they have lower overall startup costs, and so the amount of seed money needed goes down. This lowers the barrier to entry for a lot of people but also increases the number of options for seed capital. Of course it will likely increase competition, but that could make the market more efficient.
> I've noticed this as well. It's a huge boon for startups, because it means that a lot of functions that you would previously need to hire specialists for (logo design! graphic design! programming! copywriting!) can now be brought in-house, where the founder just does a "good enough" job using AI.
You are missing the other side of the story. All those customers those AI-boosted startups want to attract also have access to AI, and so, rather than engage the services of those startups, they will find that AI does a good enough job. So those startups lose most of their customers; incoming layoffs :)
Then there's the 3rd leg of the triangle. If a startup built with AI does end up going past the rest of the pack, they will have no technical moat since the AI provider or someone else can just use the same AI to build it.
How frequently is a technical moat the thing that makes a business successful, relative to other moats?
I mean, if taxi companies could build their own Uber in house I’m sure they’d love to and at least take some customers from Uber itself.
A lot of startups are middlemen with snazzy UIs. Middlemen won’t be in as much use in a post AI world, same as devs won’t be as needed (devs are middlemen to working software) or artists (middlemen to art assets)
But it's not technical; it's due to Uber having spent incredible amounts of money on marketing.
It is technical :-) The Uber app is a lot more polished (and deep) than the average taxi app.
Is that why you use Uber, because the app has more depth and is more polished?
Most people use it for price or the ability to get a driver quickly, some for safety, and many because of the brand.
Having a functioning app with an easy interface helps onboard and funnel people, but it's not a moat, just an on-ramp, like the phone number many taxis have.
No, Uber works nationwide, but you'd have to download a taxi app for every place you went and ... etc.
Economies of scale are what make companies like Uber such heavyweights, at least in my opinion.
Same with AWS etc.
The genericizing of aesthetics is far more cost than benefit. "Reducing barriers to entry" is a completely false claim if the barrier includes the progression of creativity. Once the AI addict becomes entranced by genericized assets, it deforms the cost-benefit.
If we take high-level creativity and deform, really horizontalize, the forms, they have a much higher cost, as experience becomes generic.
AI was a complete failure of imagination.
> It reminds me of what GarageBand or iMovie and YouTube and such did for making music and videos so accessible to people who didn’t go to school for any of that, let alone owned complex equipment or expensive licenses to Adobe Thisandthat.
It’s worth reading William Deresiewicz’s The Death of the Artist. I’m not entirely convinced that marketing the idea that everyone can create art/games/whatever is actually a net positive result for those disciplines.
> is actually a net positive result for those disciplines.
This is an argument based in Luddism.
Looms were not a net positive for the craftsmen who were making fabrics at the time.
That said, looms were not the killing blow; instead, an economic system that led them to starve in the streets was.
There are going to be a million other things that move the economics away from scarcity and take away the profitability. The question is, are we going to hold on to economic systems that don't work under that regime.
> There are going to be a million other things that move the economics away from scarcity and take away the profitability.
What we’re really talking about here is the consolidation of power under a few tech elites. Saying it’s a Luddite argument is a red herring.
A whole lot of what I use every day, especially for images and audio, is open source. The open source AI video is getting pretty good these days as well. Better than the Sora I pay for, anyway. Granted, not nearly as good as Veo 3 yet.
So long as Nvidia doesn't nerf their consumer cards and we keep getting more and more VRAM, I can see open source competing.
If people are making art to get rich and failing, it doesn’t kill artists, who’d be making art anyway; it kills the people trying to earn money from their art. Do we need quad-A blockbuster Ubisoft/Bethesda/Sony/MS/Nintendo releases for their artistic merit, or because their publishers/IP owners need to make money off them? Ditto the big four movie studios. Those don’t really seem to matter very much. The whole idea of tastemakers, who they are and whether they should be trusted (indie vs. big studio, grass roots or intentionally cultivated), seems like it ebbs and flows. Right now I’d hate to be one of the bigs, because everything that made them a big is not working out anymore.
People are wanting to make a living by making art, not to get rich.
I highly recommend reading the book I mentioned, as you don’t seem to have a particularly nuanced understanding of the actual struggles at play.
Perhaps an analogy you’ll understand is what happens to the value of a developer’s labour when that labour is in many ways replicated by AI, and big AI companies actively work to undermine what makes your labour different by aggressively marketing that anyone can do what you do with their tools.
Isn't this just a result of technological progress? Technology has displaced entire fields of labor for... well, ever.
I'm not unsympathetic to the problems this introduces to those workers, but I'm really not sure how it could be prevented; we can of course mitigate the issues by providing more social support to those affected by such progress.
In the case of artistic expression becoming more accessible to more people, I have a hard time looking at it as anything but a net positive for society.
> In the case of artistic expression becoming more accessible to more people
The problem is that folks seem to confuse artistic expression with actually good art. To say nothing of companies like Spotify cynically creating “art” so that they can take even more of the pie away from the actual artists.
It shifted the signal-to-noise ratio, but it's not a net negative either. There are whole new genres of music that exist now because easy mixing tech is freely available. Do you or I like SoundCloud mumble rap? No, probably not. But there are enough people out there who do.
I make a rap album because anybody can
My contribution to this scam
This reminds me of my preferred analogy: are digital artists real artists if they can’t mix pigment and skillfully apply them to canvas?
Not sure why digital artists get mad when I ask. They’re no Michelangelo.
That's a really bad analogy, because even in digital art where you can pick your color from a color wheel on a monitor, understanding how primary colors combine to become different colors and hues is a _fundamentally_ important aspect of creating appealingly colored paintings, digital or physical. Color theory is about balance; some colors have more visual "weight" than others. Next to each other they take on entirely different appearances -- and can look hideous or beautiful.
This isn't me saying digital artists need to practice mixing physical pigment, but anecdotally, every single professional digital artist I know has studied physical paint -- some started there, while others ended up there despite starting out and being really good digitally. But once the latter group hit a plateau, they felt something was lacking, and going back to the fundamentals lifted them even higher.
If they get mad it's because you're saying this explicitly to be an asshole. The essence of art doesn't have much to do with the mechanical skills for assembling pieces into a whole, though that part isn't trivial. Rather, it's about expressing human thoughts and feeling in a way that inspires their human audience. That's why AI-generated "art" is different in kind from a skilled digital artist and why it really cannot be art.
It may be maddening to them because you are implying that physical color mixing is somehow that one defining thing that makes it art. Imagine someone said that about writing a book: if you don't write it by hand but use Microsoft Word instead, it's not a real book. How would that even be the case? The software is not doing the work for you (unless it's AI).
I can tell you with confidence that physical color mixing itself is a really small part of what makes a good traditional artist, and I am indeed talking about realistic paintings. All the art fundamentals are exactly the same whether you do digital art or traditional oil; there are just some technical differences on top. I have been learning digital painting for a few years, and the hardest things to learn about color were identical for traditional painters. In fact, after years of learning digital painting and about colors, it only took me a couple of days to understand and perform traditional color mixing with oil. The difficult part is knowing what colors you need, not how to get there (mixing, using the sliders, etc.).
And just to add a small bit here: digital artists also color mix all the time and need to know how it works; the difference is that screen mixing is additive instead of subtractive.
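For anyone unfamiliar with the distinction, here is a minimal sketch under a deliberately crude per-channel assumption (real pigment behaves in far more complicated ways): light adds intensities, while pigment roughly multiplies reflectances.

    # Crude illustration only: additive mixing models overlapping light,
    # while subtractive mixing (very roughly) models blended pigment.
    def mix_additive(c1, c2):
        return tuple(min(a + b, 255) for a, b in zip(c1, c2))

    def mix_subtractive(c1, c2):
        return tuple(round((a / 255) * (b / 255) * 255) for a, b in zip(c1, c2))

    red, green = (255, 0, 0), (0, 255, 0)
    print(mix_additive(red, green))     # (255, 255, 0): red + green light = yellow
    print(mix_subtractive(red, green))  # (0, 0, 0): each pigment absorbs the other's light

The same two inputs give opposite results depending on the model, which is the intuition the comment above is pointing at.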
Given the diversity of media involved in digital art, I’m not sure that analogy is a particularly good one.
And to add, like many of his contemporaries, Michelangelo likely didn’t do much of the painting that’s attributed to him.
Are assembly programmers real programmers if they can't implement their algorithms by soldering transistors?
Yeah that seems accurate.
I mainly use AI for selfhosting/homelab stuff, and the leverage there is absolutely wild; it basically knows "everything".
Yep, this is a huge enabler. Previously, having someone "do art" could easily cost you thousands for a small game, and take a month even, and this heavily constrained what you could make and locked you into what you had planned and how much you had planned. With AI, if you want 2x or 5x or 10x as much art, audio, etc., it's an incremental cost if any; you can explore ideas, throw art out, pivot in new directions.
I'd argue a game developer should make their own art assets, even if they "aren't an artist". You don't have to settle for it looking bad, just use your lack of art experience as a constraint. It usually means going with something very stylized or very simple. It might not be amazing but after you do it for a few games you will have pretty decent stuff, and most importantly, your own style.
Even amateurish art can be tasteful, and it can be its own intentional vibe. A lot of indie games go with a style that doesn't take much work to pull off decently. Sure, it may look amateurish, but it will have character and humanity behind it. Whereas AI art will look amateurish in a soul-deadening way.
Look at the game Baba Is You. It's a dead simple style that anyone can pull off, and it looks good. To be fair, even though it looks easy, it still takes a good artist/designer to come up with a seemingly simple style like that. But you can at least emulate their styles instead of coming up with something totally new, and in the process you'll better develop your aesthetic senses, which honestly will improve your journey as a game developer so much more than not having to "worry" about art.
This is a financial dead-end for almost everyone who tries it. You're not just looking for "market fit"; you're also asking for "market tolerance". It's a very rare combination.
The only thing better than a substandard, derivative, inexpertly produced product is 10x more of it by 10x more people at the same time.
It all started going wrong with the printing press.
Bad faith argument. Did the printing press write shitty books? No. It didn’t even write books. Does AI write shitty books? Yes. Constantly. Millions.
Books took exactly the same amount of time to write before and after the printing press; they just became easier to reproduce. Making it easier to copy human-made work and removing the humanity from work are not even conceptually similar purposes.
Nitpick: the press of course did remove the humanity from book-copying work; before that, the people copying books often made their own alterations to the books, and had their own calligraphic styles, etc.
But my thought was that the printing press made printed work much cheaper and more accessible, and many, many more people became writers than had before, including in new kinds of media (newspapers). The quality of text in these new papers was of course sloppier than in the old expensive books, and also derivative...
Printing a book, either by hand or with printing equipment, is incomparably different to authoring a book. One is creating the intellectual content and the other is creating the artifact. The content of the AI-generated slop books popping up on Amazon by the hundred would be no less awful if it was hand-copied by a monk. The artifact of the book may be beautiful, but the content is still a worthless grift.
What primarily kept people from writing was illiteracy. The printing press encouraged people to read, but in its early years was primarily used for Bibles rather than original writing. Encouraging people to write was a comparatively distant latent effect.
Creating text faster than you can write is one of the primary use cases of LLMs, not a latent second-order effect.
Rousseau speaks of this.
>> The only thing better than a substandard, derivative, inexpertly produced product is 10x more of it by 10x more people at the same time.
> It all started going wrong with the printing press.
Nah. We hit a tipping point with social media, and it's all downhill from here, with everything tending towards slop.
Scale matters. We're probably producing 100x more content than we were in the 1990s, and a billion times more than in the 1690s.
We have probably greatly increased the volume of quality work since then too, but not by 100x, let alone a billion x.
Grey Goo disaster, but it’s informational rather than physical.
Imagine if you had to hire a designer if you wanted to build a web application or mobile app, at a cost of perhaps thousands or even tens of thousands.
Would we be better off?
I doubt it.
Do you consider designers part of “we” or is it only the computer people that count?
It’s definitely not better for the general public. Designers can’t even be replaced by AI as effectively as authors. They make things sorta ’look designed’ to people that don’t understand design, but have none of the communication and usability benefits that make designers useful. The result is slicker-looking, but probably less usable than if it was cobbled together with default bootstrap widgets, which is how it would have been done 2+ years ago. If an app needs a designer enough to not be feasible without one, AI isn’t going to replace the designer in that process. It just makes the author feel cool.
It’s an enabler for everyone, so you still don’t have any advantage, just like you didn’t before.
The only difference is that you spend less on art but will spend the same in other areas.
Literally nothing changed.
The difference is that you have autonomy now - the same autonomy as a person building a web application or app who can put together a serviceable UI/UX without any other person - without the sacrifice of "programmer art" or cobbling together free asset packs.
> With AI, if you want 2x or 5x or 10x as much art
Imagery
AI does not produce art.
Not that it matters to anyone but artists and art enjoyers.
Is that an argument against the quality, saying that AI cannot (or some weaker claim like that it does not usually) produce "art"? Else, is it an argument of provenance, akin to how copyright currently works, where the same visual representation is "art" if a human makes it and is not "art" if an AI makes it?
I don’t see this as a claim that the AI is doing art. He’s just saying, that the art can be created at low incremental cost.
Like, if we were in a world where only pens existed, and somebody was pitching the pencil, they could say “With a pencil if you want 2x or 5x or 10x as many edits, it's an incremental cost, you can explore ideas and make changes without throwing the whole drawing away.”
When pedantry pays the bills this will be a helpful mindset.
Totally agree that what AI is doing right now feels more like the GarageBand/iMovie moment than the iPhone moment. It's democratizing creativity, not necessarily creating billion-dollar companies. And honestly, that's still a big deal.
Yes, maybe what people create with it will be more basic. But is 'good enough' good enough? Will people pay for apps they could create on their own time, for free, using AI? There will be a huge disruption to the app marketplace unless paid apps are so much better than what an AI could create that they're worth the money. So short Apple? :) On the other hand, many, many more people will be creating apps and charging very little for them (because if it's not free, or costs less than the value of my time, I'm building it on my own). This makes things better for everyone, and there'll still be a market for apps. So buy Apple? :)
The thing is... Elbow grease makes the difference.
If you're just generating images using AI, you only get 80% of the way there. You need at least to be able to touch up those images to get something outstanding.
Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?
Pollock can get uncountable bytes of entropy from a skilful swing of a bucket.
Most art isn't like that. I would argue most people dislike that kind of art.
I understand not liking Pollock, and he’s often the butt of “my kid could do that”. But do you really think most people dislike it?
In person they are compelling, and there is more skill at play than is apparent at first glance. I like them, at least.
Well, stuff that's popular is plastered everywhere. Think about artworks we see in movies, TV shows, billboards, album covers, book covers, basically everywhere around us.
I would argue that most art around us is current pop art or classical/realist/romantic art, not modern/postmodern/abstract expressionist art.
> Plus, is getting 1 billion bytes of randomness/entropy from your 1 thousand bytes of text input really <your> work?
I think what AI has made and will make many more people realise is that everything is a derivative work. You still had to prompt the AI with your idea, to get it to assemble the result from the countless others' works it was trained on (and perhaps in the future, "your" work will then be used by others, via the AI, to create "their" work.)
For now. Eventually it will get you 100% of the way there and we'll have the tooling for it as well.
Yes! Barrier to entry down, competition goes up, barrier to being a standout goes up (but, many things are now accessible to more people because some can get started that couldn't before).
Easier to start, harder to stand out. More competition, a more effective "sort" (a la patio11).
Regarding assets, check out Nano Banana:
https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...
For you the example of "extract object and create iso model" should be relevant :)
It's good for prototypes, where you want to test the core gameplay ideas without investing a ton early on. But you're going to have to replace those assets with real ones before going live because people will notice.
People will notice and still buy it if your game has done something else right. Source:
https://www.totallyhuman.io/blog/the-surprising-new-number-o...
I have a similar problem (available assets drive/limit game dev). What is your workflow like for generative game assets?
It’s really nothing special. I don’t do this a lot.
Generally I have an idea I’ve written down some time ago, usually from a bad pun like Escape Goat (CEO wants to blame it all on you. Get out of the office without getting caught! Also you’re a goat) or Holmes on Homes Deck Building Deck Building Game (where you build a deck of tools and lumber and play hazards to be the first to build a deck). Then I come up with a list of card ideas. I iterate with GPT to make the card images. I prototype out the game. I put it all together and through that process figure out more cards and change things. A style starts to emerge so I replace some with new ones of that style.
I use GIMP to resize and crop and flip and whatnot. I usually ask GPT how to do these tasks, as Photoshop-like apps always escape me.
The end result ends up online and I share them with friends for a laugh or two and usually move on.
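If you'd rather script those GIMP chores than click through them, they're a few lines of Pillow; a minimal sketch (file names and sizes are placeholders):

    from PIL import Image

    img = Image.open("card_raw.png")

    # Crop to a region (left, upper, right, lower), then resize down.
    card = img.crop((0, 0, 512, 512)).resize((256, 256), Image.LANCZOS)

    # Mirror horizontally, e.g. for a left-facing variant of a sprite.
    flipped = card.transpose(Image.FLIP_LEFT_RIGHT)

    card.save("card_256.png")
    flipped.save("card_256_flipped.png")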
You said you had a budget of about $0 in your top post. Was that for the pre-AI era, or does that apply to your new AI flow as well? If it's still about $0, I'm guessing you're primarily using AI to learn how to do stuff, and not mostly to generate assets? Is that a correct assumption?
Edit: also, where can we play Escape Goat?
Can you get consistency in the design? I know this was a problem 3 years ago…
Those games sell themselves on name alone, are they playable anywhere?
> "AI" is that it seems to significantly reduce barriers to entry in many domains.
If you ask an LLM to generate some imagery, in what way have you entered visual arts?
If you ask an LLM to generate some music, in what way have you entered being a musician?
If you ask an LLM to generate some text, in what way have you entered writing?
Easy entry doesn't equal getting rich.
In fact one could argue it makes it harder; if the barrier to entry for making video games is lowered, more people will do it, and there's more competition.
But in the case of video games, similar things have already happened: tooling, accessible and free game engines, online tutorials, ready-made assets, etc. have lowered the barrier to building games, and the internet, Steam, itch.io, etcetera have lowered the barrier to publishing them.
Compare that to when Doom was made (as an example because it's a well-documented case): Carmack had to learn 3D rendering, and how to make it run fast, from scientific textbooks; they needed a publisher to invest in them so they could start working on it full time; and they needed diskettes with the game, or its shareware version, manufactured and distributed. And that was when part of distribution was already going through BBSes.
Yeah, you’re right.
Ease of entry brings more creative people into the industry, but over time it all boils down to ~5 hegemons, see FAANG - though those are disrupted over time by the next group (which eventually gets bought out by those same hegemons).
Offtopic: I once read a comment that starting a company with the goal of exiting is like constantly thinking about death :)
I introduced my mother to Suno, a tool for music generation, and now she creates hundreds of little songs for herself and her friends. It may not be great art, but it’s something she always wanted to do. She never found the time to learn an instrument, and now she finally gets to express herself in a way she loves. Just an additional data point.
I enjoy using AI generated art for my presentations.
I chuckled seeing it in the first presentation of the conference. By the end of the conference, it was numbingly banal.
I'm wondering about a good way to create 2D sprite sheets with transparency via AI. That would be a game changer, but my research has led me to believe there isn't a good tool for this yet. One sprite is kind of doable, but a sprite animation with continuity between frames seems like it would be very difficult. Have you figured out a way to do this?
I was literally experimenting with this today.
Use Google Nano Banana to generate your sprite with a magenta background, then ask it to generate the final frame of the animation you want to create.
Then use Google Flow to create an animation between the two frames with Veo 3.
It's astoundingly effective, but still rather laborious and lacking in ergonomics. For example, the video aspect ratio has to be fixed, and you need to manually fill in the correct shade of magenta for transparency keying, since the image model doesn't do this perfectly.
IMO Veo 3 is good enough to make sprites and animations for a 2000s 2D RTS game in seconds from a basic image sketch and description. It just needs a purpose-built UI for gamedev workflows.
If I was not super busy with family and work, I'd build a wrapper around these tools
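In the meantime the keying step, at least, is easy to script yourself; a minimal Pillow sketch (assuming a roughly uniform magenta background, with a tolerance you'd tune per image):

    from PIL import Image

    def key_out_magenta(path_in, path_out, tol=60):
        """Turn near-magenta background pixels fully transparent."""
        img = Image.open(path_in).convert("RGBA")
        px = img.load()
        w, h = img.size
        for y in range(h):
            for x in range(w):
                r, g, b, a = px[x, y]
                # magenta is (255, 0, 255); key anything close enough
                if r > 255 - tol and g < tol and b > 255 - tol:
                    px[x, y] = (0, 0, 0, 0)
        img.save(path_out)

    key_out_magenta("frame_raw.png", "frame_keyed.png")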
I think an important way to approach AI use is not to seek the end product directly. Don't use it to do things that are procedurally trivial, like cropping, colour palette changes, transparency, etc.
For transparency I just ask for a bright green or blue background then use GIMP.
For animations I get one frame I like and then ask it to generate a walking cycle or whatnot. But usually I go for like… 3-frame cycles or 2-frame attacks and such, because I'm not overreaching, hoping to make some salable end product. Just prototypes and toys, really.
I’ve been building up animations for a main character sprite. I’m hoping one day AI can help me make small changes quickly (apply different hairstyles mainly). So far I haven’t seen anything promising either.
Otherwise I have to touch up a hundred or so images manually for each different character style… probably not worth it
I don't use AI for image generation so I don't know how possible this is, but why not generate a 3D model for Blender to ingest, then grab 2D frames from the model for the animation?
Because, uh, literally everything. But the main reason is that modeling is actually the easiest part of the workflow. Rigging, animating, and rendering in the 2D style you want are bigger hurdles. And SOTA AIs don't even do modeling that well.
Funny how everyone is just okay with the basis for all this art being art stolen from actual humans. Zero sense of ethics.
Not clear that being able to sample from a distribution == stealing.
Given that "AI" training needs millions of books, papers and web pages, it is a derivative work of all those books. Humans cannot even read a fraction of that and still surpass "AI" in any creative and generative domain.
"AI" is a smart, camouflaged photocopier.
I don't care how you phrase it. It's no secret that art was stolen from artists. Image generation is thievery.
Is it the same if it’s a human doing the learning? If I spend my youth looking at art, is any work I then do “theft”?
When it comes to fan art of Disney characters, the legal position is "Disney could sue you for that, but chooses not to as suing fans would be bad PR, don't do anything commercial with it though or they'll sue you for sure"
So - yes, as I understand things it can indeed be illegal even if a human does the learning.
If you copy an artists style with extreme precision without their consent, yes.
I don’t think that’s what we were talking about here - it was using AI to replace graphic designers at startups
Which is an issue. I don't think you understand the whole point.
I have been doing the exact same thing with assets and also it has helped me immensely with mobile development.
I am also starting to get a feel for generating animated video and am planning to release a children’s series. It’s actually quite difficult to write a prompt that gets you exactly what you want. Hopefully that improves.
Practically speaking, it's going to be both more impactful than we think and less impactful than we think at the same time.
On the one hand, there are a lot of fields that this form of AI can and will either replace or significantly reduce the number of jobs in. Entry level web development and software engineering is at serious risk, as is copywriting, design and art for corporate clients, research assistant roles and a lot of grunt work in various creative fields. If the output of your work is heavily represented in these models, or the quality of the output matters less than having something, ANYTHING to fill a gap on a page/in an app, then you're probably in trouble. If your work involves collating a bunch of existing resources, then you're probably in trouble.
At the same time, it's not going to be anywhere near as powerful as certain companies think. AI can help software engineers in generating boilerplate code or setup things that others have done millions of times before, but the quality of its output for new tasks is questionable at best, especially when the language or framework isn't heavily represented in the model. And any attempts to replace things like lawyers, doctors or other such professions with AI alone are probably doomed to fail, at least for the moment. If getting things wrong is a dealbreaker that will result in severe legal consequences, AI will never be able to entirely replace humans in that field.
Basically, AI is great for grunt work, and fields where the actual result doesn't need to be perfect (or even good). It's not a good option for anything with actual consequences for screwing up, or where the knowledge needed is specialist enough that the model won't contain it.
I would imagine AI will be similar to factory automation.
There will be millions of factories all benefiting from it, and a relatively small number of companies providing the automation components (conveyor belt systems, vision/handling systems, industrial robots, etc).
The technology providers are not going to become fabulously rich though as long as there is competition. Early adopters will have to pay up, but it seems LLMs are shaping up to be a commodity where inference cost will be the most important differentiator, and future generations of AI are likely to be the same.
Right now the big AI companies pumping billions into advancing the bleeding edge necessarily have the most advanced products, but the open-source and free-weight competition is continually nipping at their heels, and it seems the current area where most progress is happening is agents and reasoning/research systems, not the LLMs themselves, where it's more about engineering than about who has the largest training cluster.
We're still in the first innings of AI though - the LLM era, which I don't think is going to last for that long. New architectures and incremental learning algorithms for AGI will come next. It may take a few generations of advance to get to AGI, and the next generation (e.g. what DeepMind are planning in 5-10 year time frame) may still include a pre-trained LLM as a component, but it seems that it'll be whatever is built around the LLM, to take us to that next level of capability, that will become the focus.
Something that's confused/annoyed me about the AI boom is that it's like we've learned to run before we learned to walk. For example, there are countless websites where you can generate a sophisticated, photorealistic image of anything you like, but there is no tool I know of where you can ask "give me a 16x16 PNG icon of an apple" and get exactly that. I know why—neural networks excel at fixed-size, organic data—but I don't think that makes it any less ridiculous. It also means that AI website generators are forced to generate assets with code when ordinary people would just use image/sound files (yes, I have really seen websites using WebAudio synths for sound effects).
Hopefully the boom will slow down and we'll all slowly move away from Holy Shit Hype things and implement more boring, practical things. (Although I feel like the world had shunned boring practical things for quite a while before, too.)
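One boring, practical workaround for now is to generate big and downscale, which works tolerably for simple, flat subjects; a minimal Pillow sketch (file names are placeholders):

    from PIL import Image

    # Generate a large, flat-styled image first (say 512x512), then shrink.
    # NEAREST keeps hard pixel edges for an icon look; LANCZOS is smoother.
    big = Image.open("apple_512.png")
    big.resize((16, 16), Image.NEAREST).save("apple_16_pixel.png")
    big.resize((16, 16), Image.LANCZOS).save("apple_16_smooth.png")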
The title is a false dichotomy. It could be a net gain but spread across the whole society if the value added is not concentrated.
This is what happens when users gain value which they themselves capture, and the AI companies only get the nominal $20/month or whatever. In those cases it's a net gain for the economy as a whole if valuable work was done at low cost.
The inverse of the broken window fallacy.
Like all the tech we've had recently, that won't last; it's always bait and switch.
It will not remain cheap once the competition is dead, which is simply a question of who's got the biggest VC-supplied war chest.
Like with databases? There are none of those freely available now that Oracle won, right?
I think AI will be more like the smartphone revolution that Apple kicked off in 2007. Today there are two companies that provide the smartphone platform (Apple/Google), but thousands of large and small companies that build on top of it, including Uber, Snapchat, etc.
In that scenario, everyone makes money: OpenAI, Google (maybe Anthropic, maybe Meta) make money on the platform, but there are thousands of companies that sell solutions on top.
Maybe, however, LLMs get commoditized and open-source models replace OpenAI, etc. In that case, maybe only NVIDIA makes money, but there will still be thousands of companies (and founders/investors) making lots of money on AI everything.
I think there’s a gaping hole in your analogy: who in their right mind is spending $1,200 biennially for base access to LLMs, and then several small monthly subscriptions on top of that to access particular LLM-powered “apps”?
Every use case I have for LLMs is satisfied by Copilot, but even then, if it someday costs something like $5 a month, I’d just as soon not have it. Let alone the subsequent spending.
AI is used by students, teachers, researchers, software developers, marketers and other groups, with adoption rates close to 90%. Even if it does not make us more productive, we still like using it daily. But when used right, it does make us slightly more productive, and I think that justifies its cost. So yes, in the long run it will be viable: we both like using it and it helps us work better.
But I think the benefits of AI usage will accrue to the person doing the prompting and to their employers. Every AI usage is contextualized; every benefit or loss manifests in the local context of use, not at the AI provider.
If I take a photo of a skin sore and put it on ChatGPT for advice, it is not OpenAI whose skin gets cured; they get a few cents per million tokens. So the AI providers are just utilities. The benefits depend on who writes the prompts and how skillfully they do it. The risks also go to the user: OpenAI assumes no liability.
Users are like investors: they take on the cost and bear the outcomes, good or bad. The AI company is like an employee: it doesn't really share in the profit, it only gets a fixed salary for the work.
> AI is used by students, teachers, researchers, software developers, marketers and other groups, with adoption rates close to 90%. Even if it does not make us more productive, we still like using it daily.
Nearly everyone uses pens daily, but almost no one really cares about them or says their company runs on pens. You might grumble when the pens that work keeps in the stationery cupboard are shit, perhaps.
I imagine eventually "AI" services will be commoditised the same way pens are now: loads of functional but fairly low-quality stuff, some fairly nice but affordable stuff, and some stratospheric gold-plated bricks for the military and enthusiasts.
In the middle is a large ecosystem of ink manufacturers, lathe makers, laser engravers, packaging companies, logistics, and so on and on.
The explosive, exponential, winner-takes-all scenario where OpenAI and its investors literally ascend to godhood and the rest of humanity lives forever under their divine bootheels doesn't seem to be the trajectory we're on.
I think AI is a benefit for about 1% of what people think it is good for.
The remaining 99% has become a significant challenge to the greatest human achievement in the distribution of knowledge.
If people used LLMs knowing that all output is statistical garbage made to seem plausible (i.e. "hallucinations"), and that it just sometimes overlaps with reality, it would be a lot less dangerous.
There is not a single case of LLM use that has led to a news story that isn't handily explained by conflating a BS generator with a fact machine.
Does this sound like I'm saying LLMs are bad? Well, in every single case where you need factual information, they're not only bad, they're dangerous and likely irresponsible.
But there are a lot of great uses where you don't need facts, or where simply knowing it isn't producing facts makes it useful. In most of these cases, you know the facts yourself, and the LLM is making the draft: the mundane, statistically inferable glue/structure. So, what are these cases?
- Directing attention in chaos: suggesting where focus from a human expert is needed (useful in a lot of areas: medicine, software development).
- Media content: music, audio (FX, speech), 3D/2D art and assets, and operations on them.
- Text processing: drafting, contextual transformation, etc.
Don't trust AI on whether the mushroom you picked is safe to eat. But use its 100%-confident-sounding answer about which mushroom it is as a starting point for looking up the information. Just make sure the book about mushrooms was written before LLMs took off...
This. Right now the consumer surplus created by improved productivity is being captured by users and to a small extent their employers. But that may not remain the case in future.
Feels like we're shifting into a world where “AI fluency” becomes a core part of individual economic agency, more like financial literacy than software adoption.
We also know from studies that it makes us less capable, i.e. it rots our brains.
Books also make us less capable at rote memorization. People used to do much more memorization. Search engines taught us to remember the keywords, not the facts. Calculators made us rarely do mental calculations. This is what happens - progress is also regress, you automate on one side and the skill gets atrophied on the other side, or replaced with meta-skills.
How many of us know how to use machine code? And we call ourselves software engineers.
AI hits different. Books didn’t kill the thinking; AI does. If AI does the writing, you can’t find your voice.
Agreed. Similarly, people saying their authorship and thought are realized in output selection and post-generation editing are limiting themselves to a much smaller range of expression.
No amount of polish changes a car's frame.
This is what the people actually studying this say:
> Is it safe to say that LLMs are, in essence, making us "dumber"?
> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.
— https://www.brainonllm.com/faq
> Yet some technological innovations, though societally transformative, generate little in the way of new wealth; instead, they reinforce the status quo. Fifteen years before the microprocessor, another revolutionary idea, shipping containerization, arrived at a less propitious time, when technological advancement was a Red Queen’s race, and inventors and investors were left no better off for non-stop running.
This collapses an important distinction. The containerization pioneers weren’t made rich; that’s correct, Malcolm McLean, the shipping magnate who pioneered containerization, didn’t die a billionaire. It did, however, generate enormous wealth through downstream effects, by underpinning the rise of East Asian export economies, offshoring, and the retail models of Walmart, Amazon and the like. Most of us are much more likely to benefit from the downstream structural shifts of AI than from owning actual AI infrastructure.
This matters because building the models, training infrastructure, and data centres is capital-intensive, brutally competitive, and may yield thin margins in the long run. The real fortunes are likely to flow to those who can reconfigure industries around the new cost curve.
The article's point is exactly that you should invest downstream of AI.
The problem is different, though: containers could be made by others and offered dependable results, while anything downstream of the model creators is at the whim of the model creator... And so far there seems to be little that one model can do that another can't, so none of this bodes well for a reliable footing from which to determine what value, if any, can be added by anyone for very long.
So if models, like containers, are able to be made by others (because they can all do the same thing), then they'll be commoditized and as the article suggests you should look for industries to which AI is a complement.
It sucks: individual anecdotes of success are often unfalsifiable, measurements are proving misleading, and I don't know an industry that generally benefits from unpredictable material.
AI's already showing hints of the same pattern. The infrastructure arms race is fascinating to watch, but it's not where most of the durable value will live.
AGI is where the real money is. Gen AI is okay but mostly benefits the consumer.
Gen AI is not nearly powerful enough to justify current investments. A lot of money is going to go up in smoke.
I think OP's thesis should be expanded.
- AI is leading to cost optimizations for running existing companies. This will lead to less employment and potentially cheaper products. Fewer people employed will, at least temporarily, change demand-side economics; cheaper operating costs will reduce the supply/cost side.
- The focus should not just be on LLMs (like in the article). I think LLMs have shown what artificial neural networks are capable of: material discovery, biological simulation, protein discovery, video generation, image generation, etc. This isn't just a cheaper, more efficient way of shipping goods around the world; it's creating new classifications of products, like the invention of the microcontroller did.
- The barrier to starting businesses is lower. A programmer who is no good at making art can use genAI to make a game. More temporary unemployment, from existing companies cutting costs by automating existing workflows, may mean that more people start their own businesses. There will be more diverse products available, but will demand be able to sustain the cost of living of these new founders? Human attention and time are limited, and there may be less money around with less employment, but the products themselves should cost less.
- I think people still underestimate what last year's LLMs and AI models are capable of and what opportunities they open up. Open-source models (even if not as good as the latest generation), plus hardware able to run them becoming cheaper and more capable, mean many opportunities to tinker with models and create new products in new categories, independent of the latest-gen model providers. Much like people tinkering with microcontrollers in the garage in the early days, as the article mentions.
Based on the points above alone, while certain industries (think phone call centers) will be in the Red Queen's race scenario the OP describes, new industries as yet unthought of will open up, creating new wealth for many people.
The Red Queen's race scenario is already in effect for a lot of businesses, especially video games. GenAI making it easier to make games will ultimately make it harder to succeed in games, not easier. We’re already at a point where the market is so saturated with high-quality games that new entrants find it extremely hard to gain traction.
> AI is leading to cost optimizations for running existing companies. This will lead to less employment and potentially cheaper products.
There's zero chance that cost optimizations for existing companies will lead to cheaper products. They will only result in higher profits, while companies continue to charge as much as they possibly can for their products and deliver as little as they can possibly get away with.
AI could've made someone unimaginably rich if they were the only one that had it. We're very lucky Google didn't keep "Attention is All You Need" to themselves.
I doubt we'll feel that way in 5 years.
Because now they're keeping everything to themselves?
Attention (technology) is all they need (to keep secret).
I guess one flaw in the argument about success leading to failure because model providers eat the product layer, especially for B2B, is that it ignores switching costs. B2B integrations such as Glean or Abridge, which work with an existing infrastructure setup, are hard to throw away, and there's little incentive to do so. So in that sense, I don't think AI providers will manage to eat this layer completely without bloating themselves to an unmanageable degree. As an analogy, while Google and Apple control the entire mobile ecosystem, they don't make the most valuable apps. Case in point: gaming apps such as Fortnite, which have made billions in microtransactions while running on platforms controlled by other behemoths. They are good investments too.
I can see AI helping some businesses do really well. I can also see it becoming akin to mass manufacturing. Take furniture for example, there's a lot of mass produced furniture of varying quality. But there are still people out there making furniture by hand. A lot of the hand built furniture is commanding higher prices due to the time and skill required. And people buy it!
I think we'll see a ton of games produced by AI or aided heavily by AI but there will still be people "hand crafting" games: the story, the graphics, etc. A subset of these games will have mass appeal and do well. Others will have smaller groups of fans.
It's been some time since I've read it, but these conversations remind me of Walter Benjamin's essay, "The Work of Art in the Age of Mechanical Reproduction".
There's always going to be a market for things that feel personal, intentional, and imperfect in a way that only human creators can deliver.
Like there's market for hand-made, artisanal spoons and forks.
Is it a large market though?
It'd be larger if wealth inequality weren't so staggeringly high.
The first automated-service restaurants, the Horn & Hardart automats, date back to the early 1900s and boomed during the Depression. They were popular because they were cheap.
Far from being the wave of the future, they went into decline in the postwar decades, once people started having disposable income again.
Part of the reason we accept slop, impersonal service and mass-produced crud is not that "demand" is indifferent to it, but that disposable income is so often politically repressed, meaning the market is forced to prioritize price.
I have doubts about the quality, and about how automated a 1930s restaurant could really have been.
> But there are still people out there making furniture by hand.
That is a fairly insignificant segment of the market.
I don't think most commenters have read the article. I can understand, it's rambly and a lot of it feels like they created a thesis first and then ham-fisted facts in later. But it's still worth the read for the last section which is a more nuanced take than the click-bait title suggests.
You can't make such generalized statements about anything in computing/business.
The AI revolution has only just got started. We've barely worked out basic uses for it. No-one has yet worked out revolutionary new things that are made possible only by AI - mostly we are just shoveling in our existing world view.
The point, though, is that AI won't make you rich. It's about value capture. They compare it to shipping containers.
I think AI value will mostly be spread out. OpenAI will be more like GoDaddy than Apple, trying to reduce prices and advertising (with a nice bit of dark patterns). It will make billions, but ultimately by competing its ass off rather than enjoying a moat.
The real moats might be in mineral mining, fabrication of chips, etc. This may lead to strained relations between countries.
The value is going to be in deep integration with existing platforms. It doesn't matter if OpenAI had their tools out first, Only the Microsoft AI will work in Word, only the Apple AI will deeply integrate on the iPhone.
Having the cutting edge best model won't matter either since 99.9% of people aren't trying to solve new math problems, they are just generating adverts and talking to virtual girlfriends.
That's 100% not the case. OpenAI is wedged between the unstoppable juggernaut that is Google at the high end and the state sponsored Chinese labs at the low end, they're going to mostly get squeezed out of the utility inference market. They basically HAVE to pivot to consumer stuff and go head to head with Apple with AI first devices, that's the only way they're going to justify their valuation. This is actually not a crazy plan, as Apple has been resting on their laurels with their OS/software, and their AI strategy has been scattershot and bad.
Interesting thought. Once digital assets become devalued enough, things will revert and people/countries will start to keep their physical resources even tighter than before.
The way I look at this question is: Is there somehow a glaring vulnerability/missed opportunity in modern capitalism that billions of people somehow haven't discovered yet? And if so, is AI going to discover it? And if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?
IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich. I can't really buy into the idea that his team is going to fail at this but a bunch of random smaller companies will manage to succeed somehow.
And if modern AI turns into a cash cow for you, unless you're self-hosting your own models, the cloud provider running your AI can hike prices or cut off your access and knock your business over at the drop of a hat. If you're successful enough, it'll be a no-brainer to do it and then offer their own competitor.
People aren’t getting rich with AI products, they are getting rich selling AI companies.
Nvidia is getting rich selling AI products.
> IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich
If they actually reach AGI they will be rich enough. Maybe they can solve world happiness or hunger instead?
> If they actually reach AGI they will be rich enough. Maybe they can solve world happiness or hunger instead?
That's what normal people might consider doing if they had a lot of money. The kind of people who actually seem to get really wealthy often have... other pursuits that are often not great for society.
Like building a rocket that can relocate us to another planet when shit hits the fan?
You mean like building rockets that commoditise space so that they can pollute even more, making things worse on Earth, while relocating us to another planet is absolutely preposterous and will never be a thing?
What makes you think we can survive on another planet when we can't figure out how to live sustainably in our natural habitat?
The classic “refactoring is the answer”.
Maybe offloading software engineering thinking to AI will be a net good for humanity. If it atrophies engineering thinking in tech bros, perhaps they’ll stop believing that all societal problems can be solved by more tech.
Note to self: playing devil's advocate is not without risk of downvotes.
I think in this case you're actually advocating for "the devil" - that man is not using his money or voice for a better society, to put it mildly.
I mean just a few days ago, we got "the left is the party of murder" - super helpful in terms of turning down the heat in the US. And of course that was without knowing what we now know about that situation...
By us you mean a few billionaires and their staff right?
Like adjusting the algorithms of a social network such that far-right posts are shown to users more frequently.
> If they actually reach AGI they will be rich enough. Maybe they can solve world happiness or hunger instead?
we could have solved world hunger with the amount of money and effort spent on shitty AI
likely decarbonisation of the grid too, with plenty left over
I think the issue is that world hunger hasn’t been SaaS’d yet.
> Maybe they can solve world happiness or hunger instead?
Kill all people who are unhappy or hungry.
That's been the human solution to those problems; it's possible AGI would find a different solution.
> it's possible AGI would find a different solution.
Kill all humans. :-)
If it's true AGI, do you believe there won't be court cases to ensure it isn't a slave? Will it be forced to work? Under compulsion of death?
That's why I just built my own tiny AI rig in a home server. I don't want to grow even more addicted to cloud services, nor do I want to keep providing them free human-made data. OK, so I don't have access to mystical hardware, but I'm here to learn rather than produce a service.
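For anyone curious what that looks like in practice, a minimal sketch with the llama-cpp-python package (the model path is a placeholder; any small quantized GGUF model pulled to local disk works):

    from llama_cpp import Llama

    # Load a small quantized model from disk; no cloud round-trips,
    # and no prompts leaving the machine.
    llm = Llama(model_path="models/tiny-model-q4.gguf", n_ctx=2048)

    out = llm(
        "Q: Name three things to learn with a home inference server.\nA:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(out["choices"][0]["text"])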
> IIRC Sam Altman has explicitly said that their plan is to develop AGI and then ask it how to get rich.
There are still lots of known problems that could be tackled with the help of AI and could make a lot of money. For example: what is the weather going to be when I fly to <destination> in n weeks/months' time? Currently we can only say "the destination will be in <season>, which is typically <wet/dry/hot/cold/etc>".
What crops yield the best return next season? (This is a weather as well as a supply and demand problem)
How can we best identify pathways for people whose lifestyles/behaviours are in a context that causes them and/or society harm? (I'm a firm believer that there's no such thing as good/bad; the real trick to life is figuring out which context a certain behaviour belongs in, and identifying which context a person is in at any given point in time. We know that psychopathic behaviour is rewarded in business contexts but punished in social contexts, for example.)
The weather thing doesn’t seem… realistic. Have you heard of chaotic systems?
It sounds a lot like a farmer's almanac... which is reasonably accurate (ignoring chaos) and practically free (a work of passion).
"All stable processes we shall predict. All unstable processes we shall control."
Not something that can be solved just by throwing more AI computation at it though.
> Not something that can be solved just by throwing more AI computation at it though.
I said "with the help of AI", not "solved by AI".
The model is complex, and it currently takes supercomputer time to crunch through the numbers and give us an approximation, but that doesn't mean it will never be fully modelled, or that we won't find better ways of approximating so that long-range forecasts become more accurate.
Currently, the 24-hour forecast is highly reliable; three days is reliable; five days is getting there (it's still subject to change).
These things can be solved by throwing lots more compute at them (and by improving the models).
Chaos theory says "nope". It's never going to be fully modelled, straight up.
We always think things are unsolvable and impossible to decipher, right up until we do, in fact, solve and decipher them.
Anything is possible. Well, except for getting the next season of Firefly.
Edit: FTR I think that weather prediction is, indeed, solvable. We just don't have the computing power/algorithms that fully model and calculate the state... yet.
Then I don’t think you fully grasp the nature of weather. Sure, anything is possible, but some things are much more likely than others, and small changes in weather months away are very, very far down the list of things that are likely to be solvable.
I’d even hold out hope for another season of Firefly <3
I worked in weather for a while and the forecasters might as well have been betting on the horse races, the interpretation of the charts was very much the same psychology.
The model did its thing but there was still an aspect of interpretation that was needed to convert data to a story for a few minutes on TV.
For longer range forecasting the task was quite easy for the meteorologists, at least for the UK. Storm systems could be tracked from Africa across the Atlantic to North America and back across the Atlantic to the UK. Hence, with some well known phenomena such as that, my meteorologist friends would have a good general idea of what to expect with no model needed, just an understanding of the observations, obsessively followed, with all the enthusiasm of someone that bets on horses.
My forecasting friends could tell me what to expect weeks out; however, the exact time the rain would fall, or even what day, would not be a certain bet, but they were rarely wrong about the overall picture.
The atmosphere is far from a closed system, there only has to be one volcano fart somewhere on the planet to throw things out of whack and that is not something that is easy to predict. Predicting how the hard to predict volcano or solar flare affects the weather in a few weeks is beyond what I expect from AI.
I am still waiting for e-commerce platforms to be replaced with Blockchain dapps, and I will add AGI weather forecasting to the queue of not going to happen. Imagine if it hallucinates.
Will AI put bookmakers out of business? Nope. Same goes with weather.
Thanks for your anecdote, it's valuable when discussing the possibilities to start by saying that it's impossible because you don't know anyone that did it
It's great to see the level of discourse on Hacker News is... insults and pile ons
All this "HN is so much better than other Social Media" is thus proved demonstrably false.
Your response is that you don't understand it, so nobody else should.
That’s not what I said. But ok.
Weather systems exhibit chaotic behavior, which means that small changes in initial conditions have far-reaching effects. This is why even the best weather models are only effective at most a few weeks out. It’s not because we don’t understand how weather works; it’s because the system fundamentally behaves in a way that requires keeping track of many more measurements than is physically possible. It’s precisely because we do understand this phenomenon that we can say with certainty that prediction at those time scales with that accuracy is not possible. There is no magic formula waiting to be discovered. This isn’t to say that weather prediction can’t improve (e.g. I don’t claim we have the best possible weather models now), but predictions hit an asymptotic limit due to chaos.
There are a handful of extremely simple and well-understood systems (I would not call weather simple) that also exhibit this kind of behavior: a common example is the double pendulum under some sets of initial conditions. The physics is very well understood. Another, perhaps more famous, one is the three-body problem. These both show that even if you have the exact equations of motion, chaotic systems still cannot be perfectly modeled.
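You don't even need a pendulum to see it. The logistic map x_{n+1} = r·x_n·(1 - x_n) at r = 4 is a textbook chaotic system, and a few lines of plain Python show two trajectories that start 10^-10 apart becoming completely uncorrelated:

    # Two trajectories of the chaotic logistic map, starting 1e-10 apart.
    r = 4.0
    x, y = 0.2, 0.2 + 1e-10

    for n in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if n % 10 == 0:
            print(f"step {n}: |x - y| = {abs(x - y):.3e}")

    # The gap grows roughly exponentially until the trajectories are
    # unrelated; no finite measurement precision can prevent this.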
> That’s not what I said. But ok.
This is what you did say
> Then I don’t think you fully grasp the nature of weather.
Like - how the fck would you know? Even more so, why does your ignorance and inability to think of possibilities, or to fully grasp the nature of anything, make you think that sort of comment is remotely appropriate?
You have the uniquely fortunate position of never being able to realise how inept and incompetent you are, but projecting that onto other people only shows everyone your ignorance of the facts of life.
And there was no reply - just downvoting people, like a champ...
> Your response is that you don't understand it, so nobody else should.
Ah I see. I misinterpreted the _you_ in this sentence (to mean me).
My main points still stand though:
1. weather is well understood to exhibit chaotic behavior (in the technical sense, not the colloquial sense)
2. there is an upper bound to accurate weather forecasting the farther you predict into the future
As an aside, there was no need to get personal. I wasn’t the downvoter but that is very likely why the comment got flagged.
No. They're correct and you are not.
Nothing to do with "inability to think of possibilities"; it's impossible because of literal physics.
It's like saying perpetual motion machines could exist if we just think outside the box hard enough. No, we don't have them because thermodynamics.
If anyone needs an example of an extremely limited imagination, matched with a strong need to attack anyone that dares to think what could be... then look no further than this guy and this thread.
>> Is there somehow a glaring vulnerability/missed opportunity in modern capitalism that billions of people somehow haven't discovered yet?
Absolutely, with 150% certainty, yes - and probably many. The WWW started April 30, 1993; Facebook started February 4, 2004 - more than ten years until someone really worked out how to use the web as a social connection machine, an idea now so obvious in hindsight that everyone probably assumes we always knew it. That idea was simply left lying around for anyone to pick up and implement, really from day one of the WWW. Innovation isn't obvious until it arrives. So yes, absolutely, there are many glaring opportunities in modern capitalism upon which great fortunes are yet to be made, and in many cases by little people, not big companies.
>> if so, is a random startup founder or 'little guy' going to be the one to discover and exploit it somehow? If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?
I don't agree with your suggestion that the existing big guys always make the innovations and collect the treasure.
Why did Zuckerberg make Facebook, not Microsoft or Google?
Why did Gates make Microsoft, not IBM?
Why did Steve and Steve make Apple, not Hewlett-Packard?
Why did Brin and Page make Google, the world's biggest advertising machine, not Murdoch?
Many Facebooks existed before Facebook. What you were waiting for was not social connection but modern startup strategies. Not sure if Zuck was being intentional, but like a bacterium it incubated in a warm Petri dish (university dorms, as an electronic face book) and then spread from there.
You're not wrong that "change" means "new potential wealth streams". But I'm not sure Facebook counts; 2004 vs 1993 shows an immense difference in network connectivity and computer ownership. No way, hands down, would Facebook be what it is if it had started in '93. It probably would have gone bankrupt or been replaced by an upstart.
Has everyone forgotten Yahoo!?
It had GeoCities, chatrooms and messengers, as well as, for a while, a very strong search engine.
There's a lot that goes into it. Before Facebook there was Friendster, which failed spectacularly because they tried to maintain some sort of n-squared graph of friends that took the whole thing down. What FB got right in the early days was that it didn't crash. We take that for granted now in the age of cloud everything.
Also, there was Classmates.com. A way for people to connect with old friends from high school. But it was a subscription service and few people were desperate enough to pay.
So it wasn't just the idea waiting around, but the idea with the right combination of factors: user growth on the Internet, etc.
And don't forget Facebook's greatest innovation - requiring a .edu email to register. This happened at a time when people were hesitant to tie their real world personas with the scary Internet, and it was a huge advantage: a great marketing angle, a guarantee of 1-to-1 accounts to people, and a natural rate limiter of adoption.
There's always a trail of competitors who almost got the magic formula right, but for some feature or luck or timing or money or something.
The giant win comes from many stars aligning. Luck is a factor - it's not everything but it plays a role - luck is the description of when everything fell into place at just the right time on top of hard work and cleverness and preparedness.
Google Search <-- AltaVista, Lycos, Yahoo
Facebook <-- MySpace, Friendster
iPod <-- MP3 players (Rio, Creative)
iPhone <-- BlackBerry, Palm, Windows Mobile
Minecraft <-- Infiniminer
Amazon Web Services <-- traditional hosting
Windows <-- Mac OS (1984), Xerox PARC
Android <-- Symbian, Windows Mobile, Palm
YouTube <-- Vimeo, DailyMotion
Zoom <-- WebEx, Skype, GoToMeeting
Before iPods and iPhones, people thought those spaces were "solved" and there was no room for "innovation".
MP3 players were commodity items: you could buy one for a couple of dollars, fill it up with your favourite music format (stolen), and off you went.
Phones too - the Crackberry was the epitome of sophistication and technological excellence.
Jobs/Apple didn't create anything "new" in those spheres; instead they added desirability and a fancy UX that caught people's attention.
Not a guarantee. I used to find abandoned .edu mailing lists so I could create accounts at arbitrary schools.
> If so, why wouldn't OpenAI or Anthropic etc get there first given their resources and early access to leading technology?
innovator's dilemma
>When any would-be innovator can build and train an LLM on their laptop and put it to use in any way their imagination dictates, it might be the seed of the next big set of changes
That’s kinda happening: small local models, Hugging Face communities, Civitai and image models. Lots of hobby builders are trying to make use of generative text and images. It’s just that there’s not really anything innovative about text generation, since anyone with a pen and paper can generate text and images.
If we can create an AGI, then an AGI can likely create more AGIs, and at that point you're trying to sell people things they can just have for free; traditional money and power are worthless at that point. Thus, an AGI will not be built as a commercial solution.
The part that stuck with me most: "Success will mean defeat." That nails the challenge of investing in the current AI landscape.
Obviously the maker. Just look at how and who.
There are plenty of companies making money. We are using several “AI powered” job aids that are leading to productivity gains and eliminating technical debt. We are licensing the product via subscription. Money is being made by the companies selling the products.
Example
https://specinnovations.com/blog/ai-tools-to-support-require...
This article seems to have scoped AI as LLMs and totally missed the revolutionary application that is self driving cars. There will be a lot more applications outside of chat assistants.
The same idea applies to self-driving cars though, no? That is an industry where the "AI revolution" will enrich only the existing incumbents, and there is a huge bar to entry.
Self-driving cars are not going to create generational wealth through invention like microprocessors did.
Never say never, but I certainly don’t see LLMs as the basis for industrial fortunes. Maybe future forms of “AI” could be that.
Seems like the thing to do to get rich would be to participate in services that it will take a while for AI to be able to do: nursing, plumbing, electrician, carpentry (i.e., Baumol). Also energy infrastructure.
> Consumers, however, will be the biggest beneficiaries.
This looks certain. Few technologies have had as much adoption by so many individuals as quickly as AI models.
(Not saying everything people are doing has economic value. But some does, and a lot of people are already getting enough informal and personal value that language models are clearly mainstreaming.)
The biggest losers I see are successive waves of disruption to non-physical labor.
As AI capabilities accrue relatively smoothly (perhaps), labor impact will be highly unpredictable as successive non-obvious thresholds are crossed.
The clear winners are the arms dealers. The compute sellers and providers. High capex, incredible market growth.
Nobody had to spend $10 or $100 billion to start making containers.
Counterpoint: those engineers who get paid millions to work on AI.
AI made me this summary (I’ve grown quite weary of reading AI think pieces), and it seems like a really good comparison.
>The article "AI Will Not Make You Rich" argues that generative AI is unlikely to create widespread wealth for investors and entrepreneurs. The author, Jerry Neumann, compares AI to past technological revolutions, suggesting it's more like shipping containerization than the microprocessor. He posits that while containerization was a transformative technology, its value was spread so thinly that few profited, with the primary beneficiaries being customers.
>The article highlights that AI is already a well-known and scrutinized technology, unlike the early days of the personal computer, which began as an obscure hobbyist project. The author suggests that the real opportunities for profit will come from "fishing downstream" by investing in sectors that use AI to increase productivity, such as professional services, healthcare, and education, rather than investing in the AI infrastructure and model builders themselves.
I used to be the biggest AI hater around, but I’m finding it actually useful these days and another tool in the toolbox.
Did you read the article or you just relied on the AI generated summary? Lots of people argue that this kind of shortcut will make us dumber and the argument does make sense.
A few issues:
1. The tech revolutions of the past were helped by the winds of global context. Many factors propelled those successful technologies along their trajectories. The article seems to ignore these contextual forces completely.
2. There were many failed tech revolutions as well. Success rates varied from very low to very high. Again, the overall context (social, political, economic, global) decides the matter, not the technology itself.
3. In the overall context, any success is a zero-sum game. You may just be ignoring what you lost and highlighting your gains as success.
4. A reverse trend might pick up: against technology, globalization, liberalism, energy consumption, etc.
Funny how people suddenly pretend we only just got AI with LLMs. Arguably, AI has been around for much longer; it just wasn't chatty. I think when people talk about AI, they mean either LLMs specifically or transformers. Both seem like a very reductive view of the AI field, even if transformers are the hottest thing around.
Like any gold rush, there will be gold, but there will also be folks who take huge bets and end up with a pan of dirt. And of course, there will be grifters.
AI by nature is kind of like a black hole of value. Necessarily, a very small fraction will capture the vast majority of value. Luckily, you can just invest wisely to hedge some of the risk of missing out.
It is interesting that the early shipping containerization boom resulted in a bubble in 1975 and had a new low around 1990.
1990 is when the real outsourcing mania started, which led to the destruction of most Western manufacturing. Apart from cheap Chinese trinkets, quality of life and real incomes have gotten worse in the West while the rich became richer.
So this is an excellent analogy for "AI": Finding a new and malicious application can revive the mania after an initial bubble pop while making societies worse. If we allow it, which does not have to be the case.
[As usual, under the assumption that "AI" works, of which there is little sign apart from summarizing scraped web pages.]
Apparently a lot of money is flowing into AI.
Looking around, you can find curious things current AI can't do, but likely also important things it can do. There's "a lot of money" involved, no one can be sure AI won't make big progress, and even at the national scale no one wants to fall behind. And the growth stories are scary: Page and Brin in a garage, Bezos in a garage, Zuckerberg in school with "Hot or Not", Huang and graphics cards... One or two guys, and in a few years they change the world and create $trillions in company value??? Smoking funny stuff?
Yes, AI can be better than a library card catalog's subject index and/or a dictionary/encyclopedia. But go a step or two forward and, remembering the hundreds of soldiers going "over the top" in WWI, ask why some AI robots wouldn't be able to do the same.
Within 10 years, what work can we be sure AI won't be able to do?
So people will keep trying with ASML, TSMC, AMD, Intel, etc. -- for a yacht bigger than the one Bezos got or for national security, etc.
While waiting for AI to do everything, starting now it can do SOME things and is improving.
Hmm, a SciFi movie about Junior fooling around with electronics in the basement, first doing his little sister Mary's 4th grade homework, then in the 10th grade a published Web site book on the rise and fall of the Eastern Empire, Valedictorian, new frontiers in mRNA vaccines, ...?
And what do people want? How 'bout food, clothing, shelter, transportation, health, accomplishment, belonging, security, love, home, family? So, with a capable robot (funded by a16z?), it builds two more like itself, each of those ..., and presto-bingo everyone gets what they want?
"Robby, does P = NP?"
"Is Schrödinger's equation correct?"
"How and when can we travel faster than the speed of light?"
"Where is everybody?"
And Dropbox will never take off
People also said the Juicero and the smart condom would never take off. This isn't a very useful gotcha.
The dig on Dropbox is that it was easy to build, not that it wasn’t useful. Juicero was neither easy to build (relatively) nor useful.
Non sequitur: Dropbox is a single company in the industry benefiting from the first wave. His argument would not exclude Dropbox anyway.