I think you might’ve missed this part from the post:
> AI-generated slop his non-technical boss is generating
It’s his boss. The type of boss who happily generates AI slop is likely to be the type of person who wants things done their way. The employee doesn’t have the power to block the merge if the boss wants it, thus the conversation on why it shouldn’t be merged needs to be considerably longer (or they need to quit).
Why would he merge any code from his non-technical boss? Writing code obviously isn't the boss's role, so I don't know why he would expect his code to get merged all of a sudden. Just straight up tell him: this isn't useful, please stop.
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it
I see this in code reviews, where AI tools like code-rabbit and greptile produce workslop in enormous quantities. Just reading the nicely formatted BS these tools put out sucks up an enormous amount of human energy, all for the occasional nugget that turns out to be useful.
I largely agree. As a counterpoint, today I delivered a significant PR that was accepted easily by the lead dev with the following approach:
1. Create a branch and vibe code a solution until it works (I'm using codex cli)
2. Open a new PR and slowly write the real PR myself, using the vibe code as a reference but cross-referencing against the existing code.
This involved a fair few concepts that were new to me, but had precedent in the existing code. Overall I think my solution was delivered faster and of at least the same quality as if I'd written it all by hand.
I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous, non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution.
The best way I've found to use LLMs for writing anything that matters is, after feeding them the right context, to take the output and retype it in your own words. The LLM has helped capture your brain dump and organize it, but by forcing yourself to write it rather than copy and paste… you get to make it your own. This technique has worked quite well in domains I'm not the best at yet, like marketing copy. I want my shit to have my own voice, but I'm not sure what to cover… so I let the LLM help me with what to cover and then rewrite its work.
Sadly, I've seen multiple well-known developers here on HN argue that reading code in fact isn't hard and that it's easy to review AI-generated code. I think fundamentally what AI-generated code is doing is exposing the cracks in many, many engineers across the board that either don't care about code quality or are completely unable to step back and evaluate their own process to see if what they're doing is good or not. If it works it works and there's no need to understand why or how.
I think this is equally true of writing. Once you see something written one way, it's very hard to imagine other ways of writing the same thing. The influence of anchoring bias is quite strong.
A strong editor is able to overcome this anchoring bias, imagine alternative approaches to the same problem, and evaluate them against each other. This is not easy and requires experience and practice. I am starting to think that a lot of people who "co-write" with ChatGPT are seriously overestimating their own editing skills.
Reviewing code is basically applying the Chesterton’s fence principle to everything. With AI code there’s typically so much incidental noise that trying to identify intention is a challenge.
But then again I’ve found a lot of people are not bothered by overly convoluted code that is the equivalent of using a hammer for screws either…
Worse - there is no actual intention, so attempting to grok it from the code is even more wasted energy.
You have to nitpick everything, because there is no actual meaningful aim that is consistent.
I ran across an outsourcer that did the same thing about 20 years ago (near as I could tell he was randomly cutting and pasting random parts of stack overflow answers until it compiled!). We got him away from the code base/fired ASAP because he was an active threat to everyone.
The article as I see it is just one paragraph that ends with "So much activity, so much enthusiasm, so little return. Why?" Is there more if you're a subscriber to Harvard Business Review?
The AI revolution 2022-2030 is a speed run of the IT revolution of 1970-2000. In other words, how to 1000x management overhead while reducing real productivity, meanwhile skyrocketing nominal productivity.
It'll be very funny if any AI productivity gains are balanced by productivity loss due to slop - all the while using massive amounts of electricity to achieve nothing.
I don’t think AI slop is inherently mandatory, but I worry that the narrative around AI will devalue engineering work enough that it becomes impossible to avoid.
I've spent 16 years in this field, and I won't exactly be cheering if we hit a wall with AI.
I love programming, but I also love building things. When I imagine what an army of mid-level engineers would let me build, engineers that genuinely only need high-level instruction to reliably complete tasks and don't require raising hundreds of millions or becoming beholden to some 3rd party... I get very excited.
Programming is almost never an objective in itself, it's a stepping stone to some other task. It's nice it pays a good living, but I have a feeling that's going away eventually.
Is it the “workslop” that is causing the problem, or the slop that companies demand and that passes for work in the first place? Really wanna summon the ghost of David Graeber (“Bullshit Jobs”) here: if you're a manager who demands that your employees produce PowerPoints about the TPS reports, you probably shouldn't be surprised when you get meaningless LLM argle-bargle in return.
The thing about companies asking for slop is that a middle manager maintaining the usual stream of vacuous text is a proxy for that person paying attention to a given set of problems and alerting others. AI becomes a problem because someone can maintain a vacuous text stream without that attention.
And this is my real fear with the current crop of AI. That rather than improving the system to require less junk, we just use AI to generate junk and AI to parse AI-generated junk that another AI summarises ad infinitum…
Like, instead of building a framework that reduces boilerplate code, we use an LLM to generate that boilerplate instead (in a more complex and confusing manner than using a traditional template would).
Or, when dealing with legal or quasi-legal matters (like certifications),
1. Generate AI slop to fill a template
2. Use AI to parse that template and say whether it’s done
3. Use another to write a response.
Lots of paperwork happening near instantly, mostly just burning energy and not adding to either the body of knowledge or getting to a solution.
AI is functionally equivalent to disinformation as it automates the dark matter of communication/language, transfers the status back to the recipient, it teaches receivers that units contents are no longer valid in general and demands a tapeworm format to replace what is being trained on.
Use your imagination. Any event that cannot be defined by low dimensional meaning (which there are myriad). The problem with the distinctions between event perception (what causes a car crash) and what AI assimilates is the arbitrary bottleneck of words, causal statements, images, deep reductions which are possible illusions of semantics like beliefs, motivations, desires.
Humans can take in a remarkable array of stimuli in order to perform tasks that are not cause and effect through optic flow. AI is stuck behind a veil of predicted tokens.
In essence, AI cannot automate mimicry of understanding even when the magic act demonstrates. Events are already tapeworms, but our skillset is so stuck behind the veil of folk psychology and f science, we are merely pretending we understand these events.
So a tapeworm format is probably somewhat like a non-causal contradictory event that may have infinite semantic readings. Think edges of paradox: Kubrick, Escher, Cezanne, Giorgione, Song Dynasty landscapes, Heraclitus, find the Koanic paradoxes of the East and keep conjoining them. Think beyond words, as thoughts are wordless, the tapeworms are out there, math doesn't and can't see them.
This is what's so jarring about meme-culture, which are tapeworms to AI as they are tapeworms to narrative containment culture, what might be viewed as academia/intelligensia/CS engineering/plain English media, the tapeworms are here, CS I think assumes the causal/semantic linkages are simple, meaning is low-bandwidth, but the reality is semantic is both limitless (any event) and illusory (meme-culture ie several memes in sequence). AI has no path in either case.
There are vast, unexplored sums of knowledge that LLMs can't automate, mimic, or regurgitate. This is obvious simply from meme-culture. Try asking ChatGPT to derive meaning from Kirk's shooter's casing engravings. Listen to the explanations unravel.
Once you attach the nearly limitless loads of meaning available to event-perception (use cognitive mapping in neuroscience where behavior has no meaning, it simply has tasks and task demands vary wildly so that semantic loads are factors rather than simple numbers), LLMs appear to be like puppets of folk psychology using tokens predictably in embedded space. These tokens have nothing to do with the reality of knowledge or events. Of course engineers can't grasp this, you've been severely limited to using folk psychology infected cog sci as a base of where your code is developed from, when in reality, it's almost totally illusory. CS has no future game in probability, it's now a bureaucracy. The millions or billions of parameters have zero access to problems like these that sit beyond cog sci, I'll let Kelso zing it
No one — human or LLM — actually knows the meanings of the phrases that Tyler James Robinson wrote on his cartridge casings. There's lots of speculation but he isn't talking, and even if he was we wouldn't know whether he was telling the truth. If you want us to take you seriously then you'll have to come up with a valid example instead of posting a bunch of pseudo-intellectual drivel.
You're proving me correct. The pseudoscience is CS, it has no game in events. The interdisciplinary search for semantics derived from events isn't pseudo-intellectual drivel, it's the central quest of key sciences and subfields that range into neuroscience.
Of course we can concatenate meanings from his behavior and clues, but these meanings are not accessible in AI or narratives. You're essentially throwing in the towel as proof legacy explanations have died in automation in AI.
Face it CS, your approach is bureaucratic, for enforcing the dead status-quo of knowledge, not for the leading edges.
The problem with most corporate work that these managerial idiots want replaced with AI is that it is all so utterly useless. Reports written that no one will ever read, presentations made for the sake of the busy-ness of "making a deck", notes and minutes of meetings that should never have taken place in the first place. Summaries written by AI of longer-form work that are then shoved into AI to make sense of the AI-written summary.
I like the quote in the middle of the article: "creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces". I believe that orgs that fall back on the AI lie, who insist on schlepping slop from one side to the other, will be devoured by orgs that see through the noise.
It's like code. The most bug-free code are those lines that are never written. The most productive workplace is the one that never bothers with that BS in the first place. But, promotions and titles and egos are on the line so...
AI in its current form, like the swirling vortex of corporate bilge that people are forced to swim through day after day after day, can't die fast enough.
> Summaries written by AI of longer-form work that are then shoved into AI to make sense of the AI-written summary.
Also the problem where someone has bullet-points, they fluff them up in an LLM, send the prose, and then the receiver tries to use an LLM to summarize it back down to bullet-points.
I may be over-optimistic in predicting that eventually everyone involved will rage-flip the metaphorical table, and start demanding/sending the short version all the time, since there's no longer anything to be gained by prettying it up.
So many times you write a document for someone to review, and they question whether it could have been written better. Yet the reader will now just ask the AI for bullet points. I was hoping people would go: right, let's just start writing bullet points from the get-go and not bother with a full document.
Once you allow AI to replace the process, you kind of reveal that the process never mattered to you. If you want a faster pace at the expense of other things, you don't need to pay for AI; just drop the unnecessary process.
I feel AI is now just a weird excuse, like you're pretending you haven't lowered the quality and stopped writing proper documents, professional emails, and full test suites, or stopped properly reviewing each other's code. No, you still do all this, just not you personally; it's "automated".
It's like cutting corners but being able to pretend like the corner isn't cut, because AI still fully takes the corner :p
So true. We used to appoint someone in the group to take notes. These notes were always correct, to the point, short and easy to read.
Now our manager(s) are heavily experimenting with recording all meetings and desperately trying to produce useful reports using all sorts of AI tools. The output is always lengthy and makes the manager super happy. Look, amazing reports! But on closer inspection they're consistently incomplete one way or another, sometimes confidently incorrect and full of happy corpo mumbo jumbo. More slop to wade through, when looking for factual information later on.
Our manager is so happy to report that he's using AI for everything. Even in cases where I think completeness and correctness are important. I honestly think it's scary how quickly that desire for correctness is gone and replaced with "haha this is cool tech".
Us devs are much more reluctant. We don't want to fall behind, but in the end when it comes to correctness and accountability, we're the ones responsible. So I won't brainlessly dump my work into an LLM and take its word for granted.
It's their company; we just work at it. If we want to exert more control in the workplace, we obviously need more power in the workplace. In the meantime, if they want the equivalent of their company's prefrontal cortex to be burned out with a soldering iron, that's their prerogative.
The problem with corporate work is that it exists - that corporations exist.
You do have the option to spend your time elsewhere - if you can handle every NPC friend and family member thinking you've lost your mind when you quit that cushy corporate gig and go work a low status, low pay job in peace and quiet - something like a night time security guard.
If you were around for the heyday of Markov-chain email and Usenet spam, this whole thing is familiar. Sure, AI slop generation is not directly comparable to a Markov process, and the generated texts are infinitely smoother, yet it has a similar mental signature. I believe this similarity puts me squarely in the offended 22%.
My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too. Come annual review time, we need to write down all the ways AI made our work better. That positive outcome is pre-supposed: there doesn't seem to be any affordance for the case where AI actually makes your work worse or slower. I guess we're supposed to ignore those cases and only mention the times it worked.
It's kind of a mirror image of the global AI marketing hype-factory: Always pump/promote the ways it works well, and ignore/downplay when it works poorly.
Just ask an AI to write how it made you more productive in daily work. It's really good at that. You can pad it out to a million words by asking it to expand on each section with subsections.
If one works at a place like ryandrake's, for sure, so much this :) Also ask it to ultrathink and be super comprehensive; you'll be promoted in no time.
« AI has made me productive by writing most of the answer to this question. You may ignore everything after this sentence, it is auto-generated purely from the question, without any intersection with reality. »
I was in a lovely meeting where a senior "leader" was looking at effort estimates and said "Do these factor in AI-tools? Seems like it should be at least 30% lower if it did."
Like I use AI tools, I even like using them, but saying "this tool is so good it will cut our dev time by 30%" should be coming from the developers themselves or their direct manager. Otherwise they are just making figures up and forcing them onto their teams.
I was that manager. I dunno about your senior leader but with me it was coming from a healthy place. After a few months of ra-ra from the C suite about how we were now an AI-first company (we're a tech consultancy building one-off stuff for customers) and should be using it in all our customer projects, I asked the question, quite reasonably I thought, "so am I going to offer lower prices to my clients, or am I going to see much higher achieved margins on projects I sell?"
And, crickets. In practice I haven't seen any efficiencies despite my teams using AI in their work. I am not seeing delivery coming in under estimates, work costs what it always cost, we're not doing more stuff or better stuff, and my margins are the same. The only difference I can see is that I've had to negotiate a crapton of contractual amendments to allow my teams to use AI in their work.
I still think it's only good for demos and getting a prototype up and running which is like 5% of any project. Most technical work in enterprise isn't going from zero to something, it's maintaining something, or extending a big, old thing. AI stinks at that (today). You startup people with clean slates may have a different perspective.
Another possibility: your teams are working less now.
It's amazing how US business culture has reinvented Soviet Stakhanovism.
This is absolutely dead-on.
The President is a Soviet-planted saboteur, so it's not that surprising; it's coming from the top down. I assume this is the US manufacturing revolution he has in mind.
https://en.wikipedia.org/wiki/Stakhanovite_movement
>In 1988, the Soviet newspaper Komsomolskaya Pravda stated that the widely propagandized personal achievements of Stakhanov actually were puffery. The paper insisted that Stakhanov had used a number of helpers on support work, while the output was tallied for him alone.
What do you mean by this? My understanding is that Stakhanovism is kind of the opposite of US work culture, in that it lionizes the worker and social contributions.
Your understanding is somewhat incomplete. There is a strong top-down push to celebrate output maximization without inquiring too closely about whether this output is meaningful, valuable, repeatable, or even happened at all.
It's about making the productivity metrics go up, even at the expense of real productivity or common sense. Ironically (unironically?), the man it was named after faked his own metrics.
It doesn't seem to even be about making metrics go up. It's about telling a narrative-reinforcing story: That AI is great. It's worth it. Leadership is right to be obsessed with it.
I would have thought that in a fight between "fooling ourselves with a story" and "metrics go up", the metrics would win, but that seems not to be the case.
What you said is what I was thinking. Thanks for phrasing it so eloquently.
Same thing happened to work from home. Meta straight up sabotaged the reason for its own rebrand in service of it.
Ways AI has made me more productive: spellcheck has reduced the number of typos I've made in Slack threads by between 4 and 10%.
Fascinating example of corporate double-speak here!
> My management chain has recently mandated the use of AI during day-to-day work, but also went the extra step to mandate that it make us more productive, too.
Now they're on record as pro-AI while the zeitgeist is all about it, but simultaneously also having plausible deniability if the whole AI thing crumbles to ashes: "we only said to use it if it helped productivity!"
Do you see? They cannot be wrong.
> but also went the extra step to mandate that it make us more productive, too.
Before you make any decision, ask yourself: "Is this good for the company?"
Does your company have a stake in AI?
I suspect a lot of companies that go that route are pushing a marketing effort since they themselves have a stake in AI.
But I'd love to hear from truly customer-only businesses, where AI is pure cost with no upside unless it truly pays for itself in business impact. Are they, too, stuck in a loop of justifying the added cost to make their decision seem like a good one no matter what, or are they being more careful?
> Come annual review time, we need to write down all the ways AI made our work better.
That is where the AI comes into full use.
It's kind of a good way to make your business collapse, though, because figuring out the kinds of problems where LLMs are useful and where they'll destroy your productivity is extremely important.
This must be how conspiracy theorists feel. How could a whole class of people (the professional managerial class) all decide at once that AI is a wonderful tool we all must adopt now, that it's going to make all of us more productive, and that we're 100% certain about it? It boggles the mind. I'm sure it's just social contagion, hype, and profit motive, but it definitely feels like a conspiracy sometimes.
There’s no conspiracy.
The people making the decisions are the 5%; they delegate to managers, who delegate to their teams, and so on all the way down.
Decision makers (not the guy who thinks the corner radius should be 12 instead of 16, obviously) want higher ROI, and they see AI working for them for high-level stuff.
At the low level, things are never sane.
Before AI it was offshore. Now it’s offshore with AI.
Prepare for chaos, the machine priests have thrown open the warp gate. May the Emperor have mercy for us.
Just make shit up, or even better have the AI make shit up for you
The problem is, the shit that's made up will be used to justify the decision as a success and ensure the methodology continues.
If they’re mandating use like this I doubt it’s their only dysfunction. At least this one has a built in scapegoat.
Sounds like they are going to consider it a success no matter what.
Make it long enough that it's not worth any human's time reading it. Like full-on balls-to-the-wall slop. Pass it between different LLMs to add additional conclusions and subheadings.
In fact, do it in parallel, where one chatbot is adding another few pages here while, simultaneously and independently, another is adding different pages somewhere else, and then concatenate the results together.
Once you get about 25 pages of dense slop, just conclude that AI made writing this report 1000x more efficient.
Hah, that's not going to help, they're not going to read any individual response. They're going to feed the entire thing into the AI slop machine and ask it to generate them a summary. It's a slop human centipede.
Their ability to convince themselves they are geniuses far exceeds your ability to convince them they're not. They've already decided that the decision was a success. The only question is are you still going to be around to suffer the consequences when their delusions collide with objective reality?
I went through this shit a year ago. The reports had to be weekly, though.
Everything sounded very mandatory, but a couple of months later nobody was asking about reports anymore.
Just give them an AI generated response.
Sounds like a case of Republicanism.
News just in: Nvidia dumped $100B into OpenAI to pump the failing bubble.
Look, for most corporate jobs, there's honestly no way that you truly cannot find any kind or level of usage of AI tools to make you at least a bit more productive -- even if it's as simple as helping draft emails, cleaning up a couple lines of code here and there, writing a SQL query faster because you're rusty with it, learning a new framework or library faster than you would have otherwise, learning a new concept to work with a cross-functional peer, etc. It does not pass the smell test that you could find absolutely nothing for most corporate jobs. I'd hazard a guess that this attitude, which borders on outright refusal to engage in a good-faith manner, is what they're trying to combat or make unacceptable.
If the corporate directive was to share "if AI has helped, and how," I would agree. My company started that way, and when I tested the new SQL query analysis tool I reported (nicely and politely, with positive feedback too) that it was making up whole tables to join to: it assumed we had a simple "users" table with email/id columns, which we did not have, since we're a large company with purposefully segmented databases. The users data was only ever exposed via API calls, never direct DB access.
My report went entirely unacknowledged, along with other reports that had negative findings. The team in charge published a self-report about the success rate and claimed over 90% perfect results.
About a year later, upper management switched to this style of hard-requiring LLM usage, to the point of associating LLM API calls from your IntelliJ instance with the git branch you were on and requiring 50% LLM usage on a per-PR basis; otherwise you would be PIP-ed.
This is abusive behavior aimed at generating a positive response the C-suite can give to the board.
I know you don't want to hear this, but I also know you know this is true: you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here. Your single example means pretty much nothing in terms of whether the tool makes sense at large scale. Not a single tool or technology exists in this entire field that never fails or has issues. You could just as well argue that because you read something wrong on Google or Stack Overflow that those tools should be banned or discouraged, yet that is clearly false.
That said, I don't agree with or advocate the specific rollout methodology your company is using and agree that it feels more abusive and adversarial than helpful. That approach will certainly risk backfiring, even if they aren't wrong about the large-scale usefulness of the tools.
What you're experiencing is perhaps more poor change management than it is a fundamentally bad call about a toolset or technology. They are almost certainly right at scale more than they are wrong; what they're struggling with is how to rapidly re-skill their employee population when it contains many people resistant to change at this scale and pace.
> I know you don't want to hear this, but I also know you know this is true
I wasn't sanctimonious to you, don't be so to me please.
> you would genuinely need to look at the full dataset that team collected to draw any meaningful conclusion here
I compared notes with a couple of friends on other teams and it was the same for each one. Yes, it's anecdotes, but when the exact same people who are producing/integrating the service are also grading its success AND making this very argument while hiding any data that could be used against them, I know I am dealing with people who will not tell the truth about what the data actually says.
It's not a good-faith question to say "here's a new technology, write about how it made you more productive" and expect the answer to have a relationship with the truth. You're pre-ordaining the answer!
Let's imagine it is 1990 and the tool is e-mail over snail mail. Would you want the leadership of a company to allow every employee to find out on their own whether email is a better way to communicate, despite the spam, impersonal nature, security, and myriad other issues that patently exist to this day? Or to allow exceptions if an employee insists (or even shows) that snail mail is better for them?
It is hardly feasible for an organization to budget time for replicating and validating results and forming its own conclusions for every employee who wishes to question the effectiveness of the tool or the manner of deployment.
Presumably the organization has done that validation with a reasonably sized sample of similar roles over a significant period of time. It doesn't matter, though; it would also be sound reasoning for leadership to make a strategic call even when such tests are not conducted or not applicable.
There are costs and delays associated with accurate validation which they are unable or unwilling to pay for or wait out, even if they wish to. The competition is moving faster and not waiting, so deploying now rather than waiting to validate is not necessarily even a poor decision.
---
Having said that, they could articulate their intent better than "write about how it made you more productive" by adding something along the lines of "if it did not, then explain all the things you tried in order to adopt the tool and what did not go well for you or your role."
Typically, well-structured organizations with in-house I/O psychologists would add this kind of additional language to the feedback tooling; line managers may not be as well trained to articulate it in informal conversations, which is a whole different kind of problem.
The answer isn't pre-ordained -- it's simply already known from experience, at least to a sufficient degree to not trust someone claiming it should be totally avoided. Like I said, there are not many corporate roles where it's legitimately impossible to find any kind of gain, even a small or modest one, anywhere at all.
We call these workers “pilots,” as opposed to “passengers.” Pilots use gen AI 75% more often at work than passengers, and 95% more often outside of work.
Identify a real issue with the technology, then shift the blame to a made-up group of people who (supposedly) aren't trying hard enough to embrace the technology.
Embody a pilot mindset, with high agency and optimism
Thanks for the career advice.
Ridiculous, I have it on good authority that embracing the 'hacker ethos' by becoming a 'coding ninja' with a 'wizard' mindset will propel you to next-level synergisms within transformative paradigms like AI and blockchain.
To leverage that hacker ethos for maximum synergy, you'll need to empower a holistic and agile mindset. This allows you to pivot toward a disruptive paradigm and monetize your scalable core competencies across the entire ecosystem.
Yeah, the article was good until I reached that point, at which it became an ad for the BetterUp consultancy to transform passengers into pilots.
> Embody a pilot mindset, with high agency and optimism
Fly away from here at high speed
This isn't wrong though. There's obviously two types of people using AI: one is "explain to me how X works", and the other is "do X for me". Same pattern with every technology.
A pilot has ultimate authority over how a plane is flown, because it's their ass in the fire if the plane can't land.
If you're a low-level office drone, you are not a pilot.
The AI use mandates are odd. My guess is that the C-level execs have very little practical technical skill at this point and probably haven't written a line of code in 20 years. And they believe ALL the AI hype. They think LLMs can do anything, so any employees not using them are clearly wasting time.
The AI usage mandates are odd. Why do the execs doubt that the workers try on their own to get the maximum utility out of the AI tools?
AI criticism and pushback.
When you say "AI cannot do my job, [insert whatever reason you find compelling]" Execs only hear "I am trying to protect my job from automation".
The executives have convinced themselves that the AI productivity benefits are real, and generally refuse to listen to any argument to the contrary. Especially from their own employees.
This impedes their ability to evaluate productivity data: if a worker fails to show productivity, it can't be that AI is bad, because that'd mean the executives are wrong about something. It must be that the employee is sabotaging our AI efforts.
Well AI advocates keep insisting that the only reason for someone to not benefit is if they're resistant to change and too lazy to learn.
I've come to appreciate that using AI tools is a skill of its own. Anything beyond auto code completion takes quite a bit of conscious effort to experiment with and then learn how to delegate to in a workflow. The tools often end up being valuable, but it did take some work to get out of my productivity 'local maximum', work that maybe not everyone would naturally take on.
I think if LLMs improved, or our usage of them improved, to the point that we became full-time design/code reviewers, many of us would leave to do something less boring, so in some ways there is a negative incentive to investigate different AI-driven workflows.
Execs more or less always assume that workers are some combination of stupid and lazy.
After all, if they weren't stupid and lazy, they would be important execs, not unimportant workers.
That adds to the dissonance for sure.
Not odd under the theory that the mandates are there to buy wiggle room to reduce the workforce later on: they announce the firing and layoff of those who haven't made their forecasted numbers.
Wow, this article resonates with me.
Today I had a discussion with a product manager who insists on attaching AI-generated prototypes to PRDs without any design sessions for exploration or refinement (I'm a UX designer). These prototypes contain many design issues that I must review and address each time.
Worse still, they look polished and create the illusion that the work is nearly complete. So instead of moving faster, we end up with more back and forth about the AI's misinterpretations.
I've never yet accepted an AI-written answer when responding to my emails, although I try it routinely. Mostly it just doesn't capture my style. But even when it does, there's some kind of essential spark missing.
I think a lot about the concept that the AI output is still 99% regression to a mean of some kind. In that sense, the part it can generate for you is all the boring stuff - what doesn't add value. And to be sure, if you're writing an email etc, a huge amount of that is boring filler, most of the time. But the part it specifically cannot do is the only part that matters - the original, creative part.
The filler was never important anyway. Physically typing text was never the barrier. It's finding time and space to have the creative thought necessary to put into the communication that is the challenge. And the AI really doesn't help at all with that.
My CEO sent an AI-generated blog post today. I've never felt more frustrated reading something in my life. "x happened, here's what it means", "groundbreaking", "game-changer", "significant", "forefront of a technological shift"
I hope you learned an important lesson about reading the next email from the CEO.
Why are you reading your CEO's blog?
This question applies whether it's written by an AI or not.
Workslop production is how we determine who should get a ticket for Ark B.
My friend's job of late has basically become reviewing the AI-generated slop his non-technical boss is generating, slop that mostly seems to work, and proving why it's not production-ready.
Last week he was telling me about a PR he'd received. It should have been a simple additional CRUD endpoint, but instead it was a 2,000+ LOC rat's nest adding hooks that manually manipulated their cache system to make it appear to be working without actually working.
He spent most of his day explaining why this shouldn't be merged.
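For contrast, the "simple additional CRUD endpoint" he was expecting is normally a few dozen lines, something in the shape of this rough Python/Flask sketch (framework and names are my own illustration, not their actual stack):

    # Minimal sketch of a "simple additional CRUD endpoint" - illustrative only.
    from flask import Flask, jsonify, request, abort

    app = Flask(__name__)
    widgets = {}  # stand-in for whatever the real data layer is

    @app.route("/widgets/<int:widget_id>", methods=["GET"])
    def get_widget(widget_id):
        widget = widgets.get(widget_id)
        if widget is None:
            abort(404)  # no hooks, no manual cache manipulation
        return jsonify(widget)

    @app.route("/widgets", methods=["POST"])
    def create_widget():
        payload = request.get_json(force=True)
        widget_id = len(widgets) + 1
        widgets[widget_id] = {"id": widget_id, **payload}
        return jsonify(widgets[widget_id]), 201

The 2,000-line version buys none of this clarity; it just makes the review take all day.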
More and more I think Brandolini's law applies directly to AI-generated code
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it.
The nephew has no programming knowledge.
He wants to build a website that will turn him into a bazillionaire.
He asks AI how to solve problem X.
AI provides direction, but he doesn't quite know how to ask the right questions.
Still, the AI manages to give him a 70% solution.
He will go to his grave before he learns enough programming to do the remaining 30% himself, or understand the first 70%.
Delegating to AI isn't the same as delegating to a human. If you mistrust the human, you can find another one. If you mistrust the AI, there aren't many others to turn to, and each comes with an uncomfortable learning curve.
Who is "the nephew"?
Metaphorical - the story of the professional who had to make way for the boss’s nephew who took a PHP course last week…
In the early aughts, I was so adept at navigating my town because I delivered pizza. I could draw a map from memory. My directional skills were A+.
Once GPS became ubiquitous, I started relying on it, and over about a decade, my navigational skills degraded to the point of embarrassment. I've lived in the same major city now for 5 years and I still need a GPS to go everywhere.
This is happening to many people now, where LLMs are replacing our thinking. My dad thinks he is writing his own memoirs. Yeah pop, weird how you and everyone else just started using the "X isn't Y, it's Z" trope liberally in your writing out of nowhere.
It's definitely scary. And it's definitely sinister. I maintain that this is intentional, and the system is working the way they want it to.
I’m going to defend your dad here.
AI slop at work is an absolute waste of time and effort.
But your dad is trying to write his story, probably because he wants to leave something behind so he’s not forgotten. It might be cliche-riddled but AI is helping him write his experiences in a form he’s happy with and it’s still his story even if he got help.
He’s also probably only writing it for an audience of one - you. So don’t shit on it, read it. Or you might regret it.
I get what you are saying, and a situation like this needs to be treated with extreme tact and care. But no, it's not his story; it's a low-res approximation of his story as viewed through the lens of the statistical average Reddit comment or self-published book.
If the father is really into the tech side of it (as opposed to pure laziness), I'd ask him for the prompts alongside the generated text and just ignore the output. The prompts are the writing that is actually meant for the original commenter, and it's well worth taking the tack of not judging those prompts by their writing quality alone.
I sympathize with people who find writing difficult. But, putting myself in GP's shoes, I can't imagine trying to read my father's LLM-generated memoir. How could I possibly understand it as _his_ work? I would be sure that he gave the LLM some amount of information that would render it technically unique, but there's no way I could hear his voice in words that he didn't choose.
If you're writing something for an audience of one, literally nothing matters more than the connection between you and the reader. As someone with a father who's getting on in years, even imagining this scenario is pretty depressing.
More precisely, each ‘AI’ is just a statistical grouping of a large subset of other (generally randomly) selected humans.
You don’t even get the same ‘human’ with the same AI, as you can see with various prompting.
It’s like doing a lossy compression of an image, and then wondering why the color of a specific pixel isn’t quite right!
> Understand the first 70%.
With the 70% you then pitch "I have this" and some Corp/VC will buy out the remaining 30%.
They then in return hire engineers who are willing to lap up the 70% slop and fix the rest with more AI slop.
The nephew dies happily, achieving his dream of being a bazillionaire by doing nothing more than typing a few sentences into a search bar.
> He spent most of his day explaining why this shouldn't be merged.
"Explain to me in detail exactly how and why this works, or I'm not merging."
This should suffice as a response to any code the developer did not actively think about before submitting, AI generated or not.
I think you might’ve missed this part from the post:
> AI-generated slop his non-technical boss is generating
It’s his boss. The type of boss who happily generates AI slop is likely to be the type of person who wants things done their way. The employee doesn’t have the power to block the merge if the boss wants it, thus the conversation on why it shouldn’t be merged needs to be considerably longer (or they need to quit).
Why would he merge any code from his non-technical boss? Writing code obviously isn't the boss's role, so I don't know why he would expect his code to get merged all of a sudden. Just straight up tell him: this isn't useful, please stop.
"You're absolutely right— This code works by [...]"
If it ever stops leading with a cheery affirmation we’re doomed.
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it
I see this in code reviews where AI tools like code-rabbit and greptile are producing workslop in enormous quantities. It sucks up an enormous amount of human energy just to read the nicely formatted BS these tools put out, all for the occasional nugget that turns out to be useful.
I largely agree. As a counterpoint, today I delivered a significant PR that was accepted easily by the lead dev with the following approach:
1. Create a branch and vibe code a solution until it works (I'm using codex cli)
2. Open new PR and slowly write the real PR myself using the vibe code as a reference, but cross referencing against existing code.
This involved a fair few concepts that were new to me, but had precedent in the existing code. Overall I think my solution was delivered faster and of at least the same quality as if I'd written it all by hand.
I think it's disrespectful to PR a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution.
The best way I’ve found to use LLMs for writing anything that matters is, after feeding one the right context, to take its output and then retype it in your own words. The LLM has helped capture your brain dump and organize it, but by forcing yourself to write it rather than copy and paste, you get to make it your own. This technique has worked quite well in domains I’m not the best at yet, like marketing copy. I want my shit to have my own voice but I’m not sure what to cover… so let the LLM help me with what to cover, and then I can rewrite its work.
https://www.joelonsoftware.com/2000/05/26/reading-code-is-li...
https://www.joelonsoftware.com/2000/04/06/things-you-should-... (read the bold text in the middle of the article)
These articles are 25 years old.
Sadly, I've seen multiple well-known developers here on HN argue that reading code in fact isn't hard and that it's easy to review AI-generated code. I think fundamentally what AI-generated code is doing is exposing the cracks in many, many engineers across the board who either don't care about code quality or are completely unable to step back and evaluate their own process to see if what they're doing is good or not. If it works, it works, and there's no need to understand why or how.
I think this is equally true of writing. Once you see something written one way, it's very hard to imagine other ways of writing the same thing. The influence of anchoring bias is quite strong.
A strong editor is able to overcome this anchoring bias, imagine alternative approaches to the same problem, and evaluate them against each other. This is not easy and requires experience and practice. I am starting to think that a lot of people who "co-write" with ChatGPT are seriously overestimating their own editing skills.
Reviewing code is basically applying the Chesterton’s fence principle to everything. With AI code there’s typically so much incidental noise that trying to identify intention is a challenge.
But then again I’ve found a lot of people are not bothered by overly convoluted code that is the equivalent of using a hammer for screws either…
Worse - there is no actual intention, so attempting to grok it from the code is even more wasted energy.
You have to nitpick everything, because there is no actual meaningful aim that is consistent.
I ran across an outsourcer who did the same thing about 20 years ago (as near as I could tell, he was cutting and pasting random parts of Stack Overflow answers until it compiled!). We got him away from the code base/fired ASAP because he was an active threat to everyone.
Ship it!
The article as I see it is just one paragraph that ends with "So much activity, so much enthusiasm, so little return. Why?" Is there more if you're a subscriber to Harvard Business Review?
I had to disable UBO and my VPN's ad blocker. Then the whole piece showed up.
Old flow: send RCA to customer in a timely manner.
New flow: please run the RCA through ChatGPT and forward it to your manager, who will run it through ChatGPT and send it to the customer.
RCA is now 10x longer, only 10% accurate, and took 3x longer to get to the customer.
The AI revolution 2022-2030 is a speed run of the IT revolution of 1970-2000. In other words, how to 1000x management overhead while reducing real productivity, meanwhile skyrocketing nominal productivity.
The monetary analogy of nominal vs real productivity is so good — I’m stealing it for some undetermined use in the future.
I appreciate that. I do spend a lot of time trying to be illustrative.
It'll be very funny if any AI productivity gains are balanced by productivity loss due to slop - all the while using massive amounts of electricity to achieve nothing.
I don’t think AI slop is inherently mandatory, but I worry that the narrative around AI will devalue engineering work enough that it becomes impossible to avoid.
That light bulb lying in a pool of epoxy resin is cool as hell though. A lot of poetic talent must have gone into the prompt.
As someone who has spent the last 30 years honing my craft (programming), all I can say is this: ha ha!
I've spent 16 years and I won't exactly be cheering if we hit a wall with AI.
I love programming, but I also love building things. When I imagine what having an army of mid-level engineers that genuinely only need high-level instruction to reliably complete tasks, and don't require raising hundreds of millions while becoming beholden to some 3rd party, would let me build... I get very excited.
Programming is almost never an objective in itself, it's a stepping stone to some other task. It's nice it pays a good living, but I have a feeling that's going away eventually.
Implement a no workslop policy. Reputation will take care of the rest. Basically an educational task.
And just think of all the money spent on ChatGPT subscriptions. You’re not gonna see that back anytime soon.
Is it the “workslop” that is causing the problem, or the slop that companies demand and that passes for work in the first place? Really wanna summon the ghost of David Graeber (“Bullshit Jobs”) here: if you’re a manager who demands that your employees produce PowerPoints about the TPS reports, you probably shouldn’t be surprised when you get meaningless LLM argle-bargle in return.
Fair point.
The thing about companies asking for slop is that a middle manager maintaining the usual stream of vacuous text is a proxy for that person paying attention to a given set of problems and alerting others. AI becomes a problem because someone can now maintain the vacuous text stream without the attention.
So it's likely to become an arms-race.
> So it's likely to become an arms-race.
And this is my real fear with the current crop of AI. That rather than improving the system to require less junk, we just use AI to generate junk and AI to parse AI-generated junk that another AI summarises ad infinitum…
Like, instead of building a framework that reduces boilerplate code, we use an LLM to generate that boilerplate instead, in a more complex and confusing manner than using a traditional template or helper would (rough sketch at the end of this comment).
Or, when dealing with legal or quasi-legal matters (like certifications):
1. Generate AI slop to fill a template.
2. Use AI to parse that template and say whether it’s done.
3. Use another AI to write a response.
Lots of paperwork happening near instantly, mostly just burning energy and not adding to either the body of knowledge or getting to a solution.
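To make the boilerplate point concrete, here's a minimal Python sketch of what I mean by a traditional helper (the names are invented). The repeated parsing/response-envelope pattern is captured once, instead of being re-emitted, and slowly diverging, in every LLM-generated handler:

    import json
    from functools import wraps

    def json_endpoint(func):
        # Shared parsing and response envelope live in exactly one place.
        @wraps(func)
        def wrapper(raw_request):
            data = json.loads(raw_request)
            result = func(data)
            return json.dumps({"ok": True, "result": result})
        return wrapper

    @json_endpoint
    def create_user(data):
        # The handler body is only the part specific to this endpoint.
        return {"name": data["name"].strip().lower()}

    print(create_user('{"name": "  Alice "}'))
    # -> {"ok": true, "result": {"name": "alice"}}

The LLM-generated alternative inlines the json.loads/json.dumps and envelope into every handler, which is exactly the kind of incidental noise a reviewer then has to wade through.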
"transfers the effort from creator to receiver."
AI is functionally equivalent to disinformation: it automates the dark matter of communication/language, transfers the status back to the recipient, teaches receivers that a unit's contents are no longer valid in general, and demands a tapeworm format to replace what is being trained on.
What’s a tapeworm format?
Whatever the training can't assimilate, yet can be transmitted by users as analytic statements.
I'm not sure I understand this. Maybe an example would help, please?
Use your imagination. Any event that cannot be defined by low dimensional meaning (which there are myriad). The problem with the distinctions between event perception (what causes a car crash) and what AI assimilates is the arbitrary bottleneck of words, causal statements, images, deep reductions which are possible illusions of semantics like beliefs, motivations, desires.
Humans can take in a remarkable array of stimuli in order to perform tasks that are not cause and effect through optic flow. AI is stuck behind a veil of predicted tokens.
In essence, AI cannot automate mimicry of understanding even when the magic act demonstrates. Events are already tapeworms, but our skillset is so stuck behind the veil of folk psychology and f science, we are merely pretending we understand these events.
So a tapeworm format is probably somewhat like a non-causal contradictory event that may have infinite semantic readings. Think edges of paradox: Kubrick, Escher, Cezanne, Giorgione, Song Dynasty landscapes, Heraclitus, find the Koanic paradoxes of the East and keep conjoining them. Think beyond words, as thoughts are wordless, the tapeworms are out there, math doesn't and can't see them.
This is what's so jarring about meme-culture, which are tapeworms to AI as they are tapeworms to narrative containment culture, what might be viewed as academia/intelligensia/CS engineering/plain English media, the tapeworms are here, CS I think assumes the causal/semantic linkages are simple, meaning is low-bandwidth, but the reality is semantic is both limitless (any event) and illusory (meme-culture ie several memes in sequence). AI has no path in either case.
maybe chatgpt can help us understand it better
chat can't grasp what it hasn't been trained to automate.
I don't think you actually grasp what you're writing either, or at least you can't explain it in any coherent way, so LLMs are no worse on that score.
There are vast, unexplored sums of knowledge that LLMs can't automate, mimic, or regurgitate. This is obvious simply from meme-culture. Try asking ChatGPT to derive meaning from the Kirk shooter's casing engravings. Listen to the explanations unravel.
Once you attach the nearly limitless loads of meaning available to event-perception (use cognitive mapping in neuroscience where behavior has no meaning, it simply has tasks and task demands vary wildly so that semantic loads are factors rather than simple numbers), LLMs appear to be like puppets of folk psychology using tokens predictably in embedded space. These tokens have nothing to do with the reality of knowledge or events. Of course engineers can't grasp this, you've been severely limited to using folk psychology infected cog sci as a base of where your code is developed from, when in reality, it's almost totally illusory. CS has no future game in probability, it's now a bureaucracy. The millions or billions of parameters have zero access to problems like these that sit beyond cog sci, I'll let Kelso zing it
https://drive.google.com/file/d/1oK0E4siLUv9MFCYuOoG0Jir_65T...
No one — human or LLM — actually knows the meanings of the phrases that Tyler James Robinson wrote on his cartridge casings. There's lots of speculation but he isn't talking, and even if he was we wouldn't know whether he was telling the truth. If you want us to take you seriously then you'll have to come up with a valid example instead of posting a bunch of pseudo-intellectual drivel.
You're proving me correct. The pseudoscience is CS, it has no game in events. The interdisciplinary search for semantics derived from events isn't pseudo-intellectual drivel, it's the central quest of key sciences and subfields that range into neuroscience.
Of course we can concatenate meanings from his behavior and clues, but these meanings are not accessible in AI or narratives. You're essentially throwing in the towel as proof legacy explanations have died in automation in AI.
Face it CS, your approach is bureaucratic, for enforcing the dead status-quo of knowledge, not for the leading edges.
Are you able to read this article without paying, or do you only get this one paragraph summary?
Check for no-script or ad-blockers. I could read in Chrome but not Firefox.
This is similar to how the internet destroys productivity with YouTube. Humans are masters at compensating for productivity gains by taking the time back elsewhere.
Now that AI makes my programming 10x more efficient, I will work 5x less, destroying "half of my" productivity.
The problem with most corporate work that these managerial idiots want replaced with AI is that it is all so utterly useless. Reports written that no one will ever read, presentations made for the sake of the busy-ness of "making a deck", notes and minutes of meetings that should never have taken place in the first place. Summaries written by AI of longer-form work that are then shoved into AI to make sense of the AI-written summary.
I like the quote in the middle of the article: "creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces". I believe that orgs that fall back on the AI lie, who insist on schlepping slop from one side to the other, will be devoured by orgs that see through the noise.
It's like code: the most bug-free lines are the ones that are never written. The most productive workplace is the one that never bothers with that BS in the first place. But promotions and titles and egos are on the line so...
AI in its current form, like the swirling vortex of corporate bilge that people are forced to swim through day after day after day, can't die fast enough.
> Summaries written by AI of longer-form work that are then shoved into AI to make sense of the AI-written summary.
There's also the problem where someone has bullet points, fluffs them up with an LLM, sends the prose, and then the receiver uses an LLM to summarize it back down to bullet points.
I may be over-optimistic in predicting that eventually everyone involved will rage-flip the metaphorical table, and start demanding/sending the short version all the time, since there's no longer anything to be gained by prettying it up.
So many times you write a document for someone to review, and they question whether it could be written better. Yet the reader will now just ask the AI for bullet points. I was hoping people would go, right, let's just start writing bullet points from the get-go and not bother with a full document.
Once you allow AI to replace the process, you kind of reveal that the process never mattered to you. If you want a faster pace at the expense of other things, you don't need to pay for AI; just drop the unnecessary process.
I feel AI is now just a weird excuse, like you're pretending you haven't lowered the quality and stopped writing proper documents, professional emails, and full test suites, or stopped properly reviewing each other's code. No, you still do all this, just not you personally; it's "automated".
It's like cutting corners but being able to pretend like the corner isn't cut, because AI still fully takes the corner :p
So true. We used to appoint someone in the group to take notes. These notes were always correct, to the point, short and easy to read. Now our manager(s) are heavily experimenting with recording all meetings and desperately trying to produce useful reports using all sorts of AI tools. The output is always lengthy and makes the manager super happy. Look, amazing reports! But on closer inspection they're consistently incomplete one way or another, sometimes confidently incorrect and full of happy corpo mumbo jumbo. More slop to wade through, when looking for factual information later on.
Our manager is so happy to report that he's using AI for everything, even in cases where I think completeness and correctness are important. I honestly think it's scary how quickly that desire for correctness is gone and replaced with "haha this is cool tech".
Us devs are much more reluctant. We don't want to fall behind, but in the end when it comes to correctness and accountability, we're the ones responsible. So I won't brainlessly dump my work into an LLM and take its word for granted.
It's their company; we just work at it. If we want to exert more control in the workplace, we obviously need more power in the workplace. In the meantime, if they want the equivalent of their company's prefrontal cortex to be burned out with a soldering iron, that's their prerogative.
The problem with corporate work is that it exists - that corporations exist.
You do have the option to spend your time elsewhere - if you can handle every NPC friend and family member thinking you've lost your mind when you quit that cushy corporate gig and go work a low-status, low-pay job in peace and quiet - something like a nighttime security guard.
If you were around for the heyday of Markov chain email and Usenet spam, this whole thing is familiar. Sure, AI slop generation is not directly comparable to a Markov process, and the generated texts are infinitely smoother, yet it has a similar mental signature. I believe this similarity puts me squarely in the offended 22%.