I was at Microsoft until July of this year, when I left for an SF-based company (not AI though).
The two couldn't be more different with regard to AI tool usage. At Microsoft, they had started penalizing you in perf if you didn't use the AI tools, which were often subpar and which you had no choice in. At the new place, perf doesn't care whether you use AI or not, just what you actually deliver. And, shocker, it turns out they actually spend a lot of effort building and getting feedback on internal AI tooling, so it gets a lot of use!
The Microsoft culture is a sort of toxic "drive AI usage by forcing it down engineers' throats," versus the "make it actually useful and win users" approach at the new place. The Microsoft approach builds resentment in the engineering base, but I'm convinced it's the only way leadership there knows how to drive initiatives.
Someone on HN wrote what is (IMO) the main reason people do not accept AI:
AI is about centralisation of power
So basically, only the few companies that hold the large models will have all the knowledge required to do things, and they will rent your own computer back to you, collecting monthly fees. Also see https://be-clippy.com/ for more arguments (like Adobe moving to the cloud so it can train its models on your work).
For me, AI is just a natural-language query model for texts. If I need to find something in a text, join it with other knowledge, etc. (things I'd do in SQL if SQL could process natural language), I do it in an LLM. This enhances my work. However other people seem to feel threatened. I know a person who quit a CS course because AI was solving the algorithmic exercises better than he could. This might cause a global depression, as we are no longer at the "top". Moreover, he went into medicine, where people will basically be using AI to diagnose patients and AI operators are still required (i.e. there are no threats of reductions because of AI in the Public Health Service).
So the world is changing, the power is being gathered, and it is no longer possible to "run your local cloud with OpenOffice and a mail server" to take that power back from the giants.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans WANT to do: things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
Much of the meaning we humans derive from work is tied to the value it provides to society. One can do coding for fun but doing the same coding where it provides value to others/society is far more meaningful.
Presently some may say: AI is amazing, I am much more productive, AI is just a tool, AI empowers me. The irony is that this in itself shows the deficiency of AI: it demonstrates that AI is not yet powerful enough to do without you, to not need to empower you or make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be a comfortable position for you right now, financially or socially, but your future self just a few short months from now may be dramatically impacted.
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get onto the job ladder, the legal secretary whose dream job, dreamt from a young age, is being automated away, the journalist whose value has been substituted by a white text box connected to an AI model.
But why not? AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.
As an average consumer, I actually feel like I'm less locked into Gemini/ChatGPT/Claude than I am to Apple or Google for other tech (e.g. photos).
> AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.
It was already tough to run flagship-class local models and it's only getting worse with the demand for datacenter-scale compute from those specific big players. What happens when the model that works best needs 1TB of HBM and specialized TPUs?
AI computation looks a lot like early Bitcoin: first the CPU, then the GPUs, then the ASICs, then the ASICs mostly being made specifically by syndicates for syndicates. We are speedrunning the same centralization.
Economies of scale make this a space that is really difficult to be competitive in as a small player.
If it's ever to be economically viable to run a model like this, you basically need to run it non-stop, and make money doing so non-stop in order to offset the hardware costs.
You can still run your local cloud; it's AI providers that will be heavily consolidated down to a few.
While I do currently use Claude for programming tasks, local models can be tuned to deliver 80% of the time savings you get from using AI. It depends a bit on the work you do. This will probably improve, while frontier models seem to be hitting hard ceilings.
Where I would disagree is the claim that joining concepts or knowledge works at all with current AI. It works fairly badly in my opinion. Even the logical and mathematical improvements of the latest Gemini model don't impress much yet.
Local models are fine for the way we have been using AI, as a chatbot or a fancy autocomplete. But everyone is cramming AI into everything. Windows will be an agentic OS whether we like it or not, and there will be no using your own local model for that use case. It looks like everything is moving that way.
> This enhances my work. However other people seem to feel threatened.
I wish people would stop spreading this as if it were the main reason. It's a weak argument and disconnected from reality, like those people who think the only ones who dislike cryptocurrencies are the ones who didn't get rich from them.
There are plenty of reasons to be against the current crop of AI that have nothing to do with employment. The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking, the decrease in quality of life (e.g. people who live next to noisy data centres)… Not everything is about jobs and money, the world is bigger than that.
Don't forget the fact that most of the time, the AI tools don't actually work well enough to be worth the trouble.
AI meeting notes are great! After you spend twice as long editing out the errors, figuring out which of the two Daves was talking each time, and removing all the unimportant side-items that were captured in the same level of detail as the core decision.
AI summaries are great - if you're the sort of person that would use a calculator that's wrong 10% of the time. The rest of us realize that an hour spent reading something is more rewarding and useful than an hour spent double checking an AI summary for accuracy.
AI-as-asbestos isn't even an apt comparison. Both are toxic and insidious, but asbestos at least had a clear and compelling use case at the time: it solved some problems both better and cheaper than the available alternatives. AI solves problems poorly and at higher cost, and people call you "threatened" if you point that out.
AI summaries are great for orgs that don't do them at all (AI is better than nothing), but not that great for orgs that have an agenda and a note taker. That setup is very rare, but the quality is better.
It’s not clear to me what exactly you’re asking, but I’ll try to answer it anyway. I’d say that of the people who are against AI, fewer than 50% (to use your number) aren’t against it solely (or primarily, or at all) because they feel threatened for their job. Does that answer your question?
> The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking
My question was: how many people are actually concerned about those things? If you think about it, it's kind of obvious, but it takes conscious effort to see it, and I suspect not many people do.
So you're saying that you can now bypass a lot of solutions offered by a mix of small/large providers by using a single solution from a huge provider, and this is the opposite of a centralization of power?
>"by using a single solution from a huge provider"
The parent didn't say that though and clearly didn't mean it.
Smaller SaaS providers have a problem right now. They can't keep up with the big players in terms of features, integrations and aggressive sales tactics. That's why concentration and centralisation are growing.
If a lot of specialised features can be replaced by general-purpose AI tools, that could weaken the stranglehold that the biggest SaaS players have, especially if those open-weights models can be deployed by a large number of smaller service providers, or even self-hosted or run locally.
That's the hypothesis I think. I'm not sure it will turn out that way though.
I'm not sure whether the current hyper-competitive situation where we have a lot of good enough open weights models from different sources will continue.
I'm not sure that AI models alone will ever be reliable enough to replace deterministic features.
I'm not sure whether AI doesn't create so many tricky security issues that once again only the biggest players can be trusted to manage them or provide sufficient legal liability protection.
Sorry, but your SQL comparison is way off. SQL is deterministic, has a defined standard that databases must follow, and when you run a statement you can see its query plan.
This is the absolute opposite of using an LLM. Please stop using this comparison and perhaps look for others, for example a randomised search engine.
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.
I feel fatigued by AI. To be more precise, this fatigue has several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting: new FOSS library releases, announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things at any given time. Ideally I would like to focus and go deep on those things. Often I need to learn something new, and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive and insightful when one actually thinks hard about and understands a topic before using them.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
> The fundamentals actually haven't changed that much in the last 3 years
Even said fundamentals don't have much in the way of foundations. It's just brute-forcing your way with an O(n^3) algorithm, a lot of data and a lot of compute.
Dario wishes he were the grifter Altman is. He's like a Kirkland-brand grifter compared to Altman. Altman is a generational talent when it comes to grifting.
> I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
The AI is just an LLM, and it just does what it is told to.
I get excited by new model releases, try them, switch my default if I feel one is better, and then move on. I don't understand why any professional SWE should engage in weird cultish behavior around these models; it's a better mousetrap as far as I'm concerned.
It's just the old PC vs Mac cultism. Nobody who actually has work to do cares, much like authors obsessed with typewriters, transport companies with auto brands, etc.
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead within a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen anymore.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
Yeah, as a gamer I get a lot of game news in my feeds. Apparently there's a niche of indie games that claim to be AI-free. [0]
And I read a lot of articles about games that seem to love throwing a dig at AI even if it's not really relevant.
Personally, I can see why people dislike Gen AI. It takes people's creations without permission.
That being said, the morality of the creation of AI tooling aside, there are still people who dislike AI-generated stuff. Like, they'd enjoy a song, or an image, or a book, and then when they find out it's AI they suddenly hate it. In my experience playing with ComfyUI to generate images, it's really easy to get something half decent and really hard to get something very high quality. It really is a skill in itself, but people who hate AI think it's just "type a prompt, get an image". I've seen workflows with 80+ nodes, multiple prompts, multiple masks and multiple LoRAs to generate one single image. It's a complex tool to learn, just like Photoshop. Sure, you can use Nano-Banana to get something, but even then it can take dozens of generations and prompt iterations to get what you want.
>Like, they'd enjoy a song, or an image, or a book, and then suddenly when they find out it's AI suddenly they hate it.
Yes, because for some people it's about supporting human creation. Finding out something is part of a grift that takes from those very humans can be infuriating. People don't want to be a part of that.
Most of the people who dislike genAI would have the exact same opinion if all the training data had been paid for in full (whatever a fair price would be for what is essentially just reference material).
That "if" carries a lot of weight here. In reality it is, and always was, impossible to pay for all the stolen data. Also, the LLM corpos not only didn't pay for the data, they never even asked. I know it may come as a surprise, but some people would refuse to sell their data to a mechanical parrot.
I would agree with this, and I think there is more to it than just your reasons, especially if you venture outside the US, at least from what I've experienced. Personally, I've seen it more in places where there are no AI tech hubs around and no way to "get in on the action". Blue-collar workers, less threatened and with less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there is a perceived lack of "street smarts" and self-preservation.
Anecdotally, it's almost like they see them as mad scientists who are happy to blow up themselves and the world as long as they get to play with the new toy; almost childlike, usually thinking they are doing "good" in the process. Most people see that as a sign of a lack of a certain kind of intelligence/maturity.
ChatGPT is one of the most used websites in the world and it's used by the most normal people in the world, in what way is the opinion "generally negative"?
No it's not. No one is forced to use ChatGPT, it got popular by itself. When millions use it voluntarily, that contradicts the 'generally negative' statement, even if there are legitimate criticisms of other aspects of AI.
Yes, and I made an argument supporting that "used" and "bad" are not mutually exclusive. You simply repeated what I was responding to and asserted that your opinion is the right one.
I get your argument, but in this case it is that straightforward, because it's not a forced monopoly like, say, Microsoft Windows. Common folk decided to use ChatGPT because they think it is good. Think of Google Search: it got its market position because it was good.
>Common folk decided to use ChatGPT because they think it is good.
Thinking a tool is good is not the only reason to use it. "Good enough" doesn't mean "good". If you think it's better to generate an essay that's due in an hour than to rush something by hand, that doesn't mean the tool is "good". If I decide to make a toy app full of useless branches, no documentation and tons of sleep calls, that doesn't mean the program is "good". It's just "good enough".
That's the core issue here. "Good enough" varies with context, and not many people are using it the way the sales pitch says, to boost the productivity of the already productive.
I don't agree with your comments, especially using PirateBay as an example. Calling either of them "bad" is purely subjective. I find both PirateBay and ChatGPT to be good things; they both bring value to me personally.
I'd wager that most people would find both as "good" depending on how you framed the question.
We'll see how long that lasts with their new ad framework. Most normal people are probably put off by all the other AI being marketed at them. A useful AI website is one thing; AI forced into everything else is quite another. And then they get to hear on the news or from their friends how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
Sure, I meant the anglosphere. But in most countries, the less aware of technology people are, or the less they use the internet, the less enthusiastic they are about AI.
Some people find their life's meaning through craft and work. When that craft suddenly becomes less scarce, less special, so does the meaning tied to it.
I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.
I do enjoy programming, I like my job and take pride in it, but I actively try not to let it be the activity that gives my life meaning. I'm just a mercenary of my trade.
The craft isn't any less scarce. If anything, it's more scarce. The craft of building wooden furniture is just as scarce as ever, despite the existence of Ikea.
Which means the only woodworkers that survive are the ones with enough customers willing to pay premium prices for furniture, or the ones lucky enough to live in countries where Ikea-like shops aren't yet a thing.
They are also the people who can see most clearly how subpar generative-AI output is. When you can't find a single spot without AI slop to rest your eyes on, and you see it get so much praise, it's natural to take it as a direct insult to your work.
I mean, I would still hate to be replaced by some chat bot (without being fairly compensated because, societally, it's kind of a dick move for every company to just fire thousands of people and then nobody can find a job elsewhere), but I wouldn't be as mad if the damn tools actually worked. They don't. It's one thing to be laid off, it's another to be laid off, ostensibly, to be replaced by some tool that isn't even actually thinking or reasoning, just crapping out garbage.
And I will not be replying to anyone who trots out their personal AI success story. I'm not interested.
The tech works well enough to function as an excuse for massive layoffs. When all that is over, companies can start hiring again. Probably with a preference for employees that can demonstrate affinity with the new tools.
On this topic I think it's pretty off base to call HN a "well insulated bubble". AI skepticism and outright hate are pretty common here, and AI-negative comments often get a lot of support. This thread itself offers plenty of examples.
That's probably me for a lot of people. The reality is a bit more nuanced than that, namely:
- I hate VC-funded AI that is actually super shallow (basically OpenAI/Claude wrappers).
- I hate VC-funded genuine BigAI that sells itself as the literal opposite of what it is, e.g. OpenAI... being NOT open.
- I hate AI that hides its ecological cost. Generating text, videos, etc. is actually fascinating, but not if making the shittiest video with the dumbest script is taking the same amount of energy I'd need to fly across the globe.
- I hate AI that hides its human cost, namely using cheap labor from "far away" where people have to label atrocities (murders, rape, child abuse, etc.) without being provided proper psychological support.
- I hate AI that embodies capitalist principles of exploitation. If your entire AI business relies on an entire pyramid of everything listed above to capture a market and then hike the price once dependency is entrenched, you might be a brilliant businessman, but you suck as a human being.
etc... I could go on but you get the idea.
I do love open source public AI research though. Several of my very good friends are researchers in universities working on the topic. They are smart, kind and just great human beings. Not fucking ghouls riding the hype with 0 concern for our World.
So... yes, maybe AI haters have a slightly more refined perspective, but of course when one summarizes whatever text one sees into three words via one's favorite LLM, that's hard to see.
> making the shittiest video with the dumbest script is taking the same amount of energy I'd need to fly across the globe.
I get your overall point, but the hyperbole is probably unhelpful. Flying a human across the globe takes several MWh. That's billions of tokens created (give or take an order of magnitude...).
Does your comparison include training, data center construction, GPU production, etc., or solely inference? (Genuine question; I don't know the total cost for e.g. Sora2, only the inference cost, which AFAIK is significant yet pales in comparison to everything upstream.)
No; that's one reason why there's at least an order of magnitude of wiggle room there. I just took the first J/token figure I found on arXiv from 2025. The choice of model and the hardware it runs on also make a large difference (probably larger than the one-time upfront costs, since those are needed only once and are spread out across years of inference).
My point is that mobility, especially commercial flight, is extremely energy-intensive, and the average Westerner will burn much more resources there than on AI use. People get mad at the energy and water use of AI, and they aren't wrong, but right now it really is only a drop in the ocean of energy and water we're wasting anyways.
> right now it really is only a drop in the ocean of energy and water we're wasting anyways.
That's not what I heard. Maybe it was true in 2024, but now data centers have their own category in energy-consumption statistics, whereas until recently they fell under "other". I think we need to update our collective understanding of how much energy is actually being consumed. It was all fun and games until recently, and slop was a fairly harmless consequence ecologically speaking, but from what I can tell, in terms of energy, water, etc., it is not negligible anymore.
Probably just a matter of perspective. It's a few hundred TWh per year in 2025: that's huge, and it's growing quickly. But again, that's still only a small fraction of a percent of total human primary energy consumption over the same period.
You could say the same about the airplane: do the CO2 emissions the airline states for my seat include building the plane, the R&D, training the pilot?
Sure, and I do; that's LCA (https://en.wikipedia.org/wiki/Life-cycle_assessment). The problem, IMHO, is that the entire AI hype ecosystem is hiding everything it can about this behind the excuse of not handing information to competitors. We have CO2eq figures on model cards, but we don't have many data points on proprietary models running on Azure or wherever. At best we infer from research papers that are close enough, but we don't know for the most popular models, and that's quite problematic. The car industry did everything it could to hide things too, e.g. the Volkswagen scandal, so let's not repeat that.
I don't think people hate the models. They hate that techbros are putting LLMs in places they don't belong … and then anthropomorphizing the thing that finds what best rhymes with your prompt as "reasoning" and "intelligence" (which it isn't).
AGI? No, although it's not there yet.
LLMs? Yes, lots. The main benefit they can offer is to sort of speed up internet search, but I have to go and check the sources anyway, so I'll revert to my 20+ years of experience of doing it myself.
Any other application of machine learning, such as near-instant speech-to-text? No, that's useful.
I hate to be cagey here but I just really don’t want to make anyone’s life harder than it needs to be by revealing their identity. Microsoft is a really tough place to be an employee right now.
That's because there are at least 5 different definitions of AI.
- At its inception in 1955 it was "learning or any other feature of intelligence" simulated by a machine [1] (fun fact: both neural networks and computers using natural language were on the agenda back then)
- Following from that we have "all machine learning is AI", which was the prevalent definition about a decade ago
- Then there's the academic definition, roughly "computers acting in real or simulated environments", which includes such mundane and algorithmic things as pathfinding
- Then there's obviously AGI, or the closely related Hollywood/SciFi definition of AI
- Then there's just "things that the general public doesn't expect computers to be able to do". Back when chess computers were called AI, this was probably the closest fitting definition. Clever salespeople also used to love calling prediction via simple linear regression AI
Notably, four out of five of them don't involve computers actually being intelligent. And just a couple of years ago we were still selling simple face detection as AI.
It's the opposite. It is doing the driving but you really have to provide lane assist, otherwise you hit the tree, or start driving in the opposite direction.
Many people claim it's doing great because they have driven hundreds of kilometers, but don't particularly care whether they arrived at the exact place, and are happy with the approximate destination.
Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?
It doesn't actually solve those math problems though, does it? It replies with a solution if it has seen one often enough in the training data, or with something that looks like a solution but isn't. At the end, a human still needs to verify it.
Same for short stories, it doesn’t actually write new stories, it rehashes stories it (probably illegally) ingested in training data.
LLMs are good at mimicking the content they were trained on, they don’t actually adopt or extend the intelligence required to create that content in the first place.
Oh, I remember those talks. People actually checking whether an LLM's response is something that was in the training data, something that was online that it replicated, or something new.
They weren't finding a lot of matches. That was odd.
That was in the days of GPT-2. That was when the first weak signs of "LLMs aren't just naively rephrasing the training data" emerged. That finding was controversial, at the time. GPT-2 couldn't even solve "17 + 29". ChatGPT didn't exist yet. Most didn't believe that it was possible to build something like it with LLM tech.
I wish I could say I was among the people who had the foresight, but I wasn't. Got a harsh wake-up call on that.
And yet, here we are, in year 20-fucking-25, where off-the-shelf commercially available AIs burn through math competitions and one shot coding tasks. And people still say "they just rehash the training data".
Because the alternative is: admitting that we found an algorithm that crams abstract thinking into arrays of matrix math. That it's no longer human exclusive. And that seems to be completely unpalatable to many.
You and I must be using very different versions of Claude. As an infra/systems guy (non-coder), the ability for me to develop some powerful tools simply by leveraging Claude has been nothing short of amazing. I started using Claude about 8 months ago and have since created about 20 tools ranging from simple USB detection scripts (for secure erasing SSDs) to complex tools like an Azure File Manager and a production-ready data migration tool (Azure to Snowflake). Yes, I know bash and some Python, but Claude has really helped me create tools that would have taken many weeks/months to build using the right technology stack. I am happy to pay for the Claude Max plan; it has returned huge dividends to my productivity.
And maybe that is the difference: non-coders can use AI to help build MVPs and tooling they otherwise could not (or that would take them a long time to get done). Professional coders, on the other hand, see this as an intrusion into their domain, become very skeptical because it does not write code "their way" or introduces some bugs, and push back hard.
Every HN thread about AI eventually has someone claiming the code it produces is “trash” or “non-working.” There are plenty of top-tier programmers here who dismiss anyone who actually finds LLM-generated code useful, even when it gets the job done.
I’m tempted to propose a new law—like Poe’s or Godwin’s—that goes something like: “Any discussion about AI will eventually lead to someone insisting it can’t match human programmers.”
Seeing an AI casually spit out an 800-line script that works on the first try is really fucking humbling to me, because I know I wouldn't be able to do that myself.
Sure, it's an area of AI advantage, and I still crush AI in complex codebases or embedded code. But AI is clearly not strictly worse than me. The fact that it already has this area of advantage should give you pause.
Humbling indeed. I am utterly amazed at Claude's breadth of knowledge and ability to understand the context of our conversations. Even if I misspell words, don't use the exact phrase, or call something a function instead of a thread, Claude understands what I want and helps make it happen. Not to mention the ability to read hundreds of lines of debug output and point out a tiny error that caused the bug.
I think these companies would benefit from honesty. If they're right and their new AI capabilities really are that powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
Thanks for signing up. I'm going to try really hard to open up some beta slots next week so more people can try it. There are some embarrassingly bad bugs in prod right now…
I was an early employee at a unicorn and I saw this culture take hold once we started hiring from Big Tech talent pools and offering Big Tech comp packages, though before AI hype took off. There's a crazy lack of agency that kicks in for Big Tech folks that's really hard to explain. This feeling that each engineer is this mercenary trying really hard to not get screwed by the internal system.
Most of it is because there's little that ties actual output to organizational outcomes. AI mandates, after all, are just a blunt way to force engineers to use AI, whereas at a startup or smaller company you would probably organically find out how much an LLM helps you, and where. It may not even help your actual work even if it helps your coworkers. That market feedback is sorely missing at the Big Techs, so ham-fisted engineering mandates have to do the job of forcing engineers to become more efficient.
In these cases I always try to remind friends that you can always leave Big Tech. The thing is, from what I can tell, a lot of these folks have developed lifestyle inflation from working in Big Tech, and some of their anger comes from feeling trapped in their Big Tech role because of it. While I understand, I'm not particularly sympathetic to this viewpoint. At the end of the day, your lifestyle is in your hands.
It's a bit more expensive. It's not the end of the world. Production will likely increase if the demand is consistent.
> What about diverting funding from much more useful and needed things?
And who determines that? People put their money where they want to. People think AI will provide value to other people, and those people will therefore pay money for AI. So the funding that AI is receiving is directly proportional to how useful and needed people think AI is. I disagree, but I'm not a dictator.
> What about automation of scams, surveillance, etc?
Technology makes things easier, including bad things. This isn't the first time that's happened and it won't be the last. It also makes avoiding those things easier, though that usually lags a bit behind.
> I can keep going.
Please do because it seems like you're grasping at straws.
Not to diminish your overall point, but enshittification has been happening well before AI, AI just made it much easier and faster to enshittify everything.
It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.
Fuck Microsoft for both hating me and hating their own people. Fuck. That. Shit.
> It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.
That's a great way to describe it. There's a good article that points out AI is the new aesthetic of fascism. And, of course, in Miyazaki's words, "I strongly feel that this is an insult to life itself."
> Engineers don't try because they think they can't.
This article assumes that AI is the centre of the universe, failing to understand that this assumption is exactly what's causing the attitude it's pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).
A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.
New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.
"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.
One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.
ZIRP is gone, and so are the Good Times when any idiot could get money with nothing but a PowerPoint slide deck and some charisma.
That doesn't mean investors have gotten smarter, they've just become more risk averse. Now, unless there's already a bandwagon in motion, it's hard as hell to get funded (compared to before at least).
"Lost all ambition for anything else" is a funny way for the article to frame "hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it".
To add another layer to it, the reason execs are foaming at the mouth is that they are hoping to fire as many people as possible, including those who implemented whatever AI solution in the first place.
The most ironic part is that AI skills won't really help you with job security.
You touched on some of the reasons; it doesn't take much skill to call an API, the technology is in a period of rapid evolution, etc.
And now with almost every company trying to adopt "AI" there is no shortage of people who can put AI experience on their resume and make a genuine case for it.
Maybe not what the OP or the article is talking about, but it's super frustrating lately dealing with non- or less-technical managers, PMs, etc. who now think they have an Uno card to bypass technical discussion just because they vibe-coded some UI demo. Like, no shit, that wasn't the hard part. But since they don't see the real, less visible parts like data, auth, security, etc., they act like engineers "aren't trying", are less innovative, are anti-AI or whatever when you bring up objections to the "whole app" they made with their AI Snoopy snow cone machine.
Hmm, (whatever is in execs' heads about) AI appears to amplify the same kind of thinking fallacies discussed in the eternal Mythical Man-Month essay, which was written about half a century ago. Funny how some things don't change much...
It reminds me of how we moved from "mockups" to "wireframes" -- in other words, deliberately making the appearance not look like a real, finished UI, because that could give the impression that the project was nearly done
But now, to your point: they can vibe-code their own "mockups" and that brings us back to that problem
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's a lot of disconnected-from-reality hustling (a.k.a lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.
I've been an engineer for 20 years, for myself, for small companies, and in big tech, and I'm now working for my own SaaS company.
There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.
Punishment eh? Serves them right for being skeptical.
I've been around long enough to have seen four hype cycles around AI-like coding environments. If you think this is new, you should have been there in the '80s (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the '60s (which I did not personally witness, on account of being a toddler), when COBOL, the language for managers, was all the rage.
In between there was LISP, the AI language (and a couple of others).
I've done a bit more than look at this and say 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool, they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive, and they might be a little bit more productive with it, but the AI has its own envelope of expertise, and the closer you are to the top of the field, the smaller your returns in that particular setting will be.
In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.
Where I do get use out of it: quickly looking up some verifiable fact, telling me what a particular acronym stands for in some context, being slightly more functional than Wikipedia for a quick overview of some subfield (but you had better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers who are not using AI are failing at their jobs, and it is at best, for me, a very mild accelerator in some use cases. I've seen enough AI-driven coding projects strand hopelessly by now to know that there are downsides to that golden acorn you are seeing.
The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification the answers were so laughably incorrect that it was embarrassing.
I'm not a big LLM booster, but I will say that they're really good for proofs of concept, for turning detailed pseudocode into code, and sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuck), lived through a few attempts at visual programming (ugh), and... LLM assistance is different. It's not magic, and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.
And for the better. I honestly haven't had this much fun programming applications (as opposed to student stuff and inner loops) in years.
> but it does quite well with boring stuff that's still a substantial amount of programming.
I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do, but I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools, even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD, but I am never limited by my typing speed; much more so by thinking about what it is that I actually want to make.
I've used it a little for OpenSCAD, with mixed results: sometimes it worked. But I'm a beginner at OpenSCAD and suspect that if I were better it would have been faster to just write the code myself. It took a lot of English to describe the shape, quite possibly more than it would have taken to just write the OpenSCAD. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([5, 3, 2])... and as you say, the hard part comes before the OpenSCAD anyway.
OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch: instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough), you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.
Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices, the edges take care of themselves.
This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs a traditional step-by-step, UI-driven CAD tool, it is incomparable. It's like exploratory programming, but for physical objects.
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.
IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.
The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.
> simply because the market has never really punished people for being less efficient at their jobs
In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.
Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).
In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.
This is very true. So you can't just ask people to use AI and expect better output, even if AI is all the hype. The bottleneck in a typical big team/company is not how many lines of code you can produce.
I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient; for them, the sheer generation of sometimes-boilerplate, low-quality code actually is a bottleneck.
Sadly, capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same or colludes, restricting output and working less usually commands more money up to a certain point (prices in these markets are set more like a monopoly's). It's just that in the past scarcity was correlated with difficulty, which made it "somewhat fair"; AI changes that.
It's why unions, associations, professional bodies, etc. exist, for example. This whole thread is an example: the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.
I think part of this is that there is no one AI and there is no one point in time.
The other day, Claude Code correctly debugged an issue for me that was seen in production, in a large product. It found a bug a human wrote and a human reviewed, and fixed it. For those interested, the bug had to do with chunk decoding: the author incorrectly re-initialized the decoder inside the loop, once for every chunk. So a single chunk works; more than one chunk fails.
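For anyone curious, here's a minimal sketch of that bug class (hypothetical code, not the actual production code; I'm using Python's incremental UTF-8 decoder as a stand-in for whatever decoder was involved). Re-creating the decoder inside the chunk loop throws away the state needed for data that straddles a chunk boundary, so one chunk works and more than one fails:

    import codecs

    def decode_buggy(chunks):
        # Bug: a fresh decoder per chunk forgets any partial multi-byte
        # character left over from the previous chunk.
        out = []
        for chunk in chunks:
            decoder = codecs.getincrementaldecoder("utf-8")()
            out.append(decoder.decode(chunk))
        return "".join(out)

    def decode_fixed(chunks):
        # Fix: create the decoder once, so its state carries across chunks.
        decoder = codecs.getincrementaldecoder("utf-8")()
        out = [decoder.decode(chunk) for chunk in chunks]
        out.append(decoder.decode(b"", final=True))
        return "".join(out)

    data = "héllo wörld".encode("utf-8")
    print(decode_buggy([data]))                  # single chunk: works
    print(decode_fixed([data[:2], data[2:]]))    # split mid-character: still works
    print(decode_buggy([data[:2], data[2:]]))    # more than one chunk: raises UnicodeDecodeError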
I was not familiar with the code base. Developers who worked on the code base spent some time on it and didn't figure out what was going on; they also were not familiar with that specific code. But once Claude pointed this out, it became pretty obvious, and Claude rewrote the code correctly.
So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.
That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.
It is automation on steroids with near-human (let's say intern-level) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.
>So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.
If more of the stories were like this, where AI is an aid (AKA a fancy autocomplete), devs would probably be much more optimistic. I'd love more debugging tools.
Unfortunately, the lesson an executive would take from this is "wow, AI is great! Fire those engineers who didn't figure it out." Then it creeps to "okay, have AI make a better version of this chunk decoder", which is wrong on multiple levels. Can you imagine if the result of using IntelliSense for the first time was your office getting slashed in half? I'd hate autocomplete too.
What's "there", though, is that despite these products being wrappers around ChatGPT, the product itself is so compelling that it has essentially got a grip on the entire American economy. That's why everyone is acting like crabs in a bucket about it: there's something real that everyone wants to hitch on to. People compare the hype cycle to crypto or NFTs, but it's not even close.
>there's something real that everyone wants to hitch on to.
Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.
Granted, I think its hastiness will lead to a crash, so the CEOs have played themselves in the short term.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.
I would argue that the "actual job" is simply to solve problems. The client or customer ultimately does not care what technology you use. Hell, they don't really care if there's technology at all.
And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.
What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
>To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems
This feels like a mentality of "a solution trying to find a problem". There's enough actual problems to solve that I don't need to create more.
But sure, the extension of this is: "Then they go home, research more usages, and see a kerfuffle of legal, community, and environmental concerns. Then they decide not to get involved in the politics."
>Either way, the market is going to punish them accordingly.
If you want to punish me because I gave evaluations you disagreed with, you're probably not a company I want to work for. I'm not a middle manager.
It really depends on what you're doing. AI models are great at junior-level programming tasks. They have very broad but often shallow knowledge, so if your job involves jumping between 18 different tools and languages you don't know very well, they're a huge productivity boost. "I don't write much SQL, or much Python. Make a query using sqlalchemy which solves this problem. Here's our schema …"
AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.
I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction
Or, and stay with me on this, it’s a reaction to the actual experience they had.
I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.
Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.
I would've thought that in 20 years you would have met other devs who do not think like you?
Something I enjoy about our line of work is that there are different ways to be good at it, and different ways to be useful. I really enjoy the way different types of people make a team that knows its strengths and weaknesses.
Anyway, I know a few great engineers who shrug at the agents. I think different types of thinkers find engagement with these complex tools to be a very different experience. These tools suit some but not all, and that's OK.
This is the correct viewpoint (in my opinion, of course). There are many ways that lead to a solution, some are better, some are worse, some are faster, some much slower. Different tools and different strokes for different folks and if it works for you then more power to you. That doesn't mean you get to discard everybody for whom it does not work in exactly the same way.
I think a big mistake junior managers make is that they think that their nominal subordinates should solve problems the way that they would solve them, without recognizing that there are multiple valid paths and that it doesn't so much matter which path is chosen as long as the problem is solved on time and within the allocated budget.
I use AI all the time, but the only gain it gives me is better spelling and grammar than mine. Spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it - typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.
My dismissal I think indicates exhaustion from the additional work I’d need to do to make an LLM write my code, annoyance at its inaccuracies, and disgust at the massive scam and grift that is the LLM influencers.
Writing code via an LLM feels like writing with a wet noodle. It’s much faster to write what I mean, myself, with the terseness and precision of my own thought.
Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.
> A whole new class of problems just became solvable.
This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions will have existed in droves to have any semblance of utility to the LLM.
If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.
So what do you really mean when you say that a new class of problems became solvable?
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,
I don't understand why people seem so impatient about AI adoption.
AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (Only true for a subset of tasks). In areas where either of these cases apply, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product and good products take time.
> I don't understand why people seem so impatient about AI adoption.
We're burning absurd, genuinely farcical amounts of money on these tools now, so of course they're impatient. There are trillions (with a "T") riding on this massive hype wave, and the VCs and their ilk are getting nervous because they see people are waking up to the reality that it's at best a kinda useful tool in some situations and not the new God that we were promised that can do literally everything ever.
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)
I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)
FTR, the market is currently punishing people that DO use it (CVs are routinely being dumped at the merest hint of AI being used in their construction/presentation, interviewers dumping anyone that they think is using AI for "help", code reviewers dumping any take-home assignments that have even COMMENTS massaged by AI).
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right
I AM very impressed, and I DO use it and enjoy the results.
The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.
Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.
But I'm already done getting used to what is possible now. Changes since then have been incremental: nice to have, and I take them. I found a place for the tool, but to match the hype, another equally large step in actual intelligence would be necessary for the tool to truly be able to replace humans.
So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.
I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.
I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.
I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.
The big problem is AI is amazing at doing the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues it will be hopeless. Once your system gets complex enough the AI effectiveness drops off rapidly and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.
In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.
I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.
I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we’ve never seen it attributed):
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
The other problem is that if you didn't actually write the first 90% then the second 90% becomes 2x harder since you have to figure out wtf is actually going on.
The more I use AI for coding, the more I realize that it's a toy for vibe coding/fun projects. It's not for serious work.
When you work with a large codebase that has a very high level of complexity, the bugs AI puts in there aren't worth the easily added features.
Many people also program and have no idea what a giant codebase looks like.
I know I don't. I have never been paid to write anything beyond a short script.
I actually can't even picture what a professional software engineer actually works on day to day.
From my perspective, it is completely mind blowing to write my own audio synth in python with Librosa. A library I didn't know existed before LLMs and now I have a full blown audio mangling tool that I would have never been able to figure out on my own.
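To give a flavour of what I mean (not my actual tool, just a minimal sketch of the kind of call librosa makes trivial; assumes librosa plus the soundfile package for writing output):

```python
# Generate a tone, then "mangle" it with librosa's built-in effects.
import librosa
import soundfile as sf

sr = 22050
y = librosa.tone(220, sr=sr, duration=2.0)  # two seconds of a 220 Hz sine

stretched = librosa.effects.time_stretch(y, rate=0.5)               # half speed
shifted = librosa.effects.pitch_shift(stretched, sr=sr, n_steps=7)  # up a fifth

sf.write("mangled.wav", shifted, sr)
```

Trivial for anyone who knows the library, but I didn't even know the library existed.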
It seems to me professional software engineering must be at least as different to vibe coding as my audio noodlings are to being a professional concert pianist. Both are audio and music related but really two different activities entirely.
I work on a stock market trading system in a big bank, in Hong Kong.
The code is split between a backend in Java (no GC allowed during trading) and C++ (for algos), a frontend in C# (as complex as the backend, used by 200 traders), and a "new" frontend in Javascript in infinite migration.
Most of the code was written before 2008, but that's when we made the CVS to SVN switch, so we lost the history before that. We have employees dating back to 1997 who remember the platform already existing.
It's made of millions of lines of code, hundreds of people worked on it, it does intricate things in 10 stock markets across Asia (we have no clue how the others in US or EU do, not really at least - it's not the same rules, market vendors, protocols etc)
Sometimes I need to configure new trading robots for some random little thing we want to do automatically, and I ask the AI the company is shoving down our throats. It is HOPELESS, literally hopeless. I had to write a review for my manager (who will never pass it up the ladder for fear of the response) that was absolutely destructive. It cannot understand the code, let alone write some; it cannot write the tests; it cannot generate configuration; it cannot help with anything. It's always wrong, it never gets it, it doesn't know what the fuck these 20 different repos of thousands of files are and how they connect to each other, why it's in so many languages, why it's so quirky sometimes.
Should we change it all to make it AI-compatible, or give up? Fuck do I know... When I started working on it 7 years ago, coming from little startups doing little things, it took me a few weeks to totally get the philosophy of it all and be productive. It's really not that hard, it's just really, really, really, really large, so you have to embrace certain ways of working (for instance, you'll write bugs, and you'll find them too late, and you'll apologize in post mortems; don't be paralyzed by it). AIs costing all that money to be so dumb and useless are disappointing :(
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
I wonder if this issue isn't caused by people who aren't programmers and can now churn out AI-generated stuff that they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they didn't bother with before. But the real artisan cat memeists just roll their eyes.
AI is better than you at what you aren’t very good at. But once you are even mediocre at doing something you realize AI is wrong / pretty bad at doing most things and every once in awhile makes a baffling mistake.
There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.
> If you haven’t had a mind blown moment with AI yet...
Results are stochastic. Some people, the first time they use it, will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.
In European consulting agencies, the trend now is to make AI part of every RFP reply. You won't get past the sales team if AI isn't crammed in as part of the solution being delivered, and we get evaluated on it.
This takes all the joy away; even traditional maintenance projects for big corps seem attractive nowadays.
I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed into anywhere it would fit.
PC, Web and Smartphone hype was based on "we can now do [thing] never done before".
This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.
>> This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
I was doing RPA (robotic process automation) 8 years ago. Nobody wanted it in their departments. Whenever we would do presentations, we were told to never, ever, ever talk about this technology replacing people - it only removes the mundane work so teams can focus more on the bigger scope stuff. In the end, we did dozens and dozens of presentations and only two teams asked us to do some automation work for them.
The other leaders had no desire to use this technology because they were not only fearful of it replacing people on their teams, they were fearful it would impact their budgets negatively so they just quietly turned us down.
Unfortunately, you're right because as soon as this stuff gets automated and you find out 1/3rd of your team is doing those mundane tasks, you learn very quickly you can indeed remove those people since there won't be enough "big" initiatives to keep everybody busy enough.
The caveat was even on some of the biggest automations we did, you still needed a subset of people on the team you were working with to make sure the automations were running correctly and not breaking down. And when they did crash, since a lot of these were moving time sensitive data, it was like someone just stole the crown jewels and suddenly you need two war rooms and now you're ordering in for lunch.
Yes and no. PC, Web, etc advancements were also about lowering cost. It’s not that no one could do some thing, it’s that it was too expensive for most people, e.g. having a mobile phone in the 80’s.
Or hiring a mathematician to calculate what is now done in a spreadsheet.
"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product
I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something, and hype plays here, and marketing, religion and political persuasion too. In the real-world plane, it is all about tangible outcomes, and working code or results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two.
I chose an engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.
>a broader dichotomy between the people-persuasion plane and the real-world-facts plane
This right here is the real thing which AI is deployed to upset.
The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.
The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.
That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.
My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
I've never worked at Microsoft. However, I do have some experience with the company.
I worked building tools within the Microsoft ecosystem, both on the SQL Server side, and on the .NET and developer tooling side, and I spent some time working with the NTVS team at Microsoft many years ago, as well as attending plenty of Microsoft conferences and events, working with VSIP contacts, etc. I also know plenty of people who've worked at or partnered with Microsoft.
And to me this all reads like classic Microsoft. I mean, the article even says it: whatever you're doing, it needs to align with whatever the current key strategic priority is. Today that priority is AI, 12 years ago it was Azure, and on and on. And, yes, I'd imagine having to align everything you do to a single priority regardless of how natural that alignment is (or not) gets pretty exhausting, and I'd bet it's pretty easy to burn out on it if you're in an area of the business where this is more of a drag and doesn't seem like it delivers a lot of value. And you'll have to dogfood everything (another longtime Microsoft pattern) core to that priority even if it's crap compared with whatever else might be out there.
But I don't think it's new: it's simply part and parcel of working at Microsoft. And the thing is, as a strategy it's often served them well: Windows[0], Xbox, SQL Server, Visual Studio, Azure, Sharepoint, Office, etc. Doesn't always work, of course: Windows Phone went really badly, but it's striking that this kind of swing and a miss is relatively rare in Microsoft's history.
And so now, of course, they're doing it with AI. And, of course, they're a massive company, so there will be plenty of people there who really aren't having a good time with it. But, although it's far from a foregone conclusion, it would not be a surprise for Microsoft to come from behind and win by repeating their usual strategy... again.
[0] Don't overread this: I'm not necessarily saying I'm a huge fan. In fact I do think Windows, at its core, is a decent operating system, and has been for a very long time. On the back end it works well, and I have no complaints. But I viscerally despise Windows 11 as a desktop operating system. That's right: DESPISE. VISCERALLY. AT A MOLECULAR LEVEL.
> But moving toward one pole moves you away from the other.
My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes on placing those bets and you don't necessarily have to be right or wrong in an absolute sense, just long enough that someone else will take over your load and hopefully at a higher valuation.
Engineers are not usually the ones placing the bets, which is why they're trying to stay away from hype driven tech (to them it is neutral with respect to the outcome but in case of a failure they lose their job, so better to work on things that are not hyped, it is simply safer). But as soon as engineers are placing bets they are just as irrational as every other class of investor.
I do assume that, I legitimately think it's the most important thing happening in the next decade in tech. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.) and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
This somewhat reflects my sentiment toward this article. It felt very condescending. The "self-limiting beliefs" line and the implication that Seattle engineers are less than San Francisco engineers because they haven't bought into AI... well, neither have all the SF engineers.
One interesting takeaway from the article and the discussion is that there seem to be two kinds of engineers: those that buy into the hype and call it "AI," and those that see it for the fancy search engine it is and call it an "LLM." I'm pretty sure these days when someone mentions "AI" to me I roll my eyes. But if they say "LLM," ok, let's have a discussion.
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
The list of people who write code, use high-quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators" and displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc). On top of that, the way corporate America is absolutely doing the very familiar "blockchain" dance on this and insisting everyone has to do AI all the time everywhere is a huge problem that hopefully will shake out some in the coming years.
But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game changing. All of these things are true at the same time. There's no way someone that works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.
> There's no way someone that works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.
I've tried implementing features with Claude Code Max and if I had let that go on for a week instead of just a couple of days I would've lost a week's worth of work (it was pretty immediately obvious that it was too slow at doing pretty much everything, and even the slightest interaction with the LLM caused very long round-trips that would add additional time, over and over and over again). It's possible people simply don't do the kind of things I do. On the extreme end of that, had I spent my days making CRUD apps I probably would've thought it was magic and a "game changer"... But I don't.
I actually don't have a problem believing that there are people who basically only need to write 25% of their code now; if all you're doing for work is gluing together libraries and writing boilerplate then of course an LLM is going to help with that, you're probably the 1000th person that day to ask for the same thing.
The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.
P.S.:
I have found that very short completions, 1-3 lines, is a lot more productive for me personally than any kind of "generate this feature", or even function-sized generation. The reason is likely that LLMs just suck at the things I do, but they can figure out that a pattern exists in the pretty immediate context and just spit out that pattern with some context clues nearby. That remains my best experience with any and all LLM-assisted coding. I don't use it often because we don't allow LLMs for work, but I have a keybind for querying for a completion when I do side projects.
My current job/role combination has me working on a variety of projects which feature tasks to be done in: Python/SQLAlchemy (which I maintain), Go, k8s, Ansible, Bash, Groovy, Java, TypeScript, JavaScript, etc. If I'm doing an architecture-intensive thing in SQLAlchemy, obviously I'm not going to say "Claude, here, go do this feature for me". I will have it do things like write change notes (where I'll write out the changelog in the convoluted and overly technical way I can do in 10 seconds, and it produces something presentable and readable from it), set up test cases, and sometimes I will give it very specific instructions for a large refactoring that has a predictable pattern (basically, instead of me figuring out a complex search and replace or doing it manually). For stuff I do in Ansible and especially Groovy (a horrible language which heavily resists being lintable), these are very simple declarative playbooks or Jenkins pipeline jobs, and I use Claude heavily to write out directives and such because it will do so without syntax errors and without me having to google every individual pattern or directive; it's much easier to check what it writes and debug from there. But I'm also not putting Claude in charge in these places; it's doing the boring stuff for me, doing it a lot faster, and without my having to spend cognitive overhead (which is at a premium when you're in your late 50s like me).
> The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.
see there you go, I have things like this I have to figure out many times per week. so many of them are one-off things I really dont need to learn deeply at the moment (like TypeScript). It's also very helpful to bounce off ideas, like when I need to achieve something in the Go/k8s realm, it can sanity check how I'm approaching a problem and often suggest other ways that I would not have considered (which it knows because it's been trained on millions of tech blogs).
> the list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow.
My company is basically writing blank cheques for "AI" (aka LLM; I hate the way we've poisoned AI as a term) tooling so that people can use any and all tooling they want and see what works and doesn't. This is a company with ~1500ish engineers, ranging from hardware engineers building POS devices to the junior frontenders building out our simplest UIs. There's also a whole lot more people who aren't technical, and they're also encouraged to use any and all AI tooling they can.
Despite the entire company trying to figure out how to use these effectively precisely because we're trying to look at things objectively and separate out the hype from the reality, the only people I've seen with any kind of praise so far (and this has been going on since the early ChatGPT days) have been people in Marketing and Sales, because for them it doesn't matter if the AI hallucinates some pure bullshit since that's 90% of their job anyway.
We have spent god knows how much time and resources trying to get these tools doing anything more useful than simple demos that get thrown out immediately, and it's just not there. No one is pushing 100x the code or features they were before, projects aren't finishing any faster than they were before, and nobody even bothers turning on the meeting transcription tools either anymore because more often than not it'll interpret things said in the meeting just plain wrong or even make up entire discussion points that were never had.
Just recently, like last week recently, we had some idiotic PR review bot from coderabbit or some other such company be activated. I've never seen so many people complain all at once on Slack, there was a thread with hundreds of individuals all saying how garbage it was and how much it was distracting from reviews. I didn't see a single person say they liked the tool, not 1 single person had anything good to say about it.
So as far as I'm concerned, it's just a MASSIVE fucking hype bubble that will ultimately spawn some tooling that is sorta useful for generating unimportant scripts, but little else.
Never give an LLM to your junior engineers. The LLM itself is mostly like a junior engineer and will make a complete mess of things if not guided by someone with a lot of experience.
Basically if people are producing code or documentation that looks like an LLM wrote it, that's not really what I see as the model that makes these tools useful.
The last few years has revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It’s sad, I always thought of my fellow engineers as more open-minded.
> The last few years has revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.
> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.
It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
I'd argue even dogshit has more practical use than Bitcoin, if no one paid money for Bitcoin. You can throw it for self-defence, compost it (under high heat to kill the germs), put it on your property to scare away raccoons (it works sometimes).
Bitcoin and other crypto coins have a practical use. You can use them to buy whatever is being sold on the darkweb with the main product categories being drugs and guns. I honestly believe much of the valuation of Crypto is tied to these marketplaces.
And by "dog feces," I assume you mean fiat currency, correct?
Cryptocurrency solves the money-printing problem we've had around the world since we left the gold standard. If governments stopped making their currencies worthless, then bitcoin would go to zero.
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.
Bitcoin is probably unkillable. Even if were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.
True, but then so is a lot of "tech". There were certainly, at least equivalent, social applications before and all throughout Facebooks dominance, but like Bitcoin the network effect becomes primary, after a minimum feature set.
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.
Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.
Technically stagnant is a good thing; I'd prefer the term technically mature. It's accomplished what it set out to do, which is to be a decentralized, anonymous form of digital currency.
The only thing that MIGHT kill it is if governments stopped printing money.
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
Those are the main innovations tied to crypto trading. They do indeed have little to do with the blockchain or bitcoin itself, and do apply to any asset.
There are actually several startups whose pitch is to bring back those innovations to equities (note that this is different from tokenized equities).
Uh…
So the argument here is that
anticipated future value == meaningful value today?
The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value.
It’s a store of it - again, assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but you require the story propagating to achieve those ends.
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
Please don't do this; don't just make up your own definitions.
Pretty much anything and everything that uses neural nets is AI. Just because you don't like what the definition has been since the beginning doesn't mean you get to reframe it.
In addition, if humans are not infallible oracles of wisdom, they wouldn't count as an intelligence under your definition.
I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
Yes, one needs some awareness of the technology. Computer vision: unambiguously AI. Motion planning: there are classical algorithms, but I believe Tesla and Waymo both use NNs here too.
Look, I don't like the advertising of FSD, or Musk himself, but we without a doubt have cars using significant amounts of AI that work quite well.
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.
In those cases the actual "new" technology (ie, not the underlying ai necessarily) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) llm.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of ai. Having a very flexible llm with broad applications means that you can just put Chatgpt in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an llm yourself.
When someone isn't using llms, in my experience you get more bespoke engineering. The results might not be better than an llm, but obviously that bespoke code is much more interesting to me as a fellow programmer)
It's not just that AI is being pushed on to employees by the tech giants - this is true - but that the hype of AI as a life changing tech is not holding up and people within the industry can easily see this. The only life-changing thing it's doing is due to a self-fulfilling prophecy of eliminating jobs in the tech industry and outside by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future but for a majority there is only the prospect of pain, and this is what all the negativity is about.
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended, in new forms, to the labor situation in an AI-based economy.
FWIW, humans derive a lot of their self-evaluation as people from labor.
Getting everyone to even agree that this is a problem is impossible. I'm open to the universe of solutions, as long as it isn't "Anthropic and OpenAI get another $100 billion dollars while we starve". We can probably start there.
Whether it's capitalism or communism or whatever China has currently - it's all people doing everything to give their own children every unfair advantage and lie about it.
Why did people flee to America from Europe? Because Europe was nepo baby land.
Now America is nepo baby land and very soon China will be nepo baby land.
It's all rather simple. Western 'culture' is convincing everyone the nepo babies running things are actually uber experts because they attended university. Lol.
Yeah, unfortunately Marx was right about people not realizing the problem, too. The proletariat drowns in false consciousness :(
In reality, the US is finally waking up to the fact that the "golden age" of capitalism in the US was built upon the lite socialism of the New Deal, and that all the BS economic opinions the average American has subscribed to over the past few decades were just propaganda. Anyone with half a brain cell could see from miles away that since Reaganomics we've had nothing but a system that leads to gross accumulation at the top and the top alone, and this is a surefire way (variable maximization) in any complex system to produce instability and eventual collapse.
> humans derive a lot of their self-evaluation as people from labor.
We're conditioned to do so, in large part because this kind of work ethic makes exploitation easier. Doesn't mean that's our natural state, or a desirable one for that matter.
"AI-based economy" is too broad a brush to be painting with. From the Marxist perspective, the question you should be asking is: who owns the robots? and who owns the wealth that they generate?
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
Definitely, AI sentiment is positive among most people at the small startup I work at in the Seattle area. I do see the "AI fatigue" too; I bet the majority of it comes from AI being used as a repeated layoff rationalization. Personally, AI is a tool, one of the more useful ones (e.g. Claude and Gemini thinking models make quite helpful code reviewers once given a checklist). The hype often overshadows these benefits.
I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
I do believe that the product leadership is shoehorning it into every nook and cranny of the world right now and there are reasons to be annoyed by that but there are also countless incredible use cases that are mind blowing, that you can use it every day for.
I need to write about some absolutely life-changing scenarios, including: it got me thousands of dollars after drafting a legal letter quoting laws I knew nothing about; it saved me countless hours troubleshooting an RV electrical problem; it found bugs in code I wrote that were missed by everyone around me; my wife was impressed with my seemingly custom week-long meal plan that fit her short-term no-soy/no-dairy allergy diet; it helped me solve an issue with my house that a trained professional completely missed the mark on; it completely designed and wrote the code for a Halloween robot decoration I had been trying to build for years; and it saves my wife, an audiobook narrator, hundreds of hours by summarizing characters for her audiobooks so she doesn't have to read the entire book before she narrates the voices.
I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal and as we add them in smart ways to systems that exist today, things will only get better.
Call me glass half full... but maybe it's because I don't live in Seattle
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
What I feel is people are denouncing the problems and describing them as not being worth the tradeoff, not necessarily saying it has zero use cases. On the other end of the spectrum we have claims such as:
> countless incredible use cases that are mind blowing, that you can use it every day for.
Maybe those blow your mind, but not everyone’s mind is blown so easily.
For every one of your cases, I can give you a counter example where doing the same went horribly wrong. From cases being dismissed due to non-existent laws being quoted, to people being poisoned by following LLM instructions.
> I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too.
No, they are not! We can’t keep making climate change worse and fix it later. We can’t keep spreading misinformation at this rate and fix it later. We can’t keep increasing mass surveillance at this rate and fix it later. That “fix it later” attitude is frankly naive. You are falling for the narrative that got us into shit in the first place. Nothing will be “fixed later”, the powerful actors will just extract whatever they can and bolt.
> and as we add them in smart ways to systems that exist today, things will only get better.
No, they will not. Things are getting worse now, it’s absurd to think it’s inevitable they’ll get better.
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread
Yep.
I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way.
The same way people love to think they’re cooler than the masses by hating [famous pop artist]. “But that’s not real music!” they cry.
And that’s fine. Frankly, most of my AI skeptic friends are missing out on a skill that’s helped me a fair bit in my day to day at work and casually. Their loss.
Like it or not, LLMs are here to stay. The same way social media boomed and was here to stay, the same way e-commerce boomed and was here to stay… there’s now a whole new vertical that didn’t exist before.
Of course there will be washouts over time as the hype subsides, but who cares? LLMs are still wicked cool to me.
I don’t even work in AI, I just think they’re fascinating. The same way it was fascinating to me when I made a computer say “Hello, world!” for the first time.
I think the disconnect for me is that I want AI to do a bunch of mundane stuff in my job where it is likely to be discouraged so I can focus on my work. My employer's CEO just implemented an Elon-style "top 5" bi-weekly report. Would they find it acceptable for me to submit AI-generated writing? I just had to do my annual self and peer reviews. Is AI writing valid here? A company wanted to put me, a senior engineer, through a five stage interview process, including a software-graded Leetcode style assessment. Should I be able to use AI to complete it?
These aren't meant to be gotcha rhetorical questions, just parts of my professional life where AI _isn't_ desirable by those in power, even if they're some of the only real world use cases where I'd want to use it. As someone said upthread, I want AI to do my dishes and laundry so I can focus on leisure and creative pursuits (or, in my job, writing code). I don't want AI doing creative stuff for me so I can do dishes and laundry.
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
In Swedish the G wouldn't be silent so it wouldn't really be all that much like "wonderful"; "vanderfugel" is the closest thing I could come up with for how I'd pronounce it with some leniency.
The weird thing is that half of the uses of the name on that landing page spell it as "Wanderfull". All of the mock-up screencaps use it, and at the bottom with "Be one of the first people shaping Wanderfull" etc.
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
Just FYI, I would read it out loud in English as “wander fuggle”. I would assume most Americans would pronounce the ‘g’.
I thought ‘wanderfugl’ was a throwback to ~15 years ago when it was fashionable to use a word but leave out vowels for no reason, like Flickr/Tumblr/Scribd/Blendr.
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do.
But, enough of the people in tech have their future tied to AI that there are a lot of vocal boosters.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
The median age of people working local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).
Or, drive through Worth and Bridgeview in IL, where all the middle eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just, billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
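(For the curious: the 4o classification I mentioned a couple of paragraphs up doesn't require anything fancier than a little script roughly like this. The subject list and prompt here are made up for illustration; it assumes the openai>=1.0 Python client.)

```python
# Classify each meeting-minutes segment into one broad subject using a chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SUBJECTS = ["zoning", "budget", "public safety", "roads", "parks", "other"]

def classify(segment: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Classify the meeting excerpt into exactly one of: "
                        + ", ".join(SUBJECTS) + ". Reply with the label only."},
            {"role": "user", "content": segment},
        ],
    )
    return resp.choices[0].message.content.strip().lower()

print(classify("Discussion of the proposed variance for the lot on Main Street."))
```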
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to get a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting to make sure it stays on track, not to understand the content. AI can do all that shit without a problem.
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
Frankly, tech deserves its bad reputation in SF (and worldwide, really).
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
There's a long list of things that have "replaced" humans all the way back to the ox drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
Non-technical people that I know have rapidly embraced it as "better google where i don't have to do as much work to answer questions." This is in a non-work context so i don't know how much those people are using it to do their day job writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
EDIT: Removed part of my post that pissed people off for some reason. shrug
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population is using it in a Google fashion and moving on. Not everyone is thinking about the AI apocalypse every day.
Personally, I’m in between the two opinions. I hate when I’m consuming AI-generated stuff, but I can see the use for myself for work or for asking a bunch of not-so-important questions to get a general idea of stuff.
> enough of the people in tech have their future tied to AI that there are lot of vocal boosters
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
Most of my FB contacts are not in tech. AI is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI
I think this comment (and TFA) is really just painting with too broad of strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences or because they work with it and they begrudgingly see the writing on that wall for what it means for software professionals.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
Health and safety seems irrelevant to me. I complain about cars, I point out "obscure" facts like that they are a major cause of lung related health problems for innocent bystanders, I don't actually ride in cars on any regular basis, I use them less in fact than I use AI. There were people at the car's introduction who made all the points I would make today.
The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their life convenient, hopefully without attending the relevant funerals themselves.
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
Do you think the industry will stop because of your concern? If for example, AI does what it says on the box but causes goiters for prompt jockeys do you think the industry will stop then or offshore the role of AI jockey?
It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to consumer convenience measured progress.
From my PoV you are trolling with virtue signalling and thought-terminating memes. You don't want to discuss why every(?) technological introduction so far has ignored priorities such as your sentiments, and any devil's advocate must be the devil.
The members of HN are actually a pretty strongly biased sample towards people who get the omelet when the eggs get broken.
People not being assholes and having opinions is not "trolling with virtue signaling". Even where people do virtue signal, it is significant improvement over "vice signaling" which you seem to be doing and expecting others to do.
I have an “enabling suicidal ideation” concern for starters.
To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question taken at face value: There have been plenty of high profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate them for you and link them, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.
I am not against the use of LLMs, but as with social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.
All the AI companies are taking those concerns seriously, though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.
If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to call this. There are benefits and bad consequences for every new technology. Some are a net positive on the balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I do work with, but I keep catching mistakes in their code that they didn't use to make. Large suites of elegant-looking unit tests, but the unit tests include large amounts of code duplicating functionality of the test framework for no reason, and I've even seen unit tests that mock the actual function under test. New features that actually already exist with saner APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but their code isn't making it past code review. I worry about teams with less stringent code review cultures; modifying or improving these systems is going to be a major pain.
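To make that last anti-pattern concrete, here is a minimal, hypothetical Jest-style sketch (invented file and function names, not from any real codebase) of a test that mocks the very function it claims to cover, so it passes no matter what the real implementation does:

// pricing.test.js - hypothetical illustration of the anti-pattern described above.
// jest.mock replaces the module, so the real calculateDiscount never executes.
jest.mock("./pricing", () => ({
  calculateDiscount: jest.fn(() => 10),
}));

const { calculateDiscount } = require("./pricing");

test("applies a 10% discount to a $100 order", () => {
  // This only verifies the mock's hardcoded return value, not the real logic.
  expect(calculateDiscount(100)).toBe(10);
});

// A meaningful test would import the real module (no jest.mock) and assert on
// the actual implementation's output.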
As someone on a team with a less stringent code review culture, AI generated code creates more work when used indiscriminately. Good enough to get approved but full of non-obvious errors that cause expensive rework which only gets prioritized once the shortcomings become painfully obvious (usually) months after the original work was “completed” and once the original author has forgotten the details, or worse, left the team entirely. Not to say AI generated code is not occasionally valuable, just not for anything that is intended to be correct and maintainable indefinitely by other developers. The real challenge is people using AI generated code as a mechanism to avoid fully understanding the problem that needs to be solved.
Exactly, it’s the non-obvious errors that are easy to miss—doubly so if you are just scanning the code. Those errors can create very hard-to-find bugs.
So between the debugging and the many times you need to reprompt and redo (if you bother at all; if you don't, that just adds debugging time later), is any time actually saved?
I think the dust hasn’t settled yet because no one has shipped mostly AI generated code for a non-trivial application. They couldn’t have with its current state. So it’s still unknown whether building on incredibly shaky ground will actually work in real life (I personally doubt it).
In fairness, I’ve seen humans make that mistake. We had a complete outage in the testing of a product once and a couple of tests were still green. Turns out they tested nothing and never had.
That kind of slop already existed, but AI scales it by an order of magnitude.
I guess the same can be said of any technology, but AI is just a more powerful tool overall. Using languages as an example: let's say duck typing allowed a 10% productivity boost but also introduced 5% more mistakes/problems. AI (claims to) allow a 10x productivity boost, but also ~10x the mistakes/problems.
I've interfaced with some AI-generated code, and after several instances of finding subtle yet very wrong bugs, I now digest code I suspect of coming from AI (or an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.
I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.
Anecdote: In the 2 months after my org pushed copilot down to everyone the number of warnings in the codebase of our main project went from 2 to 65. I eventually cleaned those up and created a github action that rejects any PR if it emits new warnings, but it created a lot of pushback initially.
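The check itself doesn't have to be fancy. A rough sketch of the kind of script such an action could run (Node, with a placeholder build command and a hypothetical committed baseline file; not my actual setup):

// check-warnings.js - hedged sketch; "npm run build" and the baseline path are
// placeholders, not a description of any particular project's configuration.
const { execSync } = require("child_process");
const fs = require("fs");

const BASELINE_FILE = "warning-baseline.txt"; // warning count committed to the repo

// Capture build output (including stderr) and count lines that mention a warning.
const output = execSync("npm run build 2>&1 || true", { encoding: "utf8" });
const warnings = output.split("\n").filter((line) => /warning/i.test(line)).length;

const baseline = fs.existsSync(BASELINE_FILE)
  ? parseInt(fs.readFileSync(BASELINE_FILE, "utf8"), 10)
  : 0;

if (warnings > baseline) {
  console.error(`Build produced ${warnings} warnings (baseline is ${baseline}).`);
  process.exit(1); // a non-zero exit fails the check and blocks the PR
}
console.log(`Warnings: ${warnings} (baseline ${baseline}).`);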
Then, when you've taken an hour to be the first person to understand how their code works from top to bottom and to point out obvious bugs, problems and design improvements (no, I don't think this component needs 8 useEffects added to it which deal exclusively with global state that's only relevant 2 layers down, effectively treating React components like an event handling system for data - don't believe people who tell you LLMs are good at React; if you see a useEffect with an obvious LLM comment above it, it's likely to be buggy or unnecessary), your questions about it are answered with an immediate flurry of commits and it's back to square one.
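To make the React complaint concrete, here is a small invented example of the shape I keep seeing (hypothetical global-store hook and component names), next to the version that just derives the value during render:

import { useEffect, useState } from "react";
import { useGlobalStore } from "./store"; // hypothetical global-state hook

// The pattern complained about above: useEffect used as a data event-handler,
// mirroring store state into local state that could simply be derived.
function PriceBadgeWithEffect({ productId }) {
  const prices = useGlobalStore((s) => s.prices);
  const [label, setLabel] = useState("");

  useEffect(() => {
    setLabel(`$${(prices[productId] ?? 0).toFixed(2)}`);
  }, [prices, productId]);

  return <span>{label}</span>;
}

// Simpler and less bug-prone: compute it during render; no effect, no extra
// state, no extra render pass.
function PriceBadge({ productId }) {
  const price = useGlobalStore((s) => s.prices[productId] ?? 0);
  return <span>${price.toFixed(2)}</span>;
}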
Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.
It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.
I wonder if there’s a way to measure the cost of such code and associate it with the individuals incurring it. Unless this shows on reports, managers will continue believing LLMs are magic time saving machines writing perfect code.
As another Seattle SWE, I'll go against the grain and say that I think AI is going to change the nature of the labor market for SWEs, and my guess would be for the negative. People need to remember that the ability of AI in code generation today is the worst it will ever be, and it's only going to improve from here. If you were to judge just by the sentiment on HN, you would think no coder worth their salt was using this in the real world—but my experience on a few teams over the last two years has been exactly the opposite: people are often embarrassed to admit it, but they are using it all the time. There are many engineers at Meta who "no longer code" by hand and do literally all of their problem solving with AI.
I remember last year or even earlier this year feeling like the models had plateaued, and I was of the mindset that these tools would probably forever just augment SWEs without fully replacing them. But with Opus 4.5, Gemini 3, et al., these models are incredibly powerful, and more and more SWEs are leaning on them more and more—a trend that may slow down or speed up, but is never going to backslide. I think people who don't see this are fooling themselves.
Sure, there are problem areas—it misses stuff, there are subtle bugs, it's not good for every codebase, for every language, for every scenario. There is some sloppiness that is hard to catch. But this is true with humans too. Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better. And it doesn't need to be perfect to rapidly change the job market for SWE's—it's good enough to do enough of the tasks for enough mid-level SWEs at enough companies to reshape the market.
I'm sure I'll get downvoted to hell for this comment, but I think SWEs (and everyone else for that matter) would do well to practice some fiscal austerity, because I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial. I mean, they've made all of the progress up to now in essentially the last 5 years and the models are already incredibly capable.
This has been exactly my mindset as well (another Seattle SWE/DS). The baseline capability has been improving and compounding, not getting worse. It'd actually be quite convenient if AI's capabilities stayed exactly where they are now; the real problems come if AI does work.
I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line. But I'm not sure what's left if white collar work and creative work are automated en masse for "efficiency's" sake. Most folks like feeling like they're contributing towards something, despite some people who would rather do nothing.
To me it is clear that this is going to have negative effects on SWE and DS labor, and I'm unsure if I'll have a career in 5 years despite being a senior with a great track record. So, agreed. Save what you can.
> I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line.
Most people do not dream of working most white collar jobs. Many people dream of meaningful physical labor. And many people who worked in mines did not dream of being told to learn to code.
Exactly. For example, what happens to open source projects where developers don't have access to the latest proprietary dev tools? Or, what happens to projects like Spring if AI tools can generate framework code from scratch? I've seen maven builds on Java projects that pull in hundreds or even thousands of libraries. 99% of that code is never even used.
The real changes to jobs will be driven by considerations like these. Not saying this will happen but you can't rule it out either.
I keep getting blown away by AI (specifically Claude Code with the latest models). What it does is literally science fiction. If you told someone 5 years ago that AI can find and fix a bug in some complex code with almost zero human intervention nobody would believe you, but this is the reality today. It can find bugs, it can fix bugs, it can refactor code, it can write code. Yes, not perfect, but with a well organized code base, and with careful prompting, it rivals humans in many tasks (certainly outperforms them in some aspects).
As you're also saying, this is the worst it will ever be. There is only one direction; the question is the acceleration/velocity.
Where I'm not sure I agree is with the perception this automatically means we're all going to be out of a job. It's possible there would be more software engineering jobs. It's not clear. Someone still has to catch the bad approaches, the big mistakes, etc. There is going to be a lot more software produced with these tools than ever.
I think whether you are right or wrong it makes sense to hedge your bets. I suspect many people here are feeling some sense of fear (career, future implications, etc); I certainly do on some of these points and I think that's a rational response to be aware of the risk of the future unknown.
In general I think: if I were not personally invested in this situation (i.e. just another person on the street), what would my immediate reaction to this be? Would I still become a software engineer, as an example? Even if it doesn't come to pass, given what I know now, would I take that bet with my life/career?
I think if people were honest with themselves sadly the answer for many would probably be "no". Most other professions wouldn't do this to themselves either; SWE is quite unique in this regard.
> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.
This is the ultimate hypester’s motte to retreat to whenever the bailey of a technology's claimed utility falls. It’s trivially true of literally any technology, but also completely meaningless on its own.
> code generation today is the worst that it ever will be, and it's only going to improve from here.
I'm also of the mindset that even if this is not true (that is, even if the current state of LLMs is the best it will ever be), AI would still be helpful. It is already great at writing self-contained scripts, and efficiency with large codebases has already improved.
> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.
Yes, this is worrisome. Though it's ironic that almost every serious software engineer, at some point early in their childhood or career when programming was more for fun than work, thought about how cool it would be for a computer program to write a computer program. And now that we have the capability in front of our eyes, we're afraid of it.
But one thing humans are really good at is adaptability. We adapt to circumstances and situations, good or bad. Even if the worst happens and people lose jobs, it will be negatively impactful for their families in the short term; over time, however, humans will adapt to the situation, adapt to coexist with AI, and find the next endeavour to conquer.
Rejecting AI is not the solution. Using it as any other tool, is. A tool that, if used correctly, by the right person, can indeed produce faster results.
I mean, some are good at adaptability, while others get completely left in the dust. Look at the rust belt: jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US—when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads that AI is making into the workforce, it could be the first restructuring where we see massive losses in jobs.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.
I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.
> I mean, they've made all of the progress up to now in essentially the last 5 years
I have to challenge this one: the research on natural language generation and machine learning dates back to the 50s; it just only recently came together at scale in a way that became useful. Tons of the hardest progress was made over many decades, and very little fundamental innovation happened in the last 5 years. The recent innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning with human feedback and other such fine-tuning.
We're definitely in the territory of splitting hairs; but I think most of what people call modern AI is the result of the transformer paper. Of course this was built off the back of decades of research.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be
I've been reading this since 2023 and yet it hasn't really improved all that much. The same things are still problems that were problems back then. And if anything the improvement is slowing down, not speeding up.
I suspect unless we have real AGI we won't have human-level coding from AIs.
Pretty much. Someone on our team put out a code review for some new feature and then bounced for a 2 week vacation. One of our junior engineers approved it. Despite the fact that it was in a section of dead code that wasn’t supposed to even be enabled yet, it managed to break our test environment. It took senior engineers a day to figure out how that was even possible before reverting. We had another couple engineers take a look to see what needed to be done to fix the bug. All of them came away with the conclusion that it was 1,000 lines of pure AI-generated slop with no redeemable value. Trying to fix it would take more work than just re-implementing from scratch.
pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
so your process seems inadequate to me, agents or not.
also, was it tagged as generated? that seems like an obvious safety feature. As a junior, I might be thinking: 'my senior colleague sure knows lots of this stuff', but all it would take to dispel my illusion is an agent tag on the PR.
> pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
Yeah that’s what I think we need to enforce. To answer your question, it was not tagged as AI generated. Frankly, I think we should ban AI-generated code outright, though labeling it as such would be a good compromise.
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
Exactly. I think it’s pretty clear that software engineering is an “intelligence complete” problem. If you can automatically solve SWE then you can automatically solve pretty much all knowledge work.
The difference is that unlike SWEs, the people doing all that bullshit work are much better at networking, so they will (collectively) find a reason why they shouldn't be replaced with AI and push it through.
SWEs could do so as well, if only we were unionized.
I see it like the hype around js/node and whatever module tech was glued to it when it was new, from the perspective of someone who didn't code js. The sum of F's given is still zero.
People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
> I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
I am thinking about this third path a lot, but the reality is that it wouldn't make AI more interesting than any other tool humans use to go about their daily lives. Does one get this obsessed about screwdrivers?
The issue is that a human-first world where technology is subservient to our needs is very incompatible with our current society. The AI hype is part and parcel of the capitalistic mode of production where humans are ultimately judged for their ability to produce more commodities; the goal has always been to improve productivity to make more for cheaper — this time, the quest for efficiency has found a viable replacement for many humans activities.
40+ year software developer here (not saying this for ego, just that I've been doing this a long time and I've seen a lot of things). Here's how I've re-framed things in my mind due to AI:
The centralization of 'power' in AI will be with the entities that want to run the bigger, more general-purpose models. Fine. So be it. Knock yourselves out. Good luck building the data centers and finding power.
After that, AI now becomes a 'field leveler', and I say this with the utmost of sincerity and confidence. Need a supply chain system? Goodbye big boys. Goodbye vendor lock. Now there will be dozens, if not hundreds of small teams that can provide this for you at a fraction of the cost. Accounting you ask? Goodbye Intuit. We'll whip up what you need and you'll be off and running and you can kiss the global monsters goodbye. You get the idea.
This is a defining moment and it's awesome. Sure there will be some initial pain. Mindsets will have to change. Priorities will shift. My fundamental point is, all of the big boy/fat cats rushing in the AI race are literally rushing to completely undermine their own leverage and power. Each and every day I am amazed at what I can do, after a lifetime of staring at screens, with a $25.00/month Claude Code license. I am reaching out and lifting my neighbors, who have no technical experience at all, into a completely new playing field where they compete against entities they had no chance of competing with ever before.
Forgive my optimism, but it's hard not to be as I can now use my experience, with a tight group of trusted friends and colleagues, and a little bit of coding help (which goes a long way) to go wherever we want to go now.
Ex-Google here; there are many people both current and past-Google that feel the same way as the composite coworker in the linked post.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this, since I assumed Google would internally be one of the first places to adopt these tools.
I've generally found an inverse correlation between "understands AI" and "exuberance for AI".
I'm the only person at my current company who has had experience at multiple AI companies (the rest have never worked on it in a production environment; one of our projects is literally something I got paid to deliver to customers at another startup), has written professionally about the topic, and has worked directly with some big names in the space. Unsurprisingly, I have nothing to do with any of our AI efforts.
One of the members of our leadership team, who I don't believe understands matrix multiplication, genuinely believes he's about to transcend human identity by merging with AI. He's publicly discussed how hard it is to maintain friendship with normal humans who can't keep up.
Now I absolutely think AI is useful, but these people don't want AI to be useful they want it to be something that anyone who understands it knows it can't be.
It's getting to the point where I genuinely feel I'm witnessing some sort of mass hysteria event. I keep getting introduced to people who have almost no understanding of the fundamentals of how LLMs work but who hold the most radically fantastic ideas about what they are capable of, on a level beyond anything I have encountered in my fairly long technical career.
I'm sure you're already familiar with the ELIZA effect [0], but you should be a bit skeptical of what you are seeing with your eyes, especially when it comes to language. Humans have an incredible weakness to be tricked by language.
You should be doubly skeptical ever since RLHF became standard, as the model has literally been optimized to give you the answers you find most pleasing.
The best way to measure of course is with evaluations, and I have done professional LLM model evaluation work for about 2 years. I've seen (and written) tons of evals and they both impress me and inform my skepticism about the limitations of LLMs. I've also seen countless times where people are convinced "with their eyes" they've found a prompt trick that improves the results, only to be shown that this doesn't pan out when run on a full eval suite.
As an aside: What's fascinating is that it seems our visual system is much more skeptical, an eyeball being slightly off created by a diffusion model will immediately set off alarms where enough clever word play from an LLM will make us drop our guard.
We get around this a bit when using it to write code since we have unit tests and can verify that it's making correct changes and adhering to an architecture. It has truly become much more capable in the last year. This technology is so flexible that it can be used in ways no eval will ever touch and still perform well. You can't just rely on what the labs say about it, you have to USE it.
Interesting observation about the visual system. Truth be told, we get visual feedback about the world at a much higher data rate, AND the visual signal is usually much more highly correlated with reality, whereas language is a virtual byproduct of cognition and communication.
No one understands how LLMs work. But some people manage to delude themselves into thinking that they do.
One key thing that people prefer not to think about is that LLMs aren't created by humans. They are created by an inhuman optimization algorithm that humans have learned to invoke and feed with data and computation.
Humans have a say in what it does and how, but "a say" is about the extent of it. The rest is a black box - incomprehensible products of a poorly understood mathematical process. The kind of thing you have to research just to get some small glimpses of how it does what it does.
Expecting those humans to understand how LLMs work is a bit like expecting a woman to know how humans work because she made a human once.
I work in a space where I get to build and optimise AI tools for my own and my team's use pretty much daily. As such I focus mainly on AI'ing the crap out of boring & time-consuming stuff that doesn't interest any of us any more, and luckily enough there's a whole lot of low hanging fruit in that space where AI is a genuine time, cost and sanity saver.
However any activity that requires directed conscious thought and decision making where the end state isn't clearly definable up front tends to be really difficult for AI. So much of that work relies on a level of intuition and knowledge that is very hard to explain to a layman - let alone eidetic idiots like most AIs.
One example is trying to get AI to identify security IT incidents in real time and take proactive action. Skilled practitioners can fairly easily use AI to detect anomalous events in near real time, but getting AI to take the next step to work out which combinations of "anomalous" activities equate to "likely security incident" is much harder. A reasonably competent human can usually do that relatively quickly, but often can't explain how they do it.
Working out what action is appropriate once the "likely security incident" has been identified is another task that a reasonably competent human can do, but where AIs are hopeless. In most cases, a competent human is WAAAY better at identifying a reasonable way forward based on insufficient knowledge. In those cases, a good decision made quickly is preferable to a perfect decision made slowly, and humans understand this fairly intuitively.
> I've generally found an inverse correlation between "understands AI" and "exuberance for AI".
A few years ago I had this exact observation regarding self-driving cars. Non- and semi-engineers who worked in the tech industry were very bullish about self-driving cars, believing every ETA spewed by Musk, while engineers were cautiously optimistic or pessimistic depending on their understanding of AI, LiDAR, etc.
This completely explains why so many engineers are skeptical of AI while so many managers embrace it: The engineers are the ones who understand it.
(BTW, if you're an engineer who thinks you don't understand AI or are not qualified to work on it, think again. It's just linear algebra, and linear algebra is not that hard. Once you spend a day studying it, you'll think "Is that all there is to it?" The only difficult part of AI is learning PyTorch, since all the AI papers are written in terms of Python nowadays instead of -- you know -- math.)
I've been building neural net systems since the late 1980s. And yes they work and they do useful things when you have modern amounts of compute available, but they are not the second coming of $DEITY.
Linear algebra cannot be learned in a day. Maybe multiplying matrices when the dimensions allow, but there is far more to linear algebra than knowing how to multiply matrices. Knowing when and why is far more interesting. Knowing how to decompose them. Knowing what a non-singular matrix is and why it's special, and so on. Once you know what's covered in a basic lower-division linear algebra class, you can move on to linear programming and learn about cost functions and optimization, or to numerical analysis.
PyTorch is just a calculator. If I handed someone a Ti-84 they wouldn’t magically know how to bust out statistics on it…
> This completely explains why so many engineers are skeptical of AI while so many managers embrace it: The engineers are the ones who understand it.
Curiously, some Feynman chap reported that several NASA engineers put the chance of the Challenger going kablooie—an untechnical term for rapid unscheduled disassembly, which the Challenger had then just recently exhibited—at roughly 1 in 200, while the manager said, after some prevarications—"weaseled" is Feynman's term—that the chance was 1 in 100,000 with 100% confidence.
I mostly disagree with this. Lots of things correlate weakly with other things, often in confusing and overlapping ways. For instance, expertise can also correlate with resistance to change. Ego can correlate with protection of the status quo and dismissal of people who don't have the "right" credentials. Love of craft can correlate with distaste for automation of said craft (regardless of the effectiveness of the automation). Threat to personal financial stability can correlate with resistance (regardless of technical merit). Potential for personal profit can correlate with support (regardless of technical merit). Understanding neural nets can correlate both with exuberance and skepticism in slightly different populations.
Correlations are interesting but when examined only individually they are not nearly as meaningful as they might seem. Which one you latch onto as "the truth" probably says more about what tribe you value or want to be part of than anything fundamental about technology or society or people in general.
I think there is a difference between what you can expect from something when you know its internals and when you don't, but it's not as if whoever knows the internals is always much, much better.
Example: many people created websites without a clue of how they really work. And got millions of people on it. Or had crazy ideas to do things with them.
At the same time there are devs that know how internals work but can’t get 1 user.
pc manufacturers never were able to even imagine what random people were able to do with their pc.
This is to say that even if you know the internals you can claim you know better, but that doesn't mean it's absolute.
Sometimes knowing the fundamentals is a limitation: it will limit your imagination.
I'm a big fan of the concept of 初心 (Japanese: shoshin, aka "beginner's mind" [0]) and largely agree with Suzuki's famous quote:
> “In the beginner’s mind there are many possibilities, but in the expert’s there are few”
Experts do tend to be limited in what they see as possible. But I don't think that allows carte blanche belief that a fancy Markov Chain will let you transcend humanity. I would argue one of the key concepts of "beginners mind" is not radical assurance in what's possible but unbounded curiosity and willingness to explore with an open mind. Right now we see this in the Stable Diffusion community: there are tons of people who also don't understand matrix multiplication that are doing incredible work through pure experimentation. There's a huge gap between "I wonder what will happen if I just mix these models together" and "we're just a few years from surrendering our will to AI". None of the people I'm concerned about have what I would consider an "open mind" about the topic of AI. They are sure of what they know and to disagree is to invite complete rejection. Hardly a principle of beginners mind.
Additionally:
> pc manufacturers never were able to even imagine what random people were able to do with their pc.
That statement betrays a deep ignorance of the history of personal computing. Honestly, I don't think modern computing has ever returned to the ambition of what was being dreamt up, by experts, at Xerox PARC. The demos on the Xerox Alto in the early 1970s are still ambitious in some senses. And, as much as I'm not a huge fan, Gates and Jobs absolutely had grand visions for what the PC would be.
I think this is what is blunted by mass education and most textbooks. We need to discover it again if we want to enjoy our profession with all the signals flowing from social media about all the great things other people are achieving. Staying stupid and hungry really helps.
I think this is more of a mechanistic-understanding-versus-fundamental-insight situation. The linear algebra picture is currently very mechanistic, since it only tells us what the computations are. There are research groups trying to go beyond that, but the insight from these efforts is currently very limited.
However, the probabilistic view is much clearer. You can reach many explorable insights, both potentially true and false, just by understanding the loss functions, what the model is sampling from, what the marginal or conditional distributions are, and so on. Generative AI models are beautiful at that level. It is truly mind-blowing that in 2025 we are able to sample from megapixel image distributions conditioned on natural-language text prompts.
If you dig up old ML/vision papers, you will see that, formulation-wise, they actually did; they just lacked the data, the compute, and the mechanistic machinery provided by the transformer architecture. The wheels of progress turn slowly and require many rotations to finally reach somewhere.
It's definitely interesting to look at people's mental models around AI.
I don't know shit about the math that makes it work, but my mental model is basically - "A LLM is an additional tool in my toolbox which performs summarization, classification and text transformation tasks for me imperfectly, but overall pretty well."
Probably lots of flaws in that model but I just try to think like an engineer who's attempting to get a job done and staying up to date on his tooling.
But as you say there are people who have been fooled by the "AI" angle of all this, and they think they're witnessing the birth of a machine god or something. The example that really makes me throw up my hands is r/MyBoyfriendIsAI where you have women agreeing to marry the LLM and other nonsense that is unfathomable to the mentally well.
There's always been a subset of humans who believe unimaginably stupid things, like that there's a guy in the sky who throws lightning bolts when he's angry, or whatever. The interesting (as in frightening) trend in modernity is that instead of these moron cults forming around natural phenomena we're increasingly forming them around things that are human made. Sometimes we form them around the state and human leaders, increasingly we're forming them around technologies, in line with Arthur C. Clarke's third law - that "Any sufficiently advanced technology is indistinguishable from magic."
If I sound harsh it's because I am, we don't want these moron cults to win, the outcome would be terrible, some high tech version of the Dark Ages. Yet at this moment we have business and political leaders and countless run-of-the-mill tech world grifters who are leaning into the moron cult version of AI rather than encouraging people to just see it as another tool in the box.
Google has good engineers. Generally I've noticed that the better someone is at coding, the more critical they are of AI-generated code. Which makes sense, honestly: it's easier to spot flaws the more expert you are. This doesn't mean they don't use AI-generated code, just that they are more careful about when and where.
Yes, because they're more likely to understand that the computer isn't this magical black box, and that just because we've made ELIZA marginally better, doesn't mean it's actually good. Anecdata, but the people I've seen be dazzled by AI the most are people with little to no programming experience. They're also the ones most likely to look on computer experts with disdain.
Well yeah. And because when an expert looks at the code chatgpt produces, the flaws are more obvious. It programs with the skill of the median programmer on GitHub. For beginners and people who do cookie cutter work, this can be incredible because it writes the same or better code they could write, fast and for free. But for experts, the code it produces is consistently worse than what we can do. At best my pride demands I fix all its flaws before shipping. More commonly, it’s a waste of time to ask it to help, and I need to code the solution from scratch myself anyway.
I use it for throwaway prototypes and demos. And whenever I’m thrust into a language I don’t know that well, or to help me debug weird issues outside my area of expertise. But when I go deep on a problem, it’s often worse than useless.
This is why AI is the perfect management Rorschach test.
To management (out of IC roles for long enough to lose their technical expertise), it looks perfect!
To ICs, the flaws are apparent!
So inevitably management greenlights new AI projects* and behaviors, and then everyone is in the 'This was my idea, so it can't fail' CYA scenario.
* Add in a dash of management consulting advice here, and note that management consultants' core product was already literally 'something that looks plausible enough to make execs spend money on it'
My experience (with ChatGPT 5.1 as of late) is that the AI follows a problem->solution internal logic and doesn't stop to think about and structure its code.
If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
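Roughly the kind of refactor I mean, as a hedged Express-style sketch with invented resource names (just an illustration, not output from the AI):

// One shared helper instead of five copy-pasted handlers.
const express = require("express");
const app = express();
app.use(express.json());

function makeStore() {
  // Minimal in-memory store so the example is self-contained.
  const items = new Map();
  let nextId = 1;
  return {
    list: () => [...items.values()],
    get: (id) => items.get(Number(id)),
    create: (data) => {
      const item = { id: nextId++, ...data };
      items.set(item.id, item);
      return item;
    },
  };
}

function registerCrud(app, path, store) {
  app.get(path, (req, res) => res.json(store.list()));
  app.get(`${path}/:id`, (req, res) => {
    const item = store.get(req.params.id);
    return item ? res.json(item) : res.sendStatus(404);
  });
  app.post(path, (req, res) => res.status(201).json(store.create(req.body)));
}

// Each new resource is one line, not another block of near-identical code.
registerCrud(app, "/users", makeStore());
registerCrud(app, "/orders", makeStore());
registerCrud(app, "/products", makeStore());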
I feel like the AI has a strong bias towards adding things, and not removing them. The most obviously wrong thing is with CSS - when I try to do some styling, it gets 90% of the way there, but there's almost always something that's not quite right.
Then I tell the AI to fix a style, since that div is getting clipped or not correctly centered etc.
It almost always keeps adding properties, and after 2-3 tries and an incredibly bloated style, I delete the thing and take a step back and think logically about how to properly lay this out with flexbox.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
>
>A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
>
>I feel like the AI has a strong bias towards adding things, and not removing them.
I suspect this is because an LLM doesn't build a mental model of the code base like a dev does. It can decide to look at certain files, and maybe you can improve this by putting a broad architecture overview of a system in an agents.md file, I don't have much experience with that.
But for now, I'm finding it most useful to still think in terms of code architecture myself, give it small steps that are part of that architecture, and then iterate based on my own review of the AI-generated code. I don't have the confidence to just let some agent plan and then run for tens of minutes or even hours building out a feature. I want to be in the loop earlier to set the direction.
A good system prompt goes a long way with the latest models. Even just something as simple as "use DRY principles whenever possible." or prompting a plan-implement-evaluate cycle gets pretty good results, at least for tasks that are doing things that AI is well trained on like CRUD APIs.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
I don’t think this is an inherent issue with the technology. Duplicate code detectors have been around for ages. Give an AI agent a tool which calls one, ask it to reduce duplication, and it will start refactoring.
Of course, there is a risk of going too far in the other direction: refactorings which technically reduce duplication but which have unacceptable costs (you can be too DRY). But there are some possible mitigations: (a) ask it to judge whether the refactoring is worth it; if it judges no, just ignore the duplication and move on; (b) get a human to review the decision in (a); (c) if the AI repeatedly makes the wrong decision (according to the human), apply prompt engineering, or maybe even just some hardcoded heuristics.
It actually is somewhat a limit of the technology. LLMs can't go back and modify their own output, later tokens are always dependent on earlier tokens and they can't do anything out of order. "Thinking" helps somewhat by allowing some iteration before they give the user actual output, but that requires them to write it the long way and THEN refactor it without being asked, which is both very expensive and something they have to recognize the user wants.
Coding agents can edit their own output - because their output is tool calls to read and write files, and so it can write a file, run some check on it, modify the file to try to make it pass, run the check again, etc
Sorry, but from where I sit, this only marginally closes the gap between AI and truly senior engineers.
Basically human junior engineers start by writing code in a very procedural and literal style with duplicate logic all over the place because that's the first step in adapting human intelligence to learning how to program. Then the programmer realizes this leads to things becoming unmaintainable and so they start to learn the abstraction techniques of functions, etc. An LLM doesn't have to learn any of that, because they already know all languages and mechanical technique in their corpus, so this beginning journey never applies.
But what the junior programmer has that the LLM doesn't is an innate, common-sense understanding of the human goals driving the creation of the code in the first place, and that serves them through their entire progression from junior to senior. As you point out, code can be "too DRY", but why? Senior engineers understand that DRYing up code is not a style issue; it's about maintainability, about understanding what is likely to change, and about what the apparent effects will be for the human stakeholders who depend on the software. Basically: do these things map to concepts that are the same for human users and unlikely to diverge in the future? This is a surprisingly deep question, as perhaps every human stakeholder will swear up and down that they are the same, yet six months from now a problem arises that requires them to diverge. At that point there is a cognitive overhead and dissonance in explaining that divergence to users who were heretofore perfectly satisfied with one domain concept.
Ultimately the value function for success of a specific code factoring style depends on a lot of implicit context and assumptions that are baked into the heads of various stakeholders for the specific use case and can change based on myriad outside factors that are not visible to an LLM. Senior engineers understand the map is not the territory, for LLMs there is no territory.
I’m not suggesting AIs can replace senior engineers (I don’t want to be replaced!)
But, senior engineers can supervise the AI, notice when it makes suboptimal decisions, intervene to address that somehow (by editing prompts or providing new tools)… and the idea is gradually the AI will do better.
Rather than replacing engineers with AIs, engineers can use AIs to deliver more in the same amount of time
Which I think points to the biggest issue with current AI: knowledge workers in any profession at any skill level tend to get the impression that AI is very impressive, but it is prone to fail at real-world tasks unpredictably, so the mental model of a 'junior engineer', or of any human who does simple tasks reliably by themselves, is wrong.
AI operating at all levels needs to be constantly supervised.
Which would still make AI a worthwhile technology, as a tool, as many have remarked before me.
The problem is, companies are pushing for agentic AI instead of AI that can do repetitive, short-horizon tasks in a fast and reliable manner.
Sure. My point was AI was already 25% of the way there even with their verbose messy style. I think with your suggestions (style guidance, human in the loop, etc) we get at most 30% of the way there.
Bad code is only really bad if it needs to be maintained.
If your AI reliably generates working code from a detailed prompt, the prompt is now the source that needs to be maintained. There is no important reason to even look at the generated code
> the prompt is now the source that needs to be maintained
The inference response to the prompt is not deterministic. In fact, it’s probably chaotic since small changes to the prompt can produce large changes to the inference.
The C compiler will still make working programs every time, so long as your code isn’t broken. But sometimes the code chatgpt produces won’t work. Or it'll kinda work but you’ll get weird, different bugs each time you generate it. No thanks.
I think this might be plausible in the future, but it needs a lot more tooling. For starters you need to be able to run the prompt through the exact same model so you can reproduce a "build".
Even the exact same model isn't enough. There are several sources of nondeterminism in LLMs. These would all need to be squashed or seeded - which as far as I know isn't a feature that openai / anthropic / etc provide.
Well.. except the AI models are nondeterministic. If you ask an AI the same prompt 20 times, you'll get 20 different answers. Some of them might work, some probably won't. It usually takes a human to tell which are which and fix problems & refactor. If you keep the prompt, you can't manually modify the generated code afterwards (since it'll be regenerated). Even if you get the AI to write all the code correctly, there's no guarantee it'll do the same thing next time.
> It programs with the skill of the median programmer on GitHub
This is a common intuition but it's provably false.
The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.
Eighteen months ago GPT-4 was outperforming 85% of human participants in coding contests. And people who participate in coding contests are already well above the median skill level on Github.
And capability has gone way up in the last 18 months.
The best argument I've yet heard against the effectiveness of AI tools for SW dev is the absence of an explosion of shovelware over the past 1-2 years.
Basically, if the tools are even half as good as some proponents claim, wouldn't you expect at least a significant increase in simple games on Steam or apps in app stores over that time frame? But we're not seeing that.
Interesting approach. I can think of one more explanation the author didn't consider: what if software development time wasn't the bottleneck to what he analyzed? The chart for Google Play app submissions, for example, goes down because Google made it much more difficult to publish apps on their store in ways unrelated to software quality. In that case, it wouldn't matter whether AI tools could write a billion production-ready apps, because the limiting factor is Google's submission requirements.
There are other charts besides Google Play. Particularly insightful is the Steam chart, as Steam is already full of shovelware and, in my experience, many developers wish they were making games but the pay is bad.
GitHub repos is pretty interesting too but it could be that people just aren't committing this stuff. Showing zero increase is unexpected though.
I've had this same thought for some time. There should have been an explosion in startups, new product from established companies, new apps by the dozen every day. If LLMs can now reliably turn an idea into an application, where are they?
The argument against this is that shovelware has a distinctly different distribution model now.
App stores have quality hurdles that didn’t exist in the diskette days. The types of people making low quality software now can self publish (and in fact do, often), but they get drowned out by established big dogs or the ever-shifting firehose of our social zeitgeist if you are not where they are.
Anyone who has been on Reddit this year in any software adjacent sub has seen hundreds (at minimum) of posts about “feedback on my app” or slop posts doing a god awful job of digging for market insights on pain points.
The core problem with this guy’s argument is that he’s looking in the wrong places - where a SWE would distribute their stuff, not a normie - and then drawing the wrong conclusions. And I am telling you, normies are out there, right now, upchucking some of the sloppiest of slop software you could ever imagine with wanton abandon.
I don't think this disproves my claim, for several reasons.
First, I don't know where those human participants came from, but if you pick people off the street or from a college campus, they aren't going to be the world's best programmers. On the other hand, github users are on average more skilled than the average CS student. Even students and beginners who use github usually don't have much code there. If the LLMs are weighted to treat every line of code about same, they'd pick up more lines of code from prolific developers (who are often more experienced) than they would from beginners.
Also, in a coding contest you're under time pressure. Even when your code works, it's often ugly and thrown together. On github, the only code I check in is code that solves whatever problem I set out to solve. I suspect everyone writes better code on github than we do in programming competitions. I suspect that if you gave the competitors functionally unlimited time to do the programming competition, many more would outperform GPT-4.
Programming contests also usually require that you write a fully self contained program which has been very well specified. The program usually doesn't need any error handling, or need to be maintained. (And if it does need error handling, the cases are all fully specified in the problem description). Relatively speaking, LLMs are pretty good at these kind of problems - where I want some throwaway code that'll work today and get deleted tomorrow.
But most software I write isn't like that. And LLMs struggle to write maintainable software in large projects. Most problems aren't so well specified. And for most code, you end up spending more effort maintaining the code over its lifetime than it takes to write in the first place. Chatgpt usually writes code that is a headache to maintain. It doesn't write or use local utility functions. It doesn't factor its code well. The code is often overly verbose. It often writes code that's very poorly optimized. Or the code contains quite obvious bugs for unexpected input - like overflow errors or boundary conditions. And the code it produces very rarely handles errors correctly. None of these problems really matter in programming competitions. But it does matter a lot more when writing real software. These problems make LLMs much less useful at work.
Or even at solving problems that businesses need to solve, generally speaking.
This complete misunderstanding of what software engineering even is is the major reason so many engineers are fed up with the clueless leaders foisting AI tools upon their orgs: they apparently lack the critical reasoning skills to distinguish marketing speak from reality.
> The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.
It does, by default. Try asking ChatGPT to implement quicksort in JavaScript, the result will be dogshit. Of course it can do better if you guide it, but that implies you recognize dogshit, or at least that you use some sort of prompting technique that will veer it off the beaten path.
I asked the free version of ChatGPT to implement quicksort in JS. I can't really see much wrong with it, but maybe I'm missing something? (Ugh, I just can't get HN to format code right... pastebin here: https://pastebin.com/tjaibW1x)
----
function quickSortInPlace(arr, left = 0, right = arr.length - 1) {
  if (left < right) {
    const pivotIndex = partition(arr, left, right);
    quickSortInPlace(arr, left, pivotIndex - 1);
    quickSortInPlace(arr, pivotIndex + 1, right);
  }
  return arr;
}

function partition(arr, left, right) {
  const pivot = arr[right];
  let i = left;
  for (let j = left; j < right; j++) {
    if (arr[j] < pivot) {
      [arr[i], arr[j]] = [arr[j], arr[i]];
      i++;
    }
  }
  [arr[i], arr[right]] = [arr[right], arr[i]]; // Move pivot into place
  return i;
}
This is exactly the level of code I've come to expect from chatgpt. It's about the level of code I'd want from a smart CS student. But I'd hope to never use this in production:
- It always uses the last item as a pivot, which will give it pathological O(n^2) performance if the list is sorted. Passing an already sorted list to a sort function is a very common case. Good quicksort implementations will use a random pivot, or at least the middle pivot so re-sorting lists is fast.
- If you pass already sorted data, the recursive call to quickSortInPlace will take up stack space proportional to the size of the array. So if you pass a large sorted array, not only will the function take n^2 time, it might also generate a stack overflow and crash.
- This code: ... = [arr[j], arr[i]]; Creates an array and immediately destructures it. This is - or at least used to be - quite slow. I'd avoid doing that in the body of quicksort's inner loop.
- There's no way to pass a custom comparator, which is essential in real code.
I just tried in firefox:
// Sort an array of 1 million sorted elements
arr = Array(1e6).fill(0).map((_, i) => i)
console.time('x')
quickSortInPlace(arr)
console.timeEnd('x')
My computer ran for about a minute then the javascript virtual machine crashed:
Uncaught InternalError: too much recursion
This is about the quality of quicksort implementation I'd expect to see in a CS class, or in a random package in npm. If someone on my team committed this, I'd tell them to go rewrite it properly. (Or just use a standard library function - which wouldn't have these flaws.)
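For reference, here is one way to address those specific points - a sketch only, and certainly not the only reasonable fix: a random pivot, plain swaps, a comparator parameter, and an explicit stack that handles the smaller partition first so the depth stays logarithmic even on skewed input.

function quickSort(arr, compare = (a, b) => (a < b ? -1 : a > b ? 1 : 0)) {
  const swap = (i, j) => { const t = arr[i]; arr[i] = arr[j]; arr[j] = t; };
  const stack = [[0, arr.length - 1]];

  while (stack.length > 0) {
    const [lo, hi] = stack.pop();
    if (lo >= hi) continue;

    // A random pivot avoids the O(n^2) blow-up on already-sorted input.
    swap(lo + Math.floor(Math.random() * (hi - lo + 1)), hi);
    const pivot = arr[hi];

    let i = lo;
    for (let j = lo; j < hi; j++) {
      if (compare(arr[j], pivot) < 0) {
        swap(i, j);
        i++;
      }
    }
    swap(i, hi); // move the pivot into its final position

    // Push the larger partition first so the smaller one is handled next;
    // this keeps the explicit stack O(log n) deep instead of O(n).
    if (i - lo > hi - i) {
      stack.push([lo, i - 1]);
      stack.push([i + 1, hi]);
    } else {
      stack.push([i + 1, hi]);
      stack.push([lo, i - 1]);
    }
  }
  return arr;
}

// e.g. quickSort(Array.from({ length: 1e6 }, (_, i) => i)) finishes without blowing
// the stack, and a custom comparator like (a, b) => a.age - b.age also works.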
OK, you just added requirements the previous poster had not mentioned. Firstly, how often do you really need to sort a million elements in a browser anyway? I expect that sort of heavy lifting would usually be done on the server, where you'd also want to do things like paging.
Secondly, if a standard implementation was to be used, that's essentially a No-Op. AI will reuse library functions where possible by default and agents will even "npm install" them for you. This is purely the result of my prompt, which was simply "Can you write a QuickSort implementation in JS?"
In any case, to incorporate your feedback, I simply added "that needs to sort an array of a million elements and accepts a custom comparator?" to the initial prompt and reran in a new session, and this is what I got in less than 5 seconds. It runs in about 160ms on Chrome:
How long would your team-mate have taken? What else would you change? If you have further requirements, seriously, you can just add those to the prompt and try it for yourself for free. I'd honestly be very curious to see where it fails.
However, this exchange is very illustrative: I feel like a lot of the negativity is because people expect AI to read their minds and then hold it against it when it doesn't.
> OK, you just added requirements the previous poster had not mentioned.
Lol of course! The real requirements for a piece of software are never specified in full ahead of time. Figuring out the spec is half the job.
> Firstly, how often do you really need to sort a million elements in a browser anyway? I expect that sort of heavy lifting would usually be done on the server
Who said anything about the browser? I run javascript on the server all the time.
Don't defend these bugs. 1 million items just isn't very many items for a sort function. On my computer, the built in javascript sort function can sort 1 million sorted items in 9ms. I'd expect any competent quicksort implementation to be able to do something similar. Hanging for 1 minute then crashing is a bug.
If you want a use case, consider the very common case of sorting user-supplied data. If I can send a JSON payload to your server and make it hang for 1 minute then crash, you've got a problem.
> If you have further requirements, seriously, you can just add those to the prompt and try it for yourself for free. [..] How long would your team-mate have taken?
We've gotta compare like for like here. How long does it take to debug code like this when an AI generates it? It took me about 25 minutes to discover & verify those problems. That was careful work. Then you reprompted it, and then you tested the new code to see if it fixed the problems. How long did that take, all added together? We also haven't tested the new code for correctness or to see if it has new bugs. Given it's a complete rewrite, there's a good chance chatgpt introduced new issues. I've also had plenty of instances where I've spotted a problem and chatgpt apologises then completely fails to fix the problem I've spotted. Especially lifetime issues in rust - it's really bad at those!
The question is this: Is this back and forth process faster or slower than programming quicksort by hand? I'm really not sure. Once we've reviewed and tested this code, and fixed any other problems in it, we're probably looking at about an hour of work all up. I could probably implement quicksort at a similar quality in a similar amount of time. I find writing code is usually less stressful than reviewing code, because mistakes while programming are usually obvious. But mistakes while reviewing are invisible. Neither you nor anyone else in this thread spotted the pathological behavior this implementation had with sorted data. Finding problems like that by just looking is hard.
Quicksort is also the best case for LLMs. It's a well-understood, well-specified problem with a simple, well-known solution. There isn't any existing code it needs to integrate with. But those aren't the sort of problems I want chatgpt's help solving. If I could just use a library, I'm already doing that. I want chatgpt to solve problems it's probably never seen before, with all the context of the problem I'm trying to solve, to fit in with all the code we've already written. It often takes 5-10 minutes of typing and copy+pasting just to write a suitable prompt. And in those cases, the code chatgpt produces is often much, much worse.
> I feel like a lot of the negativity is because people expect AI to read their minds and then hold it against it when it doesn't.
Yes exactly! As a senior developer, my job is to solve the problem people actually have, not the problem they tell me about. So yes of course I want it to read my mind! Actually turning a clear spec into working software is the easy part. ChatGPT is increasingly good at doing the work of a junior developer. But as a senior dev / tech lead, I also need to figure out what problems we're even solving, and what the best approach is. ChatGPT doesn't help much when it comes to this kind of work.
(By the way, that is basically a perfect definition of the difference between a junior and senior developer. Junior devs are only responsible for taking a spec and turning it into working software. Senior devs are responsible for reading everyone's mind, and turning that into useful software.)
And don't get me wrong. I'm not anti chatgpt. I use it all the time, for all sorts of things. I'd love to use it more for production grade code in large codebases if I could. But bugs like this matter. I don't want to spend my time babysitting chatgpt. Programming is easy. By the time I have a clear specification in my head, it's often easier to just type out the code myself.
I saw an ad for Lovable. The very first thing I noticed was an exchange where the promoter asked the AI to fix a horizontal scroll bar that was present on his product listing page. This is a common issue with web development, especially so for beginners. The AI’s solution? Hide overflow on the X axis. Probably the most common incorrect solution used by new programmers.
But to the untrained eye the AI did everything correctly.
Yes. The people who are amazed with AI were never that good at a particular subject area in the first place - I don't care who you are. You were not good enough - how do I know this? Well I know economics, corporate finance, accounting et al very deeply. I've engaged with LLMs for years now and still they cannot get below the surface level and are not improving further than this.
It's easy to recall information, but something entirely different to do something with that information. Which is what those subject areas are all about - taking something (like a theory) and applying it in a disciplined manner given the context.
That's not to diminish what LLMs can do. But let's get real.
I am not a great (some would argue, not even good) programmer, and I find a lot of issues with LLM generated code. Even Claude pro does really weird dumb stuff.
It works both ways. If you are good, it's also easier to spot moments of brilliance from an AI agent when it saves you hours of googling, reading docs, and some trial and error while you pour yourself a cup of coffee and ponder the next steps. You can spot when a single tab press saved you minutes.
Yes. Love it for quick explorations of available options, reviewing my work, having it propose tests, getting its help with debugging, and all kinds of general subject matter questions. I don’t trust it to write anything important but it can help with a sketch.
Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more incremental, carefully measured changes to mature, complex software stacks, and done within the Google ecosystem, which is heavily divergent from the OSS-focused world of startups, where most training data comes from.
AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.
I think long-running, long-term agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI generating time, correcting time and engineer review time. To put it another way, a 10 minute sync with a human is necessary, otherwise it will go astray.
Then it just turns software engineering into a tedious supervision job. Yes I typed less, but I didn't feel the thrill of doing so.
> it just turns software engineering into a tedious supervision job.
I'm pretty sure this is the entire C-level enthusiasm for AI in a nutshell. Until AI, SWE resisted being mashed into a replaceable-cog job that they don't have to think or care about. AI is the magic beans that are just tantalizingly out of reach, and boy do they want it.
Luckily for us, technologies like SQL made similar promises (for more limited domains) and C suites couldn't be bothered to learn that stuff either.
Ultimately they are mostly just clueless, so we will either end up with legions of way shittier companies than we have today (because we let them get away with offloading a bunch of work to tools they don't understand and accepting low quality output) or we will eventually realize the continued importance of human expertise.
But every version of AI for almost a century had this property, right down from the first vocoders that were going to replace entire call centers to the convolutional AI that was going to give us self-driving cars. Yes, a century: vocoders were 1930s technology, and about all they could really do was read the time aloud.
... except they didn't. In fact most AI tech were good for a nice demo and little else.
In some cases, really unfairly. For instance, convnet map matching gets dismissed not because it doesn't work well, but because you can't explain to humans when it won't work well. It's unpredictable, like a human. If you ask a human to map a building in heavy fog they may come back with "sorry". SLAM with lidar is "better", except no, it's a LOT worse. But when it fails it's very clear why it fails, because it's a very visual algorithm. People expect of AIs that they can replace humans, but that doesn't work, because people also demand AIs never say no, never fail, like the Star Trek computer (the only problem the Star Trek computer ever has is that it is misunderstood or follows policy too well). If you have a delivery person, occasionally they will radically modify the process, or refuse to deliver. No CEO is ever going to allow an AI drone to change the process, and no CEO will ever accept "no" from an AI drone. More generally, no business person seems to ever accept a 99% AI solution, and all AI solutions are 99%, or actually mostly less.
AI winters. I get the impression another one is coming, and I can feel it's going to be a cold one. But in 10 years, LLMs will be in a lot of stuff, like with every other AI winter. A lot of stuff ... but a lot less than CEOs are declaring it will be in today.
Yeah but Google won’t expect you to use AI tools developed outside Google and trained on primarily OSS code. It would expect you to use the Google internal AI tools trained on google3, no?
There are plenty of good tasks left, but they're often one-off/internal tooling.
Last one at work: "Hey, here are the symptoms for a bug, they appeared in <release XYZ> - go figure out the CL range and which 10 CLs I should inspect first to see if they're the cause"
(Well suited to AI, because worst case I've looked at 10 CLs in vain, and best case it saved me from manually scanning through several 1000 CLs - the EV is net positive)
It works for code generation as well, but not in a "just do my job" way, more in a "find which haystack the needle is in, and what the rough shape of the new needle is". Blind vibecoding is a non-starter. But... it's a non-starter for greenfields too, it's just that the FO of FAFO is a bit more delayed.
My internal mnemonic for targeting AI correctly is 'It's easier to change a problem into something AI is good at, than it is to change AI into something that fits every problem.'
But unfortunately the nuances in the former require understanding strengths and weaknesses of current AI systems, which is a conversation the industry doesn't want to have while it's still riding the froth of a hype cycle.
Aka 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'
I had a VP of a revenue cycle team tell me that his expectation was that they could fling their spreadsheets and Word docs on how to do calculations at an AI powered vendor, and AI would be able to (and I direct quote) "just figure it all out."
That's when I realized how far down the rabbit hole marketing to non-technical folks on this was.
I think it’s a fair point that google has more stakeholders with a serious investment in some flubbed AI generated code not tanking their share value, but I’m not sure the rest of it is all that different from what an engineer at $SOME_STARTUP does after the first ~8 months the company is around. Maybe some folks throwing shit at a wall to find PMF are really getting a lot out of this, but most of us are maintaining and augmenting something we don’t want to break.
Excuse the throwaway. It's not even just the employees, but it doesn't even seem like the technical leadership seriously cares about internal AI use. Before I left all they pushed was code generation, but my work was 80% understanding 5-20 year old code and 20% actual development. If they put any noticeable effort into an LLM that could answer "show me all users of Proto.field that would be affected by X", my life would've been changed for the better, but I don't think the technical leadership understands this, or they don't want to spare the TPUs.
When I started at my post-Google job, I felt so vindicated when my new TL recommended that I use an LLM to catch up if no one was available to answer my questions.
Being forced to adopt tools regardless of fit to workflow (and being smart enough to understand the limitations of the tools despite management's claims) correlates very well to being negative on them.
From the outside, the AI push at Google very closely resembles the death march that Google+ was, but immensely more intense, with the entire tech ecosystem following suit.
I notice that experts tend to be pretty bimodal. E.g. chefs either enjoy really well-made food or some version of the scrappy fast-food comfort they grew up eating.
Bimodal here suggests either/or which I don’t think is correct for either chefs or code enjoyers. I think experts tend to eschew snobbery more and can see the value in comfort food, quick and dirty AI prototypes or boilerplate, or say cheap and drinkable wine, while also being able to appreciate what the truly high-end looks like.
It’s the mid-range with pretensions that gets squeezed out. I absolutely do not need a $40 bottle of wine to accompany my takeout curry, I definitely don’t need truffle slices added to my carbonara, and I don’t need to hand-roll conceptually simple code.
Working on our mega huge code base with lots of custom tooling and bleeding edge stuff hasn't been the best fit for AI generated code compared to most companies.
I do think AI as a rubber ducky / research assistant type has been overall helpful as a SWE.
People who've spent their life perfecting a craft are exactly the people you'd expect would be most negative about something genuinely disrupting that craft. There is significant precedent for this. It's happened repeatedly in history. Really smart, talented people routinely and in fact quite predictably resist technology that disrupts their craft, often even at great personal cost within their own lifetime.
Yes you get it. Obviously “writing code” will die. It will hold on in legacy systems that need bespoke maintenance, like COBOL systems have today. There will be artisanal coders, like there are artisanal blacksmiths, who do it the old fashioned way, and we will smile and encourage them. Within 20 years, writing code syntax will be like writing assembly: something they make you do in school, something your dad brings up when reminiscing about the good old days.
I talked to someone who was in denial about this, until he said he had conflated writing code with solving problems. Solving problems isn’t going anywhere! Solving problems: you observe a problem, write out a solution, implement that solution, measure the problem again, consider your metrics, then iterate.
“Implement it” can mean writing code, like the past 40 years, but it hasn’t always been. Before coding, it was economics and physics majors, who studied and implemented scientific management. For the next 20 years, it will be “describe the tool to Claude code and use the result”.
But Claude cannot code at all, it's gonna shit the bed, and it learns only from human coders to be able to even know an example is a solution rather than malware...
Every greenfield project uses claude code to write 90+% of code. Every YC startup for the past six months says AI writes 90+% of their code. Claude code writes 90+% of my code. That’s today.
It works great. I have a faster iteration cycle. For existing large codebases, AI modifications will continue to be okay-ish. But new companies with a faster iteration cycle will outcompete olds ones, and so in the long run most codebases will use the same “in-distribution” tech stacks and architecture and design principles that AI is good at.
I don't know that I consider recognizing the limitations of a tool to be resistance to the idea. It makes sense that experts would recognize those limitations most acutely -- my $30 Harbor Freight circular saw is a lifesaver for me when I'm doing slapdash work in my shed, but it'd be a critical liability for a professional carpenter needing precision cuts. That doesn't mean the professional carpenter is resistant to the idea of using power saws, just that they necessarily must be more discerning than I am.
It's the latest tech holy war. Tabs vs Spaces but more existential. I'm usually anti hype, and I've been convinced of AI's use over and over when it comes to coding. And whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that; online I get a lot of pushback despite having tangible examples of how it has been useful.
I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.
I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?
AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.
Rinse and repeat for many "one-off" tasks.
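To make "throwaway UI" concrete, the kind of thing I mean is a single file roughly like this (purely illustrative - the file name and columns are whatever your data happens to be, and you'd serve the folder with any static file server so fetch can read the JSON):

    <!-- throwaway.html: disposable viewer for a data.json full of flat objects -->
    <!DOCTYPE html>
    <meta charset="utf-8">
    <input id="q" placeholder="filter...">
    <table id="t" border="1"></table>
    <script>
      fetch('data.json').then(r => r.json()).then(rows => {
        const cols = Object.keys(rows[0] || {});
        const render = needle => {
          const shown = rows.filter(r => JSON.stringify(r).toLowerCase().includes(needle));
          document.getElementById('t').innerHTML =
            '<tr>' + cols.map(c => '<th>' + c + '</th>').join('') + '</tr>' +
            shown.slice(0, 500).map(r =>
              '<tr>' + cols.map(c => '<td>' + r[c] + '</td>').join('') + '</tr>').join('');
        };
        document.getElementById('q').oninput = e => render(e.target.value.toLowerCase());
        render('');
      });
    </script>

Nothing in there is worth keeping, which is the point.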
It's not going away, you need to learn how to use it. shrugs shoulders
The issue is people trying to use these AI tools to investigate complex data, not the throwaway UI part.
I work as the non-software kind of engineer at an industrial plant, and there is starting to emerge a trend of people who just blindly trust the output of AI chat sessions without understanding what the chatbot is echoing at them, which is wasteful of their time and in some cases my time.
This is not new; in the past I have experienced engineers who use (abuse) statistics/regression tools etc. without understanding what the output was telling them, but it is getting worse now.
It is not uncommon to hear something like: "Oh I investigated that problem and this particular issue we experienced was because of reasons x, y and z."
Then when you push back, because what they've said sounds highly unlikely, it boils down to: "I don't know, that is what the AI told me".
Then if they are sufficiently optimistic they'll go back and prompt it with "please supply evidence for your conclusion" or some similar prompt and it will supply paragraphs of plausible sounding text, but when you dig into what it is saying there are inconsistencies or made up citations. I've seen it say things that were straight up incorrect and went against the laws of thermodynamics, for example.
It has become the new "I threw the kitchen sink into a multivariate regression and X emerged as significant - therefore we should address x"
I'm not a complete skeptic; I think AI has some value, for example if you use it as a more powerful search engine by asking it something like "What are some suggested techniques for investigating x" or "What are the limitations of Method Y" etc. It can point you to the right place and assist you with research; it might find papers from other fields or similar. But it is not something you should be relying on to do all of the research for you.
But how do you know you're getting the correct picture from that throwaway UI? A little while back there was a blog post where the author praised AI for his vibe-coded earth-viewer app that supposedly used Vulkan to render inside a GUI window. Unfortunately, that wasn't the case, and the AI just copied code from somewhere and inserted a rudimentary software renderer. The AI couldn't do what was asked because it had seldom been done. Nobody on the internet ever discussed that particular objective, so it wasn't in the training set.
The lesson to learn is that these are "large-language models." That means it can regurgitate what someone else has done before textually, but not actually create something novel. So it's fine if someone on the internet has posted or talked about a quick UI in whatever particular toolkit you're using to analyze data. But it'll throw out BS if you ask for something brand new. I suspect a lot of AI users are web developers who write a lot of repetitive rote boilerplate, and that's the kind of thing these LLMs really thrive with.
> But how do you know you're getting the correct picture from that throwaway UI?
You get the AI to generate code that lets you spot-check individual data points :-)
Most of my work these days is in fact that kind of code. I'm working on something research-y that requires a lot of visualization, and at this point I've actually produced more throwaway code than code in the project.
Here's an example: I had ChatGPT generate some relatively straightforward but cumbersome geometric code. Saved me 30 - 60 minutes right there, but to be sure, I had it generate tests, which all passed. Another 30 minutes saved.
I reviewed the code and the tests and felt it needed more edge cases, which I added manually. However, these started failing and it was really cumbersome to make sense of a bunch of coordinates in arrays.
So I had it generate code to visualize my test cases! That instantly showed me that some assertions in my manually added edge cases were incorrect, which became a quick fix.
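The visualizer itself is nothing special - roughly this kind of throwaway (an illustrative stand-in; my real test cases and names are obviously different): dump each case's points to an SVG and eyeball it.

    // viz-cases.js - throwaway: render test-case polygons to an SVG for eyeballing.
    // (Stand-in shapes and names; the real cases come from my geometry tests.)
    const fs = require('fs');

    const testCases = [
      { name: 'unit square', points: [[0, 0], [1, 0], [1, 1], [0, 1]] },
      { name: 'thin sliver', points: [[0, 0], [2, 0.05], [4, 0]] },
    ];

    const scale = 100, pad = 20;
    const shapes = testCases.map(({ name, points }, i) => {
      const pts = points.map(([x, y]) => `${x * scale + pad},${y * scale + pad}`).join(' ');
      return `<polygon points="${pts}" fill="none" stroke="hsl(${i * 70},70%,40%)"/>` +
             `<text x="${pad}" y="${pad + 16 * (i + 1)}" font-size="12">${name}</text>`;
    });

    fs.writeFileSync('cases.svg',
      `<svg xmlns="http://www.w3.org/2000/svg" width="600" height="400">${shapes.join('')}</svg>`);
    console.log('wrote cases.svg - open it in a browser');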
The answer to "how do you trust AI" is human in the loop... AND MOAR AI!!! ;-)
And then people create non-throwaway things with it and your job, performance report, bonus, and healthcare are tied to being compared to those people who just do what management says without arguing about the correct application of the tool.
If you keep your job, it's now tied to maintaining the garbage those coworkers checked in.
I’m an AI skeptic. I like seeing what UIs it spits out, though, which defeats the blank page staring into my soul fear nicely. I don’t even use the code, just take inspiration from the layouts.
Yeah, it helps a lot to take the first steps, to overcome writer's block, to make you put into words what you'd like to have built.
At one point you might take over, ask it for specific refactors you'd do but are too lazy to do yourself. Or even toss it away entirely and start fresh with better understanding. Yourself or again with agent.
They're good questions! The problem is that I've tried to talk to the people who are getting real value from it, and often the answer ends up being that the value is not as real as they think. One guy gave an excited presentation about how AI let him write 7k LOC per day, expounded for an entire session about how the rest of us should follow in his shoes, and then clarified only in Q&A that reviewers couldn't keep up so he exempted himself from code review.
I’m starting to believe there are situations where the human code review is genuinely not necessary. Here’s a concrete example of something that’s been blowing my mind. I have 25 years of professional coding experience but it’s almost all web, with a few years of iOS in the objective C era. I’m also an amateur electronic musician. A couple of weeks ago I was thinking about this plugin that I used to love until the company that made it went under. I’ve long considered trying to make a replacement but I don’t know the first thing about DSP or C++.
You know where this is going. I asked Claude if audio plugins were well represented in its training data, it said yes, off I went. I can’t review the code because I lack the expertise. It’s all C++ with a lot of math and the only math I’ve needed since college is addition and calculating percentages. However, I can have intelligent discussions about design and architecture and music UX. That’s been enough to get me a functional plugin that already does more in some respects than the original. I am (we are?) making it steadily more performant. It has only crashed twice and each time I just pasted the dump into Claude and it fixed the root cause.
Long story short: if you can verify the outcome, do you need to review the code? It helps that no one dies or gets underpaid if my audio plugin crashes. But still, you can’t tell me this isn’t remarkable. I think it’s clear there will be a massive proliferation of niche software.
I don’t think I’ve ever seen someone seriously argue that personal throwaway projects need thorough code reviews of their vibe code. The problem comes in when I’m maintaining a 20 year old code base used by anywhere from 1M to 1B users.
In other words you can’t vibe code in an environment where evaluating “does this code work” is an existential question. This is the case where 7k LOC/day becomes terrifying.
Until we get much better at automatically proving correctness of programs we will need review.
My point about my experience with this plugin isn’t that it’s a throwaway or meaningless project. My point is that it might be enough in some cases to verify output without verifying code. Another example: I had to import tens of thousands of records of relational data. I got AI to write the code for the import. All I verified was that the data was imported correctly. I didn’t even look at the code.
In this context I meant throwaway as "low stakes" not "meaningless". Again, evaluating the output of a database import like that could be existential for your company given the context. Not to mention there's many cases where evaluating the output isn't feasible for a human.
Human code review does not prove correctness. Almost every software service out there contains bugs. Humans have struggled for decades to reliably produce correct software at scale and speed. Overall, humans have a pretty terrible track record of producing bug-free correct code no matter how much they double-check and review their code along the way.
So the solution is to stop doing code reviews and just YOLO-merge everything? After all, everything is fucked already, how much worse could it get?
For the record, there are examples where human code review and design guidelines can lead to very low-bug code. NASA published their internal guidelines for producing safety-critical code[1]. The problem is that the development cost of software when using such processes is too high for most companies, and most companies don't actually produce safety-critical software.
My experience with the vast majority of LLM code submitted to projects I maintain is that it has subtle bugs that I managed to find through fairly cursory human review. The copilot code review feature on GitHub also tends to miss actual bugs and report nonexistent bugs, making it worse than useless. So in my view, the death of the benefits of human code review has been wildly exaggerated.
No, that's not what I wrote, and it's not the correct conclusion. What I wrote (and what you, in fact, also wrote) is that in reality we generally do not actually need provably correct software except in rare cases (e.g., safety-critical applications). Suggesting that human review cannot be reduced or phased out at all until we can automatically prove correctness is wrong, because fully 100% correct and bug-free software is not needed for the vast majority of code being produced. That does not mean we immediately throw out all human review, but the bar for making changes for how we review code is certainly much lower than the above poster suggested.
I don't really buy your premise. What you're suggesting is that all code has bugs, and those bugs have equal severity and distribution regardless of any forethought or rigor put into the code.
You're right, human review and thorough design are a poor approximation of proving assumptions about your code. Yes bugs still exist. No you won't be able to prove the correctness of your code.
However, I can pretty confidently assume that malloc will work when I call it. I can pretty confidently assume that my thoroughly tested linked list will work when I call it. I can pretty confidently assume that following RAII will avoid most memory leaks.
Not all software needs meticulous careful human review. But I believe that the compounding cost of abstractions being lost and invariants being given up can be massive. I don't see any other way to attempt to maintain those other than human review or proven correctness.
I did suggest all code has bugs (up to some limit -- while I wasn't careful to specify this, as discussed above, there does exist an extraordinary level of caution and review that if used can approximate perfect bug-free code, as in your malloc example and in the example of NASA, but that standard is not currently applied to 99.9% of human-generated and human-reviewed code, and it doesn't need to be). I did not suggest anything else you said I suggested, so I'm not sure why you made those parts up.
"Not all software needs meticulous careful human review" is exactly the point. The question of exactly what software needs that kind of review is one whose answer I expect to change over the next 5-10 years. We are already at the point where it's so easy to produce small but highly non-trivial one-off applications that one needn't examine the code at all -- I completely agree with the above poster that we're rapidly discovering new examples of software development where output-verification is all you need, just like right now you don't hand-inspect the machine code generated by your compiler. The question is how far that will be able to go, and I don't think anybody really knows right now, except that we are not yet at the threshold. You keep bringing up examples where the stakes are "existential", but you're underestimating how much software development does not have anything close to existential stakes.
I agree that's remarkable, and I do expect a proliferation of LLM-assisted development in similar niches where verification is easy and correctness isn't critical. But I don't think most software developers today are in such niches.
Most enterprise software I use has serious defects. Professional CAD software for infrastructure is awful. Many are just incremental improvements piled upon software from the 1990s. Bugs last for decades because nobody can understand how the program works so they just work on one more little VBA plugin at a time. Meanwhile, the capabilities of these programs have fallen completely behind game studios with no budget and no business plan. Where are the results of this human excellence and code quality process? There are 10s of thousands of new CVEs every year from code hand crafted by artisans on their very own MacBooks. How? Perhaps there is the tiny possibility that maybe code quality is mostly an aesthetic judgment that nobody can really define, and just maybe this effort is mostly spent on vague concepts like maintainability or preferential decisions instead of the basics: does it meet the specification? Is the performance getting better or worse?
This is the game changer for me: I don’t have to evaluate tens or hundreds of market options that fit my problem. I tell the machine to solve it, and if it works, then I’m happy. If it doesn’t I throw it away. All in a few minutes and for a few cents. Code is going the way of the disposable diaper, and, if you ever washed a cloth diaper you will know, that’s a good thing.
> I tell the machine to solve it, and if it works, then I’m happy. If it doesn’t I throw it away.
What happens when it seems to work, and you walk away happy, but discover three months later that your circular components don't line up because the LLM-written CAD software used an over-rounded PI = 3.14? I don't work in industrial design, but I faced a somewhat similar issue where an LLM-written component looked fine to everyone until final integration forced us to rewrite it almost entirely.
This is basically me at my job right now. My boss used Claude Code in his spare time to write a "proof of concept" Electron app. It mostly worked but had some weird edge case behaviors. Now it's handed off to me, and fixing those edge cases is requiring me to refactor basically every single thing Claude touched. Vast majority I'm just tossing and redoing from scratch.
The original code "looks" fine, and it works pretty well even, but an LLM cannot avoid critical oversights along the way, and it is fundamentally designed to make its mistakes look as plausibly correct as possible. This makes correcting the problems down the line much more annoying (unless you can afford to live with the bugs and keep slapping on more band-aids, I guess).
Most people don't have a problem with using genai for stuff like throwaway UIs. That's not even remotely relevant to the criticisms. People reject having it forced down their throats by companies who are desperate to make us totally reliant on it to justify their insane investments. And people reject the evangelists who claim that it's going to replace developers because it can spit out mostly working boilerplate.
It's like watching somebody argue that code linting is going to change the face of the world and the rebuttals to the skeptics are arguing that akshually code linting is quite useful....
I have found value for one-off tasks. I forget the exact situation, but I wanted to do some data transformation, something that would normally take me a half hour of awk/sed/bash or Python scripting. AI spit it out right away.
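Something in the same genre (a made-up example, not the actual task, which I forget): collapse a CSV of request logs into per-endpoint p95 latencies - ten minutes of fiddly awk, or a thirty-second prompt.

    // p95.js - made-up one-off: "timestamp,endpoint,ms" CSV -> per-endpoint p95 latency
    const fs = require('fs');

    const rows = fs.readFileSync('requests.csv', 'utf8')
      .trim().split('\n').slice(1)            // drop the header line
      .map(line => line.split(','));

    const byEndpoint = {};
    for (const [, endpoint, ms] of rows) {
      (byEndpoint[endpoint] ??= []).push(Number(ms));
    }

    for (const [endpoint, times] of Object.entries(byEndpoint)) {
      times.sort((a, b) => a - b);
      console.log(`${endpoint}\t${times[Math.floor(times.length * 0.95)]}ms`);
    }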
> You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.
I think the throwaway part is important here and people are missing it, particularly for non-programmers.
There's a lot of roles in the business world that would make great use of ephemeral little apps like this to do a specific task, then throw it away. Usually just running locally on someone's machine, or at most shared with a couple other folks in your department.
Code doesn't have to be good, hell it doesn't even have to be secure, and certainly doesn't need to look pretty. It just needs to work.
There's not enough engineering staff or time to turn every manager's pet excel sheet project into a temporary app, so LLMs make perfect sense here.
I'd go as far to say more effort should be put into ephemeral apps as a use case for LLMs over focusing on trying to use them in areas where a more permanent, high quality solution is needed.
Perhaps. But does it matter? There is a million tools to investigate complex data already. Are you suggesting it is more useful to develop a new tool from scratch, using LLM-type tools, than it is to use a mature tool for data analysis?
If you don't know how to analyze data, and flat out refuse to invest in learning the skill, then I guess that could be really useful. Those users are likely the ones most enthusiastic about AI. But are those users close to as productive as someone who learns a mature tool? Not even close.
Lots of people appreciate an LLM to generate boilerplate code and establish frameworks for their data structures. But that's code that probably shouldn't be there in the first place. Vibe coding a game can be done impressively quickly, but have you tried using a game construction kit? That's much faster still.
One thing people often don't realize or ignore: these LLMs are trained on the internet, the entire internet.
There's a shit-ton of bad and inefficient code on the internet. Lots of it. And it was used to train these LLMs as much as the good code.
In other words, the LLMs are great if you're OK with mediocrity at best. Mediocrity is occasionally good enough, but it can spell death for a company when key parts of it are mediocre.
I'm afraid a lot of the executives who fantasize about replacing humans with AI are going to have to learn this the hard way.
Except when your AI psychosis PM / manager sees your throwaway vibe-coded garbage and demands it gets shipped to customers.
It's infinitely worse when your PM / manager vibe-codes some disgusting garbage, sees that it kind of looks like a real thing that solves about half of the requirements (badly) and demands engineers ship that and "fix the few remaining bugs later".
I would say it is like that. No one HAS to use AI. But the shared goal is to get a change to the codebase to achieve a desired outcome. Some will outsource a significant part of that to AI, some won't.
And it's tricky because I'm trying not to appeal to emotion despite being fascinated with how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding to get to, and improves my communication with stakeholders. That feels world changing. Specifically my world and the day-to-day role I play when it comes to getting things done.
I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool and I'm sorry for saying this but so are we. Treat its imperfections the same way you would with a junior developer: feedback, reframing, restrictions, and iteration.
My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
> My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.
Nothing says "this product is useful" quite like forcing people to use it and punishing people who don't. If it was that good, there'd be organic demand to use it. People would be begging to use it, going around their boss's back to use it.
The fact that companies have to force you to use it with quotas and threats is damning.
> My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.
“Why don’t you just make the minimum 37 pieces of flAIr?”
Yeah. Well. There are companies that require TPS reports, too.
It's mostly a sign leadership has lost reasoning capability if it's mandatory.
But no, reporting isn't necessarily the problem. There are plenty of places that use reporting to drive a conversation on what's broken, and why it's broken for their workflow, and then use that to drive improvement.
It's only a problem if the leadership stance is "Haha! We found underpants gnome step 2! Make underpants number go up, and we are geniuses". Sadly not as rare as one would hope, but still stupid.
> And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person
All of this predates LLMs (what “AI” means today) becoming a useful product. All of this happened already with previous generations of “AI”.
It was just even shittier than the version we have today.
It was also shittier than the version we had before it (human receptionists).
This is what I always think of when I imagine how AI will change the world and daily life. Automation doesn't have to be better (for the customer, for the person using it, for society) in order to push out the alternatives. If the automation is cheap enough, it can be worse for everyone, and still change everything. Those are the niches I'm most certain will be here to stay - because sometimes, it hardly matters if it's any good.
It isn't a universal thing. I have no doubt there is a job out there where that isn't a requirement. I think the issue is the C-level folks are seeing how much more productive someone might be and making it a demand. That to me is the wrong approach. If you demonstrate and build interest, the adoption will happen.
As opposed to reaching, say, somebody in an offshored call center with an utterly undecipherable accent reading a script at you? Without any room for deviation?
> But the shared goal is to get a change to the codebase to achieve a desired outcome.
I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.
The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.
In my personal use case, I work at a company that has SO MUCH process and documentation for coding standards. I made an AI agent that knows all that and used it to update legacy code to the new standard in a day. Something that would have taken weeks if not more. If your desire is manageable code, make that a requirement.
I'm going to say this next thing as someone with a lot of negative bias about corporations. I was laid off from Twitter when Elon bought the company and at a second company that was hemorrhaging users.
Our job isn't to write code, it's to make the machine do the thing. All the effort for clean, manageable, etc is purely in the interest of the programmer but at the end of the day, launching the feature that pulls in money is the point.
It's not just about coding standards. It's about, over time, having a team of people with a built-up set of knowledge about how things work and how they're expected to work. You don't get that by vibe coding and reviewing numerous PRs written by other people (or chatbots).
If everyone on your team is doing that, it's not long before huge chunks of your codebase are conceptually like stuff that was written a long time ago by people who left the company. Except those people may have actually known what they were doing. The AI chatbots are generating stuff that seems to plausibly work well enough based on however they were prompted.
There are intangible parts of software development that are difficult to measure but incredibly valuable beyond the code itself.
> Our job isn't to write code, it's to make the machine do the thing. All the effort for clean, manageable, etc is purely in the interest of the programmer but at the end of the day, launching the feature that pulls in money is the point.
This could be the vibe coder mantra. And it's true on day one. Once you've got reasonably complex software being maintained by one or more teams of developers who all need to be able to fix bugs and add features without breaking things, it's not quite as simple as "make the machine do the thing."
How did you verify that your AI agent performed the update correctly? I've experienced a number of cases where an AI agent made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it.
> made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it
Maybe I'm not understanding your point, but this is the kind of thing that happens in software teams all the time and is one of those "that's why they call it work" realities of the job.
If something "seems right/passed review/fell apart" then that's the reviewer's fault right? Which happens, all the time! Reviewers tend to fall back to tropes and "is there tests ok great" and whatever their hobbyhorses tend to be, ignoring others. It's ok because "at least it's getting reviewed" and the sausage gets made.
If AI slashed the amount of time to get a solution past review, it buys you time to retroactively fix too, and a good attitude when you tell it that PR 1234 is why we're in this mess.
> If something "seems right/passed review/fell apart" then that's the reviewer's fault right?
No, it's the author's fault. The point of a code review is not to ensure correctness, it is to improve code quality (correctness, maintainability, style consistency, reuse of existing functions, knowledge transfer, etc).
I mean, that's just not true when you're talking about varying levels of experience. Review is _very_ important with juniors, obviously. If you as sr eng let a junior put code in the codebase that messes up later, you share that blame for sure.
Unit tests, manual testing the final product, PR with two approvals needed (and one was from the most anal retentive reviewer at the company who is heavily invested in the changes I made), and QA.
>AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.
When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.
It would be helpful if you would relate your own bad experiences and how you overcame them. Leading off with "do it better" isn't very instructive. Unfortunately there's no useful training for much of anything in our industry, much less AI.
I prefer to use LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I also tend to write multi-paragraph prompts repeating myself and calling back on every aspect to continuously hone in on the true subject I am interested in.
I don't trust LLMs enough to operate on my behalf agentically yet. And an LLM is uncreative and hallucinatory as heck whenever it strays into novel territory, which makes it a dangerous tool.
> have you considered that you haven't used the tools correctly or effectively?
The problem is that this comes off just as tone-deaf as "you're holding it wrong." In my experience, when people promote AI, it's sold as just having a regular conversation and then the AI does the thing. And when that doesn't work, the promoter goes into system prompts, MCP, agent files, etc. and entire workflows that are required to get it to do the correct thing. It ends up feeling like you're being lied to, even if there's some benefit out there.
There's also the fact that all programming workflows are not the same. I've found some areas where AI works well, but for a lot of my work it does not. Anything that wouldn't have shown up in a simple Google search (back before it was enshittified) is usually pretty spotty.
I suspect AI appeals very strongly to a certain personality type who revels in all the details in getting a proper agentic coding environment bootstrapped for AI to run amok in, and then supervises/guides the results.
Then there’s people like me, who you’d probably term as an old soul, who looks at all that and says, “I have to change my workflow, my environment, and babysit it? It is faster to simply just do the work.” My relationship with tech is I like using as little as possible, and what I use needs to be predictable and do something for me. AI doesn’t always work for me.
Yes, this rings true, it took me over a month to actually get to at least 1x of my usual productivity with Claude Code. There is a ton of setup and ton of things to learn and try to see what works. What to watch out for and how to babysit it so it doesn't go off the rails (quite heavy handed approach works best for me). It's kind of like a shitty, but very fast and very knowledgeable junior developer. At this moment it still maybe isn't "worth it" for a lot of devs if productivity (and developer ergonomics) is the only goal, but it is clear to me that this is where the industry is heading and I think every dev will eventually have to get on board. These tools really just started to be somewhat decent this year. I'm 100% sure that in a year or two it will be the default for everyone in a way that you simply won't be able to compete without it at all. It would be like using a shovel instead of an excavator. Remember, right now is the worst it'll ever be.
> In my experience, when people promote AI, its sold as just having a regular conversation and then the AI does thing.
This is almost the complete opposite of my experience. I hear expressions about improvements and optimism for the future, but almost all of the discussion from active people productivly using AI is about identifying the limits and seeing what benefits you can find within those limits.
They are not useless and they are also not a panacea. It feels like a lot of people consider those the only available options.
AI is okay (not great) at generating low- to mid-skill code. If you are working in a high-skill software domain that requires pervasive state-of-the-art or first-principles implementation then AI produces consistently terrible code. It frequently is flatly incorrect about esoteric technical details that really matter.
It can't reason from first principles and there isn't training data for a lot of state-of-the-art computer science and code implementations. Nothing you can prompt will make it produce non-naive output because it doesn't have that capability.
AI works for a lot of things because, if we are honest, AI generated slop is replacing human generated slop. But not all software is slop and there are software domains where slop is not even an option.
I think it's more a continuation of IDE versus pure editor.
More precisely:
On one side, it's the "tools that build up critical mass" philosophy. AI firmly resides here.
On the other, it's the "all you need is brain and plain text" philosophy. We don't see much AI in this camp.
One thing I learned is that you should never underestimate the "all you need is brain and plain text" camp. That philosophy survived many, many "fatal blows" and has come up on top several times. It has one unique feature: resilience to bloat, something that the current smart tools camp is obviously overlooking.
I'm probably one of the people that would say AI (at least LLMs) isn't all its cracked up to be and even I have examples where it has been useful to me.
I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.
The overvaluation's effects are seen everywhere: the stock market, the price of RAM, the cost of energy, as well as the IP theft issues, etc. AI has taken over and yet it still feels like just a really good fuzzy search. Like yeah, I can search something 10x faster than before but might get a bad answer every now and then.
Yeah its been useful (so have many other things). No it's not worth building trillion dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.
Similar experience. I think it's become an identity politics concept. To those who consider themselves to be anti AI, the concept of the tool having any use is haram.
It feels awkward living in the "LLMs are a useful tool for some tasks" experience. I suspect this is because the two tribes are the loudest.
Right, this is what I can’t quite understand. A lot of HN folks appear to have been burned by e.g. horrible corporate or business ideas from non technical people that don’t understand AI; that is completely understandable. What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and everything is AI Slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast and the sheer amount of capital and competition and pressure means the train is not slowing down. Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs was in fact absolutely true…
... but maybe not in the way that these CEOs had hoped.[0]
Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.
I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.
> What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and every thing is AI Slop, without recognizing that what we can do today is almost unrecognizeable from the world of 3 years ago.
I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.
> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.
One of the tests I sometimes do of LLMs is a geometry puzzle:
You're on the equator facing south. You move forward 10,000 km along the surface of the Earth. You rotate 90° clockwise. You move another 10,000 km forward along the surface of the Earth. Rotate another 90° clockwise, then move another 10,000 km forward along the surface of the Earth.
Where are you now, and what direction are you facing?
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make).
Anecdotes are of course a bad way to study this kind of thing.
Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.
Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.
Out of interest, was your intended answer "where you started, facing east"?
FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer. On request, it also gave me a Mathematica program which (after I fixed some trivial exceptions due to errors in units) informs me that using the ITRF00 datum the actual answer is 0.0177593 degrees north and 0.168379 degrees west of where you started (about 11.7 miles away from the starting point) and your rotation is 89.98 degrees rather than 90.
(ChatGPT 5.1 Thinking, for me, gets the wrong answer because it correctly gets near the South Pole and then follows a line of latitude 200 times round the South Pole for the second leg, which strikes me as a flatly incorrect interpretation of the words "move forward along the surface of the earth". Was that the "usual error they all used to make"?)
> Out of interest, was your intended answer "where you started, facing east"?
Or anything close to it, so long as the logic is right, yes. I care about the reasoning failure, not the small difference between the exact quarter-circumferences of these great circles and 10,000 km. (Not that it really matters, but now that you've said the answer, this test becomes even less reliable than it already was.)
> FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer.
Like I said, now the best ones sometimes don't [always get it wrong].
For me yesterday, Claude (albeit Sonnet 4.5, because my testing is cheap) avoided the south pole issue, but then got the third leg wrong and ended up at the north pole. A while back, ChatGPT 5 (I looked the result up) got the answer right; yesterday GPT-5-thinking-mini (auto-selected by the system) got it wrong the same way you report for the south pole, but then also got the equator wrong and ended up near the north pole.
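For anyone who wants to check the idealized version of the puzzle mechanically, here is a minimal Python sketch (my own illustration, not anything from the thread). It assumes a perfectly spherical Earth of radius 6371 km, so each 10,000 km leg falls a few kilometres short of a quarter circumference, and it tracks position and heading as 3D unit vectors to avoid the usual latitude/longitude degeneracy at the pole:

    import numpy as np

    R = 6371.0  # assumed mean Earth radius in km (perfect sphere)

    def rotate(v, axis, angle):
        """Rodrigues rotation of vector v about a unit axis by angle (radians)."""
        axis = axis / np.linalg.norm(axis)
        return (v * np.cos(angle)
                + np.cross(axis, v) * np.sin(angle)
                + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

    def move_forward(pos, fwd, dist_km):
        """Walk dist_km along the great circle defined by position and heading."""
        axis = np.cross(pos, fwd)
        angle = dist_km / R
        return rotate(pos, axis, angle), rotate(fwd, axis, angle)

    def turn_clockwise(pos, fwd, degrees):
        """Rotate the heading to the right about the local outward normal."""
        return rotate(fwd, pos, -np.radians(degrees))

    pos = np.array([1.0, 0.0, 0.0])   # equator, longitude 0
    fwd = np.array([0.0, 0.0, -1.0])  # facing due south

    for leg in range(3):
        pos, fwd = move_forward(pos, fwd, 10_000)
        if leg < 2:                   # the puzzle only turns between legs
            fwd = turn_clockwise(pos, fwd, 90)

    lat = np.degrees(np.arcsin(pos[2]))
    lon = np.degrees(np.arctan2(pos[1], pos[0]))
    print(f"final position: lat {lat:.3f}, lon {lon:.3f}")  # roughly (0, 0)
    print(f"final heading: {np.round(fwd, 3)}")             # roughly (0, 1, 0), i.e. due east

With exact quarter-circumference legs this lands exactly back at the start facing due east; with the 6371 km sphere the legs stop just short of the pole, so it ends up a handful of kilometres away, in the same spirit as the ellipsoidal Mathematica result quoted above.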
"Never" to "unreliable success" is still an improvement.
My boss has been passing off Claude generated code and documentation to me all year. It is consistently garbage. It consistently hallucinates. I consistently have to rewrite most, if not all, of what I'm handed.
I do also try to use Claude Code for certain tasks. More often than not, I regret it, but I've started to zero in on tasks it's helpful with (configuration and debugging, not so much coding).
But it's very easy then for me to hear people saying that AI gives them so much useful code, and for me to assume that they are like my boss: not examining that code carefully, or not holding their output to particularly high standards, or aren't responsible for the maintenance and thus don't need to care. That doesn't mean they're lying, but it doesn't mean they're right.
> it hasn't worked for you, everyone else must be lying?
Well, some non-zero amount of you are probably very financially invested in AI, so lying is not out of the question
Or you simply have blinders on because of your financial investments. After all, emotional investment often follows financial investment
Or, you're just not as good as you think you are. Maybe you're talking to people who are much better at building software than you are, and they find the stuff the AI builds does not impress them, while you are not as skilled so you are impressed by it.
There are lots of reasons someone might disagree without thinking everyone else is lying
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of the value of the tools may find them somewhat useful, but are quite wary of ripping up the workflows we've built for ourselves over a decade or more in favor of something that might be 10-20% more useful, but could be taken away, have its fees raised, or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works, that I know will always be there (because it's open source, etc.), even if it means I'm slightly less productive over the next X amount of time.
What would you imagine a plausible scenario would possibly be that your tools would be taken away or “collapse in functionality”? I would say Claude right now has probably produced worse code and wasted more time than if I had coded things myself, but that's because this is like the first few hundred days of this. Open weight models are also worse, but they will never go away and are improving steadily as well. I am all for people doing whatever works for them; I just don’t get the negativity or the skepticism when you look at the progress over what has been almost zero time. It’s crappy now in many respects, but it’s like saying “my car is slow” one millisecond after I floor the gas pedal.
My understanding is that all the big AI companies are currently offering services at a loss, doing the classic Silicon Valley playbook of burning investor cash to get big and then hoping to make a profit later. So any service you depend on could crash out of the race, and if one emerges as a victorious monopoly and you rely on them, they can charge you almost whatever they like.
To my mind, the 'only just started' argument is wearing off. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.
My understanding is that they make a loss overall due to the spending on training new models, and that the API pricing is profitable if considered in isolation. That said, this is based on guesstimates from the hosting costs of open-weight models, owing to a lack of financial transparency everywhere for the secret-weights models.
> What would you imagine a plausible scenario would possibly be that your tools would be taken away or “collapse in functionality”?
Simple. The company providing the tool suddenly needs actual earnings. Therefore, they need to raise the prices. They also need users to spend more tokens, so they will make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.
At this point, that is a pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall. Then simultaneously raise prices more and more while making the product worse.
This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.
Sure, but you’re forgetting that competition exists. If Anthropic's investors suddenly said “enough” and demanded positive cash flow, it wouldn’t be that hard; everyone is capturing users for flywheels and spending capex on model improvements because if they don’t, they are guaranteed to lose.
It’s definitely going to get crappy; remember Google in 2003 with relevant results and no endless SEO, or Amazon reviews being reliable, or Uber being simple and cheap, etc. Once the growth phase ends, monetization begins and the experience declines, but this is guardrailed by the fact that there are many players.
Considering what I described is how tech companies actually function and have functioned in the past, theoretical competition won't help.
They are competing themselves into massive unprofitability. Eventually they will die or do the above in cooperation. Maybe there will be a minor scandal about it, but that sort of collusion is not prosecuted or seriously investigated when done by big companies.
So, it will happen exactly as it always happens with tech.
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there, but just like the dot-com era, tulip mania, subprime mortgages, and even Sir Isaac Newton's failings with the South Sea Company, the financial side will fall.
This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.
Ok, sure, the bubble/non-bubble stuff, fine, but in terms of “things I’d like to be a part of” it’s hard to imagine a more transformative technology (not to again turn off the anti-hype crowd). But ok, say it’s 1997 and you don’t like the valuations you see. As a tech person, you’re not excited by browsers, the internet, the possibilities? You don’t want to be a part of that even if it means a bubble pops? I also hear a lot of people argue that the “finances don’t make a lick of sense,” but I don’t think things are that cut and dried, and I don’t see this as obvious. I don’t think many people really know how things will evolve or what size a market correction or bubble would be.
What precisely about AI is transformative, compared to the internet? E-mail replaced so much of faxing, phoning and physical mail. Online shopping replaced going to stores and hoping they have what you want, and hoping it is in stock, and hoping it is a good price. It replaced travel agents to a significant degree and reoriented many industries. It was the vehicle that killed CDs and physical media in general.
With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.
Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.
> With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
I think this is probably the disconnect, this seems so wildly different from my experience. Not only that, I’ll grant that there are a ton of limitations still but surely you’d concede that there has been an incredible amount of progress in a very short time? Like I can’t imagine someone who sits down with Claude like I do and gets up and says “this is crap and a fad and won’t go anywhere”.
As for generated content, I again agree with you and you’d be surprised to learn that _execs_ agree with you but look at models from 1, 2, 3 years ago and tell me you don’t see a frightening progression of quality. If you want to say “I’ll believe it when I see it” that’s fine but my god just look at the trajectory.
For AI slop text, once again agree, once again I think we all have to figure out how to use it, but it is great for e.g. helping me rewrite a wordy message quickly, making a paper or a doc more readable, combining my notes into something polished, etc, and it’s getting better and better and better.
So I disagree that it has made everything worse, but I definitely agree that it has made a lot of things worse and that we have a lot of Pets.com ideas that are totally not viable today. The point I think people are maybe missing (?) is that it’s not about where we are, it’s about the velocity and the future. You may be terrified and nauseated by $1T in capex on AI infra, fine, but what that tells you is that the scale is going to grow even further _in addition_ to the methodological / algorithmic improvements to tackle things like continual learning, robustness, higher quality multimodal generation with e.g. true narrative consistency, etc. In 5 years I don’t think many people will think of “slop” so negatively.
Where you see exponential growth in capability and value, I see the early stages of logarithmic growth.
A similar thing played out a bit with IoT and voice controlled systems like Alexa. They've got their places, but nobody needs or wants the Amazon Dash buttons, or for Alexa to do your shopping for you.
Setting an alarm or adding a note to a list is fine, remote monitoring is fine, but when it comes to things that really matter like spending money autonomously, it completely falls flat.
Long story short, I see a fad that will fall into the background of what people actually do, rather than becoming the medium that they do it by.
My experience is the productivity gains are negative to neutral. Someone else basically wrote that the total "work" was simply being moved from one bucket to another. (I can't find the original link.)
Example: you might spend less time on initial development, but more time on code review and rework. That has been my personal experience.
The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights) and I don't know how I would have solved those problems without ChatGPT helping.
This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.
IDK, I got really sick in a foreign country, I wasn't sure how to get to the hospital, and I was alone in a hotel room. I don't really know how using ChatGPT to help me isn't actualizing.
We used to have Google search and Google Maps, which solved this problem of finding information about symptoms and finding medical centers near you. An LLM doesn’t make anything better; it just confidently asserts things about medicine that may be wrong and always need to be verified against the real sources anyway.
Growing up in the internet age (I'm 28 now) it took me until well into my 20s to realize how many classes of problems can be solved in 30 seconds on a phone call vs hours on a computer.
The hotel owner eventually half carried me to the hospital because I got so weak from dehydration. I'm glad I left my hotel room when I did; I was having difficulty avoiding fainting.
Rafael was the absolute best. He also made sure the hospital saw me right away since I was so weak. But once I was hooked up, I used ChatGPT to scan the IVs they had me on, since I had no idea what they were pumping into me; it was all in Spanish.
I can't imagine being in this situation and thinking "I will ask ChatGPT" instead of "I will ask the people at the front desk of this hotel I'm staying at"
I saw a post on Reddit the other day where the user was posting screenshots from ChatGPT about how they were using ChatGPT as a “Human OS” and outsourcing all decisions and information to ChatGPT. It made me queasy.
God forbid you outsource easy things to technology - that's how humanity has been progressing since forever. But sure, throw away your calculator and do it by hand if that makes you feel any better.
Well, if your calculator has a loose wire that sometimes flips a random bit somewhere, you might find that a slide rule that is consistently correct has a certain value.
Extremely uncharitable reading. Plausibly they were in a foreign country where they didn't speak any of the language and didn't know how anything worked. This kind of situation was never easy for anyone.
I have been in situations where either I or someone in my party was sick and needed medical care in a foreign country where I didn't speak the language. In all cases I used my brain to figure out a solution quickly without the aid of ChatGPT, and the trip continued on.
This falls in the category of life skills or maybe just "adulting." Sure, maybe ChatGPT can be considered a life skill, but you need others compiled into your brain to fall back on when it fails. If ChatGPT is the only skill you have, what do you do if your phone gets stolen?
What a strange thing to say... ChatGPT is not a skill; it's just a tool. It helps much faster and better than Google searching. Why the fuss about it?
Would you say the same to someone using Google?
"Sure, maybe Google can be considered a life skill, but you need others compiled into your brain to fall back on when it fails. If Google is the only skill you have, what do you do if your phone gets stolen?"
Your post is actually one of the most patronising things I have read. The person just used ChatGPT like Google to solve their problems and your reply is about Maslow's Hierarchy of Needs?
This is what people used to use Google for; I remember so many times between 2000-2020 that Google saved my bacon for exactly those things (travel plans, self-diagnosis, navigating local bureaucracies, etc.)
It's a sad commentary on the state of search results and the Internet now that ChatGPT is superior, particularly since pre-knowledge-panel/AI-overview Google was superior in several ways (not hallucinating, for one, and being able to triangulate multiple sources to tell the truth).
Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing where it would be good for.
Stolen items? Depending on the items and the place, possibly police.
Missed flights? Customer service agent at the airport for your airline or call the airline help line.
Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process and I mainly use it for things like follow up questions.
It is; it can be an enormous learning accelerator for new skills, for both adults and genuinely curious kids. The gap between low and high performers will explode. I can tell you that if I'd had LLMs I would've finished schooling at least 25% quicker, while learning much more. When I say this on HN, some are quick to point out the fallibility of LLMs, ignoring that the huge majority of human teachers are many times more fallible. Now, this is a privileged place where many have been taught by what is indeed the global top 0.1% of teachers and professors, so it makes more sense that people would respond this way. Another source of these responses is simply fear.
In e.g. the US, it's a huge net negative because kids probably aren't taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I can't tell you if this is the same inside e.g. China. I'm fairly sure it's not nearly as bad though as kids there derive much less benefit from cheating on homework/the learning process, as they're more singularly judged on standardized tests where AI is not available.
I don't get this line of thinking. Never in my life have I heard the reasoning "replacing effort is the problem" when talking about children who are able to afford 24/7 brilliant private tutors. Having access to that has always been seen as an enormous privilege.
I learnt the most from bad teachers#, but only when motivated. I was forced to go away and really understand things rather than get a sufficient understanding from the teacher. I had to put much more effort in. Teachers don't replace effort, and I see no reason LLMs will change that. What they do, though, is reduce the time to find the relevant content, but I expect at some poorly defined cost.
# The truly good teachers were primarily motivation agents, providing enough content, but doing so in a way that meant I fully engaged.
I think what it comes down to, and where many people get confused, is separating the technology itself from how we use it. The technology itself is incredible for learning new skills, but at the same time it incentivizes people not to learn. Just because you have an LLM doesn't mean you can skip the hard parts of doing textbook exercises and thinking hard about what you are learning. It's a bit similar to passively watching YouTube videos. You'd think that having all these amazing university lectures available on YouTube makes people learn much faster, but in reality it makes people lazy because they believe they can passively sit there, watch a video, do nothing else, and expect that to replace a classroom education. That's not how humans learn. But it's not because YouTube videos or LLMs are bad learning tools, it's because people use them as a mental shortcut where they shouldn't.
I fully agree, but to be fair these chatbots hack our reward systems. They present a cost/benefit ratio where for much less effort than doing it ourselves we get a much better result than doing it ourselves (assuming this is a skill not yet learned). I think the analogy to calculators is a good one if you're careful with what you're considering: calculators did indeed make people worse at mental math, yet mental math can indeed be replaced with calculators for most people with no great loss. Chatbots are indeed making people worse at mental... well, everything. Thinking in general. I do not believe that thinking can be replaced with AI for most people with no great loss.
I found it useful for learning to write prose. There's nothing quite like instantaneous feedback when learning. The downside was that I hit the limit of the LLM's capabilities really quickly. They're just not that good at writing prose (overly flowery and often nonsensical).
LLMs were great for getting started though. If you've never tried writing before, then learning a few patterns goes a long way. ("He verbed, verbing a noun.")
My friends and I have always wondered as we've gotten older what's going to be the new tech that the younger generation seems to know and understand innately while the older generations remain clueless and always need help navigating (like computers/internet for my parents' generation and above). I am convinced that thing is AI.
Kids growing up today are using AI for everything, whether or not that's sanctioned, and whether it's ultimately helpful or harmful to their intellectual growth. I think the jury is still out on that. But I do remember growing up in the 90s and spending a lot of time on the computer; older people would remark how I'd have no social skills, wouldn't be able to write cursive or do arithmetic in my head, wouldn't learn any real skills, etc. Turns out I did just fine, and now those same people always have to call me for help when they run into the smallest issue with technology.
I think a lot of people here are going to become roadkill if they refuse to learn how to use these new tools. I just built a web app in 3 weeks with only prompts to Claude Code, I didn't write a single line of code, and it works great. It's pretty basic, but probably would have taken me 3+ months instead of 3 weeks doing it the old fashioned way. If you tried it once a year ago and have written it off, a lot has changed since then and the tools continue to improve every month. I really think that eventually no one will be checking code just like hardly anyone checks the assembly output of a compiler anymore.
You have to understand how the context window works, how to establish guardrails so you're not wasting time repeating the same things over and over again, force it to check its own work with lots of tests, etc. It's really a game changer when you can just say in one prompt "write me an admin dashboard that displays users, sessions, and orders with a table and chart going back 30 days" or "wire up my site for google analytics, my tag code is XXXXXXX" and it just works.
The thing is, Claude Code is great for unimportant casual projects, and genuinely very bad at working in big, complex, established projects. The latter of course being the ones most people actually work on.
Well, either it's bad at it, or everyone on my team is bad at prompting. Given how dedicated my boss has been to using Claude for everything for the past year, and the output continuing to be garbage, I don't think it's a lack of effort on the team's part; I have to believe Claude just isn't good at my job.
As context size increases, AI becomes exponentially dumber. Most established software is far, FAR too large for AI. But small, greenfield projects are amazing for something like Claude Code.
This is why I argue that the impact of LLMs is in the tail. It's all the small to midsize shops that want something done but don't have money to hire a programmer. It's small tasks, like pushing data around, or writing a quick interface to help with day-to-day work in niche roles and technical problems. It's the ability to quickly generate prototype logos and scripts for small-scale ad campaigns, to solve Nancy's Excel issue, etc. Big companies have big software and code stacks with tons of dependencies. Small shops have little project needs that solve significant issues facing their operations, but those projects will likely never become large enough that things like scaling, maintenance, and integration are a problem at all. It's a tail, but it's long in small to midsize businesses. In research labs, where I have personal experience, AI is rapidly making feasible more ambitious projects, quicker timelines, and better code, generally.
I was going to try having an AI agent analyze a well-established open source project. I was thinking of trying something like Bitcoin Core or an open-source JavaScript library, something that has had a lot of human eyes on it. To me, that seems like a good use case, as some of those projects can get pretty complex in what they're aiming to accomplish. Just the sheer amount of complexity involved in Bitcoin, for instance, would be a good candidate for having an AI agent explain the code to you as you're reviewing it. A lot of those projects are fairly well-written as they are, with the higher-level concepts being the more difficult thing to grasp.
Not attempting to claim anything against your company, but I've worked for enterprises where code bases were a complete mess and even the product itself didn't have a clear goal. That's likely not the ideal candidate for AI systems to augment.
I basically agree. OK: small focused models for specific use cases, small models like the new mistral-3-3B that I found today to be good at tool use, and thus for building narrow-ranged applications.
I have mostly been paid to work on AI projects since 1982, but I want to pull my hair out and scream over the big push in the USA to develop super-AGI. Such a waste of resources, and such a hit on a society that needs those resources used for better purposes.
As a gamedev, there's nothing I hate more than AI concept art. It's always soulless. The best thing about games is there's no limit to human imagination, and you can make whatever you want. But when we leave the imagination stage to a computer then leave the final brushing up to humans, we're getting the order completely backwards. It's bonkers and just disgusting to me.
That said, game engine documentation is often pretty hard to navigate. Most of the best information is some YouTube video recorded by some savant 15 year old with a busted microphone. And you need to skim through 30 minutes of video until you find what you need. The biggest problem is not knowing what you don't know, so it's hard to know where to begin. There are a lot of things you may think you need to spend 2 days implementing, but the engine may have a single function and a couple built in settings to do it.
Where LLMs shine is that I can ask a dumb question about this stuff, and can be pointed in the right direction pretty quickly. The implementation it spits out is often awful (if not unusable), but I can ask a question and it'll name drop the specific function and setting names that'll save me a lot of work. And from there, I know what to look up and it's a clear path from there.
And gamedev is a very strong case of not needing a correct solution. You just need things to feel right for most cases. Games that are rough around the edges have character. So LLM assistance for implementation (not art) can be handy.
> [...] a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, [...]
Don't people learn from imperfect teachers all the time?
Yes, they do. In fact, imperfect teachers can sometimes induce more learning than more perfect ones. And that's what is insidious about learning from AI. It looks like something we've seen before, something where we know how to make it useful and take advantage even of the gaps and inadequacies.
AI can be effective for learning a new skill, but you have to be constantly on your guard to prevent it from hacking your brain and making you helpless and useless. AI isn't the parent holding your bicycle and giving you a push and letting go when you're ready. It's the welded-on training wheels that become larger and more structurally necessary until the bike can't roll forward at all without them. It feeds you the lie that all you need is the theory, you don't ever need to apply it because the AI will do that for you so don't worry your pretty little head over it. AI teaches you that if something requires effort, you're just not relying on the AI enough. The path to success goes only through AI, and those people who try to build their own skills without it are suckers because the AI can effortlessly create things 100x bigger and better and more complex.
Personally, I still believe that human + AI hybrids have enormous potential. It's just that using AI constantly pushes away from beneficial hybridization and towards dependency. You have to constantly fight against your innate impulses, because it hacks them to your detriment.
I'd actually like to see an AI trained to not give answers, but to search out the point where they get you 90% of the way there and then steadfastly refuse to give you the last 10%. An AI built with the goal not of producing artifacts or answers, but of producing learning and growth in the user. (Then again, I'd like to see the same thing in an educational system...)
> Personally, I still believe that human + AI hybrids have enormous potential. [...]
That was true in chess for a long time, but since at least 20 years or so, approximately anytime the human deviates from what the AI suggests, it's a mistake.
Not sure if that's also category #2 or a new one, but also: Places where AI is at risk of effectively becoming a drug and being actively harmful for the user: Virtual friends/spouses, delusion-confirming sycophants, etc.
I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.
This includes, IME, the initial stages of art creation (the planning stage, not the generating stage). It's kind of like having someone to bounce ideas off of at 3am. It's a convenient way of triggering your own brain to be inspired.
I also hoped it would crash and burn. The real value-added use cases will remain. The overhyped crap won't.
But the shockwave will cause a huge recession, and all those investors that put up trillions will not take their losses. Rich people never get poorer. One way or another, us consumers will end up paying for their mistakes. Either by huge inflation, job losses, energy costs, service enshittification, whatever. We're already seeing the memory crisis having huge knock-on effects, with next year's phones being much more expensive. That's one of the ways we are going to be paying for this circus.
I really see value in it too, sure. But the amount of investment that goes into it is insane. It's not that valuable, by far. LLMs are not good for everything, and the next big thing is still a big question mark. AI is dragged in by the hair into use cases where it doesn't belong. The same shit we saw with blockchains, but now on a world-crashing scale. It's very scary seeing so much insanity.
But anyway whatever I think doesn't matter. Whatever happens will happen.
Very new ex-MSFT here.
I couldn’t relate more with your friend. That’s exactly what happened. I left Microsoft about 5 weeks ago and it’s been really hard to detox from that culture.
AI pushed down everywhere. Sometimes shitty AI that had to be proven out at all costs because it was supposed to live up to the hype.
I was in one such AI org, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.
Such pressure to use AI at all costs, as other fellows from Google mentioned, has been a secret ingredient of a bitter burnout. I'm in therapy and on medication now to recover from it.
(Fellow ex-msftie here too; but I left for a startup almost exactly 10 years ago, and I miss how that older culture is apparently gone).
What I don't understand is where the AI irrationality is coming from: the C-suite (still in B37?) are all incredibly smart people who must surely be aware of the damage this top-down policy is having on morale, product quality, and how the company is viewed by its own customers - and yet they do it anyway.
I'm not going to pretend things were being run perfectly when I was at MS: there were plenty of slow-motion mistakes playing out right in front of us all[1] - and as I look back, yes, I was definitely frustrated at these clear, obvious mistakes and their resultant unimaginable waste of human effort and capital investment.
Actually, come to think about it... maybe perhaps things really haven't changed as much? Clearly something neurotoxic got into the Talking Rain cans sometime around 2010-2011 - then was temporarily abated in 2014-2015; then came back twice as hard in 2022.
-------
[1]: Windows 8 and the Start Screen; the SurfaceRT; Visual Studio 2012 with SHOUTY MENUS and monochrome toolbar icons; the laggy and sluggish Office 2013; the crazy simultaneous development of entirely separate new reimplementations of the Office apps for iOS, Android, WinRT, the web. While ignoring the clear market-demand for cloud-y version of Active Directory without on-prem DCs (instead we got Entra, then InTune).
FWIW: I realized this year that there are whole cohorts of management people who have absolutely zero relationship with the words that they speak. Literal tabula rasas who convert their thoughts to new words with no attachment to past statements/goals.
Put another way: Liars exist and operate all around you in the top tier of the FAANGS rn.
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
The product website isn't convincing either. It's only in private beta, and the first example shows 'A scenic walking tour of Venice' as the desired trip. I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice, including all highlights people write and post about a lot on social media to show how great their life is. But if you asked anyone knowledgeable about travel in that region, the counter questions would be 'Why Venice specifically? I thought you hated crowds — have you considered less crowded alternatives where you will be appreciated more as a tourist? Have you actually been to Italy at all?'.
LLMs are always going to give you the most plausible thing for your query, and will likely just rehash the same destinations from hundreds of listicles and status signalling social media posts.
She probably understood this from the minimal description given.
> I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice
I tried this in Crotone in September. The suggested walking tour was shit. The facts weren't remarkable. The stops were stupid and stupidly laid out. The whole experience was dumb, and only redeeming because I was vacationing with a friend who founded one of the AI companies.
> if you asked anyone knowledgeable about travel in that region, the counter questions would be 'Why Venice specifically?
In the region? Because it's a gorgeous city with beautiful architecture, history and festivals?
> In the region? Because it's a gorgeous city with beautiful architecture, history and festivals?
That would be a great answer to continue from. Would you come for the Biennale specifically? Do you care greatly about sustainability? Would you enjoy yourself more in a different gorgeous city without the mass-tourism problem if that meant you would feel more welcome? Is there a way you can visit Venice without contributing to the issue as much? Off-season perhaps?
Venice is unique, but there are a lot of gorgeous places in the region, from Verona to Trieste.
If it’s your first time going to Italy you absolutely should visit Venice. The crowds are unpleasant, but so what?
Are you going to avoid Rome too? Only go to little provincial villages?
Why should you absolutely visit Venice? It's not just the crowds that are unpleasant, you are actively contributing to a problem.
No, you don't have to avoid Rome — it's not as bad as Venice, and can support more people — but plan ahead and don't just do a tour of all the 'must see' highlights. Look into the off season if you are a history buff with a hyperfocus on Rome — you won't be able to finish your list otherwise due to all the pointless waiting around.
And yes, visit provincial villages and eat in an authentic Italian restaurant where tourists are mostly other Italians. Experience the difference. But you are not limited to villages. Italy is huge, and there are a lot of cities with remarkable museums, world-renowned festivals, great cuisine, and where your money is more than welcome and your stay won't be marred by extreme crowds and pushy con artists in faux Roman gladiator gear.
HN is a place with a high density of people who have the agency to influence the outcome, so I think it's important for people here to acknowledge that much of what the negative people think is probably 100% true.
There will absolutely be some cases where AI is used well. But probably the larger fraction will be where AI does not give a better service, experience, or tool. It will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.
I really believe there's huge value in implementing AI pervasively. However it's going to be really hard work and probably take 5 years to do it well. We need to take an engineering and human centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.
From someone who's mostly avoided the AI craze so far, I think it comes down to a combination of two things.
1) "AI" is in many ways like the unreliable coworker so many of us have had in the past - maybe someone who talked a good game in interviews, but after you'd worked with them for a while you realize that you have to double-check everything they do for stupid/careless problems. In the worst case, you also have to do some hand-holding as they ask you for help with things that they should know how to do. They can produce good output but they can't be trusted to produce good (or even marginal) output so they're a net time sink.
2) In a frightening number of companies right now, that problem coworker is the owner's or manager's relative and cannot be avoided.
So boom, there you go, bad coworkers and a toxic culture that not just protects but promotes them.
Instead of admitting you built the wrong thing you denigrate a friend and someone whom you admire. Instead of reconsidering the value of AI you immediately double down.
This is a product of hurt feelings and not solid logic.
I like AI to the extent that it can quickly solve well-worn problems in your environment, what I've taken to calling "embarrassingly solved problems", like "make an animation subsystem for my program". A Qt timeline is not hard, but it is tedious, so the AI can do it.
And it turns out that there are some embarrassingly solved problems, like rudimentary multiplayer games, that look more impressive than they really are when you get down to it.
More challenging prompts like "change the surface generation algorithm my program uses from Marching Cubes to Flying Edges", for which there are only a handful of toy examples, VTK's implementation, and the paper, result in an avalanche of shit. Wasted hours, quickly becoming wasted days.
I feel the same way about those embarrassingly solved problems! Though oftentimes the trick is knowing what to ask for. I remember grinding for weeks on a front end, but once I realized what the problem was (not the exact bug, just what the general concept should be), Claude fixed it in 10 seconds.
Thanks for the post - it's work to write and synthesize, and I always appreciate it!
My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling or the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friend's feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
It does feel like without a compelling AI product Microsoft isn't super differentiated. Maybe Satya is right that scale is a differentiation, but I don't think people are as trapped in an AI ecosystem as they were in Azure.
Their hyper scale data centers are super compelling. And they get OpenAI IP for some time. I don’t think we’ve really seen what they want to launch on the product side yet.
Satya mentioned recently that computer-use agents use something like 5x the Windows license time on Azure compared to a single person - they see a lot of inference growth coming, and it's multiplicative in that it uses their compute and Azure infra.
Lol. You don't think that Microsoft has _a_ compelling AI product? The new version of 365 Copilot is objectively compelling, even if it is a work in progress. And Github Copilot is also objectively compelling.
Moving to the Cloud proved to be a pretty nice moneymaker far faster and more concretely than AI has been for these companies. It's a fair comparison regarding corporate pushes but not anything more than that.
There has always been a lot of Microsoft hate, but now it's a whole new level. Windows now really sucks; my new laptop is all Linux for the first time ever. I don't see why this company is still so valuable. Most people only use a browser now and some iOS apps; there is no need for Windows or Microsoft (and of course Azure is never anyone's first choice). Steam makes the gamers happy to leave too.
I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.
I tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.
The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.
It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?
I’m glad it resonated. I’ve found a lot of people at Microsoft have some shared struggles right now. It’s really hard to get excited about jobs after that, but you only need one job to be the right fit. It sounds like you were working on some great stuff, and you should keep pursuing that interest in the meantime. You never know where it might lead you.
> But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard "AI."
So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?
I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.
I live in Seattle now, and have lived in San Francisco as well.
Seattle has more “normal” people and the overall rhetoric about how life “should be” is in many ways resistant to tech. There’s a lot to like about the city, but it absolutely does not have a hustle culture. I’ve honestly found it depressing coming from the East Coast.
Tangent aside, my point is that Seattle has far more of a comparison ground of “you all are building shit that doesn’t make the world better, it just devalues the human”. I think LLMs have (some) strong use cases, but it is impossible to argue that some of the societal downsides we see aren't ripe for hatred - and Seattle will latch on to that in a heartbeat.
Western Washington is very much a "work to live" place, and in a lot of ways there's a feedback loop to ensure it stays that way: surrounded by fellow "work to live" folks who would far rather just get our work done well and head out to the mountains, forests, and seas, the hustle bros will usually leave within a few years. I've watched it happen with quite a number of type-A folks. Exceptions for folks who make it into certain orgs in Amazon or into startup leadership; those seem to be safe places for hustlers around here.
Anyway. I think you're spot on with the "you all are building shit that doesn't make the world better, it just devalues the human" vibe. Regardless of what employers in WA may force folks to build, that's the mentality here, and AI evangelists don't make many friends... nor did blockchain evangelists, or evangelists of any of the spin-off hype trains ("Web3", NFTs, etc). I guess the "cloud" hype train stuck here, but that happened before I moved out west.
Seattle has always been a second-mover when it comes to hype and reality distortion. There is a lot more echo chamber fervor (and, more importantly, lots of available FOMO money to burn) in SF around whatever the latest hotness is.
My SF friends think they have a shot at working at a company whose AI products are good (cursor, anthropic, etc.), so that removes a lot of the hopelessness.
Working for a month out of Bali was wonderful, it's mostly Australians and Dutch people working remotely. Especially those who ran their own businesses were super encouraging (though maybe that's just because entrepreneurs are more supportive of other entrepreneurs).
in the first paragraph, he drops a link to the startup he's working on:
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
this seems to me like pretty obvious engagement-bait / stealth marketing - write a provocative blog post that will get shared widely, and some fraction of those people will click through to see what the product is all about.
but, apparently it's working because this thread is currently at 400+ comments after 3 hours.
'If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."'
It hits weirdly close to home. Our leadership did not technically mandate use, but 'strongly encourages' it. I have not even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which is... in my head somewhere between skeptic and evangelist... dumb).
But the 'AI talent' part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
The full quote from that section is worth repeating here.
---------
"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI." "
------------
On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense. e.g. Force everyone to use the company's no-good AI tools so that they become good. However, not permitting employees outside of the "AI org" to fix things neatly nixes the gains you might see while incurring the full cost.
It sounds like MS's management, the same as many other tech corp's, has become caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).
I know of at least one bigco that will no longer hire anyone, period, who doesn't have at least 6 months of experience using genai to code and isn't enthusiastic about genai. No exceptions. I assume this is probably true of other companies too.
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally Google for a long period of time struggled to hire some people because they weren't an 'ideal culture fit'. i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...
Like any tool, the longer you use it the better you learn where you can extract value from it and where you can't, where you can leverage it and where you shouldn't. Because your behaviour is linked to what you get out of the LLM, this can be quite individual in nature, and you have to learn to work with it through trial and error. But in the end engineers do appear to become more productive 'pairing' with an LLM, so it's no surprise companies are favouring LLM-savvy engineers.
> But in the end engineers do appear to become more productive 'pairing' with an LLM
Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you spend time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people use something which cripples them.
So far, for me, it's just an annoying tool that gets worse outcomes potentially faster than just doing it by hand.
It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes which you try to correct by arguing with it but then eventually just fix it yourself. At best it can get you 80% there.
I’m all for neurodivergent acceptance but it has caused monumentally obnoxious people like this to assume everyone else is the problem. A little self awareness would solve a lot of problems.
It's not. I know some ex-Bay Area devs who are of the same mind, and I'm not too far off.
I think it's definitely stronger at MS, as my friend on the inside tells me, than at most places.
There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have been practicing, for a long time, what I can only guess execs see as alchemy. They've decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.
I just wish we could present the best version of ourselves and, as long as deadlines are met, it'll all work out, but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.
I think reading the room is required here. You and your friend can both be right at the same time. You want to build an AI-enabled app, and indeed there's plenty of opportunity for it, I'm sure. And your friend can hate what it's done to their job stability and the industry. Also, totally unrelated, but what is the meaning or etymology behind the app name Wanderfugl? I initially read it as Wanderfungl.
There's a great non-AI point in this article - Seattle has great engineers. In pursuing startups, Seattle engineers are relatively unambitious compared to the Bay Area. By that I mean there's less "shooting for unicorns" and a comparatively more reserved startup culture and environment.
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
My pet theory is that most of the investor class in Seattle is ex-Microsoft and ex-Amazon. Neither Microsoft nor Amazon are really big splashy unicorns. Amazon's greatest innovation (AWS) isn't even their original line of business and is now 'boring'. No doubt they've innovated all over their business in both little and big ways, but not splashy ways; hell, every time Amazon tries to splash they seem to fall on their ass more often than not (look at their various cancelled hardware lines, their game studios, etc. Alexa still chugs on, but she's not getting appreciably better to the end user over even the last 10 years).
Microsoft is the same, a generally very practical company just trying to practical company stuff.
All the guys that made their bones, vested and rested, and now want to turn some of that windfall into investments likely don't have the kind of risk tolerance it takes to fund a potential unicorn. All smart people I'm sure, smart enough to negotiate big windfalls from MS/AZ, but far less risk tolerant than a guy in SF who made their investment nest egg building some risky unicorn.
I'm not surprised you're getting bad reactions from people who aren't already bought in. You're starting from a firm "I'm right! They're wrong!" with no attempt to understand the other side. I'm sure that comes across not just in your writing.
I'm a former AI-hater and sceptic. I do B2B consultancy/development work for my clients.
I understand why people are irritated by this.
However, recently I tried the GitHub Copilot agent with VS Code using Claude Opus 4.5. It literally implemented, tested and fixed entire new features in minutes that otherwise would have taken days or even weeks of routine repetitive work from me. All while mimicking the style and patterns in my existing codebase, which let me instantly understand exactly what it was doing. I found it to be an insane productivity boost and I can see how it might be affecting hiring processes in numerous industries, especially in the software engineering space.
It's satisfying to hear that Microsoft engineers hate Microsoft's AI offerings as much as I do.
Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.
Claude is great. Claude can't deal with millions of lines of C++.
You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.
You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI, that provides real information determined from semantic analysis of the codebase and allows me to cycle through a menu of possibilities, with an auto-complete UI that gives me a single suggestion of complete bullshit.
Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?
So, they were laid off because they stubbornly resisted adopting AI? Remember those who were laid off in the 90s because they hated computers and refused to work at one?
History repeating.
I don't think the root cause here is AI. It's the repeated pattern of resistance to massive technological change colliding with system-level incentives. This story has happened again and again throughout recent history.
I expect it to settle out in a few years where:
1. Companies' fiduciary duties to their shareholders will bring them to a point where they stop chasing AI hype and instead work out whether it's driving real top-line value for their business or not.
2. Mid to senior career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.
It's probably good if some portion of the engineering culture is irrationally against AI and, Amish-style, just refuses to adopt it. There's probably still a ton of good work that can only be done if every aspect of a product/thing is given focused human attention, some of which might out-compete AI-aided work.
I think you hit the nail on the head there. There's absolutely nothing we can do with AI that we can't do without it. And the level of understanding of a large codebase that a solid group of engineers has is paramount to moving fast once the product is live.
Seattle is the type of economy that is heavily threatened by AI. Desk jobs that Claude basically already knows how to do, and just needs to be integrated into the existing systems to have impact.
Most people in Seattle "tech" are middle management with no discernible skills other than organizational deckchair arrangement. It is a place to optimize for work-life balance, and not take risk - this is why the region, despite its technology density, has such a disproportionately small startup scene.
AI IS a huge threat to a place like this and I am not optimistic about the ability for people to adapt.
When companies mandate tools regardless of effectiveness and punish engineers for not using them, it's governance failure dressed as innovation.
The distinction between "real products" (solving actual problems) and "hype products" (exciting investors) reflects a pragmatic engineering perspective.
The situation seems less about AI itself and more about corporate dysfunction using AI as cover for broader organizational failures.
We have these weekly rah-rah AI meetings where we swap tips on what we've achieved with Copilot and Devin. Mostly crickets, but everyone talks with lots of enthusiasm. It's starting to get silly now, though; most people can't even get the tools to do anything more useful than the trivial things we used to find on Stack Overflow.
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
The thing about dismissing AI in 2025 is that it's on par with dismissing the wearable computing group at MIT in the 1980s.
But admittedly, if one had tried to productize their stuff in the 1980s it would have been hilarious. So the rewards here are going to go to the people who read the right tea leaves and follow the right path to what's inevitable.
In the short term, a lot of not-so-smart people are going to lose a lot of money believing some of the ludicrous short-term claims. But when has that not been the case?
This is not the right time of year to pitch in Seattle. The days are short and the people are cranky. But if they want to keep hating on AI as a technology because of Microsoft and Amazon, let them, and build your AI technology somewhere else. San Francisco thinks the AGI is coming any day now so it all balances out, no?
> After a pause I tried to share how much better I've been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn't fully grok how tone deaf I was being though. She's drowning in resentment.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away [1]. And this garbage is being shoved in my face from every angle --- my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend just suggested to contact support and/or delete the facebook account.
I've recently found that it can be a useful substitute for Stack Overflow. It does occasionally make shit up, but Stack Overflow and forum searching also have a decently high miss rate, so that doesn't piss me off too much. And it's usually immediately obvious when a method doesn't exist, so it doesn't waste a lot of time per incident.
Specifically, I was using Gemini to answer questions about Godot for C# (not GDScript or using the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.
I just picked up an old GameCube. It's refreshing to play purely offline content from an age without AI art of any kind. Some games, like Animal Crossing, will break in 2031 though, so there's only a good 5 more years left to enjoy it.
Well, the Gamecube is probably fine, but the Dreamcast was thinking, so watch out :P
I know Animal Crossing is sensitive to the RTC, but could you set the clock back 28 years and go from there? You'll have the same days of the week and what not, just the year number will be wrong.
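If you want to sanity-check the 28-year trick, here's a quick throwaway Python snippet (purely illustrative) showing why the weekdays line up: a 28-year span contains exactly 7 leap days, so it covers 28*365 + 7 = 10227 days, which is divisible by 7, as long as the span doesn't cross a skipped century leap year like 2100.

    from datetime import date

    # Dates 28 years apart fall on the same weekday because the span
    # contains exactly 7 leap days: 28*365 + 7 = 10227 = 7 * 1461.
    for y in (2031, 2032, 2033):
        now, back = date(y, 6, 1), date(y - 28, 6, 1)
        print(back, back.strftime("%A"), "<->", now, now.strftime("%A"))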
My previous software job was for a Seattle-based team within Amazon's customer support org.
I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.
I'm surprised that nobody at the tech companies seems to realize basic psychology: The harder you try to force something on people, the less they want it.
He describes his startup as an AI-oriented map... to me that sounds amazing and totally up my alley. But then it's actually about trip planning... to me that's too constrained and specific. What I would love is a map-type experience that gives me an AI-type interface for interesting things in any given area that might be near me and worth checking out.
And not just for travel, by the way... I love just exploring maps and seeing a place. I'd love to learn more about a place through something like a mesh between Wikipedia and a map, and AI could help.
Was this written by AI? It sounds like the writing style of an elementary school student. Almost entirely made of really simple sentence structures, and for whatever reason I find it really annoying to read.
Whenever I see "everyone", and broad statements that try to paint an entire geography based on one company ("Microsoft"), I'm suspicious of the motives of the author at worst, or just dismissive of the premise at best.
I see what the author is saying here, but they're painting with an overly broad brush. The whole "San Francisco still thinks it can change the world" also is annoying.
I am from the Seattle area, so I do take it a bit personally, but this isn't exactly my experience here.
I think we should be honest and consistent about losing our jobs to AI. For decades we justified automation by saying things like: "Sure, the toothpaste tube machine replaced 30 workers, but someone will need to maintain and operate it." And whenever someone pointed out that one mechanic doesn't replace those 30 lost jobs, everyone went quiet.
Now we're using the same logic again: "Well, you just need to learn to use the AI before someone else does."
And if anyone doubts that the world can move on without the software engineer, remember that it moved on just fine after eliminating the toothpaste tube fillers. The world kept turning, just a little colder and more indifferent each time another role disappeared.
Maybe instead of pretending this time is different, we should focus on writing the best epitaph we can.
The only clear applications for AI in software engineering are for throwaway code, which interestingly enough isn't used in software engineering at all, or for when you're researching how to do something, for which it's not as reliable as reading the docs.
They should focus more on data engineering/science and similar fields, which involve a lot more of that kind of throwaway code, but since there are often no tests there, that's a bit too risky.
Most codebases are pretty proprietary and so out of distribution for the AI which causes poor performance and you really have to fight some of the training to use internal libraries and conventions.
Still useful, but certainly not PhD-level when it imports X, you remind it that its instructions are to use Y, it apologizes, imports Y, but then immediately imports X again.
So when your project gets cancelled for AI and you haven't gotten a raise, while AI researchers in the same company are getting generational wealth, it does feel pretty bad.
According to demos, AI coding tools are letting neophytes instantly create working apps and websites from mere descriptions of what they want. According to devs, they're 10x as productive because certain time-consuming tasks, like writing unit tests, code reviews, refactors, and clean-up, are condensed. So we're to assume that in an age when the typical App Store offers a million apps we'll never be interested in, soon that number will be a billion.
In comes Wanderfugl. A tool for traveling that I will never need, where just trying to figure out what it does used more time than I wanted to spend on it. Now with AI, there will be several shiny new travel apps like Wanderfugl for you to learn and choose from literally every time you go on another vacation.
Wanderfugl may be wonderful, and an achievement. But the reaction of this Seattleite is "What's the point anymore?" This is why I am uninterested in the AI coding trend. It's just a part of a lot of new stuff I don't need.
It reads like it's AI-edited, which is deliciously ironic.
(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)
For me it is that they are wrongly used in this piece. Em dashes as appositives have the feel of interruption—like this—and are to be used very sparingly. They're a big bump in the narrative's flow, and are to be used only when you want a big bump. Otherwise appositives should be set off with commas, when the appositive is critical to the narrative, or parentheses (for when it isn't). Clause changes are similar—the em dash is the biggest interruption. Colons have a sense of finality: you were building up to this: and now it is here. Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time. Like this. And so full stops should be your default clause splice when you're revising.
Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.
(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)
But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.
> Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time.
IIRC (it's been a while) there are 2 cases where a semi-colon is acceptable. One is when connecting two closely-related independent clauses (i.e. they could be two complete sentences on their own, or joined by a conjunction). The other is when separating items in a list, when the items themselves contain commas.
Also, for sheer delightful perversity, I ran the above comment through Copilot/ChatGPT and asked it to revise, and this is what I got. Note the text structuring and how it has changed! (And how my punctuation games are gone, but we expected that.)
>>>
For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.
Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.
(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)
But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.
I think it's because it is difficult to actually type an em dash with a keyboard (except on Macs, I've heard). So either they 1) memorized the em dash alt code, 2) had a keyboard shortcut for it, or 3) are using the character map to insert it every time, all of which are a stretch for a random online post.
You just type hyphen twice in many programs... Or on mobile you hold hyphen for a moment and choose em dash. I don't use it, but it's very easy to use.
Related article posted here https://news.ycombinator.com/item?id=46133941 explains it: "Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop."
So the funny thing is em dashes have always been a great trick to help your writing flow better. I guess GPT-4o figured this out in RLHF and now it's everywhere.
AI has for decades been the word for "frontier capability that is not fully developed yet". It is not a pitch for end users. Perhaps your product produces quality code. Perhaps it produces highly novel trip itineraries. Say that, but don't say AI. The end user does not know the difference between a neural net and a for loop.
Basically everyone I know in engineering shares this resentment in some way, and the AI industry has itself to blame.
People are fed up and burned out from being forced to try useless AI tools by non-technical leaders who understand neither how LLMs work nor how they suck, and now they resent anything related to AI. But for AI companies there is a perverse incentive to push AI on people until it finally works, because the winner of the AI arms race won't be the company that waits until it has a perfect, polished product.
I have myself had "fun" trying to discuss LLMs with non technical people, and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation. They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues. Combine that with how every "wow!" LLM example is actually just the LLM regurgitating a very common thing to write tutorials about, and people tend to over-estimate its abilities.
I use Claude multiple times a week because, even though LLM-generated code is trash, I am open to trying new tools, but my general experience is that Claude is unable to do anything well that I couldn't have my non-technical partner do. It has given me a sort of superiority complex where I immediately disregard the opinion of any developer who thinks it's a wonder tool, because clearly they don't have high standards for the work they were already doing.
I think most developers with any skill to their name agree. Looking at how Microsoft developers are handling the forced AI, they do seem desperate: https://news.ycombinator.com/item?id=44050152 even though they respond with the most "cope" answers I've ever read when confronted about how poorly it is going.
> and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation.
There are quite a few things they can do reasonably well - but they are mostly useful to experienced programmers/architects as a time saver. Working with an LLM for that often reminds me of when I had many young, inexperienced Indians to work with - the LLM comes up with the same nonsense, lies and excuses, but unlike the inexperienced humans I can insult it guilt-free, which also sometimes gets it back on track.
> They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues.
For having an LLM operate on a complete code base there currently seems to be a hard limit of something like 10k-15k LOC, even with the models with the largest context windows - after that, if you want to continue using an LLM, you'll have to make it work only on a specific subsection of the project, and manually provide the required context.
Now the "getting to 10k LOC" _can_ be sped up significantly by using an LLM. Ideally you refactor the stupid parts along the way - which can be made a bit easier by building in sensible steps (which again requires experience). From my experiments, once you've finished that initial step you'll then spend roughly 4-5 times the amount of time you just spent with the LLM to make the code base actually maintainable. For my test projects, I roughly spent one day building it up and the rest of the week getting it maintainable. Fully manual would've taken me 2-3 weeks, so it saved time - but only because I do have experience with what I'm doing.
I think there's a lot of reason in what you are saying. The 4-5x amount of time to make the codebase readable resonates.
If I really wanted to go 100% LLM as a challenge, I think I'd compartmentalize a lot and maybe rely on OpenAPI and other API description languages to reduce the complexity of what the LLM has to deal with when working on its current "compartment" (i.e. the frontend or backend). Claude.md also helps a lot.
I do believe in some time savings, but at the same time, almost every line of code I write usually requires some deliberate thought, and if the LLM does that thinking, I often have to correct it. If I use English to explain exactly what I want, it is sometimes OK, but then that is basically the same effort. At least that's my empirical experience.
> almost every line of code I write usually requires some deliberate thought
That's probably the worst case for trying to use a LLM for coding.
A lot of the code it'll produce will be incorrect on the first try - so to avoid sitting through iterations of absolute garbage you want the LLM to be able to compile the code. I typically provide a makefile which compiles the code, and then runs a linter with a strict ruleset and warnings set to error, and allow it to run make without prompting - so the first version I get to see compiles, and doesn't cause lint to have a stroke.
Then I typically make it write tests, and include the tests in the build process - for "hey, add tests to this codebase" the LLM is performing no worse than your average cheap code monkey.
Both with the linter and with the tests you'll still need to check what it's doing, though - just like the cheap code monkey it may disable lint on specific lines of code with comments like "the linter is wrong", or may create stub tests - or even disable tests, and then claim the tests were always failing, and it wasn't due to the new code it wrote.
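To make that concrete, here's a rough sketch of the kind of single gate I give the agent - purely illustrative, assuming a Python wrapper called check.py and make targets named all, lint and test; substitute whatever your real build uses:

    #!/usr/bin/env python3
    # check.py - the one command the agent is allowed to run without prompting.
    # Each step must pass before the next runs; any failure aborts with a
    # non-zero exit code, so the agent has to fix its output before moving on.
    import subprocess
    import sys

    STEPS = [
        ["make", "all"],   # code must compile cleanly
        ["make", "lint"],  # strict ruleset, warnings treated as errors
        ["make", "test"],  # tests are part of the build, not optional
    ]

    for cmd in STEPS:
        print("$", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            sys.exit(1)
    print("all checks passed")

The value isn't the script itself; it's that the agent only ever sees one pass/fail signal, and you still review the diff for disabled lint rules and stubbed-out tests, for the reasons above.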
I did some contract work for Microsoft a few years ago (2011-2013). It was striking how much pressure was put on you to dogfood Microsoft stuff at the expense of basically everything. I can only imagine what it must be like at the moment.
I think treating AI as the best possible field for everyone smart and capable is itself very narrow-minded and short-sighted. Some people just aren't interested in that field; what's so hard to accept about that? The world still needs experts in other fields, even within computing.
Well written! I'm Seattle based (although at Google), and I think the mood is only slightly better than what you describe. But the general feeling that the company has no interest in engineering innovation is alive and well. Everything needs to be standardized, and engineers get shuffled between products in a way that discourages domain knowledge.
Making no statement about the value or lack of value in AI itself:
When people talk about it like this (this author is hardly the only example) they sound like an evangelist proselytizing and it feels so weird to me.
This thing could basically read “people in Seattle don’t want to believe in God with me, people in San Francisco have faith though. I’m sad my friends in Seattle won’t be going to heaven.”
I think they exist as a "market segment" (i.e, there are people out there who will use AI), but in terms of how people talk about it, sentiment is overwhelmingly negative in most circles. Especially folks in the arts and humanities.
The only non-technical people I know who are excited about AI, as a group, are administrator/manager/consultant types.
This isn’t just a Seattle thing, but I do think the outsized presence of specific employers there contributes to an outsized negativity around AI.
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
Roughly a third of the engineers in the greater Seattle area work for Microsoft, so we needn't conjure up any strange quality of the local culture to explain this.
The problem with AI is that the media and the tech hype machine want everyone to believe that it is more than a glorified randomized text generator. Yes, for many problems that is just what you need, but not for creating reliable software. Somehow, they want everyone to suspend disbelief and agree that it is a superior intelligence, or at least the clear sign of something of the sort, and that we should stop everything we're doing right now to give more money and attention to this endeavor.
I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.
I think the SEA and SF tech scenes are hard to differentiate perfectly in a HN comment. However, I think any "Seattle hates AI" has to do more with the incessant pushing of AI into all the tech spaces.
It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.
It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.
I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive about developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well with me as the mark of a naturally successful development. Also, maybe I'm just burned out on podcast ads asking "is your workforce using Agentic AI to optimize ..."
Textbook way to NOT roll out AI for your org. AI has genuine benefits for white-collar workers, but they are not trained for the use-cases that would actually benefit them, nor are they trained in what the tech is actually good at. They are being punished for using the tools poorly (with no guidance on how to use them well), and when they use the tools well, they fear being laid off once an SOP for their AI workflows is written.
The article seems full of made up things. The "coworker" isn't a real person but some kind of "composite of people"; I'm then curious whether the "she" is simply used as "a random made-up person".
Then it says: "Engineers don't try because they think they can't." They don't try AI, is what I understand, but that contradicts the whole article, which says every engineer in Seattle is actively using AI, even forced to.
Then it says: "now believes she's both unqualified for AI work", but why would she believe that? She has supposedly been using AI constantly and was not part of those laid off, so she must be great AI talent.
Finally it says: "now believes she's both unqualified for AI work and that AI isn't worth doing anyway. She's wrong on both counts, but the culture made sure she'd land there." Which is completely unsubstantiated and also coming from a person trying to grift us with the AI product they want to promote and sell.
I don't know, it read like a shill article from a grifter.
> like building an AI product made me part of the problem.
I don't see how the author can believe that quitting their job to work on an AI startup is NOT contributing to the problem of "AI products being shoved down everyone's throats."
Except, of course, that their financial bottom line depends on not believing this.
The big revenue isn't going to come from improvements in coding, or writing better emails, or protein folding. It's going to come from more seductive and compelling ads (using all the data vacuumed up from your apps and your psychological profile).
Seattle sounds kinda nice now. AI fatigue is real. I just had to swap eye doctors because they changed their medical records to some AI powered bullshit and wanted me to re-enter all my info into the new system in order to check-in for my appointment. A website that when I looked at their EULA page redirected to an empty page, no clear mention of HIPAA anywhere on the website's other pages. The eye doctor seemed confused why I wanted to stop using them after ten years as a patient even after I pointed out the flaws. It's madness.
Some massive bait in this article. Like come on author - do you seriously have these thoughts and beliefs?
> It felt like the culture wanted change.
>
> That world is gone.
Ummm source?
> This belief system—that AI is useless and that you're not good enough to work on it anyway
I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.
It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.
And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.
When the singularity hits in who knows how many years from now, do you really think it's one of these llm wrapper products that's going to be the difference maker? Again, sorry to break it to you but that's a party you and I are not going to get invited to. 0% chance governments would actually allow true super intelligence as a direct to consumer product.
UBI isn't about making financial sense it's about keeping the last traces of society duct taped together before it all collapses. Remove all pathways to a middle-class life and you're left with a populace on the precipice of violent revolts.
What I mean is that it simply would not work. The math doesn't add up. It would directly lead to Weimar levels of hyperinflation. Which is a far worse outcome.
As I've said before: AI mandates, like RTO mandates, are just another way to "quiet fire" people, or at least "quiet renegotiate" their employment.
That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.
A well-intentioned AI mandate would either come with a) training and/or b) dedicated time to experiment and figuring out what works well for you. Instead what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we layoff even more people and move jobs overseas."
My cynical take is, this is an intentional strategy to continue culling headcount, except overindexing on people seen as unaligned with the AI future of the company.
> AI mandates, like RTO mandates, are just another way to "quiet fire" people
That's a recurring argument, and I don't believe it, especially for large tech companies. They have no problem doing multiple large non-quiet lay-offs; why would they need moustache-twirling level schemes to get people to quit?
I don't believe companies to be well intentioned, but the simplest explanation is often the best:
1. RTO mandates are probably driven by people in power who either like to be in the office, believe being in the office is the most efficient way to work (whether or not that's true), or have financial stakes in having people occupy said offices.
2. "AI" mandates are probably driven by people in power who either genuinely see value in AI, think it's the most efficient way to work (whether or not that's true), have FOMO about AI, or have financial stakes in having people use it.
> They have no problem doing multiple large non-quiet lay-offs; why would they need moustache-twirling level schemes to get people to quit?
So the thing about all large layoffs is that there is actually some non-obvious calculus behind them.
One thing, for instance, is that in the period soon after layoffs there is typically some increased attrition among the surviving employees, for a multitude of reasons. So if you lay off X people you actually end up with X + Y lower headcount shortly after. There are also considerations like regulations.
What this means is that planning layoffs has multiple moving parts:
1) The actual monetary amount to cut -- it all starts with $$$;
2) The absolute number of headcount that translates to;
3) The expected follow-on attrition rate;
4) The severance (if any) to offer;
5) The actual headcount to cut with a view of the attrition and severance;
6) Local labor regulations (e.g. WARN) and their impact, monetary or otherwise;
7) Considerations of impact on internal morale and future recruitment.
So it's a bit like tuning a dynamic system with several interacting variables at play.
Now the interesting bit of tea here is that in the past couple of years, the follow-on (and all other) attrition has absolutely plummeted, which has thrown the standard approaches all out of whack. So companies are struggling a bit to "tune" their layoffs and attrition.
I had an exec frankly tell me this after one of the earliest waves of layoffs a couple years ago, and I heard from others that this was happening across the industry. Sure enough, there have been more and more seemingly haphazard waves of layoffs and the absolute toxicity this has introduced into corporate culture.
Due to all this and the overall economy and labor market, employee power has severely weakened, so things like morale and future recruitment are also lower priorities.
Given all this calculus, a company can actually save quite some money (severance) and trouble if people quit by themselves, with minimal negative repercussions.
Not quite moustache-twirling but not quite savory either.
I got a thread on SomethingAwful gassed [1] because it was about an AI radio station app I was working on. People on that forum do not like AI.
I think some of the reasons that they gave were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires people use it with effort, which does seem to be the outlier case.
I live and work in Seattle and I don't hate AI. Further, I know people here who are just as overly excited about AI as its proponents on HN.
I've also heard complaints about the mandatory use of the tools in the office and the pageantry involved.
I've seen people in love with garbage they produced with AI.
I'm annoyed by the way they are being pushed in my face, but hate is really too strong. I've tried using them and gotten total garbage. That may be because my prompting sucks, since I know people who love the tools and have shared great output from them. Those people are a minority, in my opinion.
Trying to over simplify the experiences of humanity is a fool's game.
Something I haven't seen mentioned: the people who are AI's biggest proponents are distinctly unlikeable humans. Look at Elon Musk with Grok; it's disgusting. Look at Sam Altman, Alex Karp, look at Peter "I'm not sure humanity should continue" Thiel. These people are wildly misanthropic, and of course it gives the whole thing an unpleasant miasma.
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
This person wrote a blog post admitting to tone-deafness in cheerleading AI and talking about all the ways AI hype has negatively impacted people's work environments. But then they wrap up by concluding that it's the anti-AI people who are the problem. That's a really weird conclusion to come to at the end of that blog post. My expectation was that the takeaway would be "We should be measured and mindful with our advocacy, read the room, and avoid aggressively pushing AI in ways that negatively impact people's lives."
My 2¢... LLMs are kind of amazing for structured text output like code. I have a completely different experience using LLMs for assistance writing code (as a relative novice) than I do in literally every other avenue of life.
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that the output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
AI is in the Radium phase of its world-changing discovery life cycle. It's fun and novel, so every corporate grifter in the world is cramming it into every product that they can, regardless of it making sense. The companies being the most reckless will soon develop a cough, if they haven't already.
The author has an unquestioned assumption that the only innovation possible is innovation with AI. That is genuinely weird. Even if one believes in AI, innovation in the non-AI space should be possible, no?
Second, engineering and innovation are two different categories. Most of engineering is about... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were a little less about pretending to be an innovation and a little more about making things work.
A sizable fraction of current AI results are wrong. The key to using AI successfully is imposing the costs of those errors on someone who can't fight back. Retail customers. Low-level employees. Non-paying users.
A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.
I have some trouble reconciling the conclusion of this article with everything else it described. How is "Clearly my coworker just wasn't believing hard enough in AI, and it harms everyone!" the conclusion the author comes to?
It might just be an ESL issue on my end, but I seriously feel some huge dissonance between the explanations of "how the tech was made the main KPI, used to justify layoffs and forced in a way that hinders productivity", and the conclusion that seems to say "the real issue with those people complaining is that they just don't believe in AI".
I don't understand this article, it seems to explain all the reasons people in Seattle might have grievances, and then completely dismisses those to adopt the usual "you're using it wrong".
Is this article just a way to advertise for Wanderfugl? Because this reads like the usual "Okay your grievances are fine and all, but consider the following: it allows me to make a SaaS really fast!" that I became accustomed to see in HN discussions.
Wanderfugl is a strange name for an "AI"-powered map. The Wandervogel movement was against industrialization and pro nature. I'm sure they would have looked down on iPhones and a centralized "AI" that gives them instructions on where to go.
Again a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated and turned on its head.
I honestly expected this to be about sanctimonious lefties complaining about a single chatgpt query using an Olympic swimming pool worth of water, but it was actually about Seattle big tech workers hating it due to layoffs and botched internal implementations which is a much more valid reason to hate it.
My buddies still or until recently still at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post covid layoffs, and layering "AI" over the layoffs leaves a bad taste.
Tech company leadership sees AI as a shortcut to success. You know how in project planning meetings engineers are usually asked how they can pull in the schedule by x number of months? AI is now that thing. Obviously, this is a mistake.
The cult of AI maximalists aren't helping the situation.
There are a few clashing forces. One is the power of startups - what people love is what will prevail. It's what let Macs and iPhones grab market share back from "corporate" options like Windows and the Palm Pilot. It's what keeps TikTok running.
An opposing force is corporate momentum. Its unfortunately true that people are beholden to what companies create. If there are only a few phones available, you will have to pick. If there are only so many shows streaming, you'll probably end up watching the less disgusting of the options.
They are clashing. The people's sentiment is "AI bad". But if tech keeps making it and pushing it long enough, people will get older, corporate initiatives will get sticky, and it will become ingrained. And once it's ingrained, it's gonna be here forever.
> Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building
Believe me, the same reflexive, critical, negative response is true for most of Europe too
> like building an AI product made me part of the problem.
It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.
That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.
That's the thing, though, it is about their careers.
It's not just that people are annoyed that someone spends years to decades learning their craft, and then someone else puts a prompt into a chatbot that spits out an app that mostly works, without understanding any of the code they 'wrote'.
It's that the executives are positively giddy at the prospect that they can get rid of some number of their employees and the rest will use AI bots to pick up the slack. Humans need things like a desk and dental insurance, and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.
Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?
Disagree completely. You're doing the thing I described: assuming it's all ultimately about personal benefit when they're telling you directly that it's not. The same people could trivially capitalize on the shifting climate and have a good career in the new world. But they'd still be pissed about it.
I'm one of these people. So is everyone I know. The grievance is moral, not utilitarian. I don't care about executives getting rid of people. I care that they're causing obviously stupid things to happen, based on their stupid delusions, making everyone's lives worse, and they're unaccountable for it. And in doing so they devalue all of the things I consider to be good about tech, like good software that works and solves real problems. Of course they always did that but it's especially bad now.
> You're doing the thing I described: assuming it's all ultimately about personal benefit when they're telling you directly that it's not.
It doesn't matter how much astroturf I read, I can see what's happening with my own eyes.
> The grievance is moral, not utilitarian.
Nope, it's both.
Businesses have no morals. (Most) people do. Everything that a business does is in service of the bottom line. They aren't pushing AI everywhere out of some desire to help humanity, they're doing it because they sunk a lot of resources into it and are trying to force an ROI.
There are a lot of people who have fully bought in to AI and think that it's more capable than it is. We just had a thread the other day where someone was using AI to vibe code an app, but managed to accidentally tell the LLM to delete the contents of his hard drive.
AI apologists insist that AI agents are a vital tool for doing more faster and handwave any criticism. It doesn't matter that AI agents consume an obscene amount of resources to do it, or that pretend developers are using it to write code they don't understand and can't test that they're shoving into production anyway. That's all fine because a loud fraction of senior developers are using it to bypass the 'boring parts' of writing programs to focus on the interesting bits.
I feel like this is a textbook example of how people talk past each other. There are people in this world who operate under personal utility maximization, and they think everyone else does also. Then there are people who are maximizing for justice: trying to do the most meaningful work themselves while being upset about injustices. Call it scrupulosity, maybe. Executives doing stupid pointless things to curry favor is massively unjust, so it's infuriating.
If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.
I'm just really isolated right now, I've been building solo for a long time. I don't have anyone to share my thoughts with, which is something I used to really value at Microsoft.
Howdy! I personally don't really understand the "point" the article is trying to make. I mostly agree with your sentiment that AI can be useful. I too have seen a massive increase in productivity in my hobbies, thanks to LLMs.
As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.
So yeah I guess I'm just curious what the conclusion presented here is meant to be?
I don't follow why it's hard to build in Seattle. Do you mean before this "AI summer" they struggled, or that with AI they have become too slow because they won't adopt it?
I was under the distinct impression that Seattle was somewhat divided over 'big tech', with many long-term residents resenting Microsoft and Amazon's impact on the city (and longing for the 'artsy and free-spirited' place it used to be). Do you think those non-techies are sympathetic to the Microsofties and Amazonians? This is a genuine question, as I've never lived in Seattle, but I visit often, and live in the PNW.
> Do you think those non-techies are sympathetic to the Microsofties and Amazonians?
As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.
If you are a writer or a painter or a developer - in a city as expensive as Seattle - then you may feel a little threatened. Then it becomes a trickle-down effect: if I lose my job, I may not be able to pay for my dog walker, or my child care, or my hairdresser, or...
Are they sympathetic? It depends on how much they depend on those who are impacted. Everyone wants to get paid - but AI doesn't have kids to feed or diapers to buy.
They kind of are, though I think so many locals now work in big tech in some way that it's shifted a bit. I wish we could return to being a bit more artsy and free spirited
I've lived in the Seattle area most of my life and lived in San Francisco for a year.
SF embraces tech and in general (politics, etc) has a culture of being willing to try new things. Overall tech hostility is low, but the city becoming a testbed for projects like Waymo is possibly changing that. There is a continuous argument that their free-spirited culture has been cannibalized by tech.
Seattle feels like the complete opposite. Resistant to change, resistant to trying things, and if you say you work in tech you're now a "techbro" and met with eyerolls. This is in part because in Seattle if you are a "techbro" you work for one of the megacorps whereas in SF a "techbro" could be working for any number of cool startups.
As you mentioned, Seattle has also been taken over by said megacorps which has colored the impressions of everyone. When you have entire city blocks taken over by Microsoft/Amazon and the roads congested by them it definitely has some negative domino effects.
As an aside, on TV we in the Seattle area get ads about how much Amazon has been doing for the community. Definitely some PR campaign to keep local hostility low.
I'm sure the 5% employee tax in Seattle and the bill being introduced in Olympia will do more to smooth things over than some quirky blipvert will.
I think most people in Seattle know how economics works; the logic follows:

    while not techbro.working:
        if techbro.debt > techbro.income:
            if techbro.assets > 0:
                sell_gig_hustle()
            else:
                sell_house_before_foreclosure()
                no_more_seattle_for_you(techbro)
        else:
            # "gigbot" isn't summoned and people don't get paid
            techbro.health -= 1  # thanks to the high expense of COBRA
            # [etc...]
'How much they do for the community'? Like trying to buy elections so we won't tax them, the same thing Boeing and Microsoft did. Anytime our local government gets a little uppity, suddenly these big corps are looking to move, like Boeing largely did. Remember Amazon HQ2? At least part of the reasoning behind that disaster was Seattleites asking, 'What the hell is Amazon doing for us besides driving up rents and snarling traffic?'
(... and exactly how is Boeing doing since it was forced to move away from 'engineering culture' by moving out of the city where its workforce was trained and was training the next generation? Oh yeah, planes are falling out of the sky and their software is pushing planes into the ground.)
It kinda seems like you're conflating Microsoft with Seattle in general. From the outside, what you say about Microsoft specifically seems to be 100% true: their leadership has gone fucking nuts and their irrational AI obsession is putting stifling pressure on leaf level employees. They seem convinced that their human workforce is now a temporary inconvenience. But is this representative of Seattle tech as a whole? I'm not sure. True, morale at Amazon is likely also suffering due to recent layoffs that were at least partly blamed on AI.
Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.
Maybe since Microsoft and Amazon are the lion's share (are they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.
Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.
I think probably the safest place to be right now emotionally is a smaller company. Something about the hype right now is making Microsoft/Amazon act worse. Be curious to hear what specifically your company is doing to give people agency.
> Be curious to hear what specifically your company is doing to give people agency.
Wrt. AI specifically, I guess we are simply a) not using AI as an excuse to lay off scores of employees (at least, not yet) and b) not squeezing the employees who remain with arbitrary requirements that they use shitty AI tools in their work. More generally, participation in design work and independent execution are encouraged at all levels. At least in my part of the company, there simply isn't the same kind of miserable, paranoid atmosphere I hear about at MS and Amazon these days. I am not aware of any rigidly enforced quota for PIPing people. Etc.
Generally, it feels like our leadership isn't afflicted with the same kind of desperate FOMO fever other SMEGMAs are suffering from. Of course, I don't mean to imply there haven't been layoffs in the post free money era, or that some people don't end up on shitty teams with bad managers who make them miserable, or that there isn't the usual corporate bullshit, etc.
I get the feeling that this is supposed to be about the economics of a fairly expensive city/state and that "six-figure salary", but you don't really call it out.
If it was about the technology, then it would be no different than a java/c++ developer calling someone who does html and javascript their equal and paying them the same. It's not.
People get anxious when something may cause them to have to change - especially in terms of economics and the pressures that puts on people beyond just "adulting". But I don't really think you explained the why of their anxiety.
Pointing the finger at AI is like telling the Germans that all their problems are because of Jews without calling out why the Germans are feeling pressure from their problems in the first place.
Regarding "And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not."
As a customer, I actually had an MS account manager once yell at me for refusing to touch <latest newfangled vaporware from MS> with a ten foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted anytime anyone takes Microsoft seriously.
One fun one was the leadership of Windows Update became obsessed with shipping AI models via Windows update, but they can't safely ship files larger than 200mb inside of an update.
I like that you shared the insight. Feels like you shared a secret to the world that is not so secret if you work at Microsoft (I guess this is less about the city).
I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.
I wonder if the company is dying slowly, but with AI hype and good old foundations keeping its stock price going.
Well, I think it's interesting how much what goes on inside the major employers affects Seattle. Like crappy behavior inside of Microsoft is felt outside of it.
> Bring up AI in a Seattle coffee shop now and people react like you're advocating asbestos.
can you please share the methodology you used to reach this conclusion?
in other words - what is the sample size? how many Seattle coffee shops did you walk into and yell out "hey, what do people think about AI?" (or did you gather the data in a different way, such as by approaching individual people at the coffee shop?)
what is your control group? in other words, how many SF coffee shops did you visit and conduct the same experiment?
In my opinion, the issue in AI is similar to the issue in self-driving cars. I think the last “five percent” of functionality for agents etc. will be much, much more difficult to nail down for production use, just like snowy weather and strange roads proved to be much more difficult for the self-driving car rollout. They got to 95% and assumed they were nearing completion, but it turned out there was even more work to be done to get to 100%. That’s kind of my take on all the AI hype. It’s going to take a lot more work to get the final five percent done.
the author makes the connection: people see AI as asbestos, shoved into everything by profit-hungry corps that don't care what damage it will do in the long term.
Seattle has been screwed over so many times in the last 20 years that it's a shell of itself.
I love AI but I find Microsoft AI to be mostly useless. You'd think that anything called Copilot can do things for you, but most of the time it just gives you text answers. Even when it is in the context of the application it can't give you better answers than ChatGPT, Claude or Perplexity. What is the point of that?
Satya has completely wasted their early lead in AI. Google is now the leader.
Is it that everyone in Seattle hates AI, or that Seattle is the only place you know people well enough they’ll tell you the truth? The bar for that is much lower in Seattle too, compared to say, Japan. And the author seems tone deaf enough to not know the difference.
AI is such a blessing. I use it almost every day at work, and I've spent this evening getting a Bluetooth-to-USB mapper for a PS4 controller working by having ChatGPT write it for me, as part of a bigger project I'm working on. Yes, it's going to take some time to fully understand the code and adjust it to my own standards, but I've been playing a game for a few hours now and I feel zero latency and plenty of controller rumble, which I'm having fun giving some extra power. It pretty much worked with the first 250 lines of C it spewed out.
What's gonna be super interesting is that I'm going to have an rpi zero 2 power up my machine when I press the controller's ps-button. That means I might need to solder and do some electrical voodoo that I've never tried. Crossing my fingers that the plan ChatGPT has come up with won't electrocute me.
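For anyone curious what the software half of that might look like, here's a minimal sketch (in Python rather than the C the mapper itself was written in, and with a made-up device path): it assumes the controller is already paired with the Pi and shows up as an evdev node, and that the PS button is reported as BTN_MODE, which is how the kernel's PlayStation HID drivers typically expose it, though that's worth verifying. The actual power-on action (GPIO pulse, Wake-on-LAN, whatever) is left as a stub.

    import struct

    # Hypothetical event node for the paired controller; check /proc/bus/input/devices.
    DEVICE = "/dev/input/event2"

    # struct input_event on 64-bit Linux: timestamp (two longs), type and code (u16), value (s32).
    EVENT_FMT = "llHHi"
    EVENT_SIZE = struct.calcsize(EVENT_FMT)

    EV_KEY = 0x01
    BTN_MODE = 0x13C  # the PS button, as exposed by the kernel driver (assumption)

    def power_on_target():
        # Stub: pulse a GPIO wired to the power header, send Wake-on-LAN, etc.
        print("PS button pressed: powering up the machine")

    with open(DEVICE, "rb") as dev:
        while True:
            data = dev.read(EVENT_SIZE)
            if len(data) < EVENT_SIZE:
                break
            _, _, ev_type, code, value = struct.unpack(EVENT_FMT, data)
            if ev_type == EV_KEY and code == BTN_MODE and value == 1:  # 1 = key press
                power_on_target()

None of that helps with the soldering, unfortunately.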
Reading some of these comments from fellow seattleites, I'm really quite thankful for having the privilege of being able to completely ignore all of this noise.
There is zero push in my org to use any of these tools. I don't really use them at all but know some coworkers who do and that's fine. Sounds like this is a rare and lucky arrangement.
I'm stuck between feeling bad because this is my field–I spend most days worrying about not being able to pay my bills or get another job–and wanting to shake every last tech worker by the shoulders and yell "WAKE UP!" at them. If you are unhappy with what your employer is doing, even though they have more power over you, you don't have to just sit there and take it. You can organize.
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
To the extent that Microsoft pushes their employees to use all their other shitty products, Copilot seems like just another one (it can't be more miserable/broken than Sharepoint).
I don't know if anyone has been reading cover letters recently but it seems that people are prompting the LLMs with the same shit, dusting their hands and thinking "done" and what the reader then sees is the same repetitive, uncreative and instantly recognizable boilerplate.
The people prompting don't seem to realize what's coming out the other end is boilerplate dreck, and you've got to think - if you're replaceable with boilerplate dreck maybe your skills weren't all that, anyway?
> My former coworker—the composite of three people for anonymity—now believes she's both unqualified for AI work and *that AI isn't worth doing anyway*. *She's wrong on both counts*, but the culture made sure she'd land there.
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?
Oh but we're all supposed to swoon over the author's ability to make ANOTHER AI powered mapping solution! Probably vibecoded and bloated too. Just what we need, obviously all the haters are wrong! /s
I live in Seattle (well a 20 min ferry from Seattle) and I too hate AI. In fact I have a Kanji learning app which I am trying to push on to people, and I brand it as AI free. No AI was used to develop it, no AI used to write content, no AI is there to “help you learn”.
When I see apps like Wanderfugl, I get the same sense of disgust as OP's ex-coworker. I don't want to try this app, I don't want to see it, just get it away from me.
I wonder if I'm the guy in the bubble or if all these people are in the bubble. Everyone I know is really enjoying using these tools. I wrote a comment yesterday about how much my life has improved https://news.ycombinator.com/item?id=46131280
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
> And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
This is revolting. Three years ago I'd have said this is a terrible Black Mirror plot.
Unlike Seattle, Los Angeles has few software engineers, but I would not utter "AI" at all here.
It's an infinite moving goalpost of hate: if it's an actor, a "creative", or a writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them directly, then it's about the energy use and the environment.
Nobody is going to hear about what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.
I was just about to post that this entire story could have been completely transposed to almost every conversation I've had in Los Angeles over the past year and a half. Looks like you beat me to it!
the only difference is that I don't have the conversation ha, I don't tell people about anything I do that's remotely close to that, rarely even mention anything in tech. I listen to enough other conversations to catch on to how it goes, very easy to get roped into an AI doomer conversation that's hard to get out of
This isn’t really a common-folk-vs-tech-bros story. It’s about one specific part of Seattle’s tech culture reacting to AI hype. People outside that circle often have very different incentives.
Literally everyone I know is sick of AI. Sick of it being crowbar'd into tools we already use and find value in. Sick of it being hyped at us as though it's a tech moment it simply isn't. Sick of companies playing at being forward thinking and new despite selling the same old shit but they've bolted a chatbot to it, so now it's "AI." Sick of integrations and products that just plain do not fucking work.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
Oh, I will happily get in your face and tell you your AI garbage sucks. I'm not afraid of these people, and you shouldn't be, either. Bring back social pressure. We successfully shamed Google Glassholes into obscurity, we can do it again. This shit has infested entire operating systems now, all so someone can get another billion dollars, while the rest of us struggle to make rent. It's made my career miserable, for so many reasons. It's made my daily life miserable. I'm so sick and tired of it.
> This shit has infested entire operating systems now
Well, it's not the fault of a random person doing some project that may even be cool.
I'll certainly adjust my priors and start treating the person as probably an idiot. But if given evidence they are not, I'm interested in what they are doing.
The thing that stops me being outwardly hostile is that there is a minority, and it is a minor, minor minority, of applications for AI that are actually pretty interesting and useful. It's just catastrophically oversaturated with samey garbage that does nothing.
I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.
AI/LLMs provide an absolutely perfect excuse for halfwit managers and other fail upwards type people
I hate the entire premise even though some of it has been useful. At worst you're creating code and/or information that's just wrong and "can get someone killed" (metaphorically, but probably also literally), and you're creating absolutely unrealistic expectations.
Zuck said they'd be able to replace engineers with AI. Well, that tells you everything you need to know, doesn't it? With all of the scandals Facebook properties have had over the years. A real engineer/competent CEO wouldn't say that
Lots of creators (e.g., writers, illustrators, voice actors) hate "AI" too.
Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.
One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon as you say "actually..." before launching into a libertarian economics spiel over coffee.)
You have a very cartoon villain image of tech workers in your head. While Tesla-driving libertarian techbros are certainly a thing, this doesn't even accurately represent the majority of employees in Big Tech, nevermind the industry as a whole.
While everybody else is ranting about AI, I'll rant about something else: trip planning apps. There have been literally thousands of attempts at this and AFAICT precisely zero have ever gotten any traction. There are two intractable problems in this space.
1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:
2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.
There just isn't much friction between having a few tabs open (maps, booking site, airline site, google search) and a notepad. The friction of searching for an app, downloading it, and then learning how to use it is just higher.
So many products are like this - it sounds good on paper to consolidate a bunch of tasks in one place but it's not without costs and the benefit is just not very high.
That's why I built all my geo infrastructure from scratch on OSM. There are still some issues, but for AI location grounding it outperforms Google Places at $300/mo.
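I don't know what the parent actually built, but for anyone wondering what "location grounding on OSM" can look like in practice, one common approach is to self-host a geocoder such as Nominatim over an OSM extract and have the AI layer resolve free-text place mentions against it instead of Google Places. A hypothetical sketch (the endpoint is made up; the query parameters are standard Nominatim ones):

    import json
    import urllib.parse
    import urllib.request

    # Hypothetical self-hosted Nominatim instance loaded with an OSM extract.
    NOMINATIM_SEARCH = "http://geo.internal:8080/search"

    def ground_location(free_text):
        """Resolve a free-text place mention (e.g. from an LLM) to OSM coordinates."""
        url = NOMINATIM_SEARCH + "?" + urllib.parse.urlencode(
            {"q": free_text, "format": "jsonv2", "limit": 1}
        )
        with urllib.request.urlopen(url) as resp:
            results = json.load(resp)
        if not results:
            return None
        top = results[0]
        return {
            "name": top["display_name"],
            "lat": float(top["lat"]),
            "lon": float(top["lon"]),
        }

    print(ground_location("Pike Place Market, Seattle"))

The trade-off is exactly the one described upthread: you own the coverage and freshness problem instead of paying Google for it.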
I use and pay for Wanderlog. Idk how their business is doing, but I love it as a user. They use an embedded Google Maps viewer for locations, so there is no problem for coverage.
> They use an embedded Google Maps viewer for locations
If they become popular they'll have to move to OSM, Google's steep charging for their Maps API at high usage that has brought companies to their knees is well known [1].
But I use it as a glorified notes app to keep track of flights, reservations, rental cars, confirmation numbers, etc, in one place, not for trip planning.
They do have user generated lists to help find things to do, but yeah, I mainly use it for organizing ideas and building my final itinerary. It’s really nice for this. I can then also share the itinerary with everybody else on the trip.
It's just another business/service niche that is solved until the current Big Provider becomes Evil or goes under.
Similar to "made for everyone" social networks and video upload platforms.
But there are niches that are trip planning + nobody is solving the pain! For example Geocaching. I always dreamed about an easy way to plan Geocaching routes for a trip and find interesting caches on the way. Currently you've got to filter them out and then eyeball the map for what seems to be nearby, even though there may not be any real roads there, or the cache may actually be lost, or it may only be accessible at a specific time of day.
So... no one wants apps for problems that are already solved + boring.
Oh yeah, call out a tech city and all the butt-hurt-ness comes out. Perfect example of "Rage Bait".
People here aren't hurt because of AI - people here are hurt because they learned they were just line items in a budget.
When the interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot their taxes while appeasing the shareholder.
Remember that time when Satya went to a company sponsored rich people thing with Aerosmith or whomever playing while announcing thousands of FTE's being laid off? Yeah, that...
If your job can be done by a very small shell script, why wasn't it done before?
I was at Microsoft until July of this year until I left for an SF-based company (not AI though).
The difference between the two with regards to AI tool usage couldn’t be more different- at Microsoft, they had started penalizing you in perf if you didn’t use the AI tools, which often were under par and you didn’t have a choice in. At the new place, perf doesn’t care if you use AI or not- just what you actually deliver. And, shocker, turns out they actually spend a lot building and getting feedback on internal AI tooling and so it gets a lot of use!
The Microsoft culture is a sort of toxic “get AI usage by forcing it down the engineer throats” vs the new “make it actually useful and win users” approach at that new place. The Microsoft approach builds resentment in the engineering base, but I’m convinced it’s the only way leadership there knows how to drive initiatives.
Perhaps the managers' performance goals are linked to uptake; this sometimes happens, and it all becomes too blunt.
> they had started penalizing you in perf if you didn’t use the AI tools
That is kind of insane right? They are practically mining their own people for data, one wonders what they would not do to their customers.
Someone wrote on HN the (IMO) main reason why people do not accept AI.
AI is about centralisation of power

So basically, only a few companies that hold on the large models will have all the knowledge required to do things, and will lend you your computer collecting monthly fees. Also see https://be-clippy.com/ for more arguments (like Adobe moving to cloud to teach their model on your work).

For me AI is just a natural language query model for texts. So if I need to find something in text, make join with other knowledge etc. things I'd do in SQL if there was an SQL processing natural language, I do in LLM. This enhances my work. However other people seem to feel threatened. I know a person who resigned CS course because AI was solving algorithmic exercises better than him. This might cause global depression, as we no longer are on the "top". Moreover he went to medicine, where people basically will be using AI to diagnose people and AI operators are required (i.e. there are no threats of reductions because of AI in Public Health Service)
So the world is changing, the power is being gathered, there is no longer possibility to "run your local cloud with open office, and a mail server" to take that power from the giants.
> AI is about centralisation of power
I do not believe this is the main reason at all.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning and many of these are tied to earning money and producing value for doing just that thing. As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
Much of the meaning we humans derive from work is tied to the value it provides to society. One can do coding for fun but doing the same coding where it provides value to others/society is far more meaningful.
Presently some may say: AI is amazing I am much more productive, AI is just a tool or that AI empowers me. The irony is that this in itself shows the deficiency of AI. It demonstrates that AI is not yet powerful enough to NOT need to empower you to NOT need to make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether that is the AI holy grail. Everything in between is just a stop along the way and so for those it empowers stop and think a little about the long term implications. It may be that for you right now it is comfortable position financially or socially but your future you in just a few short months may be dramatically impacted.
I can well imagine the blood draining from peoples faces, the graduate coder who can no longer get on the job ladder. The law secretary whose dream job is being automated away, a dream dreamt from a young age. The journalist whose value has been substituted by a white text box connected to an AI model.
It sounds like you agreed by the end, just with a slightly different way of getting there.
But why not? AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.
As an average consumer, I actually feel like i'm less locked into gemini/chatgpt/claude than I am to Apple or Google for other tech (i.e. photos).
> AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.
It was already tough to run flagship-class local models and it's only getting worse with the demand for datacenter-scale compute from those specific big players. What happens when the model that works best needs 1TB of HBM and specialized TPUs?
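To put rough numbers on that (the model sizes below are illustrative assumptions, not anyone's published specs): weight memory is roughly parameter count times bytes per parameter, so even aggressive quantization doesn't bring frontier-scale models anywhere near consumer hardware.

    def weight_memory_gb(params_billion, bytes_per_param):
        # Weights only; KV cache and activations add more on top.
        return params_billion * 1e9 * bytes_per_param / 1e9

    # Hypothetical model sizes, just to show the scaling.
    for params in (70, 400, 1000):      # billions of parameters
        for bits in (16, 8, 4):         # fp16, int8, int4 quantization
            gb = weight_memory_gb(params, bits / 8)
            print(f"{params}B params @ {bits}-bit ~ {gb:,.0f} GB of weights")

A 70B model at 4-bit squeezes onto a beefy workstation; a trillion-parameter model at 8-bit is about a terabyte of weights before you even start serving requests, which is exactly the "needs datacenter HBM" scenario.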
AI computation looks a lot like early Bitcoin: first the CPU, then the GPUs, then the ASICs, then the ASICs mostly being made specifically by syndicates for syndicates. We are speedrunning the same centralization.
Economies of scale make this a space that is really difficult to be competitive in as a small player.
If it's ever to be economically viable to run a model like this, you basically need to run it non-stop, and make money doing so non-stop in order to offset the hardware costs.
You can still run your local cloud and AI providers will be heavily consolidated to a few.
While for programming tasks I currently do use Claude, local models can be tuned to deliver 80% of the time savings you get by using AI. It depends a bit on the work you do. This will probably improve, while frontier models seem to be hitting hard ceilings.
Where I would disagree is that joining concepts or knowledge works at all with current AI. It works fairly badly in my opinion. Even the logical and mathematical improvements of the latest Gemini model don't impress much yet.
Local models are fine for the way we have been using AI, like as a chatbot, or a fancy autocomplete. But everyone is cramming AI into everything. Windows will be an agentic OS whether we like it or not. There will be no using your own local model for that use case. It is looking like everything is moving that way.
> This enhances my work. However other people seem to feel threatened.
I wish people would stop spreading this as if it were the main reason. It’s a weak argument and disconnected from reality, like those people who think the only ones who dislike cryptocurrencies are the ones who didn’t become rich from it.
There are plenty of reasons to be against the current crop of AI that have nothing to do with employment. The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking, the decrease in quality of life (e.g. people who live next to noisy data centres)… Not everything is about jobs and money, the world is bigger than that.
Don't forget the fact that most of the time, the AI tools don't actually work well enough to be worth the trouble.
AI meeting notes are great! After you spend twice as long editing out the errors, figuring out which of the two Daves was talking each time, and removing all the unimportant side-items that were captured in the same level of detail as the core decision.
AI summaries are great - if you're the sort of person that would use a calculator that's wrong 10% of the time. The rest of us realize that an hour spent reading something is more rewarding and useful than an hour spent double checking an AI summary for accuracy.
AI as Asbestos isn't even an apt comparison, both are toxic and insidious, but Asbestos at least had a clear and compelling use case at the time. It solved some problems both better and cheaper than available alternatives. AI solves problems poorly and at higher cost, and people call you "threatened" if you point that out.
AI meeting summaries are great for orgs that don't do them at all (AI is better than nothing), but not that great for orgs that have an agenda and a note taker. Which is very rare. But better quality.
Would you say at least fifty percent of people are informed about these?
It’s not clear to me what exactly you’re asking, but I’ll try to answer it anyway. I’d say that of the people who are against AI, fewer than 50% (to use your number) aren’t against it solely (or primarily, or at all) because they feel threatened for their job. Does that answer your question?
I meant more about this part:
> The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking
My question was: how many people are actually concerned about those things? If you think about it, it's kind of obvious, but it takes conscious effort to see it, and I suspect not many people do.
anecdotal but it seems many people are concerned about those things
But the opposite is actually true. You can use AI to bypass a lot of SaaS solutions.
So you are saying now that you can bypass a lot of solutions offered by a mix of small/large providers by using a single solution from a huge provider, this is the opposite of a centralization of power?
>"by using a single solution from a huge provider"
The parent didn't say that though and clearly didn't mean it.
Smaller SaaS providers have a problem right now. They can't keep up with the big players in terms of features, integrations and aggressive sales tactics. That's why concentration and centralisation are growing.
If a lot of specialised features can be replaced by general-purpose AI tools, that could weaken the stranglehold that the biggest SaaS players have, especially if those open weights models can be deployed by a large number of smaller service providers or even self-hosted or operated locally.
That's the hypothesis I think. I'm not sure it will turn out that way though.
I'm not sure whether the current hyper-competitive situation where we have a lot of good enough open weights models from different sources will continue.
I'm not sure that AI models alone will ever be reliable enough to replace deterministic features.
I'm not sure whether AI doesn't create so many tricky security issues that once again only the biggest players can be trusted to manage them or provide sufficient legal liability protection.
With AI-specialized hardware you can run the open source models locally too, without the huge provider stealing your precious IP.
Sorry, but your SQL comparison is way off. SQL is deterministic, has defined semantics that databases must follow, and when you run a statement it presents a query plan.
This is the absolute opposite to using an LLM. Please stop using this comparison and perhaps look for others, like for example, a randomised search engine.
You're missing the point entirely. He's saying it's horses for courses, each tool has its use and you use the right tool for the job.
And he's right. LLMs are fancy text query engines and work very well as such.
The problem is when people try to shoehorn everything into LLMs. That's a disaster, yet it's being pursued vigorously by some.
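For what it's worth, the determinism point is easy to demonstrate with Python's built-in sqlite3 (the table and rows are made up for illustration): the same statement returns the same rows on every run, and the engine will show you exactly how it plans to execute it. That inspectability is what an LLM-as-text-query-engine doesn't give you.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE notes(id INTEGER PRIMARY KEY, author TEXT, body TEXT);
        INSERT INTO notes(author, body) VALUES
            ('dave', 'ship the migration tool'),
            ('dave', 'review copilot rollout'),
            ('mira', 'fix the geocoder cache');
    """)

    query = "SELECT author, COUNT(*) FROM notes GROUP BY author ORDER BY author"

    # Deterministic result: identical rows on every run.
    print(conn.execute(query).fetchall())

    # And the engine tells you how it will execute the statement.
    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print(row)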
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.
I feel fatigued by AI. To be more precise, this fatigue includes several factors. The first one is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on 2-3 things at a given interval in time. Ideally I would like to focus and go deep in those things. Often, I need to learn something new and that takes time, energy and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.
Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, try to build a theoretical framework when one can just present the problem to a model. I use LLMs too but it is more satisfying, productive, insightful when one actually thinks hard and understands a topic before using LLMs.
Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years and new developments are easy to pick up. Even if they did, trying to keep up results in very shallow and broad knowledge that one can't actually use. There are a million things going on and I am completely at peace with not knowing most of them.
Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.
I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
> The fundamentals actually haven't changed that much in the last 3 years
Even said fundamentals don't have much in the way of foundations. It's just brute-forcing your way with an O(n^3) algorithm using a lot of data and compute.
I hate scammers like many of the Anthropic employees that post every other week "brooo we have this model that can break out of the system bro!"
"broo it's so dangerous let me tell you how dangerous it is! you don't want to get this out! we have something really dangerous internally!"
Those are the worst, Dario included there btw, almost a worse grifter than Altman.
The models themselves are fine except Claude that calls the police if you say the word boob.
Dario wishes he was the grifter Altman is. He's like a Kirkland-brand grifter compared to Altman. Altman is a generational-level talent when it comes to grifting.
> I am actually impressed with and heavily use models. The tiresome part now are some of the humans around the technology who participate in the behaviors listed above.
the AI is just an LLM and it just does what it is told to.
no limit to human greed though
I get excited by new model releases, try it, switch it to default if I feel it's better, and then I move on. I don't understand why any professional SWE should engage in weird cultish behavior about these models, it's a better mousetrap as far as I'm concerned
it's just the old PC vs Mac cultism. nobody who actually has work to do cares. much like authors obsessed with typewriters, transport companies with auto brands, etc
Ok so a few thoughts as a former Seattleite:
1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-but-with-AI idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
Some people really do hate AI, it's not entirely about the layoffs. This is a well insulated bubble but you can find tons of anti-AI forums online.
Yeah, as a gamer I get a lot of game news in my feeds. Apparently there's a niche of indie games that claim to be AI-free. [0]
And I read a lot of articles about games that seem to love throwing a dig at AI even if it's not really relevant.
Personally, I can see why people dislike Gen AI. It takes people's creations without permission.
That being said, morality of the creation of AI tooling aside, there are still people who dislike AI-generated stuff. Like, they'd enjoy a song, or an image, or a book, and then suddenly when they find out it's AI they hate it. In my experience playing with ComfyUI to generate images, it's really easy to get something half decent, and really hard to get something very high quality. It really is a skill in itself, but people who hate AI think it's just typing a prompt and getting an image. I've seen workflows with 80+ nodes, multiple prompts, multiple masks, multiple LoRAs, to generate one single image. It's a complex tool to learn, just like Photoshop. Sure, you can use Nano-Banana to get something, but even then it can take dozens of generations and prompt iterations to get what you want.
[0] https://www.theverge.com/entertainment/827650/indie-develope...
>morality of the creation of AI tooling aside,
That's a big aside
>Like, they'd enjoy a song, or an image, or a book, and then suddenly when they find out it's AI suddenly they hate it.
Yes, because for some people its about supporting human creation. Finding out it's part of a grift to take from said humans can be infuriating. People don't want to be a part of that.
If I learn music had AI involved, it actually makes me feel awful. It totally strips it of any appeal for me.
Most of the people that dislike genAI would have the exact same opinion if all the training data was paid for in full (whatever a fair price would be for what is essentially just reference material)
That if carries a lot of meaning here. In reality it is and was impossible to pay for all the stolen data. Also LLM corpos not only didn't pay for the data, but they never even asked. I know it may be a surprise, but some people would refuse to sell their data to a mechanical parrot.
Outside of tech, I think the opinion is generally negative. AI has lost a lot of the narrative due to things like energy prices and layoffs.
I would agree with this and think it's about more than just your reasons, especially if you venture outside the US, at least from what I've experienced. I've seen it more where there are no AI tech hubs around and no way to "get in on the action". Blue-collar workers, who are less threatened and have less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there is a perceived lack of "street smarts" and self-preservation.
Anecdotally, it's almost like they see them as mad scientists who are happy blowing up themselves and the world if they get to play with the new toy; almost childlike, usually thinking they are doing "good" in the process. Which is seen as a sign of a lack of a certain kind of intelligence/maturity by most people.
ChatGPT is one of the most used websites in the world and it's used by the most normal people in the world, in what way is the opinion "generally negative"?
This is the epitome of the "yet you participate in society" gotcha.
No it's not. No one is forced to use ChatGPT, it got popular by itself. When millions use it voluntarily, that contradicts the 'generally negative' statement, even if there are legitimate criticisms of other aspects of AI.
I can criticize cities' overreliance on cars for transport, yet own a car and even use it sporadically. The same applies here.
ChatGPT for the common folk is used in the same way PirateBay is. Something can be "popular" and also "bad"
The argument was that common folk see it as "bad" which is clearly not the case.
Yes, and I made an argument supporting that "used" and "it's bad" are not mutually exclusive. You simply repeated what I responded to and asserted that your opinion is the right one.
It's clearly not that straightforward.
I get your argument but in this case it is that straightforward because it's not a forced monopoly like e.g. Microsoft Windows. Common folk decided to use ChatGPT because they think it is good. Think Google Search, it got its market position because it was good.
>Common folk decided to use ChatGPT because they think it is good.
That is not the only reason to use a tool you think is bad. "Good enough" doesn't mean "good". If you think it's better to generate an essay due in an hour than rush something by hand, that doesn't mean it's "good". If I decide to make a toy app full of useless branches, no documentation, and tons of sleep calls, it doesn't mean the program is "good". It's just "good enough".
That's the core issue here. "good enough" varies on the context, and not too many people are using it like the sales pitch to boost the productivity of the already productive.
I don't agree with your comments, especially using PirateBay as an example. Stating either as "bad" is purely subjective. I find both PirateBay and ChatGPT both good things. They both bring value to me personally.
I'd wager that most people would find both as "good" depending on how you framed the question.
We'll see how long that lasts with their new ad framework. Probably most normal people are put off by all the other AI being marketed at them. A useful AI website is one thing, AI forced into everything else is quite another. And then they get to hear on the news or from their friends how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
Go express a pro-AI opinion or link a salient, accurate AI output on reddit, and watch the downvotes roll in.
We are talking about the common folk here, not redditors.
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
What has this to do with what I wrote? Go take your class conflict somewhere else.
Globally, the opinion isn't generally negative. It's localized.
What does that mean?
Sure, I meant the anglosphere. But in most countries, the less aware people are of technology, or the less they use the internet, the less enthusiastic they are about AI.
I don't see the correlation between technology/internet use and man-on-the-street attitudes towards AI. Compare Sweden with Japan.
Some people find their life's meaning through craft and work. When that craft is suddenly less scarce, less special, so is the meaning tied to it.
I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.
I do enjoy programming, I like my job and take pride in it, but I actively try for it not to be the meaning-giving activity of my life. I'm just a mercenary of my trade.
The craft isn't any less scarce. If anything, only more. The craft of building wooden furniture is just as scarce as ever, despite the existence of Ikea.
Which is why the only woodworkers that survive are the ones with enough customers willing to pay premium prices for furniture, or lucky enough to live in countries where Ikea-like shops aren't yet a thing.
They are also the people who are able to see the most clearly how subpar generative-AI output is. When you can't find a single spot without AI slop to rest your eyes on and see it get so much praise, it's natural to take it as a direct insult to your work.
Yes, the general acceptance of generally mediocre AI output is quite frustrating.
Cool, you "made" that image that looks like ass. Great, you "wrote" that blog post with terrible phrasing and far too many words. Congrats, I guess.
I mean, I would still hate to be replaced by some chat bot (without being fairly compensated because, societally, it's kind of a dick move for every company to just fire thousands of people and then nobody can find a job elsewhere), but I wouldn't be as mad if the damn tools actually worked. They don't. It's one thing to be laid off, it's another to be laid off, ostensibly, to be replaced by some tool that isn't even actually thinking or reasoning, just crapping out garbage.
And I will not be replying to anyone who trots out their personal AI success story. I'm not interested.
The tech works well enough to function as an excuse for massive layoffs. When all that is over, companies can start hiring again. Probably with a preference for employees that can demonstrate affinity with the new tools.
On this topic I think it’s pretty off base to call HN a “well insulated bubble” - AI skepticism and outright hate is pretty common here and AI negative comments often get a lot of support. This thread itself offers plenty of examples.
> Some people really do hate AI
That's probably me for a lot of people. The reality is a bit more nuanced than that, namely:
- I hate VC funded AI which is actually super shallow (basically OpenAI/Claude wrappers)
- I hate VC funded genuine BigAI that sells itself as the literal opposite of what it is, e.g. OpenAI... being NOT open.
- I hate AI that hides its ecological cost. Generating text, videos, etc. is actually fascinating, but not if making the shittiest video with the dumbest script takes the same amount of energy I'd need to fly across the globe.
- I hate AI that hides its human cost, namely using cheap labor from "far away" where people have to label atrocities (murders, rape, child abuse, etc.) without being provided proper psychological support.
- I hate AI that embodies capitalist principles of exploitation. If somehow your entire AI business relies on an entire pyramid of everything listed above to capture a market then hike the price once dependency is entrenched you might be a brilliant business man but you suck as a human being.
etc... I could go on but you get the idea.
I do love open source public AI research though. Several of my very good friends are researchers in universities working on the topic. They are smart, kind and just great human beings. Not fucking ghouls riding the hype with 0 concern for our World.
So... yes, maybe AI haters have a slightly more refined perspective, but of course when one summarizes whatever text they see in 3 words via their favorite LLM, it's hard to see.
> making the shittiest video with the dumbest script is taking the same amount of energy I'd need to fly across the globe.
I get your overall point, but the hyperbole is probably unhelpful. Flying a human across the globe takes several MWh. That's billions of tokens created (give or take an order of magnitude...).
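The back-of-envelope version, with both numbers as loose assumptions (per-token energy estimates vary by model and hardware, easily an order of magnitude either way):

    # Assumed figures for illustration only.
    flight_energy_mwh = 5        # very roughly, one passenger's share of a long-haul flight
    joules_per_token = 2         # assumed inference cost per token; real figures vary widely

    flight_energy_joules = flight_energy_mwh * 1e6 * 3600   # 1 MWh = 3.6e9 J
    tokens = flight_energy_joules / joules_per_token
    print(f"{tokens / 1e9:.0f} billion tokens")              # ~9 billion with these inputs

So a single long-haul seat really does buy you billions of generated tokens under these assumptions, which is the point: the hyperbole cuts against the argument.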
Does your comparison include training, data center construction, GPU production, etc., or solely inference? (Genuine question, I don't know the total cost for e.g. Sora 2, only inference, which AFAIK is significant yet pales in comparison to everything upstream.)
No, that's one reason why there's at least an order of magnitude wiggle room there. I just took the first number for J/Token I found on arxiv from 2025. Choosing the exact model and hardware it runs on is also making a large difference (probably larger than your one-time upfront costs, since those are needed only once and spread out across years of inference).
My point is mobility, especially commercial flight, is extremely energy intense and the average westerner will burn much more resources here than on AI use. People get mad at the energy and water use of AI, and they aren't wrong, but right now it really is only a drop in the ocean of energy and water we're wasting anyways.
> right now it really is only a drop in the ocean of energy and water we're wasting anyways.
That's not what I heard. Maybe it was in 2024, but now data centers have their own category in energy consumption statistics, whereas until recently they were lumped under "other". I think we need to update our collective understanding of the actual energy consumed. It was all fun and games until recently, and slop was a kind of harmless consequence ecologically speaking, but from what I can tell, in terms of energy, water, etc. it is not negligible anymore.
Probably just a matter of perspective. It's a few hundreds of TWh per year in 2025 - that's huge, and it's growing quickly. But again, that's still only a small fraction of a percent of total human primary energy consumption during the same time.
You could say the same about the airplane: do the CO2 emissions that the airline states for my seat include building the plane, the R&D, training the pilot?
Sure, and I do; it's LCA https://en.wikipedia.org/wiki/Life-cycle_assessment. The problem IMHO is that the entire AI hype ecosystem hides everything it can about this behind the excuse of not giving information to competitors. We have CO2eq on model cards, but we don't have many datapoints on proprietary models running on Azure or wherever. At best we infer from research papers that are close enough, but we don't know for the most popular models, and that's quite problematic. The car industry did everything it could too, e.g. the Volkswagen scandal, so let's not repeat that.
Emphasis on ‘some’. Compare that to the article title!
I don’t think people hate models. They hate that techbros are putting LLMs in places they don’t belong … and then trying to anthropomorphize the thing finding what best rhymes with your prompt as “reasoning” and “intelligence” (which it isn’t).
> Some people really do hate AI
AGI? No, although it's not there yet. LLMs? Yes, lots. The main benefit they can give is to sort-of speed up internet search, but I have to go and check the sources anyway, so I'll revert back to 20+ years of experience doing it myself. Any other application of machine learning, such as almost-instant speech-to-text? No, that's useful.
> You were a therapy session for her. Her negativity was about the layoffs.
I think there is no "her", the article ends with saying:
> My former coworker—the composite of three people for anonymity—now believes she's [...]
I think it's just 3 different people and they made up a "she" single coworker as a kind of example person.
I don't know, that's my reading at least, maybe I got it wrong.
I hate to be cagey here but I just really don’t want to make anyone’s life harder than it needs to be by revealing their identity. Microsoft is a really tough place to be an employee right now.
The hate starts with the name. LLMs don't have the I in AI. It's like marketing a car as self-driving while all it can do is lane assist.
That's because there are at least 5 different definitions of AI.
- At its inception in 1955 it was "learning or any other feature of intelligence" simulated by a machine [1] (fun fact: both neural networks and computers using natural language were on the agenda back then)
- Following from that we have the "all machine learning is AI" which was the prevalent definition about a decade ago
- Then there's the academic definition that is roughly "computers acting in real or simulated environments" and includes such mundane and algorithmic things as path finding
- Then there's obviously AGI, or the closely related Hollywood/SciFi definition of AI
- Then there's just "things that the general public doesn't expect computers to be able to do". Back when chess computers used to be called AI this was probably the closest definition that fits. Clever sales people also used to love to call prediction via simple linear regression AI
Notably four out of five of them don't involve computers actually being intelligent. And just a couple years ago we still sold simple face detection as AI
1: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...
And yet, somehow, "it's not actually AI" has wormed its way into the minds of various redditors.
It's the opposite. It is doing the driving but you really have to provide lane assist, otherwise you hit the tree, or start driving in the opposite direction.
Many people claim it's doing great because they have driven hundreds of kilometers, but don't particularly care whether they arrived at the exact place, and are happy with the approximate destination.
Then what do they have?
Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?
It doesn't actually solve those math problems though, does it? It replies with a solution if it has seen one often enough in the training data, or with something that looks like a solution but isn't. At the end, a human still needs to check it.
Same for short stories, it doesn’t actually write new stories, it rehashes stories it (probably illegally) ingested in training data.
LLMs are good at mimicking the content they were trained on, they don’t actually adopt or extend the intelligence required to create that content in the first place.
Oh, I remember those talks. People actually checking whether an LLM's response is something that was in the training data, something that was online that it replicated, or something new.
They weren't finding a lot of matches. That was odd.
That was in the days of GPT-2. That was when the first weak signs of "LLMs aren't just naively rephrasing the training data" emerged. That finding was controversial, at the time. GPT-2 couldn't even solve "17 + 29". ChatGPT didn't exist yet. Most didn't believe that it was possible to build something like it with LLM tech.
I wish I could say I was among the people who had the foresight, but I wasn't. Got a harsh wake-up call on that.
And yet, here we are, in year 20-fucking-25, where off-the-shelf commercially available AIs burn through math competitions and one shot coding tasks. And people still say "they just rehash the training data".
Because the alternative is: admitting that we found an algorithm that crams abstract thinking into arrays of matrix math. That it's no longer human exclusive. And that seems to be completely unpalatable to many.
Based on the absolute trash I usually get out of ChatGPT, Claude, etc, I wouldn’t say that it writes “working” code.
You and I must be using very different versions of Claude. As an infra/systems guy (non-coder), the ability for me to develop some powerful tools simply by leveraging Claude has been nothing short of amazing. I started using Claude about 8 months ago and have since created about 20 tools ranging from simple USB detection scripts (for secure erasing SSDs) to complex tools like an Azure File Manager and a production-ready data migration tool (Azure to Snowflake). Yes, I know bash and some Python, but Claude has really helped me create tools that would have taken many weeks/months to build using the right technology stack. I am happy to pay for the Claude Max plan; it has returned huge dividends to my productivity.
And, maybe that is the difference. Non coders can use AI to help build MVPs and tooling they could otherwise not do (or take a long time to get done). On the other hand, professional coders see this as an intrusion to their domain, become very skeptical because it does not write code "their way" or introduces some bugs, and push back hard.
Every HN thread about AI eventually has someone claiming the code it produces is “trash” or “non-working.” There are plenty of top-tier programmers here who dismiss anyone who actually finds LLM-generated code useful, even when it gets the job done.
I’m tempted to propose a new law—like Poe’s or Godwin’s—that goes something like: “Any discussion about AI will eventually lead to someone insisting it can’t match human programmers.”
By that metric: do you?
Seeing an AI casually spit out an 800 lines script that works first try is really fucking humbling to me, because I know I wouldn't be able to do that myself.
Sure, it's an area of AI advantage, and I still crush AI in complex codebases or embedded code. But AI is not strictly worse than me, clearly. The fact that it already has this area of advantage should give you a pause.
Humbling indeed. I am utterly amazed at Claude's breadth of knowledge and ability to understand the context of our conversations. Even if I misspell words, don't use the exact phrase, or call something a function instead of a thread, Claude understands what I want and helps make it happen. Not to mention the ability to read hundreds of lines of debug output and point out a tiny error that caused the bug.
See also hoverboards
The layoffs are due to tax incentives in the tax cut bills that financially incentivize offshoring work.
I think these companies would benefit from honesty, if they're right and their new AI capabilities are really powerful then poisoning their workforce against AI is the worst thing they could do right now. Like a generous severance approach and compassionate layoffs would go a long way right now.
Thanks for signing up. I'm going to try really hard to open up some beta slots next week so more people can try it. There are some embarrassingly bad bugs in prod right now…
>FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
Close. We're in a recession and they are using AI as an excuse for another wave of outsourcing.
>I don't think people hate AI, they hate the hype.
I hate the grift. I hate having it forced on me after refusing multiple times. That's pretty much 90% of AI right now.
I was an early employee at a unicorn and I saw this culture take hold once we started hiring from Big Tech talent pools and offering Big Tech comp packages, though before AI hype took off. There's a crazy lack of agency that kicks in for Big Tech folks that's really hard to explain. This feeling that each engineer is this mercenary trying really hard to not get screwed by the internal system.
Most of it is because there's little that ties actual output to organizational outcomes. AI mandates, after all, are just a way to bluntly force engineers to use AI, where if you were at a startup or smaller company you would probably organically find how much an LLM helps you and where. It may not even help your actual work even if it helps your coworkers. That market feedback is sorely missing from the Big Techs, and so ham-fisted engineering mandates have to do in order to force engineers to become more efficient.
In these cases I always try to remind friends that you can always leave a Big Tech. The thing is, from what I can tell, a lot of these folks have developed lifestyle inflation from working in Big Tech and some of their anger comes from feeling trapped in their Big Tech role due to this. While I understand, I'm not particularly sympathetic to this viewpoint. At the end of the day your lifestyle is in your hands.
It’s not only the hype though.
What about the complete lack of morality some (most?) AI companies exhibit?
What about the consequences in the environment?
What about the enshitification of products?
What about the usage of water and energy?
Etc.
Is your "etc." keep repeating the same two points you did in your list of four?
What about the RAM price surge?
What about diverting funding from much more useful and needed things?
What about automation of scams, surveillance, etc?
I can keep going.
There are plenty of reasons to hate on AI beyond hype.
> What about the RAM price surge?
It's a bit more expensive. It's not the end of the world. Production will likely increase if the demand is consistent.
> What about diverting funding from much more useful and needed things?
And who determines that? People put their money where they want to. People think AI will provide value to other people; those people will, therefore, pay money for AI. So the funding that AI is receiving is directly proportional to how useful and needed people think AI is. I disagree, but I'm not a dictator.
> What about automation of scams, surveillance, etc?
Technology makes things easier, including bad things. This isn't the first time this has happened and it won't be the last. It also makes avoiding those things easier, though that usually lags a bit behind.
> I can keep going.
Please do because it seems like you're grasping at straws.
Not to diminish your overall point, but enshittification has been happening well before AI, AI just made it much easier and faster to enshittify everything.
But AI allows us to make customised enshittification, think of the possibilities!
We don't need to spend money on customer support!
hell we don't need customers. we'll just get MS or Nvidia to invest in us, while leaning on their offerings.
Why would you not hate AI? What is there to like?
It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.
Fuck Microsoft for both hating me and hating their own people. Fuck. That. Shit.
> It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.
That's a great way to describe it. There's a good article that points out AI is the new aesthetic of fascism. And, of course, in Miyazaki's words, "I strongly feel that this is an insult to life itself."
> Engineers don't try because they think they can't.
This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.
There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.
I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.
So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).
A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.
New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.
"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.
One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.
> the other engineers have lost all ambition for anything else
Worse, they've lost all funding for anything else.
Industries are built upon shit people built in their basements, get hacking
I think it should be noted that a garage or basement in California costs like a million dollars.
That was true before Crypto and AI.
I am! No one's interested in any of it though...
You need to buy fake stars on GitHub, fake-download it 2 million times a day, and ask an AI to spam about it on Twitter/LinkedIn.
ZIRP is gone, and so are the Good Times when any idiot could get money with nothing but a PowerPoint slide deck and some charisma.
That doesn't mean investors have gotten smarter, they've just become more risk averse. Now, unless there's already a bandwagon in motion, it's hard as hell to get funded (compared to before at least).
Are you sure it refers to that? Why would it later say:
> now believes she's both unqualified for AI work
Why would she believe she's unqualified for AI work if "Engineers don't try" wasn't about her trying to adopt AI?
“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”
To add another layer to it, the reason execs are foaming at the mouth is because they are hoping to fire as many people as possible. Including those who implemented whatever AI solution in the first place.
The most ironic part is that AI skills won't really help you with job security.
You touched on some of the reasons; it doesn't take much skill to call an API, the technology is in a period of rapid evolution, etc.
And now with almost every company trying to adopt "AI" there is no shortage of people who can put AI experience on their resume and make a genuine case for it.
Maybe not what the OP or article is talking about, but it's super frustrating recently dealing with non/less technical mgrs, PMs, etc who now think they have this Uno card to bypass technical discussion just because they vibe coded some UI demo. Like no shit, that wasn't the hard part. But since they don't see the real/less visible parts like data/auth/security, etc, they act like engineers "aren't trying", are less innovative, anti-AI or whatever when you bring up objections to the "whole app" they made with their AI snoopy snow cone machine.
Hmm, (whatever is in execs' heads about) AI appears to amplify the same kind of thinking fallacies that are discussed in the eternal Mythical Man-Month essay, which was written like half a century ago. Funny how some things don't change much...
My experience too. They are so convinced that AI is magical that pushing back makes you look bad.
Then things don't turn out as they expected and you have to deal with a dude thinking his engineers are messing with him.
It's just boring.
It reminds me of how we moved from "mockups" to "wireframes" -- in other words, deliberately making the appearance not look like a real, finished UI, because that could give the impression that the project was nearly done
But now, to your point: they can vibe-code their own "mockups" and that brings us back to that problem
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.
There's a lot of disconnected-from-reality hustling (a.k.a lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.
Yes, much the way some used to (still do?) try and emulate Steve Jobs. There's always some successful person these types are trying to be.
I’ve been an engineer for 20 years, for myself, small companies, and big tech, and now working for my own saas company.
There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.
Punishment eh? Serves them right for being skeptical.
I've been around long enough that I have seen four hype cycles around AI-like coding environments. If you think this is new you should have been there in the 80's (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the 60's (which I did not personally witness on account of being a toddler), when COBOL, the language for managers, was all the rage.
In between there was LISP, the AI language (and a couple of others).
I've done a bit more than looking at this and saying 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive and they might be a little bit more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field the smaller your returns in that particular setting will be.
In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.
Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than wikipedia for a quick overview of some subfield (but you better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers that are not using AI are failing at their job, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI driven coding projects strand hopelessly by now to know that there are downsides to that golden acorn that you are seeing.
The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification the answers were so laughably incorrect that it was embarrassing.
I'm not a big llm booster, but I will say that they're really good for proof of concepts, for turning detailed pseudocode into code, sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuch), lived through a few attempts at visual programming (ugh), and ... LLM assistance is different. It's not magic and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.
And for the better. I've honestly not had this much fun programming applications (as opposed to students stuff and inner loops) in years.
> but it does quite well with boring stuff that's still a substantial amount of programming.
I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do, I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD but I am never limited by my typing speed, much more so by thinking about what it is that I actually want to make.
I've used it a little for openscad with mixed results - sometimes it worked. But I'm a beginner at openscad and suspect if I were better it would have been faster to just code it. It took a lot of English to describe the shape - quite possibly more than it would have taken to just write in openscad. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([3, 2, 5]) ... and as you say, the hard part is before the openscad anyway.
OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch. Instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough) you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.
Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices the edges take care of themselves.
This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs using a traditional step-by-step UI driven cad tool it is incomparable. It's like exploratory programming, but for physical objects.
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.
IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.
The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.
> If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.
That's a very interesting observation. I think I'm safe for now ;)
> simply because the market has never really punished people for being less efficient at their jobs
In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.
Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).
In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.
This is very true. So you can't just ask people to use AI and expect better output even if AI is all the hype. The bottlenecks are not how many lines of code you can produce in a typical big team/company.
I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient: for them, the sheer generation of sometimes-boilerplate, low-quality code actually is a bottleneck.
Sadly capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same/colludes, restricting output and working less usually commands more money up to a certain point (prices are set more as a monopoly in these markets). It's just that scarcity was in the past correlated with difficulty, which made it "somewhat fair" -> AI changes that.
It's why unions, associations, professional bodies, etc exist, for example. This whole thread is an example -> the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.
I think part of this is that there is no one AI and there is no one point in time.
The other day Claude Code correctly debugged an issue for me, that was seen in production, in a large product. It found a bug a human wrote, a human reviewed, and fixed it. For those interested, the bug had to do with chunk decoding: the author incorrectly re-initialized the decoder inside the loop for every chunk. So a single chunk works; more than one chunk fails.
I was not familiar with the code base. Developers who worked on the code base spent some time and didn't figure out what was going on. They also were not familiar with the specific code. But once Claude pointed this out that became pretty obvious and Claude rewrote the code correctly.
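The comment doesn't say what language or decoder the product uses, so purely as an illustration of that class of bug, here's a minimal Python sketch with an incremental UTF-8 decoder as a stand-in (the decoder choice and the sample data are made up): re-creating the decoder inside the loop throws away the partial multi-byte state between chunks, so a single chunk decodes fine while a character split across two chunks blows up.

    import codecs

    def decode_buggy(chunks):
        # Bug: a fresh decoder per chunk loses any partial multi-byte
        # character carried over from the previous chunk.
        out = []
        for chunk in chunks:
            decoder = codecs.getincrementaldecoder("utf-8")()  # re-initialized every iteration
            out.append(decoder.decode(chunk))
        return "".join(out)

    def decode_fixed(chunks):
        # Fix: one decoder for the whole stream keeps the partial-character state.
        decoder = codecs.getincrementaldecoder("utf-8")()
        out = [decoder.decode(chunk) for chunk in chunks]
        out.append(decoder.decode(b"", final=True))
        return "".join(out)

    data = "héllo".encode("utf-8")
    print(decode_buggy([data]))                 # one chunk: works
    print(decode_fixed([data[:2], data[2:]]))   # split chunks: works
    print(decode_buggy([data[:2], data[2:]]))   # split chunks: UnicodeDecodeError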
So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.
That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.
It is automation on steroids with near-human (let's say intern-level) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.
>So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.
If the stories were more like this where AI was an aid (AKA a fancy auto complete), devs would probably be much more optimistic. I'd love more debugging tools.
Unfortunately, the lesson an executive here would see is "wow AI is great! fire those engineers who didn't figure it out". Then it creeps to "okay, have AI make a better version of this chunk decoder". Which is wrong on multiple levels. Can you imagine if the result of using IntelliSense for the first time was to slash your office in half? I'd hate autocomplete too.
What's "there" though is that despite being wrappers of ChatGPT, the product itself is so compelling that it's essentially got a grip on the entire American economy. That's why everyone's crabs in a bucket about it, there's something real that everyone wants to hitch on to. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.
>there's something real that everyone wants to hitch on to.
Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.
Granted, I think its hastiness will lead to a crash, so the CEOs played themselves short term.
Sure, but under it all there's something of value... that's why it's a much larger hype wave than dick pills
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.
I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.
And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.
Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.
What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".
It is however much less of a shallow dismissal of a tool than your shallow dismissal of a person, or in fact a large group of persons.
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
>To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems
This feels like a mentality of "a solution trying to find a problem". There's enough actual problems to solve that I don't need to create more.
But sure, the extension of this is: they go home, research more use cases, see a kerfuffle of legal, community, and environmental concerns, and then decide not to get involved in the politics.
>Either way, the market is going to punish them accordingly.
If you want to punish me because I gave evaluations you disagreed with, you're probably not a company I want to work for. I'm not a middle manager.
It really depends on what you’re doing. AI models are great at kind of junior programming tasks. They have very broad but often shallow knowledge - so if your job involves jumping between 18 different tools and languages you don’t know very well, they’re a huge productivity boost. “I don’t write much sql, or much Python. Make a query using sqlalchemy which solves this problem. Here’s our schema …”
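To make the "broad but shallow" point concrete, this is roughly the flavor of thing such a prompt hands back, sketched in Python with SQLAlchemy's 2.0-style API (the users table and the "signups per country" question are invented for the sketch; the actual schema is elided in the comment):

    from sqlalchemy import (MetaData, Table, Column, Integer, String,
                            DateTime, create_engine, func, select)

    engine = create_engine("sqlite:///example.db")  # stand-in connection string
    metadata = MetaData()

    # Hypothetical schema, invented for the example.
    users = Table(
        "users", metadata,
        Column("id", Integer, primary_key=True),
        Column("country", String),
        Column("signed_up_at", DateTime),
    )
    metadata.create_all(engine)

    # "Latest signup and user count per country, busiest countries first."
    stmt = (
        select(
            users.c.country,
            func.max(users.c.signed_up_at).label("latest_signup"),
            func.count().label("n_users"),
        )
        .group_by(users.c.country)
        .order_by(func.count().desc())
    )

    with engine.connect() as conn:
        for row in conn.execute(stmt):
            print(row.country, row.latest_signup, row.n_users)

Nothing in there is hard for someone who writes SQLAlchemy daily, which is exactly the point: the payoff is in jumping between 18 tools you don't know very well.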
AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.
I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction
Or, and stay with me on this, it’s a reaction to the actual experience they had.
I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.
Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.
I would've thought that in 20 years you would have met other devs who do not think like you?
something I enjoy about our line of work is there are different ways to be good at it, and different ways to be useful. I really enjoy the way different types of people make a team that knows its strengths and weaknesses.
anyway, I know a few great engineers who shrug at the agents. I think different types of thinker find engagement with these complex tools to be a very different experience. these tools suit some but not all and that's ok
This is the correct viewpoint (in my opinion, of course). There are many ways that lead to a solution, some are better, some are worse, some are faster, some much slower. Different tools and different strokes for different folks and if it works for you then more power to you. That doesn't mean you get to discard everybody for whom it does not work in exactly the same way.
I think a big mistake junior managers make is that they think that their nominal subordinates should solve problems the way that they would solve them, without recognizing that there are multiple valid paths and that it doesn't so much matter which path is chosen as long as the problem is solved on time and within the allocated budget.
I use AI all the time, but the only gain it has is better spelling and grammar than mine. Spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it - typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code the AI writes since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.
My dismissal I think indicates exhaustion from the additional work I’d need to do to make an LLM write my code, annoyance at its inaccuracies, and disgust at the massive scam and grift that is the LLM influencers.
Writing code via an LLM feels like writing with a wet noodle. It's much faster to write what I mean, myself, with the terse was and precision of my own thought.
> with the terse was and precision of my own thought
Hehe. So much for precision ;)
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
I have solved more problems with tools like sed and awk, you know, actual tools, than I've entered tokens into an LLM.
Nobody seemed to give a fuck as long as the problem was solved.
This is getting out of hand.
Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.
> A whole new class of problems just became solvable.
This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions will have existed in droves to have any semblance of utility to the LLM.
If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.
So what do you really mean when you say that a new class of problems became solvable?
But sed and awk are problems.
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,
I don't understand why people seem so impatient about AI adoption.
AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (Only true for a subset of tasks). In areas where either of these cases apply, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product and good products take time.
> I don't understand why people seem so impatient about AI adoption.
We're burning absurd, genuinely farcical amounts of money on these tools now, so of course they're impatient. There's Trillions (with a "T") riding on this massive hypewave, and the VCs and their ilk are getting nervous because they see people are waking up to the reality that it's at best a kinda useful tool in some situations and not the new God that we were promised that can do literally everything ever.
Well that's capital's problem. Don't make it mine!
Well said!
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)
I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)
FTR The market is currently punishing people that DO use it (CVs are routinely being dumped at the merest hint of AI being used in its construction/presentation, interviewers dumping anyone that they think is using AI for "help", code reviewers dumping any take home assignments that have even COMMENTS massaged by AI)
This isn’t “unfair”, but you are intentionally underselling it.
If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.
Edit: lol this forum :)
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right
I AM very impressed, and I DO use it and enjoy the results.
The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.
Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.
But I'm already done getting used to what is possible now. Changes after that have been incremental, nice to have and I take them. I found a place for the tool, but if it wanted to match the hype another equally large step in actual intelligence is necessary, for the tool to truly be able to replace humans.
So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.
I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.
I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.
I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.
The big problem is AI is amazing at doing the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble ticket system and tell it to go fix the issues it will be hopeless. Once your system gets complex enough the AI effectiveness drops off rapidly and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.
In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.
I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.
I think the 90/90 rule comes into play. We all know Tom Cargill's quote (even if we've never seen it attributed):
The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.
It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.
So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.
The other problem is that if you didn't actually write the first 90% then the second 90% becomes 2x harder since you have to figure out wtf is actually going on.
The more I use AI for coding the more I realize that it's a toy for vibe coding/fun projects. It's not for serious work.
When you work with a large codebase with a very high complexity level, the bugs AI puts in there will not be worth the easily added features.
Many people also program and have no idea what a giant codebase looks like.
I know I don't. I have never been paid to write anything beyond a short script.
I actually can't even picture what a professional software engineer actually works on day to day.
From my perspective, it is completely mind blowing to write my own audio synth in python with Librosa. A library I didn't know existed before LLMs and now I have a full blown audio mangling tool that I would have never been able to figure out on my own.
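(The commenter's actual tool isn't shown; as a guess at the flavor of thing librosa makes easy, here's a tiny Python sketch. The tone/chirp material and the parameters are invented for illustration; the librosa and soundfile calls themselves are real APIs.)

    import librosa
    import soundfile as sf

    sr = 22050

    # Generate a couple of seconds of raw material: a pure tone and a sweep.
    tone = librosa.tone(220.0, sr=sr, duration=2.0)
    sweep = librosa.chirp(fmin=110.0, fmax=880.0, sr=sr, duration=2.0)

    # "Mangle" it: slow the tone down, pitch the sweep up, then overlay them.
    slow = librosa.effects.time_stretch(tone, rate=0.5)
    high = librosa.effects.pitch_shift(sweep, sr=sr, n_steps=7)

    n = min(len(slow), len(high))
    mix = 0.5 * slow[:n] + 0.5 * high[:n]

    sf.write("mangled.wav", mix, sr)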
It seems to me professional software engineering must be at least as different to vibe coding as my audio noodlings are to being a professional concert pianist. Both are audio and music related but really two different activities entirely.
I work on a stock market trading system in a big bank, in Hong Kong.
The code is split between a backend in Java (no GC allowed during trading) and C++ (for algos), a frontend in C# (as complex as the backend, used by 200 traders), and a "new" frontend in Javascript in infinite migration.
Most of the code was written before 2008, but that was the cvs to svn switch so we lost history before that. We have employees dating back to 1997 who remember that platform already existing.
It's made of millions of lines of code, hundreds of people worked on it, it does intricate things in 10 stock markets across Asia (we have no clue how the others in US or EU do, not really at least - it's not the same rules, market vendors, protocols etc)
Sometimes I need to configure new trading robots for random little things we want to do automatically, and I ask the AI the company is shoving down our throats. It is HOPELESS, literally hopeless. I wrote a review for my manager that was absolutely scathing; they will never pass it up the ladder for fear of the response. It cannot understand the code let alone write some, it cannot write the tests, it cannot generate configuration, it cannot help with anything. It's always wrong, it never gets it, it doesn't know what the fuck these 20 different repos of thousands of files are and how they connect to each other, why it's in so many languages, why it's so quirky sometimes.
Should we change it all to make it AI compatible, or give up? Fuck do I know... When I started working on it 7 years ago, coming from little startups doing little things, it took me a few weeks to totally get the philosophy of it all and be productive. It's really not that hard, it's just really really really really large, so you have to embrace certain ways of working (for instance, you'll write bugs, and you'll find them too late, and you'll apologize in post mortems; don't be paralyzed by it). AIs that cost all that money and are still so dumb and useless are disappointing :(
There’s a reason why it’s so much better at writing JavaScript than HFT C++.
The latter codebase doesn’t tend to be in github repos as much.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Or your job isn't what AI is good at?
AI seems really good at greenfield projects in well known languages or adding features.
It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.
This is precisely my experience.
Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.
Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.
> It's been pretty awful, IME, at working with less well-known languages
Well, there’s your problem. You should have selected React while you had the chance.
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
I wonder if this issue isn't caused by people who aren't programmers, and now they can churn out AI generated stuff that they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they didn't bother with before. But the real artisan cat memeists just roll their eyes.
AI is better than you at what you aren’t very good at. But once you are even mediocre at doing something you realize AI is wrong / pretty bad at doing most things and every once in awhile makes a baffling mistake.
There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.
>AI is better than you at what you aren’t very good at.
Yes, this is better phrased.
Your whole comment reads like someone who is a victim of hype.
LLMs are great in their own way, but they're not a panacea.
You may recall that magic is a way to trick people into believing things that are not true. The mythical form of magic doesn't exist.
> If you haven’t had a mind blown moment with AI yet...
Results are stochastic. Some people the first time they use it will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and will get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.
Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.
> Edit: lol this forum :)
Indeed.
In European consulting agencies the trend now is to make AI part of each RFP reply; you won't get past the sales team if AI isn't crammed in there as part of the solution being delivered, and we get evaluated for it.
This takes all the joy away; even traditional maintenance projects for big corps seem attractive nowadays.
I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed into anywhere it would fit.
You know what, this clarifies something for me.
PC, Web and Smartphone hype was based on "we can now do [thing] never done before".
This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.
>> This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"
I was doing RPA (robotic process automation) 8 years ago. Nobody wanted it in their departments. Whenever we would do presentations, we were told to never, ever, ever talk about this technology replacing people - it only removes the mundane work so teams can focus more on the bigger scope stuff. In the end, we did dozens and dozens of presentations and only two teams asked us to do some automation work for them.
The other leaders had no desire to use this technology because they were not only fearful of it replacing people on their teams, they were fearful it would impact their budgets negatively so they just quietly turned us down.
Unfortunately, you're right because as soon as this stuff gets automated and you find out 1/3rd of your team is doing those mundane tasks, you learn very quickly you can indeed remove those people since there won't be enough "big" initiatives to keep everybody busy enough.
The caveat was even on some of the biggest automations we did, you still needed a subset of people on the team you were working with to make sure the automations were running correctly and not breaking down. And when they did crash, since a lot of these were moving time sensitive data, it was like someone just stole the crown jewels and suddenly you need two war rooms and now you're ordering in for lunch.
Yes and no. PC, Web, etc. advancements were also about lowering cost. It’s not that no one could do something, it’s that it was too expensive for most people, e.g. having a mobile phone in the 80’s.
Or hiring a mathematician to calculate what is now done in a spreadsheet.
100%.
"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.
or 'interactive' or 'cloud' (early 2010s).
Same, doesn't make this hype phase more bearable though.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product
I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something, and hype plays here, and marketing, religion and political persuasion too. In the real-world plane, it is all about tangible outcomes, and working code or results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two. I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.
>a broader dichotomy between the people-persuasion plane and the real-world-facts plane
This right here is the real thing which AI is deployed to upset.
The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.
The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.
That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.
My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended, and the productivity/wages gap began to grow. Something pertaining to the shared-culture-plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact how nobody seems to trust it very much any more.
This sounds a lot like the Marxist concept of alienation: https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation
Probably what it is, yeah. It's in the water.
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
I've never worked at Microsoft. However, I do have some experience with the company.
I worked building tools within the Microsoft ecosystem, both on the SQL Server side, and on the .NET and developer tooling side, and I spent some time working with the NTVS team at Microsoft many years ago, as well as attending plenty of Microsoft conferences and events, working with VSIP contacts, etc. I also know plenty of people who've worked at or partnered with Microsoft.
And to me this all reads like classic Microsoft. I mean, the article even says it: whatever you're doing, it needs to align with whatever the current key strategic priority is. Today that priority is AI, 12 years ago it was Azure, and on and on. And, yes, I'd imagine having to align everything you do to a single priority regardless of how natural that alignment is (or not) gets pretty exhausting, and I'd bet it's pretty easy to burn out on it if you're in an area of the business where this is more of a drag and doesn't seem like it delivers a lot of value. And you'll have to dogfood everything (another longtime Microsoft pattern) core to that priority even if it's crap compared with whatever else might be out there.
But I don't think it's new: it's simply part and parcel of working at Microsoft. And the thing is, as a strategy it's often served them well: Windows[0], Xbox, SQL Server, Visual Studio, Azure, Sharepoint, Office, etc. Doesn't always work, of course: Windows Phone went really badly, but it's striking that this kind of swing and a miss is relatively rare in Microsoft's history.
And so now, of course, they're doing it with AI. And, of course, they're a massive company, so there will be plenty of people there who really aren't having a good time with it. But, although it's far from a foregone conclusion, it would not be a surprise for Microsoft to come from behind and win by repeating their usual strategy... again.
[0] Don't overread this: I'm not necessarily saying I'm a huge fan. In fact I do think Windows, at its core, is a decent operating system, and has been for a very long time. On the back end it works well, and I have no complaints. But I viscerally despise Windows 11 as a desktop operating system. That's right: DESPISE. VISCERALLY. AT A MOLECULAR LEVEL.
> But moving toward one pole moves you away from the other.
My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes on placing those bets and you don't necessarily have to be right or wrong in an absolute sense, just long enough that someone else will take over your load and hopefully at a higher valuation.
Engineers are not usually the ones placing the bets, which is why they're trying to stay away from hype driven tech (to them it is neutral with respect to the outcome but in case of a failure they lose their job, so better to work on things that are not hyped, it is simply safer). But as soon as engineers are placing bets they are just as irrational as every other class of investor.
I do assume that, I legitimately think it's the most important thing happening in the next decade in tech. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.) and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
This somewhat reflects my sentiment to this article. It felt very condescending. This "self-limiting beliefs" and the implication that Seattle engineers are less than San Francisco engineers because they haven't bought into AI...well, neither have all the SF engineers.
One interesting take away from the article and the discussion is that there seem to be two kinds of engineers: those that buy into the hype and call it "AI," and those that see it for the fancy search engine it is and call it an "LLM." I'm pretty sure these days when someone mentions "AI" to me I roll my eyes. But if they say, "LLM," ok, let's have a discussion.
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.
> often companies with real products will mix in tidbits of hype
The wealthiest person in the world relies entirely on his ability to convince people to accept hype that surpasses all reason.
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.
Spot. Fucking. On.
Thank you.
The list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators", displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc). Put on top of that the way corporate America is absolutely doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, and you have a huge problem that hopefully will shake out some in the coming years.
But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.
> There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it's not for them, but to claim it's "hype", that's not possible.
I've tried implementing features with Claude Code Max and if I had let that go on for a week instead of just a couple of days I would've lost a week's worth of work (it was pretty immediately obvious that it was too slow at doing pretty much everything, and even the slightest interaction with the LLM caused very long round-trips that would add additional time, over and over and over again). It's possible people simply don't do the kind of things I do. On the extreme end of that, had I spent my days making CRUD apps I probably would've thought it was magic and a "game changer"... But I don't.
I actually don't have a problem believing that there are people who basically only need to write 25% of their code now; if all you're doing for work is gluing together libraries and writing boilerplate then of course an LLM is going to help with that, you're probably the 1000th person that day to ask for the same thing.
The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.
P.S.:
I have found that very short completions, 1-3 lines, are a lot more productive for me personally than any kind of "generate this feature", or even function-sized generation. The reason is likely that LLMs just suck at the things I do, but they can figure out that a pattern exists in the pretty immediate context and just spit out that pattern with some context clues nearby. That remains my best experience with any and all LLM-assisted coding. I don't use it often because we don't allow LLMs for work, but I have a keybind for querying for a completion when I do side projects.
My current job/role combination has me working on a variety of projects which feature tasks to be done in Python/SQLAlchemy (which I maintain), Go, k8s, Ansible, Bash, Groovy, Java, Typescript, JavaScript, etc. If I'm doing an architecture-intensive thing in SQLAlchemy, obviously I'm not going to say "Claude here go do this feature for me". I will have it do things like write change notes (where I'll write out the changelog in the convoluted and overly technical way I can do in 10 seconds, and it produces something presentable and readable from it), set up test cases, and sometimes I will give it very specific instructions for a large refactoring that has a predictable pattern (basically, instead of me figuring out a complex search and replace or doing it manually). For stuff I do in Ansible and especially Groovy (a horrible language which heavily resists being lintable), these are very simple declarative playbooks or Jenkins pipeline jobs, and I use Claude heavily to write out directives and such because it will do so without syntax errors and without me having to google every individual pattern or directive; it's much easier to check what it writes and debug from there. But I'm also not putting Claude in charge in these places; it's doing the boring stuff for me and doing it a lot faster and without my having to spend cognitive overhead (which is at a premium when you're in your late 50s like me).
> The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.
see there you go, I have things like this I have to figure out many times per week. so many of them are one-off things I really dont need to learn deeply at the moment (like TypeScript). It's also very helpful to bounce off ideas, like when I need to achieve something in the Go/k8s realm, it can sanity check how I'm approaching a problem and often suggest other ways that I would not have considered (which it knows because it's been trained on millions of tech blogs).
> the list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow.
My company is basically writing blank cheques for "AI" (aka LLM, I hate the way we've poisoned AI as a term) tooling so that people can use any and all tooling they want and see what works and doesn't. This is a company with ~1500ish engineers, ranging from hardware engineers building POS devices to the junior frontenders building out our simplest UIs. There's also a whole lot more people who aren't technical, and they're also encouraged to use any and all AI tooling they can.
Despite the entire company trying to figure out how to use these tools effectively, precisely because we're trying to look at things objectively and separate the hype from the reality, the only people I've seen with any kind of praise so far (and this has been going on since the early ChatGPT days) have been people in Marketing and Sales, because for them it doesn't matter if the AI hallucinates some pure bullshit since that's 90% of their job anyway.
We have spent god knows how much time and resources trying to get these tools doing anything more useful than simple demos that get thrown out immediately, and it's just not there. No one is pushing 100x the code or features they were before, projects aren't finishing any faster than they were before, and nobody even bothers turning on the meeting transcription tools anymore, because more often than not they'll interpret things said in the meeting just plain wrong or even make up entire discussion points that were never had.
Just recently, like last-week recently, we had some idiotic PR review bot from coderabbit or some other such company activated. I've never seen so many people complain all at once on Slack; there was a thread with hundreds of individuals all saying how garbage it was and how much it was distracting from reviews. I didn't see a single person say they liked the tool, not one single person had anything good to say about it.
So as far as I'm concerned, it's just a MASSIVE fucking hype bubble that will ultimately spawn some tooling that is sorta useful for generating unimportant scripts, but little else.
Never give an LLM to your junior engineers. The LLM itself is mostly like a junior engineer and will make a complete mess of things if not guided by someone with a lot of experience.
Basically if people are producing code or documentation that looks like an LLM wrote it, that's not really what I see as the model that makes these tools useful.
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It’s sad; I always thought of my fellow engineers as more open-minded.
> The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.
so, people with experience?
I've been programming for more than 40 years.
Obviously. Turns out experience can be self-limiting in the face of paradigm-shifting innovation.
In hindsight it makes sense, I’m sure every major shift has played out the same way.
> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.
It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.
Bitcoin is at 93k so I don’t think it’s entirely accurate to say blockchain is insubstantive or without value
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
I'd argue that, if no one paid money for Bitcoin, even dogshit would have more practical use. You can throw it for self-defence, compost it (under high heat to kill the germs), or put it on your property to scare away raccoons (it works sometimes).
Bitcoin and other crypto coins have a practical use. You can use them to buy whatever is being sold on the darkweb with the main product categories being drugs and guns. I honestly believe much of the valuation of Crypto is tied to these marketplaces.
And by "dog feces," I assume you mean fiat currency, correct?
Cryptocurrency solves the money-printing problem we've had around the world since we left the gold standard. If governments stopped making their currencies worthless, then bitcoin would go to zero.
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.
Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.
True, but then so is a lot of "tech". There were certainly at least equivalent social applications before and all throughout Facebook's dominance, but like Bitcoin, the network effect becomes primary after a minimum feature set.
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.
Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.
Technically stagnant is a good thing; I'd prefer the term technically mature. It's accomplished what it set out to do, which is to be a decentralized, anonymous form of digital currency.
The only thing that MIGHT kill it is if governments stopped printing money.
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
Very little of the trading actually happens on the blockchain; it's only used to move assets between trading venues.
The values of bitcoin are:
- easy access to trading for everyone, without institutional or national barriers
- high leverage to effectively easily borrow a lot of money to trade with
- new derivative products that streamline the process and make speculation easier than ever
The blockchain plays very little part in this. If anything it makes borrowing harder.
I agree with "easy access to trading for everyone, without institutional or national barriers"
how on earth does bitcoin have anything to do with borrowing or derivatives?
in a way that wouldn't also work for beanie babies
Those are the main innovations tied to crypto trading. They do indeed have little to do with the blockchain or bitcoin itself, and do apply to any asset.
There are actually several startups whose pitch is to bring back those innovations to equities (note that this is different from tokenized equities).
If you can't point to real use cases at scale, it's hard to argue it has intrinsic value, even though it may have speculative value.
With almost zero fundamentals. That’s the part you are glossing over.
Uh… So the argument here is that anticipated future value == meaningful value today?
The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value. It’s a store of it - again, assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping engineer stability to maintain it as a value store, but you need the story to keep propagating to achieve those ends.
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
It’s hard to understand how people can be so determined to ignore reality.
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).
AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?
> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.
What do you view as the potential that’s been stated?
Not OP but for starters LLMs != AI
LLMs are not an intelligence, and people who treat them as if they are infallible Oracles of wisdom are responsible for a lot of this fatigue with AI
>Not OP but for starters LLMs != AI
Please don't do this; don't make up your own definitions.
Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the term has been defined since the beginning doesn't mean you get to reframe it.
In addition, if humans are not infallible oracles of wisdom, then by your definition they wouldn't be an intelligence either.
Why, then, is there an AI-powered dishwasher but no AI car?
https://www.tesla.com/fsd ?
I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
Cannot find any mention of AI there.
Also it's funny how they add (supervised) everywhere. It looks like "Full self driving (not really)"
Yes one needs some awareness of the technology. Computer vision: unambiguously AI, motion planning: there are classical algorithms but I believe tesla / waymo both use NNs here too.
Look I don't like the advertising of FSD, or musk himself, but we without a doubt have cars using significant amounts of AI that work quite well.
It's because nobody was trying to take video game behavior scripts and declare them the future of all things technology.
Ok? I'm not going to change the definition of a 70-year-old field because people are annoyed at ChatGPT wrappers.
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.
In those cases the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) LLM.
(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.
When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM, but obviously that bespoke code is much more interesting to me as a fellow programmer.)
Yes ok then I definitely agree
Shells around chatgpt are fine if they provide value.
Way better than AI jammed into every crevice for no reason.
It's not just that AI is being pushed onto employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is eliminating jobs in the tech industry and outside it, a self-fulfilling prophecy driven by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and this is what all the negativity is about.
As a layoff justification and a hurry-up tool, it is pretty loathsome. People use their jobs for their housing, food, etc.
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended in new forms to the labor situation in an AI-based economy. FWIW, humans derive a lot of their self-evaluation as people from labor.
Marx was correct in his identification of the problem (the communist manifesto still holds up today). Marx went off the rails with his solution.
Getting everyone to even agree that this is a problem is impossible. I'm open to the universe of solutions, as long as it isn't "Anthropic and OpenAI get another $100 billion while we starve". We can probably start there.
It's a problem, it's just not the root problem.
The root problem is nepo babies.
Whether it's capitalism or communism or whatever China has currently - it's all people doing everything to give their own children every unfair advantage and lie about it.
Why did people flee to America from Europe? Because Europe was nepo baby land.
Now America is nepo baby land and very soon China will be nepo baby land.
It's all rather simple. Western 'culture' is convincing everyone the nepo babies running things are actually uber experts because they attended university. Lol.
Yeah, unfortunately Marx was right about people not realizing the problem, too. The proletariat drowns in false consciousness :(
In reality, the US is finally waking up to the fact that the "golden age" of capitalism in the US was built upon the lite socialism of the New Deal, and that all the BS economic opinions the average American has subscribed to over the past few decades were just propaganda. Anyone with half a brain cell could see from miles away that since Reaganomics we've had nothing but a system that leads to gross accumulation at the top and the top alone, and that kind of single-variable maximization is a surefire way, in any complex system, to produce instability and eventual collapse.
There's a false dichotomy in that conclusion.
> humans derive a lot of their self-evaluation as people from labor.
We're conditioned to do so, in large part because this kind of work ethic makes exploitation easier. Doesn't mean that's our natural state, or a desirable one for that matter.
"AI-based economy" is too broad a brush to be painting with. From the Marxist perspective, the question you should be asking is: who owns the robots? and who owns the wealth that they generate?
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
Definitely, AI sentiment is positive among most people at the small startup I work at in the Seattle area. I do see the "AI fatigue" too; I bet the majority of it comes from AI being used as a repeated layoff rationalization. Personally, I treat AI as a tool, one of the more useful ones (e.g. Claude and Gemini thinking models make quite helpful code reviewers once given a checklist). The hype often overshadows these benefits.
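For anyone curious what that looks like in practice, here is a rough sketch of checklist-driven review, assuming the Anthropic Python SDK; the model name, checklist items, and prompt wording are placeholders rather than what I actually run:

    # Hypothetical sketch of checklist-driven code review with an LLM.
    # Model name, checklist, and prompts are illustrative placeholders.
    import sys
    import anthropic

    CHECKLIST = (
        "Review the diff against this checklist:\n"
        "1. Are new code paths covered by tests, including failure paths?\n"
        "2. Any unhandled errors or swallowed exceptions?\n"
        "3. Any concurrency or resource-leak hazards?\n"
        "4. Does naming follow existing conventions in the file?\n"
        "Answer point by point, quoting the relevant lines."
    )

    def review_diff(diff_text: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        message = client.messages.create(
            model="claude-sonnet-4-5",  # placeholder model name
            max_tokens=1500,
            system="You are a strict code reviewer. Only flag concrete issues.",
            messages=[{"role": "user", "content": CHECKLIST + "\n\nDIFF:\n" + diff_text}],
        )
        return message.content[0].text

    if __name__ == "__main__":
        print(review_diff(sys.stdin.read()))

Piping a git diff into something like this is the whole trick; the checklist is doing most of the work.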
That's probably the difference
I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
I do believe that product leadership is shoehorning it into every nook and cranny of the world right now, and there are reasons to be annoyed by that, but there are also countless incredible, mind-blowing use cases that you can put it to every day.
I need to write about some absolutely life-changing scenarios, including:
- it got me thousands of dollars after it drafted a legal letter quoting laws I knew nothing about
- it saved me countless hours troubleshooting an RV electrical problem
- it found bugs in code that I wrote that were missed by everyone around me
- my wife was impressed with my seemingly custom week-long meal plan that fit her short-term no-soy/dairy allergy diet
- it helped me solve an issue with my house that a trained professional completely missed the mark on
- it completely designed and wrote the code for a Halloween robot decoration I had been trying to build for years
- it saves my wife, an audiobook narrator, hundreds of hours by summarizing characters for her audiobooks so she doesn't have to read the entire book before she narrates the voices
I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal and as we add them in smart ways to systems that exist today, things will only get better.
Call me glass half full... but maybe it's because I don't live in Seattle
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.
What I feel is people are denouncing the problems and describing them as not being worth the tradeoff, not necessarily saying it has zero use cases. On the other end of the spectrum we have claims such as:
> countless incredible, mind-blowing use cases that you can put it to every day.
Maybe those blow your mind, but not everyone’s mind is blown so easily.
For every one of your cases, I can give you a counter example where doing the same went horribly wrong. From cases being dismissed due to non-existent laws being quoted, to people being poisoned by following LLM instructions.
> I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too.
No, they are not! We can’t keep making climate change worse and fix it later. We can’t keep spreading misinformation at this rate and fix it later. We can’t keep increasing mass surveillance at this rate and fix it later. That “fix it later” attitude is frankly naive. You are falling for the narrative that got us into shit in the first place. Nothing will be “fixed later”, the powerful actors will just extract whatever they can and bolt.
> and as we add them in smart ways to systems that exist today, things will only get better.
No, they will not. Things are getting worse now, it’s absurd to think it’s inevitable they’ll get better.
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread
Yep.
I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way.
The same way people love to think they’re cooler than the masses by hating [famous pop artist]. “But that’s not real music!” they cry.
And that’s fine. Frankly, most of my AI skeptic friends are missing out on a skill that’s helped me a fair bit in my day to day at work and casually. Their loss.
Like it or not, LLMs are here to stay. The same way social media boomed and was here to stay, the same way e-commerce boomed and was here to stay… there’s now a whole new vertical that didn’t exist before.
Of course there will be washouts over time as the hype subsides, but who cares? LLMs are still wicked cool to me.
I don’t even work in AI, I just think they’re fascinating. The same way it was fascinating to me when I made a computer say “Hello, world!” for the first time.
I think the disconnect for me is that I want AI to do a bunch of mundane stuff in my job where it is likely to be discouraged so I can focus on my work. My employer's CEO just implemented an Elon-style "top 5" bi-weekly report. Would they find it acceptable for me to submit AI-generated writing? I just had to do my annual self and peer reviews. Is AI writing valid here? A company wanted to put me, a senior engineer, through a five stage interview process, including a software-graded Leetcode style assessment. Should I be able to use AI to complete it?
These aren't meant to be gotcha rhetorical questions, just parts of my professional life where AI _isn't_ desirable by those in power, even if they're some of the only real world use cases where I'd want to use it. As someone said upthread, I want AI to do my dishes and laundry so I can focus on leisure and creative pursuits (or, in my job, writing code). I don't want AI doing creative stuff for me so I can do dishes and laundry.
From the article:
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.
I grew up in Norway and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
In Swedish the G wouldn't be silent so it wouldn't really be all that much like "wonderful"; "vanderfugel" is the closest thing I could come up with for how I'd pronounce it with some leniency.
Same in Danish FWIW.
In English, I’d pronounce it very similar to “wonderful”.
If OP dropped the g, it would be a MUCH better product name.
This would make it even closer to the dangerously similar travel-planning app Wanderlog.
Solid advice. Seeing how many here would pronounce it differently, I totally agree hahah
I actually own wanderfull.ai
Dropping an l would be better, I think.
The weird thing is that half of the uses of the name on that landing page spell it as "Wanderfull". All of the mock-up screencaps use it, and at the bottom with "Be one of the first people shaping Wanderfull" etc.
So even the creator can't decide what to call it!
AI probably generated all of that and the OP didn't even review its output.
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
If there is a g in there I will pronounce a g there. I have some standards and that is one. Pronouncing every single letter.
> Pronouncing every single letter.
Now I want to know how you pronounce words like: through, bivouac, and queue.
You don’t pronounce all the letters?
That's a gnarly standard you have there.
obviously not a native French speaker
It's pronounced wanderfull in Norwegian
And how many of your users are going to have Nordic backgrounds?
I personally thought it was wander _fughel_ or something.
Let alone how difficult it is to remember how to spell it and look it up on Google.
The one current paying user of the app I've seen in this discussion called it "Wanderlog". FYI on the stickiness of the current name.
wanderlog is a separate web service
https://wanderlog.com/
Just FYI, I would read it out loud in English as “wander fuggle”. I would assume most Americans would pronounce the ‘g’.
I thought ‘wanderfugl’ was a throwback to ~15 years ago when it was fashionable to use a word but leave out vowels for no reason, like Flickr/Tumblr/Scribd/Blendr.
"Wanderful" would be a better name.
And if you manage to say it out loud, say it to someone else and ask them to spell it. If they can’t spell it, they can’t type it into the URL bar.
Maybe that's why they didn't go with the English cognate i.e. Wanderfowl, since being foul isn't great branding
I think the more pressing advice here is, limit yourself to one name (https://wanderfugl.com/images/guides.png)
this must be one of the incredible AI innovations the folks in Seattle are missing out on
What's wrong with wahn-der-fyoo-gull?
What? You don't want travel tips from an itinerant swinger? Or for itinerant swingers?
Anecdotally, lots of people in SF tech hate AI too. _Most_ people out of tech do. But enough of the people in tech have their future tied to AI that there are a lot of vocal boosters.
It is not at all my experience working in local government (that is, in close contact with everybody else paying attention to local government) that non-tech people hate AI. It seems rather the opposite.
Managers everywhere love the idea of AI because it means they can replace expensive and inefficient human workers with cheap automation.
Among actual people (i.e. not managers) there seems to be a bit of a generation gap - my younger friends (Gen Z) are almost disturbingly enthusiastic about entrusting their every thought and action to ChatGPT; my older friends (young millennials and up) find it odious.
The median age of people working in local politics is probably 55, and I've met more people (non-family, that is) over 70 doing this than in anything else, and all of them are (a) using AI for stuff and (b) psyched to see any new application of AI being put to use (for instance, a year or so ago, I used 4o to classify every minute spent in our village meetings according to broad subjects).
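A rough sketch of the sort of thing I mean, using the OpenAI Python SDK; the category list and the sample minutes here are invented for illustration, not the ones we actually used:

    # Rough sketch of classifying meeting-minute excerpts by broad subject.
    # Categories and sample data are made up for illustration.
    from openai import OpenAI

    CATEGORIES = ["zoning", "budget", "public safety", "roads", "parks", "other"]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify_minute(text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o",
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Classify the meeting excerpt into exactly one category: "
                            + ", ".join(CATEGORIES) + ". Reply with the category only."},
                {"role": "user", "content": text},
            ],
        )
        answer = response.choices[0].message.content.strip().lower()
        return answer if answer in CATEGORIES else "other"

    minutes = [
        "Discussion of the Elm Street repaving bid and contractor timeline.",
        "Vote on the Q3 budget amendment for the library roof.",
    ]
    print([classify_minute(m) for m in minutes])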
Or, drive through Worth and Bridgeview in IL, where all the Middle Eastern people in Chicago live, and notice all the AI billboards. Not billboards for AI, just billboards obviously made with GenAI.
I think it's just not true that non-tech people are especially opposed to AI.
A Pew Research Center survey found an age correlation, but not a generation gap. [1]
[1] https://www.pewresearch.org/science/2025/09/17/ai-in-america...
Managers should realize that the thing AI might be best at is replacing them. Most of my managers don't understand the people they are managing and don't understand what the people they are managing are actually building. Their job is to get a question from management that their reports can answer, format that answer for their boss, and send the email. Their job is to be the leader in a meeting to make sure it stays on track, not to understand the content. AI can do all that shit without a problem.
MANNA https://milweesci.weebly.com/uploads/1/3/2/4/13247648/mannap...
I don't doubt that many love it. I'm just going based on SF non-tech people I know, who largely see it as the thing vaguely mentioned on every billboard and bus stop, the chatbot every tech company seems to be trying to wedge into every app, and the thing that makes misleading content on social media and enables cheating on school projects. But, sometimes it is good at summarizing videos and such. I probably have a biased sample of people who don't really try to make productive use of AI.
I can imagine reasons why non-tech people in SF would hate all tech. I work in tech and living in the middle of that was a big part of why I was in such a hurry to get out of there.
Frankly, tech deserves its bad reputation in SF (and worldwide, really).
One look at the dystopian billboards bragging about trying to replace humans with AI should make any sane human angry at what tech has done. Or the rising rents due to an influx of people working on mostly useless AI startups, 90% of which won't be around in 5 years. Or even how poorly many in tech behave in public and how poorly they treat service workers. That's just the tip of the iceberg, and just in SF alone.
I say all this as someone living in SF and working in tech. As a whole, we've brought the hate upon ourselves, and we deserve it.
I don't agree with any of this. I just think it's aggravating to live in a company town.
There's a long list of things that have "replaced" humans all the way back to the ox drawn plow. It's not sane to be angry at any of those steps along the way. GenAI will likely not be any different.
it's plenty sane to be angry when the benefits of those technical innovations are not distributed equally.
It is absolutely sane to be angry at people's livelihoods being destroyed and most aspects of life being worsened just so a handful of multi-billionaires that already control society can become even richer.
The plough also made the rich richer, but in the long run the productivity gains it enabled drove improvements to common living standards.
Non-technical people that I know have rapidly embraced it as "better Google where I don't have to do as much work to answer questions." This is in a non-work context, so I don't know how much those people are using it to do their day job, writing emails or whatever. A lot of these people are tech-using boomers - they already adjusted to Google/the internet, they don't know how it works, they just are like "oh, the internet got even better."
There's maybe a slow trend towards "that's not true, you should know better than to trust AI for that sort of question" in discussions when someone says something like "I asked AI how [xyz was done]" but it's definitely not enough yet to keep anyone from going to it as their first option for answering a question.
Anyone involved in government procurement loves AI, irrespective of what it even is, for the simple fact that they get to pointedly ask every single tech vendor for evidence that they have "leveraged efficiency gains from AI" in the form of a lower bid.
At least, that's my wife's experience working on a contract with a state government at a big tech vendor.
Not talking about government employees, for whatever that's worth.
EDIT: Removed part of my post that pissed people off for some reason. shrug
It makes a lot of sense that someone casually coming in to use chatgpt for 30 minutes a week doesn't have any reason to think more deeply about what using that tool 'means' or where it came from. Honestly, they shouldn't have to think about it.
The claim I was responding to implied that non-techies distinctively hate AI. You're a techie.
It’s one of those “people hate noticing AI-generated stuff, but everyone and their mom is using ChatGPT to make their work easier” situations. There are a lot of vocal boosters and vocal anti-boosters, but the general population is using it in a Google fashion and moving on. Not everyone is thinking about an AI apocalypse every day.
Personally, I’m in between the two opinions. I hate when I’m consuming AI-generated stuff, but I can see the use for myself for work, or for asking a bunch of not-so-important questions to get a general idea of things.
> enough of the people in tech have their future tied to AI that there are a lot of vocal boosters
That's the presumption. There's no data on whether this is actually true or not. Most rational examinations show that it most likely isn't. The progress of the technology is simply too slow and no exponential growth is on the horizon.
Most of my FB contacts are not in tech. AI is overwhelmingly viewed as a negative by them. To be clearer: I'm counting anyone who posts AI-generated pictures on FB as implicitly being pro-AI; if we neglect this portion, the only non-negative posts about AI would be highly qualified "in some special cases it is useful" statements.
That’s fair. The bad behavior in the name of AI definitely isn’t limited to Seattle. I think the difference in SF is that there are people doing legitimately useful stuff with AI
I think this comment (and TFA) is really just painting with too broad of strokes. Of course there are going to be people in tech hubs that are very pro-AI, either because they are working with it directly and have had legitimately positive experiences or because they work with it and they begrudgingly see the writing on that wall for what it means for software professionals.
I can assure you, living in Seattle I still encounter a lot of AI boosters, just as much as I encounter AI haters/skeptics.
What’s so striking to me is these “vocal boosters” almost preach like televangelists the moment the subject comes up. It’s very crypto-esque (not a hot take at all I know). I’m just tired of watching these people shout down folks asking legitimate questions pertaining to matters like health and safety.
Health and safety seems irrelevant to me. I complain about cars; I point out "obscure" facts like that they are a major cause of lung-related health problems for innocent bystanders; I don't actually ride in cars on any regular basis, and I use them less, in fact, than I use AI. There were people at the car's introduction who made all the points I would make today.
The world is not at all about fairness of benefits and impacts to all people; it is about a populist mass and what amuses them and makes their life convenient, hopefully without attending the relevant funerals themselves.
> health and safety seems irrelevant to me
Honestly I don’t really know what to say to that, other than it seems rather relevant to me. I don’t really know what to elaborate on given we disagree on such a fundamental level.
Do you think the industry will stop because of your concern? If, for example, AI does what it says on the box but causes goiters for prompt jockeys, do you think the industry will stop then, or offshore the role of AI jockey?
It's lovely that you care about health, but I have no idea why you think you are relevant to a society that is very much willing to risk extinction to avoid the slightest upset or delay to progress, as measured in consumer convenience.
> Do you think the industry will stop because of your concern?
I’m not sure what this question is addressing. I didn’t say it needs to “stop” or the industry has to respond to me.
> It's lovely that you care about health,
1) you should care too, 2) drop the patronizing tone if you are actually serious about having a conversation.
From my PoV you are trolling with virtue signalling and thought-terminating memes. You don't want to discuss why every(?) technological introduction so far has ignored priorities such as your sentiments, and why any devil's advocate must be the devil.
The members of HN are actually a pretty strongly biased sample towards people who get the omelet when the eggs get broken.
People not being assholes and having opinions is not "trolling with virtue signaling". Even where people do virtue signal, it is significant improvement over "vice signaling" which you seem to be doing and expecting others to do.
I for one have no idea what you mean by health and safety with respect to AI. Do you have an OSHA concern?
I have an “enabling suicidal ideation” concern for starters.
To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question taken at face value: There have been plenty of high profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate them for you and link them, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.
I am not against the use of LLM’s, but like social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.
> To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque
Search for health and safety and see how many results are about work.
All the AI companies are taking those concerns seriously though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.
If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to call this. There are benefits and bad consequences for every new technology. Some are a net positive on the balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.
> All the AI companies are taking those concerns seriously though.
I do not feel they are but also I was primarily talking about the AI-evangelists who shout people asking these questions down as Luddites.
Strangely, I've found the only people who are super excited about AI are executive-level boomers. My mom loves AI and uses it to do her job, which of course has poor results. All the younger people I know hate AI. Perhaps it's also a generational difference.
As a Seattle SWE, I'd say most of my coworkers do hate all the time-wasting AI stuff being shoved down our throats. There are a few evangelical AI boosters I do work with, but I keep catching mistakes in their code that they didn't use to make. Large suites of elegant-looking unit tests, but the unit tests include large amounts of code duplicating functionality of the test framework for no reason, and I've even seen unit tests that mock the actual function under test. New features that actually already exist with saner APIs. Code that is a tangled web of spaghetti. These people largely think AI is improving their speed, but then their code isn't making it past code review. I worry about teams with less stringent code review cultures; modifying or improving these systems is going to be a major pain.
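To make the mocking anti-pattern concrete, here is a made-up Python example (the function and numbers are invented): the first "test" patches the very function it claims to exercise, so it passes no matter what the real code does, while the honest version immediately surfaces the bug.

    # Invented illustration of the anti-pattern: the "test" patches the function
    # under test, so the real implementation is never executed.
    from unittest import mock

    def apply_discount(price: float, percent: float) -> float:
        # Real implementation, with a bug: divides by 10 instead of 100.
        return price - price * percent / 10

    def test_apply_discount_useless():
        # Anti-pattern: apply_discount is replaced by a mock, so this always passes.
        with mock.patch(f"{__name__}.apply_discount", return_value=90.0):
            assert apply_discount(100.0, 10.0) == 90.0

    def test_apply_discount_real():
        # What the test should do: call the real function and let the bug surface.
        assert apply_discount(100.0, 10.0) == 90.0  # fails: actual result is 0.0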
As someone on a team with a less stringent code review culture, AI generated code creates more work when used indiscriminately. Good enough to get approved but full of non-obvious errors that cause expensive rework which only gets prioritized once the shortcomings become painfully obvious (usually) months after the original work was “completed” and once the original author has forgotten the details, or worse, left the team entirely. Not to say AI generated code is not occasionally valuable, just not for anything that is intended to be correct and maintainable indefinitely by other developers. The real challenge is people using AI generated code as a mechanism to avoid fully understanding the problem that needs to be solved.
Exactly, it’s the non-obvious errors that are easy to miss—doubly so if you are just scanning the code. Those errors can create very hard-to-find bugs.
So between the debugging and the many times you need to reprompt and redo (if you bother at all, but then that adds debugging time), is any time actually saved?
I think the dust hasn’t settled yet because no one has shipped mostly AI generated code for a non-trivial application. They couldn’t have with its current state. So it’s still unknown whether building on incredibly shaky ground will actually work in real life (I personally doubt it).
> and I've even seen unit tests that mock the actual function under test.
Yup. AI is so fickle it’ll do anything to accomplish the task. But AI is just a tool; it’s all about what you allow it to do. Can’t blame AI, really.
In fairness I’ve seen humans make that mistake. We had a complete outage in the testing of a product once and a couple of tests were still green. Turns out they tested nothing and never had.
> In fairness I’ve seen humans make that mistake
These were (formerly) not the kinds of humans who regularly made these kinds of mistakes.
Leverage.
That slop already existed, but AI scales it up by an order of magnitude.
I guess the same can be said of any technology, but AI is just a more powerful tool overall. Using languages as an example - let's say duck typing allowed a 10% productivity boost, but also introduced 5% more mistakes/problems. AI claims to allow a 10x productivity boost, but it also brings ~10x the mistakes/problems.
If a tool makes it easy to shoot yourself in the foot, then it's not a good tool. See C++.
I'm no apologist, but this statement doesn't ring true for me. It's easy to shock yourself with electricity; is it a bad tool?
Electricity isn't a tool, it's nature. An unenclosed electrical plug which you had to be really careful when handling would be a bad tool, yes.
A tool is something designed by humans. We don't get to design electricity, but we do get to design the systems we put in place around it.
Gravity isn't a tool, but stairs are, and there are good and bad stairs.
A knife then.
Most tools are dangerous in the hands of the inept or the careless. Don’t run with scissors.
A gun is a good tool that's easy to shoot yourself in the foot with.
I've had Claude try to pull the same trick on me just yesterday. It will also try to cheat and apply a "fix" that just masks the real problem.
I've interfaced with some AI-generated code, and after several instances of finding subtle yet very wrong bugs, I now digest code that I suspect of coming from AI (or from an AI-loving coworker) with much, much more scrutiny than I used to. I've frankly lost trust in any kind of care for quality or due diligence from some coworkers.
I see how the use of AI is useful, but I feel that the practitioners of AI-as-coding-agent are running away from the real work. How can you tell me about the system you say you have created if you don't have the patience to make it or think about it deeply in the first place?
Your coworkers were probably writing subtle bugs before AI too.
Would you rather consume a bowl of soup with a fly in it, or a 50 gallon drum with 1,000 flies in it? In which scenario are you more likely to fish out all the flies before you eat one?
Easier to skim 1000 flies from a single drum than 100 flies from 100 bowls of soup.
Alas, the flies are not floating on the surface. They are deeply mixed in, almost as if the machine that generated the soup wanted desperately to appear to be doing an excellent job making fly-free soup.
… while not having a real distinction between flies and non-fly ingredients.
No, I think it would be far easier to pick 100 flies each from a single bowl of soup than to pick all 1000 flies out of a 50 gallon drum.
You don’t get to fix bugs in code by simply pouring it through a filter.
I think the dynamic is different - before, they were writing and testing the functions and features as they went. Now, (some of) my coworkers just push a PR for the first or second thing copilot suggested. They generate code, test it once, it works that time, and then they ship it. So when I am looking through the PR it's effectively the _first_ time a human has actually looked over the suggested code.
Anecdote: In the 2 months after my org pushed Copilot down to everyone, the number of warnings in the codebase of our main project went from 2 to 65. I eventually cleaned those up and created a GitHub action that rejects any PR if it emits new warnings, but it created a lot of pushback initially.
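The gate itself doesn't need to be fancy. A minimal sketch of the idea in Python (this isn't the actual action; the paths, the warning regex, and the baseline mechanism are placeholders):

    # Minimal sketch of a CI warning gate: fail the check if the build log
    # contains more warnings than a committed baseline. Paths and regex are
    # placeholders, not the real setup.
    import pathlib
    import re
    import sys

    BASELINE_FILE = pathlib.Path("ci/warning_baseline.txt")  # hypothetical path
    WARNING_RE = re.compile(r"\bwarning\b", re.IGNORECASE)   # crude matcher

    def count_warnings(log_path: str) -> int:
        text = pathlib.Path(log_path).read_text(errors="replace")
        return sum(1 for line in text.splitlines() if WARNING_RE.search(line))

    def main(log_path: str) -> int:
        baseline = int(BASELINE_FILE.read_text().strip())
        current = count_warnings(log_path)
        print(f"warnings: {current} (baseline {baseline})")
        if current > baseline:
            print("New warnings introduced; failing the check.", file=sys.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))

Run something like that against the compiler output in CI and reject the PR on a non-zero exit, and you get roughly the behavior described above.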
Then, when you've taken an hour to be the first person to understand how their code works from top to bottom and to point out obvious bugs, problems, and design improvements (no, I don't think this component needs 8 useEffects added to it which deal exclusively with global state that's only relevant 2 layers down, and which effectively treat React components like an event-handling system for data - don't believe people who tell you LLMs are good at React; if you see a useEffect with an obvious LLM comment above it, it's likely to be buggy or unnecessary), your questions about it are answered with an immediate flurry of commits and it's back to square one.
Who are we speeding up, exactly?
Yep, and if you're lucky they actually paste your comments back into the LLM. A lot of times it seems like they just prompted for some generic changes, and the next revision has tons of changes from the first draft. Your job basically becomes playing reviewer to someone else's interactions with an LLM.
It's about as productive as people who reply to questions with "ChatGPT says <...>" except they're getting paid to do it.
I wonder if there’s a way to measure the cost of such code and associate it with the individuals incurring it. Unless this shows on reports, managers will continue believing LLMs are magic time saving machines writing perfect code.
As another Seattle SWE, I'll go against the grain and say that I think AI is going to change the nature of the labor market for SWEs, and my guess would be for the negative. People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here. If you were to judge just by the sentiment on HN, you would think no coder worth their salt was using this in the real world—but my experience on a few teams over the last two years has been exactly the opposite—people are often embarrassed to admit it, but they are using it all the time. There are many engineers at Meta who "no longer code" by hand and do literally all of their problem solving with AI.
I remember last year or even earlier this year feeling like the models had plateaued, and I was of the mindset that these tools would probably forever just augment SWEs without fully replacing them. But with Opus 4.5, Gemini 3, et al., these models are incredibly powerful and more and more SWEs are leaning on them more and more—a trend that may slow down or speed up—but is never going to backslide. I think people that don't generally see this are fooling themselves.
Sure, there are problem areas—it misses stuff, there are subtle bugs, it's not good for every codebase, for every language, for every scenario. There is some sloppiness that is hard to catch. But this is true of humans too. Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better. And it doesn't need to be perfect to rapidly change the job market for SWEs—it's good enough to do enough of the tasks for enough mid-level SWEs at enough companies to reshape the market.
I'm sure I'll get downvoted to hell for this comment, but I think SWEs (and everyone else, for that matter) would do best to practice some fiscal austerity, because I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial. I mean, they've made all of the progress up to now in essentially the last 5 years and the models are already incredibly capable.
This has been exactly my mindset as well (another Seattle SWE/DS). The baseline capability has been improving and compounding, not getting worse. It'd actually be quite convenient if AI's capabilities stayed exactly where they are now; the real problems come if AI does work.
I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line. But I'm not sure what's left if white collar work and creative work are automated en masse for "efficiency's" sake. Most folks like feeling like they're contributing towards something, despite some people who would rather do nothing.
To me it is clear that this is going to have negative effects on SWE and DS labor, and I'm unsure if I'll have a career in 5 years despite being a senior with a great track record. So, agreed. Save what you can.
> I'm extremely skeptical of the argument that this will end up creating jobs just like other technological advances did. I'm sure that will happen around the edges, but this is the first time thinking itself is being commodified, even if it's rudimentary in its current state. It feels very different from automating physical labor: most folks don't dream of working on an assembly line.
Most people do not dream of working most white collar jobs. Many people dream of meaningful physical labor. And many people who worked in mines did not dream of being told to learn to code.
> the real problems come if AI does work.
Exactly. For example, what happens to open source projects where developers don't have access to the latest proprietary dev tools? Or, what happens to projects like Spring if AI tools can generate framework code from scratch? I've seen maven builds on Java projects that pull in hundreds or even thousands of libraries. 99% of that code is never even used.
The real changes to jobs will be driven by considerations like these. Not saying this will happen but you can't rule it out either.
edit: Added last sentence.
> It'd actually be quite convenient if AI's capabilities stayed exactly where they are now
That's what I'm crossing my fingers for: it makes our job easier but doesn't degrade our worth. It's the best possible outcome for devs.
I keep getting blown away by AI (specifically Claude Code with the latest models). What it does is literally science fiction. If you told someone 5 years ago that AI can find and fix a bug in some complex code with almost zero human intervention nobody would believe you, but this is the reality today. It can find bugs, it can fix bugs, it can refactor code, it can write code. Yes, not perfect, but with a well organized code base, and with careful prompting, it rivals humans in many tasks (certainly outperforms them in some aspects).
As you're also saying this is the worst it will ever be. There is only one direction, the question is the acceleration/velocity.
Where I'm not sure I agree is with the perception this automatically means we're all going to be out of a job. It's possible there would be more software engineering jobs. It's not clear. Someone still has to catch the bad approaches, the big mistakes, etc. There is going to be a lot more software produced with these tools than ever.
I think whether you are right or wrong it makes sense to hedge your bets. I suspect many people here are feeling some sense of fear (career, future implications, etc); I certainly do on some of these points and I think that's a rational response to be aware of the risk of the future unknown.
In general I think: if I were not personally invested in this situation (i.e. just another man on the street), what would be my immediate reaction to this? Would I still become a software engineer, as an example? Even if it doesn't come to pass, given what I know now, would I take that bet with my life/career?
I think if people were honest with themselves sadly the answer for many would probably be "no". Most other professions wouldn't do this to themselves either; SWE is quite unique in this regard.
> Just remember, the ability of the models today is the worst that it will ever be—it's only going to get better.
This is the ultimate hypester’s motte to retreat to whenever the bailey of a technology’s claimed utility falls. It’s trivially true of literally any technology, but also completely meaningless on its own.
> code generation today is the worst that it ever will be, and it's only going to improve from here.
I'm also of the mindset that even if this is not true, that is, even if current state of LLMs is best that it ever will be, AI still would be helpful. It is already great at writing self contained scripts, and efficiency with large codebases has already improved.
> I would imagine the chance of many of us being on the losing side of this within the next decade is non-trivial.
Yes, this is worrisome. Though it's ironic that almost every serious software engineer, at some point in their early childhood or career when programming was more for fun than work, thought of how cool it would be for a computer program to write a computer program. And now that we have the capability, right in front of our eyes, we're afraid of it.
But one thing humans are really good at is adaptability. We adapt to circumstances and situations, good or bad. Even if the worst happens and people lose jobs, in the short term it will be negatively impactful for their families; however, over a period of time, humans will adapt to the situation, adapt to coexist with AI, and find the next endeavour to conquer.
Rejecting AI is not the solution. Using it as any other tool, is. A tool that, if used correctly, by the right person, can indeed produce faster results.
I mean, some are good at adaptability, while others get completely left in the dust. Look at the rust belt: jobs have left, and everyone there is desperate for a handout. Trump is busy trying to engineer a recession in the US—when recessions happen, companies at the margin go belly-up and the fat is trimmed from the workforce. With the inroads that AI is making into the workforce, it could be the first restructuring where we see massive losses in jobs.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be, and it's only going to improve from here.
I sure hope so. But until the hallucination problem is solved, there's still going to be a lot of toxic waste generated. We have got to get AI systems which know when they don't know something and don't try to fake it.
> I mean, they've made all of the progress up to now in essentially the last 5 years
I have to challenge this one: the research on natural language generation and machine learning dates back to the 50s, it just only recently came together at scale in a way that became useful, but tons of the hardest progress was made over many decades, and very little innovation happened in the last 5 years. The innovation has mostly been bigger scale, better data, minor architectural tweaks, and reinforcement learning with human feedback and other such fine-tuning.
We're definitely in the territory of splitting hairs; but I think most of what people call modern AI is the result of the transformer paper. Of course this was built off the back of decades of research.
> People need to remember that the ability of AI in code generation today is the worst that it ever will be
I've been reading this since 2023 and yet it hasn't really improved all that much. The same things are still problems that were problems back then. And if anything the improvement is slowing down, not speeding up.
I suspect unless we have real AGI we won't have human-level coding from AIs.
It has improved drastically, as evidenced by the kinds of issues these things can handle with minimal supervision now.
Pretty much. Someone on our team put out a code review for some new feature and then bounced for a 2-week vacation. One of our junior engineers approved it. Despite the fact that it was in a section of dead code that wasn’t supposed to even be enabled yet, it managed to break our test environment. It took senior engineers a day to figure out how that was even possible before reverting. We had another couple of engineers take a look to see what needed to be done to fix the bug. All of them came away with the conclusion that it was 1,000 lines of pure AI-generated slop with no redeemable value. Trying to fix it would take more work than just re-implementing from scratch.
> One of our junior engineers approved it.
pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
so your process seems inadequate to me, agents or not.
also, was it tagged as generated? that seems like an obvious safety feature. As a junior, I might be thinking: 'my senior colleague sure knows lots of this stuff', but all it would take to dispel my illusion is an agent tag on the PR.
> pretty sure the process I've seen most places is more like: one junior approves, one senior approves, then the owner manually merges.
Yeah that’s what I think we need to enforce. To answer your question, it was not tagged as AI generated. Frankly, I think we should ban AI-generated code outright, though labeling it as such would be a good compromise.
My hot take is that the evangelical people don't really like AI either; they're just scared. I think you have to be outside of big tech to appreciate AI.
If AI replaces software engineers, people outside tech don't have much chance of surviving it either.
Exactly. I think it’s pretty clear that software engineering is an “intelligence-complete” problem. If you can automatically solve SWE, then you can automatically solve pretty much all knowledge work.
A lot of modern corporate work is bullshit work.
I don't think it is too outrageous to believe that LLMs can do a lot of what all those armies of corporate bureaucrats do.
The difference is that unlike SWEs, the people doing all that bullshit work are much better at networking, so they will (collectively) find a reason why they shouldn't be replaced with AI and push it through.
SWEs could do so as well, if only we were unionized.
I see it like the hype around JS/Node and whatever module tech was glued to it when it was new, from the perspective of someone who didn't code JS. The sum of F's given is still zero.
People hate what the corporations want AI to be and people hate when AI is used the way corporations seem to think it should be used, because the executives at these companies have no taste and no vision for the future of being human. And that is what people think of when they hear “AI”.
I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
> I still think there’s a third path, one that makes people’s lives better with thoughtful, respectful, and human-first use of AI. But for some reason there aren’t many people working on that.
I am thinking about this third path a lot, but the reality is that it wouldn't make AI more interesting than any other tool humans use to go about their daily lives. Does one get this obsessed about screwdrivers?
The issue is that a human-first world where technology is subservient to our needs is very incompatible with our current society. The AI hype is part and parcel of the capitalistic mode of production where humans are ultimately judged for their ability to produce more commodities; the goal has always been to improve productivity to make more for cheaper — this time, the quest for efficiency has found a viable replacement for many humans activities.
40+ year software developer here (not saying this for ego, just that I've been doing this a long time and I've seen a lot of things). Here's how I've re-framed things in my mind due to AI:
The centralization of 'power' in AI will be the entities that want to run the bigger, more general-purpose models. Fine. So be it. Knock yourself out. Good luck building the data centers and finding power.
After that, AI now becomes a 'field leveler', and I say this with the utmost of sincerity and confidence. Need a supply chain system? Goodbye big boys. Goodbye vendor lock. Now there will be dozens, if not hundreds of small teams that can provide this for you at a fraction of the cost. Accounting you ask? Goodbye Intuit. We'll whip up what you need and you'll be off and running and you can kiss the global monsters goodbye. You get the idea.
This is a defining moment and it's awesome. Sure there will be some initial pain. Mindsets will have to change. Priorities will shift. My fundamental point is, all of the big boy/fat cats rushing in the AI race are literally rushing to completely undermine their own leverage and power. Each and every day I am amazed at what I can do, after a lifetime of staring at screens, with a $25.00/month Claude Code license. I am reaching out and lifting my neighbors, who have no technical experience at all, into a completely new playing field where they compete against entities they had no chance of competing with ever before.
Forgive my optimism, but it's hard not to be as I can now use my experience, with a tight group of trusted friends and colleagues, and a little bit of coding help (which goes a long way) to go wherever we want to go now.
Ex-Googler here; there are many people, both current and former Googlers, who feel the same way as the composite coworker in the linked post.
I haven't escaped this mindset myself. I'm convinced there are a small number of places where LLMs make truly effective tools (see: generation of "must be plausible, need not be accurate" data, e.g. concept art or crowd animations in movies), a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, anything where correctness is critical), and a large number of places where LLMs are simply ineffective from the get-go but will increasingly be rammed down consumers' throats.
Accordingly I tend to be overly skeptical of AI proponents and anything touching AI. It would be nice if I was more rational, but I'm not; I want everyone working on AI and making money from AI to crash and burn hard. (See also: cryptocurrency)
My friends at Google are some of the most negative about the potential of AI to improve software development. I was always surprised by this; I assumed Google internally would be one of the first places to adopt these tools.
I've generally found an inverse correlation between "understands AI" and "exuberance for AI".
I'm the only person at my current company who has had experience at multiple AI companies (the rest have never worked on it in a production environment; one of our projects is literally something I got paid to deliver to customers at another startup), has written professionally about the topic, and worked directly with some big names in the space. Unsurprisingly, I have nothing to do with any of our AI efforts.
One of the members of our leadership team, who I don't believe understands matrix multiplication, genuinely believes he's about to transcend human identity by merging with AI. He's publicly discussed how hard it is to maintain friendship with normal humans who can't keep up.
Now I absolutely think AI is useful, but these people don't want AI to be useful; they want it to be something that anyone who understands it knows it can't be.
It's getting to the point where I genuinely feel I'm witnessing some sort of mass hysteria event. I keep getting introduced to people who have almost no understanding of the fundamentals of how LLMs work, who have the most radically fantastic ideas about what they are capable of, at a level I have never experienced before in my fairly long technical career.
Personally, I don't understand how LLMs work. I know some ML math and certainly could learn, and probably will, soon.
But my opinions about what LLMs can do are based on... what LLMs can do. What I can see them doing. With my eyes.
The right answer to the question "What can LLMs do?" is... looking... at what LLMs can do.
I'm sure you're already familiar with the ELIZA effect [0], but you should be a bit skeptical of what you are seeing with your eyes, especially when it comes to language. Humans have an incredible weakness to be tricked by language.
You should be doubly skeptical ever since RLHF became standard, as the model has literally been optimized to give you answers you find most pleasing.
The best way to measure of course is with evaluations, and I have done professional LLM model evaluation work for about 2 years. I've seen (and written) tons of evals and they both impress me and inform my skepticism about the limitations of LLMs. I've also seen countless times where people are convinced "with their eyes" they've found a prompt trick that improves the results, only to be shown that this doesn't pan out when run on a full eval suite.
As an aside: What's fascinating is that it seems our visual system is much more skeptical, an eyeball being slightly off created by a diffusion model will immediately set off alarms where enough clever word play from an LLM will make us drop our guard.
0. https://en.wikipedia.org/wiki/ELIZA_effect
We get around this a bit when using it to write code since we have unit tests and can verify that it's making correct changes and adhering to an architecture. It has truly become much more capable in the last year. This technology is so flexible that it can be used in ways no eval will ever touch and still perform well. You can't just rely on what the labs say about it, you have to USE it.
Interesting observation about the visual system. Truth be told, we get visual feedback about the world at a much higher data rate AND the visual signal is usually much more highly correlated with reality, whereas language is basically a byproduct of cognition and communication.
No one understands how LLMs work. But some people manage to delude themselves into thinking that they do.
One key thing that people prefer not to think about is that LLMs aren't created by humans. They are created by an inhuman optimization algorithm that humans have learned to invoke and feed with data and computation.
Humans have a say in what it does and how, but "a say" is about the extent of it. The rest is a black box - incomprehensible products of a poorly understood mathematical process. The kind of thing you have to research just to get some small glimpses of how it does what it does.
Expecting those humans to understand how LLMs work is a bit like expecting a woman to know how humans work because she made a human once.
Bro- do you even matrix multiply?
Spot on in my experience.
I work in a space where I get to build and optimise AI tools for my own and my team's use pretty much daily. As such I focus mainly on AI'ing the crap out of boring & time-consuming stuff that doesn't interest any of us any more, and luckily enough there's a whole lot of low hanging fruit in that space where AI is a genuine time, cost and sanity saver.
However any activity that requires directed conscious thought and decision making where the end state isn't clearly definable up front tends to be really difficult for AI. So much of that work relies on a level of intuition and knowledge that is very hard to explain to a layman - let alone eidetic idiots like most AIs.
One example is trying to get AI to identify security IT incidents in real time and take proactive action. Skilled practitioners can fairly easily use AI to detect anomalous events in near real time, but getting AI to take the next step to work out which combinations of "anomalous" activities equate to "likely security incident" is much harder. A reasonably competent human can usually do that relatively quickly, but often can't explain how they do it.
Working out what action is appropriate once the "likely security incident" has been identified is another task that a reasonably competent human can do, but where AIs are hopeless. In most cases, a competent human is WAAAY better at identifying a reasonable way forward based on insufficient knowledge. In those cases, a good decision made quickly is preferable to a perfect decision made slowly, and humans understand this fairly intuitively.
> I've generally found an inverse correlation between "understands AI" and "exuberance for AI".
A few years ago I had this exact observation regarding self-driving cars. Non- or semi-engineers who worked in the tech industry were very bullish about self-driving cars, believing every ETA spewed by Musk, while engineers were cautiously optimistic or pessimistic depending on their understanding of AI, LiDAR, etc.
This completely explains why so many engineers are skeptical of AI while so many managers embrace it: The engineers are the ones who understand it.
(BTW, if you're an engineer who thinks you don't understand AI or are not qualified to work on it, think again. It's just linear algebra, and linear algebra is not that hard. Once you spend a day studying it, you'll think "Is that all there is to it?" The only difficult part of AI is learning PyTorch, since all the AI papers are written in terms of Python nowadays instead of -- you know -- math.)
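To make that concrete: the heart of a transformer layer is attention, which is roughly softmax(Q·K^T / sqrt(d_k))·V, i.e. a few matrix multiplies, a scale factor, and a normalization. That really is most of the math.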
I've been building neural net systems since the late 1980s. And yes they work and they do useful things when you have modern amounts of compute available, but they are not the second coming of $DEITY.
Linear algebra cannot be learned in a day. Maybe multiplying matrices when the dimensions allow, but there is far more to linear algebra than knowing how to multiply matrices. Knowing when and why is far more interesting. Knowing how to decompose them. Knowing what a non-singular matrix is and why it’s special and so on. Once you know what’s found in a basic lower-division linear algebra class, you can move on to linear programming and learn about cost functions and optimization, or numerical analysis. PyTorch is just a calculator. If I handed someone a TI-84 they wouldn’t magically know how to bust out statistics on it…
> This completely explains why so many engineers are skeptical of AI while so many managers embrace it: The engineers are the ones who understand it.
Curiously some Feynman chap reported that several NASA engineers put the chance of the Challenger going kablooie—an untechnical term for rapid unscheduled deconstruction, which the Challenger had then just recently exhibited—at 1 in 200, or so, while the manager said, after some prevarications—"weaseled" is Feynman's term—that the chance was 1 in 100,000 with 100% confidence.
I mostly disagree with this. Lots of things correlate weakly with other things, often in confusing and overlapping ways. For instance, expertise can also correlate with resistance to change. Ego can correlate with protection of the status quo and dismissal of people who don't have the "right" credentials. Love of craft can correlate with distaste for automation of said craft (regardless of the effectiveness of the automation). Threat to personal financial stability can correlate with resistance (regardless of technical merit). Potential for personal profit can correlate with support (regardless of technical merit). Understanding neural nets can correlate both with exuberance and skepticism in slightly different populations.
Correlations are interesting but when examined only individually they are not nearly as meaningful as they might seem. Which one you latch onto as "the truth" probably says more about what tribe you value or want to be part of than anything fundamental about technology or society or people in general.
I think there is a correlation between what you can expect from something when you know its internals versus when you don't, but it's not like whoever knows the internals is always much, much better.
Example: many people created websites without a clue of how they really work. And got millions of people on it. Or had crazy ideas to do things with them.
At the same time there are devs that know how internals work but can’t get 1 user.
pc manufacturers never were able to even imagine what random people were able to do with their pc.
This is to say that even if you know the internals you can claim you know better, but that doesn't mean it's absolute.
Sometimes knowing the fundamentals is a limitation. It will limit your imagination.
I'm a big fan of the concept of 初心 (Japanese: Shoshin, aka "beginner's mind" [0]) and largely agree with Suzuki's famous quote:
> “In the beginner’s mind there are many possibilities, but in the expert’s there are few”
Experts do tend to be limited in what they see as possible. But I don't think that allows carte blanche belief that a fancy Markov Chain will let you transcend humanity. I would argue one of the key concepts of "beginners mind" is not radical assurance in what's possible but unbounded curiosity and willingness to explore with an open mind. Right now we see this in the Stable Diffusion community: there are tons of people who also don't understand matrix multiplication that are doing incredible work through pure experimentation. There's a huge gap between "I wonder what will happen if I just mix these models together" and "we're just a few years from surrendering our will to AI". None of the people I'm concerned about have what I would consider an "open mind" about the topic of AI. They are sure of what they know and to disagree is to invite complete rejection. Hardly a principle of beginners mind.
Additionally:
> pc manufacturers never were able to even imagine what random people were able to do with their pc.
Belies a deep ignorance of the history of personal computing. Honestly, I don't think modern computing has ever returned to the ambition of what was being dreamt up, by experts, at Xerox PARC. The demos on the Xerox Alto in the early 1970s are still ambitious in some senses. And, as much as I'm not a huge fan, Gates and Jobs absolutely had grand visions for what the PC would be.
0. https://en.wikipedia.org/wiki/Shoshin
I think this is what is blunted by mass education and most textbooks. We need to discover it again if we want to enjoy our profession despite all the signals flowing from social media about all the great things other people are achieving. Staying stupid and hungry really helps.
I think this is more about a mechanistic understanding vs fundamental insight kind of situation. The linear algebra picture is currently very mechanistic since it only tells us what the computations are. There are research groups trying to go beyond that but the insight from these efforts is currently very limited. However, the probabilistic view is much clearer. You can have many explorable insights, both potentially true and false, by just understanding the loss functions, what the model is sampling from, what the marginal or conditional distributions are, and so on. Generative AI models are beautiful at that level. It is truly mind blowing that in 2025, we are able to sample from megapixel image distributions conditioned on NLP text prompts.
If that were true then people could have predicted this AI many years ago
If you dig into old ML/vision papers, you will see that formulation-wise they actually did, but they lacked the data, compute, and the mechanistic machinery provided by the transformer architecture. The wheels of progress turn slowly and require many rotations to finally reach somewhere.
It's definitely interesting to look at people's mental models around AI.
I don't know shit about the math that makes it work, but my mental model is basically - "A LLM is an additional tool in my toolbox which performs summarization, classification and text transformation tasks for me imperfectly, but overall pretty well."
Probably lots of flaws in that model but I just try to think like an engineer who's attempting to get a job done and staying up to date on his tooling.
But as you say there are people who have been fooled by the "AI" angle of all this, and they think they're witnessing the birth of a machine god or something. The example that really makes me throw up my hands is r/MyBoyfriendIsAI where you have women agreeing to marry the LLM and other nonsense that is unfathomable to the mentally well.
There's always been a subset of humans who believe unimaginably stupid things, like that there's a guy in the sky who throws lightning bolts when he's angry, or whatever. The interesting (as in frightening) trend in modernity is that instead of these moron cults forming around natural phenomena we're increasingly forming them around things that are human made. Sometimes we form them around the state and human leaders, increasingly we're forming them around technologies, in line with Arthur C. Clarke's third law - that "Any sufficiently advanced technology is indistinguishable from magic."
If I sound harsh it's because I am, we don't want these moron cults to win, the outcome would be terrible, some high tech version of the Dark Ages. Yet at this moment we have business and political leaders and countless run-of-the-mill tech world grifters who are leaning into the moron cult version of AI rather than encouraging people to just see it as another tool in the box.
Google has good engineers. Generally I've noticed the better someone is at coding, the more critical they are of AI-generated code. Which makes sense honestly. It's easier to spot flaws the more expert you are. This doesn't mean they don't use AI-generated code, just that they are more careful with when and where.
Yes, because they're more likely to understand that the computer isn't this magical black box, and that just because we've made ELIZA marginally better, doesn't mean it's actually good. Anecdata, but the people I've seen be dazzled by AI the most are people with little to no programming experience. They're also the ones most likely to look on computer experts with disdain.
Well yeah. And because when an expert looks at the code chatgpt produces, the flaws are more obvious. It programs with the skill of the median programmer on GitHub. For beginners and people who do cookie cutter work, this can be incredible because it writes the same or better code they could write, fast and for free. But for experts, the code it produces is consistently worse than what we can do. At best my pride demands I fix all its flaws before shipping. More commonly, it’s a waste of time to ask it to help, and I need to code the solution from scratch myself anyway.
I use it for throwaway prototypes and demos. And whenever I’m thrust into a language I don’t know that well, or to help me debug weird issues outside my area of expertise. But when I go deep on a problem, it’s often worse than useless.
This is why AI is the perfect management Rorschach test.
To management (out of IC roles for long enough to lose their technical expertise), it looks perfect!
To ICs, the flaws are apparent!
So inevitably management greenlights new AI projects* and behaviors, and then everyone is in the 'This was my idea, so it can't fail' CYA scenario.
* Add in a dash of management consulting advice here, and note that management consultants' core product was already literally 'something that looks plausible enough to make execs spend money on it'
In my experience (with ChatGPT 5.1 as of late), the AI follows a problem->solution internal logic and doesn't stop to think about how to structure its code.
If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
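For example, a dev's instinct is to write the generic part once and reuse it. A rough sketch of what that looks like (the makeCrudRoutes helper and in-memory store here are hypothetical, just to illustrate the instinct, and assume an Express-style server):

    const express = require("express"); // assumes Express is installed
    const app = express();
    app.use(express.json());

    // Trivial in-memory store so the sketch is self-contained.
    function makeStore() {
      const items = new Map();
      let nextId = 1;
      return {
        list: () => [...items.values()],
        get: (id) => items.get(Number(id)),
        create: (body) => { const item = { id: nextId++, ...body }; items.set(item.id, item); return item; },
        update: (id, body) => { const item = { ...items.get(Number(id)), ...body }; items.set(Number(id), item); return item; },
        remove: (id) => items.delete(Number(id)),
      };
    }

    // The "pull out the common parts" move: one generic factory instead of
    // five copy-pasted endpoint blocks.
    function makeCrudRoutes(name, store) {
      app.get(`/${name}`, (req, res) => res.json(store.list()));
      app.get(`/${name}/:id`, (req, res) => res.json(store.get(req.params.id)));
      app.post(`/${name}`, (req, res) => res.status(201).json(store.create(req.body)));
      app.put(`/${name}/:id`, (req, res) => res.json(store.update(req.params.id, req.body)));
      app.delete(`/${name}/:id`, (req, res) => { store.remove(req.params.id); res.status(204).end(); });
    }

    for (const name of ["users", "orders", "products", "invoices", "carts"]) {
      makeCrudRoutes(name, makeStore());
    }
    app.listen(3000);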
I feel like the AI has a strong bias towards adding things, and not removing them. The most obviously wrong thing is with CSS - when I try to do some styling, it gets 90% of the way there, but there's almost always something that's not quite right.
Then I tell the AI to fix a style, since that div is getting clipped or not correctly centered etc.
It almost always keeps adding properties, and after 2-3 tries and an incredibly bloated style, I delete the thing and take a step back and think logically about how to properly lay this out with flexbox.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
>
> A dev wouldn't do this, they would try to figure out the common parts of code, pull them out into helpers, and try to make as little duplicated code as possible.
>
> I feel like the AI has a strong bias towards adding things, and not removing them.
I suspect this is because an LLM doesn't build a mental model of the code base like a dev does. It can decide to look at certain files, and maybe you can improve this by putting a broad architecture overview of a system in an agents.md file, I don't have much experience with that.
But for now, I'm finding it most useful to still think in terms of code architecture myself, give it small steps that are part of that architecture, and then iterate based on my own review of the AI-generated code. I don't have the confidence in it to just let some agent plan, and then run for tens of minutes or even hours building out a feature. I want to be in the loop earlier to set the direction.
A good system prompt goes a long way with the latest models. Even just something as simple as "use DRY principles whenever possible." or prompting a plan-implement-evaluate cycle gets pretty good results, at least for tasks that are doing things that AI is well trained on like CRUD APIs.
> If you ask for an endpoint to a CRUD API, it'll make one. If you ask for 5, it'll repeat the same code 5 times and modify it for the use case.
I don’t think this is an inherent issue to the technology. Duplicate code detectors have been around for ages. Given an AI agent a tool which calls one, and ask it to reduce duplication, it will start refactoring.
Of course, there is a risk of going too far in the other direction: refactorings which technically reduce duplication but which have unacceptable costs (you can be too DRY). But some possible solutions: (a) ask it to judge if the refactoring is worth it or not - if it judges no, just ignore the duplication and move on; (b) get a human to review the decision in (a); (c) if the AI repeatedly makes the wrong decision (according to the human), try prompt engineering, or maybe even just some hardcoded heuristics.
It actually is somewhat a limit of the technology. LLMs can't go back and modify their own output, later tokens are always dependent on earlier tokens and they can't do anything out of order. "Thinking" helps somewhat by allowing some iteration before they give the user actual output, but that requires them to write it the long way and THEN refactor it without being asked, which is both very expensive and something they have to recognize the user wants.
Coding agents can edit their own output - because their output is tool calls to read and write files, and so it can write a file, run some check on it, modify the file to try to make it pass, run the check again, etc
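The skeleton of that loop is pretty simple. A sketch, with everything hypothetical (askModel stands in for whatever LLM call the agent makes, and the checks are just examples):

    const { writeFileSync } = require("node:fs");
    const { execSync } = require("node:child_process");

    // Write a file, run checks, feed failures back to the model, repeat.
    async function writeUntilGreen(path, task, askModel, maxTries = 3) {
      let feedback = "";
      for (let attempt = 0; attempt < maxTries; attempt++) {
        const code = await askModel(`${task}\n${feedback}`); // model proposes new file contents
        writeFileSync(path, code);
        try {
          execSync(`node --check ${path}`, { stdio: "pipe" }); // does it even parse?
          execSync("npm test", { stdio: "pipe" });             // does the test suite pass?
          return true;                                         // checks pass, stop iterating
        } catch (err) {
          // Feed the failure back so the next attempt can revise its own earlier output.
          feedback = `The previous attempt failed:\n${err.stderr || err.message}\nPlease fix it.`;
        }
      }
      return false; // give up after maxTries and let a human look at it
    }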
Sorry, but from where I sit, this only marginally closes the gap from AI to truly senior engineers.
Basically human junior engineers start by writing code in a very procedural and literal style with duplicate logic all over the place because that's the first step in adapting human intelligence to learning how to program. Then the programmer realizes this leads to things becoming unmaintainable and so they start to learn the abstraction techniques of functions, etc. An LLM doesn't have to learn any of that, because they already know all languages and mechanical technique in their corpus, so this beginning journey never applies.
But what the junior programmer has that the LLM doesn't, is an innate common sense understanding of human goals that are driving the creation of the code to begin with, and that serves them through their entire progression from junior to senior. As you point out, code can be "too DRY", but why? Senior engineers understand that DRYing up code is not a style issue, its more about maintainability and understanding what is likely to change, and what will be the apparent effects to human stakeholders who depend on the software. Basically do these things map to things that are conceptually the same for human users and are unlikely to diverge in the future. This is also a surprisingly deep question as perhaps every human stakeholder will swear up and down they are the same, but nevertheless 6 months from now a problem arises that requires them to diverge. At this point there is now a cognitive overhead and dissonance of explaining that divergence of the users who were heretofore perfectly satisfied with one domain concept.
Ultimately the value function for success of a specific code factoring style depends on a lot of implicit context and assumptions that are baked into the heads of various stakeholders for the specific use case and can change based on myriad outside factors that are not visible to an LLM. Senior engineers understand the map is not the territory, for LLMs there is no territory.
I’m not suggesting AIs can replace senior engineers (I don’t want to be replaced!)
But, senior engineers can supervise the AI, notice when it makes suboptimal decisions, intervene to address that somehow (by editing prompts or providing new tools)… and the idea is gradually the AI will do better.
Rather than replacing engineers with AIs, engineers can use AIs to deliver more in the same amount of time
Which I think points out the biggest issue with current AI - knowledge workers in any profession at any skill level tend to get the impression that AI is very impressive, but is prone to fail at real world tasks unpredictably, thus the mental model of 'junior engineer' or any human that does its simple tasks by itself reliably, is wrong.
AI operating at all levels needs to be constantly supervised.
Which would still make AI a worthwhile technology, as a tool, as many have remarked before me.
The problem is, companies are pushing for agentic AI instead of one that can do repetitive, short-horizon tasks in a fast and reliable manner.
Sure. My point was AI was already 25% of the way there even with their verbose messy style. I think with your suggestions (style guidance, human in the loop, etc) we get at most 30% of the way there.
Bad code is only really bad if it needs to be maintained.
If your AI reliably generates working code from a detailed prompt, the prompt is now the source that needs to be maintained. There is no important reason to even look at the generated code
> the prompt is now the source that needs to be maintained
The inference response to the prompt is not deterministic. In fact, it’s probably chaotic since small changes to the prompt can produce large changes to the inference.
> The inference response to the prompt is not deterministic.
So? Nobody cares.
Is the output of your C compiler the same every time you run it? How about your FPGA synthesis tool? Is that deterministic? Are you sure?
What difference does it make, as long as the code works?
The C compiler will still make working programs every time, so long as your code isn’t broken. But sometimes the code chatgpt produces won’t work. Or it'll kinda work but you’ll get weird, different bugs each time you generate it. No thanks.
I think this might be plausible in the future, but it needs a lot more tooling. For starters you need to be able to run the prompt through the exact same model so you can reproduce a "build".
OK, then the current models aren't as good as I thought/hoped.
I guess one thing it means is that we still need extensive test suites. I suppose an LLM can write those too.
Even the exact same model isn't enough. There are several sources of nondeterminism in LLMs. These would all need to be squashed or seeded - which as far as I know isn't a feature that openai / anthropic / etc provide.
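The most visible source is sampling itself. A toy sketch (not how any provider's stack actually looks, just the idea):

    // Toy next-token sampler: the token is drawn from a distribution, not chosen
    // deterministically, so identical prompts can produce different continuations.
    function sampleToken(logits, temperature = 1.0) {
      const scaled = logits.map((l) => l / temperature);
      const max = Math.max(...scaled);
      const exps = scaled.map((l) => Math.exp(l - max)); // numerically stable softmax
      const total = exps.reduce((a, b) => a + b, 0);
      let r = Math.random() * total;                     // unseeded RNG: different run, different token
      for (let i = 0; i < exps.length; i++) {
        r -= exps[i];
        if (r <= 0) return i;
      }
      return exps.length - 1;
    }

And even at temperature 0, batching and floating-point ordering on the server can still nudge the logits around, so two identical requests may not match exactly.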
Well.. except the AI models are nondeterministic. If you ask an AI the same prompt 20 times, you'll get 20 different answers. Some of them might work, some probably won't. It usually takes a human to tell which are which and fix problems & refactor. If you keep the prompt, you can't manually modify the generated code afterwards (since it'll be regenerated). Even if you get the AI to write all the code correctly, there's no guarantee it'll do the same thing next time.
> It programs with the skill of the median programmer on GitHub
This is a common intuition but it's provably false.
The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.
Eighteen months ago GPT-4 was outperforming 85% of human participants in coding contests. And people who participate in coding contests are already well above the median skill level on Github.
And capability has gone way up in the last 18 months.
The best argument I've yet heard against the effectiveness of AI tools for SW dev is the absence of an explosion of shovelware over the past 1-2 years.
https://mikelovesrobots.substack.com/p/wheres-the-shovelware...
Basically, if the tools are even half as good as some proponents claim, wouldn't you expect at least a significant increase in simple games on Steam or apps in app stores over that time frame? But we're not seeing that.
Are you sure we aren't seeing an increase in steam games?
Charts I'm looking at show a mild exponential around 2024 https://www.statista.com/statistics/552623/number-games-rele...
Also there's probably a bottleneck in manual review time.
The shovelware is the companies getting funded…
https://docs.google.com/spreadsheets/d/1Uy2aWoeRZopMIaXXxY2E...
The shovelware software is coming…
Interesting approach. I can think of one more explanation the author didn't consider: what if software development time wasn't the bottleneck to what he analyzed? The chart for Google Play app submissions, for example, goes down because Google made it much more difficult to publish apps on their store in ways unrelated to software quality. In that case, it wouldn't matter whether AI tools could write a billion production-ready apps, because the limiting factor is Google's submission requirements.
There are other charts besides Google Play. Particularly insightful is the Steam chart, as Steam is already full of shovelware and, in my experience, many developers wish they were making games but the pay is bad.
GitHub repos is pretty interesting too but it could be that people just aren't committing this stuff. Showing zero increase is unexpected though.
I've had this same thought for some time. There should have been an explosion in startups, new product from established companies, new apps by the dozen every day. If LLMs can now reliably turn an idea into an application, where are they?
There is a deluge, every day. Just nobody notices or uses them.
The argument against this is that shovelware has a distinctly different distribution model now.
App stores have quality hurdles that didn’t exist in the diskette days. The types of people making low quality software now can self publish (and in fact do, often), but they get drowned out by established big dogs or the ever-shifting firehose of our social zeitgeist if you are not where they are.
Anyone who has been on Reddit this year in any software adjacent sub has seen hundreds (at minimum) of posts about “feedback on my app” or slop posts doing a god awful job of digging for market insights on pain points.
The core problem with this guy’s argument is that he’s looking in the wrong places - where a SWE would distribute their stuff, not a normie - and then drawing the wrong conclusions. And I am telling you, normies are out there, right now, upchucking some of the sloppiest of slop software you could ever imagine with wanton abandon.
Interesting, I would make the exact opposite conclusion from the same data: if AI coding was that bad, we'd see more crapware.
Algorithmic coding contests are not an equivalent skillset to professional software development
Trying to figure out how to align this with my experiences (which match the parents’ comment), and I have an idea:
Coding contests are not like my job at all.
My job is taking fuzzy human things and making code that solves it. Frankly AI isn’t good at closing open issues on open source projects either.
I don't think this disproves my claim, for several reasons.
First, I don't know where those human participants came from, but if you pick people off the street or from a college campus, they aren't going to be the world's best programmers. On the other hand, github users are on average more skilled than the average CS student. Even students and beginners who use github usually don't have much code there. If the LLMs are weighted to treat every line of code about same, they'd pick up more lines of code from prolific developers (who are often more experienced) than they would from beginners.
Also in a coding contest, you're under time pressure. Even when your code works, it's often ugly and thrown together. On github, the only code I check in is code that solves whatever problem I set out to solve. I suspect everyone writes better code on github than we do in programming competitions. I suspect if you gave the competitors functionally unlimited time to do the programming competition, many more would outperform GPT-4.
Programming contests also usually require that you write a fully self contained program which has been very well specified. The program usually doesn't need any error handling, or need to be maintained. (And if it does need error handling, the cases are all fully specified in the problem description). Relatively speaking, LLMs are pretty good at these kind of problems - where I want some throwaway code that'll work today and get deleted tomorrow.
But most software I write isn't like that. And LLMs struggle to write maintainable software in large projects. Most problems aren't so well specified. And for most code, you end up spending more effort maintaining the code over its lifetime than it takes to write in the first place. Chatgpt usually writes code that is a headache to maintain. It doesn't write or use local utility functions. It doesn't factor its code well. The code is often overly verbose. It often writes code that's very poorly optimized. Or the code contains quite obvious bugs for unexpected input - like overflow errors or boundary conditions. And the code it produces very rarely handles errors correctly. None of these problems really matter in programming competitions. But it does matter a lot more when writing real software. These problems make LLMs much less useful at work.
Chess AI trained at specific human levels performs better than any humans at those levels, because the random mistakes get averaged out.
https://www.maiachess.com
Coding Contest != Software Engineering
Or even solving problems that business need to solve, generally speaking.
This complete misunderstanding of what software engineering even is is the major reason so many engineers are fed up with the clueless leaders foisting AI tools upon their orgs because they apparently lack the critical reasoning skills to be able to distinguish marketing speak from reality.
> The fact that LLMs are trained on a corpus does not mean their output represents the median skill level of the corpus.
It does, by default. Try asking ChatGPT to implement quicksort in JavaScript, the result will be dogshit. Of course it can do better if you guide it, but that implies you recognize dogshit, or at least that you use some sort of prompting technique that will veer it off the beaten path.
I asked the free version of ChatGPT to implement quicksort in JS. I can't really see much wrong with it, but maybe I'm missing something? (Ugh, I just can't get HN to format code right... pastebin here: https://pastebin.com/tjaibW1x)
----
    function quickSortInPlace(arr, left = 0, right = arr.length - 1) {
      if (left < right) {
        const pivotIndex = partition(arr, left, right);
        quickSortInPlace(arr, left, pivotIndex - 1);
        quickSortInPlace(arr, pivotIndex + 1, right);
      }
      return arr;
    }

    // Partition body reconstructed to match what the replies below describe:
    // last-element pivot, destructuring swap, no custom comparator.
    function partition(arr, left, right) {
      const pivot = arr[right];
      let i = left;
      for (let j = left; j < right; j++) {
        if (arr[j] < pivot) {
          [arr[i], arr[j]] = [arr[j], arr[i]];
          i++;
        }
      }
      [arr[i], arr[right]] = [arr[right], arr[i]];
      return i;
    }
This is exactly the level of code I've come to expect from chatgpt. It's about the level of code I'd want from a smart CS student. But I'd hope to never use this in production:
- It always uses the last item as a pivot, which will give it pathological O(n^2) performance if the list is sorted. Passing an already sorted list to a sort function is a very common case. Good quicksort implementations will use a random pivot, or at least the middle pivot so re-sorting lists is fast.
- If you pass already sorted data, the recursive call to quickSortInPlace will take up stack space proportional to the size of the array. So if you pass a large sorted array, not only will the function take n^2 time, it might also generate a stack overflow and crash.
- This code: ... = [arr[j], arr[i]]; Creates an array and immediately destructures it. This is - or at least used to be - quite slow. I'd avoid doing that in the body of quicksort's inner loop.
- There's no way to pass a custom comparator, which is essential in real code.
I just tried in firefox:
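Something like this - a pre-sorted array of a million numbers, which hits exactly the pathological case described above:

    // One million already-sorted numbers: worst case for a last-element pivot.
    const arr = Array.from({ length: 1_000_000 }, (_, i) => i);
    quickSortInPlace(arr); // O(n^2) behaviour plus ever-deeper recursion: hangs, then blows the call stack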
My computer ran for about a minute, then the JavaScript virtual machine crashed. This is about the quality of quicksort implementation I'd expect to see in a CS class, or in a random package in npm. If someone on my team committed this, I'd tell them to go rewrite it properly. (Or just use a standard library function - which wouldn't have these flaws.)

OK, you just added requirements the previous poster had not mentioned. Firstly, how often do you really need to sort a million elements in a browser anyway? I expect that sort of heavy lifting would usually be done on the server, where you'd also want to do things like paging.
Secondly, if a standard implementation was to be used, that's essentially a No-Op. AI will reuse library functions where possible by default and agents will even "npm install" them for you. This is purely the result of my prompt, which was simply "Can you write a QuickSort implementation in JS?"
In any case, to incorporate your feedback, I simply added "that needs to sort an array of a million elements and accepts a custom comparator?" to the initial prompt and reran in a new session, and this is what I got in less than 5 seconds. It runs in about 160ms on Chrome:
https://pastebin.com/y2jbtLs9
How long would your team-mate have taken? What else would you change? If you have further requirements, seriously, you can just add those to the prompt and try it for yourself for free. I'd honestly be very curious to see where it fails.
However, this exchange is very illustrative: I feel like a lot of the negativity is because people expect AI to read their minds and then hold it against it when it doesn't.
> OK, you just added requirements the previous poster had not mentioned.
Lol of course! The real requirements for a piece of software are never specified in full ahead of time. Figuring out the spec is half the job.
> Firstly, how often do you really need to sort a million elements in a browser anyway? I expect that sort of heavy lifting would usually be done on the server
Who said anything about the browser? I run javascript on the server all the time.
Don't defend these bugs. 1 million items just isn't very many items for a sort function. On my computer, the built in javascript sort function can sort 1 million sorted items in 9ms. I'd expect any competent quicksort implementation to be able to do something similar. Hanging for 1 minute then crashing is a bug.
If you want a use case, consider the very common case of sorting user-supplied data. If I can send a JSON payload to your server and make it hang for 1 minute then crash, you've got a problem.
> If you have further requirements, seriously, you can just add those to the prompt and try it for yourself for free. [..] How long would your team-mate have taken?
We've gotta compare like for like here. How long does it take to debug code like this when an AI generates it? It took me about 25 minutes to discover & verify those problems. That was careful work. Then you reprompted it, and then you tested the new code to see if it fixed the problems. How long did that take, all added together? We also haven't tested the new code for correctness or to see if it has new bugs. Given its a complete rewrite, there's a good chance chatgpt introduced new issues. I've also had plenty of instances where I've spotted a problem and chatgpt apologises then completely fails to fix the problem I've spotted. Especially lifetime issues in rust - its really bad at those!
The question is this: Is this back and forth process faster or slower than programming quicksort by hand? I'm really not sure. Once we've reviewed and tested this code, and fixed any other problems in it, we're probably looking at about an hour of work all up. I could probably implement quicksort at a similar quality in a similar amount of time. I find writing code is usually less stressful than reviewing code, because mistakes while programming are usually obvious. But mistakes while reviewing are invisible. Neither you nor anyone else in this thread spotted the pathological behavior this implementation had with sorted data. Finding problems like that by just looking is hard.
Quicksort is also the best case for LLMs. Its a well understood, well specified problem with a simple, well known solution. There isn't any existing code it needs to integrate with. But those aren't the sort of problems I want chatgpt's help solving. If I could just use a library, I'm already doing that. I want chatgpt to solve problems its probably never seen before, with all the context of the problem I'm trying to solve, to fit in with all the code we've already written. It often takes 5-10 minutes of typing and copy+pasting just to write a suitable prompt. And in those cases, the code chatgpt produces is often much, much worse.
> I feel like a lot of the negativity is because people expect AI to read their minds and then hold it against it when it doesn't.
Yes exactly! As a senior developer, my job is to solve the problem people actually have, not the problem they tell me about. So yes of course I want it to read my mind! Actually turning a clear spec into working software is the easy part. ChatGPT is increasingly good at doing the work of a junior developer. But as a senior dev / tech lead, I also need to figure out what problems we're even solving, and what the best approach is. ChatGPT doesn't help much when it comes to this kind of work.
(By the way, that is basically a perfect definition of the difference between a junior and senior developer. Junior devs are only responsible for taking a spec and turning it into working software. Senior devs are responsible for reading everyone's mind, and turning that into useful software.)
And don't get me wrong. I'm not anti chatgpt. I use it all the time, for all sorts of things. I'd love to use it more for production grade code in large codebases if I could. But bugs like this matter. I don't want to spend my time babysitting chatgpt. Programming is easy. By the time I have a clear specification in my head, its often easier to just type out the code myself.
By volume, the vast majority of code on GitHub is written by students. Think about that when you average over GitHub to train an AI.
I saw an ad for Lovable. The very first thing I noticed was an exchange where the promoter asked the AI to fix a horizontal scroll bar that was present on his product listing page. This is a common issue with web development, especially so for beginners. The AI’s solution? Hide overflow on the X axis. Probably the most common incorrect solution used by new programmers.
But to the untrained eye the AI did everything correctly.
Yes. The people who are amazed with AI were never that good at a particular subject area in the first place - I don't care who you are. You were not good enough - how do I know this? Well, I know economics, corporate finance, accounting et al very deeply. I've engaged with LLMs for years now and still they cannot get below the surface level and are not improving further than this.
It's easy to recall information, but something entirely different to do something with that information. Which is what those subject areas are all about - taking something (like a theory) and applying it in a disciplined manner given the context.
Thats not to diminish what LLMs can do. But lets get real.
I am not a great (some would argue, not even good) programmer, and I find a lot of issues with LLM generated code. Even Claude pro does really weird dumb stuff.
> Generally I've noticed the better someone is at coding the more critical they are of AI generated code.
I'm having a dejavu of yesterday's discussion: https://news.ycombinator.com/item?id=46126988
It works both ways. If you are good, it's also easier to spot moments of brilliance from AI agent when it saves you hours of googling, reading docs, some trial and error while you pour yourself cup of coffee and ponder the next steps. You can spot when a single tab press saved you minutes.
Yes. Love it for quick explorations of available options, reviewing my work, having it propose tests, getting its help with debugging, and all kinds of general subject matter questions. I don’t trust it to write anything important but it can help with a sketch.
IMO this is mostly just an ego thing. I often see staff+ engineers make up reasons why AI is bad, when really it’s just a prompting skill issue.
When something threatens a thing that gives you value, people tend to hate it
It starts to make you realize how unaware many people must be of what their programs are doing to accept AI stuff wholesale.
This seems to be the overall trend in AI. If you're an expert in something, you can see where it's wrong. If you're not, you can't.
Engineers at Google are much less likely to be doing green-field generation of large amounts of code. It's much more incremental, carefully measured changes to mature, complex software stacks, and done within the Google ecosystem, which is heavily divergent from the OSS-focused world of startups, where most training data comes from
That is the problem.
AI is optimized to solve a problem no matter what it takes. It will try to solve one problem by creating 10 more.
I think long-horizon agentic AI is just snake oil at this point. AI works best if you can segment your task into 5-10 minute chunks, including the AI generation time, correction time and engineer review time. To put it another way, a 10 minute sync with a human is necessary, otherwise it will go astray.

Then it just makes software engineering into a bothersome supervising job. Yes I typed less, but I didn’t feel the thrill of doing so.
> it just makes software engineering into a bothersome supervising job.
I'm pretty sure this is the entire C-level enthusiasm for AI in a nutshell. Until AI, SWEs resisted being mashed into a replaceable-cog job that the C-suite doesn't have to think or care about. AI is the magic beans that are just tantalizingly out of reach, and boy do they want it.
Luckily for us, technologies like SQL made similar promises (for more limited domains) and C suites couldn't be bothered to learn that stuff either.
Ultimately they are mostly just clueless, so we will either end up with legions of way shittier companies than we have today (because we let them get away with offloading a bunch of work to tools they don't understand and accepting low quality output) or we will eventually realize the continued importance of human expertise.
But every version of AI for almost a century has had this property, right down from the first vocoders that were going to replace entire call centers to the convolutional AI that was going to give us self-driving cars. Yes, a century: vocoders were 1930s technology, but all they could essentially do was read the time aloud.
... except they didn't. In fact most AI tech were good for a nice demo and little else.
In some cases, really unfairly. For instance, convnet map matching doesn't work well not because it doesn't work well, but because you can't explain to humans when it won't work well. It's unpredictable, like a human. If you ask a human to map a building in heavy fog they may come back with "sorry". SLAM with lidar is "better", except no, it's a LOT worse. But when it fails it's very clear why it fails, because it's a very visual algorithm.

People expect of AIs that they can replace humans, but that doesn't work, because people also demand AIs never say no, never fail, like the Star Trek computer (the only problem the Star Trek computer ever has is that it is misunderstood or follows policy too well). If you have a delivery person, occasionally they will radically modify the process, or refuse to deliver. No CEO is ever going to allow an AI drone to change the process, and no CEO will ever accept "no" from an AI drone. More generally, no business person seems to ever accept a 99% AI solution, and all AI solutions are 99%, or actually mostly less.
AI winters. I get the impression another one is coming, and I can feel it's going to be a cold one. But in 10 years, LLMs will be in a lot of stuff, like with every other AI winter. A lot of stuff ... but a lot less than CEOs are declaring it will be in today.
Yeah but Google won’t expect you to use AI tools developed outside Google and trained on primarily OSS code. It would expect you to use the Google internal AI tools trained on google3, no?
There are plenty of good tasks left, but they're often one-off/internal tooling.
Last one at work: "Hey, here are the symptoms for a bug, they appeared in <release XYZ> - go figure out the CL range and which 10 CLs I should inspect first to see if they're the cause"
(Well suited to AI, because worst case I've looked at 10 CLs in vain, and best case it saved me from manually scanning through several 1000 CLs - the EV is net positive)
It works for code generation as well, but not in a "just do my job" way, more in a "find which haystack the needle is in, and what the rough shape of the new needle is". Blind vibecoding is a non-starter. But... it's a non-starter for greenfields too, it's just that the FO of FAFO is a bit more delayed.
My internal mnemonic for targeting AI correctly is 'It's easier to change a problem into something AI is good at, than it is to change AI into something that fits every problem.'
But unfortunately the nuances in the former require understanding strengths and weaknesses of current AI systems, which is a conversation the industry doesn't want to have while it's still riding the froth of a hype cycle.
Aka 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'
> 'any current weaknesses in AI systems are just temporary growing pains before an AGI future'
I see we've met the same product people :)
I had a VP of a revenue cycle team tell me that his expectation was that they could fling their spreadsheets and Word docs on how to do calculations at an AI powered vendor, and AI would be able to (and I direct quote) "just figure it all out."
That's when I realized how far down the rabbit hole marketing to non-technical folks on this was.
I think it’s a fair point that Google has more stakeholders with a serious investment in some flubbed AI-generated code not tanking their share value, but I’m not sure the rest of it is all that different from what an engineer at $SOME_STARTUP does after the first ~8 months the company is around. Maybe some folks throwing shit at a wall to find PMF are really getting a lot out of this, but most of us are maintaining and augmenting something we don’t want to break.
Excuse the throwaway. It's not even just the employees, but it doesn't even seem like the technical leadership seriously cares about internal AI use. Before I left all they pushed was code generation, but my work was 80% understanding 5-20 year old code and 20% actual development. If they put any noticeable effort into an LLM that could answer "show me all users of Proto.field that would be affected by X", my life would've been changed for the better, but I don't think the technical leadership understands this, or they don't want to spare the TPUs.
When I started at my post-Google job, I felt so vindicated when my new TL recommended that I use an LLM to catch up if no one was available to answer my questions.
Being forced to adopt tools regardless of fit to workflow (and being smart enough to understand the limitations of the tools despite management's claims) correlates very well to being negative on them.
Makes sense to me.
From the outside, the AI push at Google very closely resembles the death march that Google+ was, but immensely more intense, with the entire tech ecosystem following suit.
I notice that experts tend to be pretty bimodal, e.g. chefs either enjoy really well-made food or some version of the scrappy fast-food comfort they grew up eating.
Bimodal here suggests either/or which I don’t think is correct for either chefs or code enjoyers. I think experts tend to eschew snobbery more and can see the value in comfort food, quick and dirty AI prototypes or boilerplate, or say cheap and drinkable wine, while also being able to appreciate what the truly high-end looks like.
It’s the mid-range with pretensions that gets squeezed out. I absolutely do not need a $40 bottle of wine to accompany my takeout curry, I definitely don’t need truffle slices added to my carbonara, and I don’t need to hand-roll conceptually simple code.
Googler, opinion is my own.
Working in our mega huge code base with lots of custom tooling and bleeding-edge stuff hasn't been the best fit for AI-generated code compared to most companies.
I do think AI as a rubber-ducky / research-assistant type of tool has been overall helpful to me as a SWE.
You cannot trust someone’s judgement on something if that something can result in them being unemployed.
Or if they stand to make a lot of money.
See both sides can be pithy.
because autocorrect and predictive text don't help when half your job is revisions
so I would love to be a fly in their office and hear all their convos
People who've spent their life perfecting a craft are exactly the people you'd expect would be most negative about something genuinely disrupting that craft. There is significant precedent for this. It's happened repeatedly in history. Really smart, talented people routinely and in fact quite predictably resist technology that disrupts their craft, often even at great personal cost within their own lifetime.
Yes you get it. Obviously “writing code” will die. It will hold on in legacy systems that need bespoke maintenance, like COBOL systems have today. There will be artisanal coders, like there are artisanal blacksmiths, who do it the old fashioned way, and we will smile and encourage them. Within 20 years, writing code syntax will be like writing assembly: something they make you do in school, something that your dad reminds you about the good old days.
I talked to someone who was in denial about this, until he said he had conflated writing code with solving problems. Solving problems isn’t going anywhere! Solving problems: you observe a problem, write out a solution, implement that solution, measure the problem again, consider your metrics, then iterate.
“Implement it” can mean writing code, like the past 40 years, but it hasn’t always been. Before coding, it was economics and physics majors, who studied and implemented scientific management. For the next 20 years, it will be “describe the tool to Claude code and use the result”.
But Claude cannot code at all; it's gonna shit the bed, and it learns only from human coders to be able to even know an example is a solution rather than malware...
Every greenfield project uses claude code to write 90+% of code. Every YC startup for the past six months says AI writes 90+% of their code. Claude code writes 90+% of my code. That’s today.
It works great. I have a faster iteration cycle. For existing large codebases, AI modifications will continue to be okay-ish. But new companies with a faster iteration cycle will outcompete olds ones, and so in the long run most codebases will use the same “in-distribution” tech stacks and architecture and design principles that AI is good at.
I don't know that I consider recognizing the limitations of a tool to be resistance to the idea. It makes sense that experts would recognize those limitations most acutely -- my $30 Harbor Freight circular saw is a lifesaver for me when I'm doing slapdash work in my shed, but it'd be a critical liability for a professional carpenter needing precision cuts. That doesn't mean the professional carpenter is resistant to the idea of using power saws, just that they necessarily must be more discerning than I am.
It's the latest tech holy war: Tabs vs Spaces but more existential. I'm usually anti-hype, and I've been convinced of AI's use over and over when it comes to coding. And whenever I talk about it, I see that I come across as an evangelist. Some people appreciate that; online I get a lot of pushback despite having tangible examples of how it has been useful.
I don't see it that way. Tabs, spaces, curly brace placement, Vim, Emacs, VSCode, etc are largely aesthetic choices with some marginal unproven cognitive implications.
I find people mostly prefer what they are used to, and if your preference was so superior then how could so many people build fantastic software using the method you don't like?
AI isn't like that. AI is a bunch of people telling me this product can do wonderful things that will change society and replace workers, yet almost every time I use it, it falls far short of that promise. AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.
Rinse and repeat for many "one-off" tasks.
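For a sense of what that looks like, here's a minimal sketch of such a throwaway inspector, assuming the data happens to be a CSV and that pandas and streamlit are available; the file name and columns are invented for illustration, not production code (that's the point):

    # Hypothetical throwaway data inspector -- run, look, delete.
    import pandas as pd
    import streamlit as st

    df = pd.read_csv("events.csv")  # assumed input file, purely illustrative

    col = st.selectbox("Column to inspect", df.columns)
    query = st.text_input("Optional pandas query (e.g. latency_ms > 500)", "")

    # Filter with a free-form pandas query if one was typed, else show everything.
    view = df.query(query) if query else df
    st.write(view[col].describe())
    st.bar_chart(view[col].value_counts().head(50))
    st.dataframe(view.head(200))

Run it with `streamlit run inspect.py`, poke at the data, then delete the file.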
It's not going away, you need to learn how to use it. shrugs shoulders
The issue is people trying to use these AI tools to investigate complex data, not the throwaway UI part.
I work as the non-software kind of engineer at an industrial plant, and there is starting to emerge a trend of people who just blindly trust the output of AI chat sessions without understanding what the chatbot is echoing at them, which is wasteful of their time and in some cases my time.
This is not new; in the past I have seen engineers who use (abuse) statistics/regression tools etc. without understanding what the output was telling them, but it is getting worse now.
It is not uncommon to hear something like: "Oh I investigated that problem and this particular issue we experienced was because of reasons x, y and z."
Then when you push back, because what they've said sounds highly unlikely, it boils down to: "I don't know, that is what the AI told me."
Then, if they are sufficiently optimistic, they'll go back and prompt it with "please supply evidence for your conclusion" or something similar, and it will supply paragraphs of plausible-sounding text, but when you dig into what it is saying there are inconsistencies or made-up citations. I've seen it say things that were straight-up incorrect and went against the laws of thermodynamics, for example.
It has become the new "I threw the kitchen sink into a multivariate regression and X emerged as significant - therefore we should address x"
I'm not a complete skeptic; I think AI has some value, for example if you use it as a more powerful search engine by asking it something like "What are some suggested techniques for investigating x" or "What are the limitations of Method Y" etc. It can point you to the right place and assist you with research; it might find papers from other fields, or similar. But it is not something you should be relying on to do all of the research for you.
But how do you know you're getting the correct picture from that throwaway UI? A little while back a blog post was shared here in which the author praised AI for his vibe-coded earth-viewer app, which supposedly used Vulkan to render inside a GUI window. Unfortunately, that wasn't the case: the AI had just copied code from somewhere and inserted a rudimentary software renderer. The AI couldn't do what was asked because it had seldom been done. Nobody on the internet had ever discussed that particular objective, so it wasn't in the training set.
The lesson to learn is that these are "large-language models." That means it can regurgitate what someone else has done before textually, but not actually create something novel. So it's fine if someone on the internet has posted or talked about a quick UI in whatever particular toolkit you're using to analyze data. But it'll throw out BS if you ask for something brand new. I suspect a lot of AI users are web developers who write a lot of repetitive rote boilerplate, and that's the kind of thing these LLMs really thrive with.
> But how do you know you're getting the correct picture from that throwaway UI?
You get the AI to generate code that lets you spot-check individual data points :-)
Most of my work these days is in fact that kind of code. I'm working on something research-y that requires a lot of visualization, and at this point I've actually produced more throwaway code than code in the project.
Here's an example: I had ChatGPT generate some relatively straightforward but cumbersome geometric code. Saved me 30 - 60 minutes right there, but to be sure, I had it generate tests, which all passed. Another 30 minutes saved.
I reviewed the code and the tests and felt it needed more edge cases, which I added manually. However, these started failing and it was really cumbersome to make sense of a bunch of coordinates in arrays.
So I had it generate code to visualize my test cases! That instantly showed me that some assertions in my manually added edge cases were incorrect, which became a quick fix.
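To give a sense of scale, a minimal sketch of that kind of throwaway visualizer might look like the following; the polygon test cases here are invented and matplotlib is assumed, since the original code isn't shown:

    # Hypothetical sketch: plot each test case's input and expected polygons
    # side by side so a wrong assertion is obvious at a glance.
    import matplotlib.pyplot as plt

    cases = {
        "clip_simple": {
            "input":    [(0, 0), (4, 0), (4, 3), (0, 3)],
            "expected": [(1, 1), (3, 1), (3, 2), (1, 2)],
        },
        "clip_degenerate": {
            "input":    [(0, 0), (2, 0), (2, 2), (0, 2)],
            "expected": [(0, 0), (2, 0), (2, 2), (0, 2)],
        },
    }

    fig, axes = plt.subplots(1, len(cases), figsize=(4 * len(cases), 4), squeeze=False)
    for ax, (name, case) in zip(axes[0], cases.items()):
        for label, pts in (("input", case["input"]), ("expected", case["expected"])):
            xs, ys = zip(*(pts + [pts[0]]))  # close the ring for drawing
            ax.plot(xs, ys, marker="o", label=label)
        ax.set_title(name)
        ax.set_aspect("equal")
        ax.legend()
    plt.show()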
The answer to "how do you trust AI" is human in the loop... AND MOAR AI!!! ;-)
>You can vibe-code a throwaway UI
And then people create non-throwaway things with it and your job, performance report, bonus, and healthcare are tied to being compared to those people who just do what management says without arguing about the correct application of the tool.
If you keep your job, it's now tied to maintaining the garbage those coworkers checked in.
It’s kind of fun watching this comment go up and down :)
There’s so much evidence out there of people getting real value from the tools.
Some questions you can ask yourself are “why doesn’t it work for me?” and “what can I do differently?”.
Be curious, not dogmatic. Ignore the hype, find people doing real work.
I’m an AI skeptic. I like seeing what UIs it spits out, though, which nicely defeats the fear of the blank page staring into my soul. I don’t even use the code; I just take inspiration from the layouts.
Yeah, it helps a lot to make first steps, to overcome writers block, to make you put into words what you'd like to have built.
At one point you might take over, ask it for specific refactors you'd do but are too lazy to do yourself. Or even toss it away entirely and start fresh with better understanding. Yourself or again with agent.
They're good questions! The problem is that I've tried to talk to the people who are getting real value from it, and often the answer ends up being that the value is not as real as they think. One guy gave an excited presentation about how AI let him write 7k LOC per day, expounded for an entire session about how the rest of us should follow in his shoes, and then clarified only in Q&A that reviewers couldn't keep up so he exempted himself from code review.
I’m starting to believe there are situations where the human code review is genuinely not necessary. Here’s a concrete example of something that’s been blowing my mind. I have 25 years of professional coding experience but it’s almost all web, with a few years of iOS in the Objective-C era. I’m also an amateur electronic musician. A couple of weeks ago I was thinking about this plugin that I used to love until the company that made it went under. I’ve long considered trying to make a replacement but I don’t know the first thing about DSP or C++.
You know where this is going. I asked Claude if audio plugins were well represented in its training data, it said yes, off I went. I can’t review the code because I lack the expertise. It’s all C++ with a lot of math and the only math I’ve needed since college is addition and calculating percentages. However, I can have intelligent discussions about design and architecture and music UX. That’s been enough to get me a functional plugin that already does more in some respects than the original. I am (we are?) making it steadily more performant. It has only crashed twice and each time I just pasted the dump into Claude and it fixed the root cause.
Long story short: if you can verify the outcome, do you need to review the code? It helps that no one dies or gets underpaid if my audio plugin crashes. But still, you can’t tell me this isn’t remarkable. I think it’s clear there will be a massive proliferation of niche software.
I don’t think I’ve ever seen someone seriously argue that personal throwaway projects need thorough code reviews of their vibe code. The problem comes in when I’m maintaining a 20 year old code base used by anywhere from 1M to 1B users.
In other words you can’t vibe code in an environment where evaluating “does this code work” is an existential question. This is the case where 7k LOC/day becomes terrifying.
Until we get much better at automatically proving correctness of programs we will need review.
My point about my experience with this plugin isn’t that it’s a throwaway or meaningless project. My point is that it might be enough in some cases to verify output without verifying code. Another example: I had to import tens of thousands of records of relational data. I got AI to write the code for the import. All I verified was that the data was imported correctly. I didn’t even look at the code.
In this context I meant throwaway as "low stakes", not "meaningless". Again, evaluating the output of a database import like that could be existential for your company, given the context. Not to mention there are many cases where evaluating the output isn't feasible for a human.
Human code review does not prove correctness. Almost every software service out there contains bugs. Humans have struggled for decades to reliably produce correct software at scale and speed. Overall, humans have a pretty terrible track record of producing bug-free correct code no matter how much they double-check and review their code along the way.
So the solution is to stop doing code reviews and just YOLO-merge everything? After all, everything is fucked already, how much worse could it get?
For the record, there are examples where human code review and design guidelines can lead to very low-bug code. NASA published their internal guidelines for producing safety-critical code[1]. The problem is that the development cost of software when using such processes is too high for most companies, and most companies don't actually produce safety-critical software.
My experience with the vast majority of LLM code submitted to projects I maintain is that it has subtle bugs that I managed to find through fairly cursory human review. The copilot code review feature on GitHub also tends to miss actual bugs and report nonexistent bugs, making it worse than useless. So in my view, reports of the death of the benefits of human code review have been wildly exaggerated.
[1]: https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...
No, that's not what I wrote, and it's not the correct conclusion. What I wrote (and what you, in fact, also wrote) is that in reality we generally do not actually need provably correct software except in rare cases (e.g., safety-critical applications). Suggesting that human review cannot be reduced or phased out at all until we can automatically prove correctness is wrong, because fully 100% correct and bug-free software is not needed for the vast majority of code being produced. That does not mean we immediately throw out all human review, but the bar for making changes for how we review code is certainly much lower than the above poster suggested.
I don't really buy your premise. What you're suggesting is that all code has bugs, and those bugs have equal severity and distribution regardless of any forethought or rigor put into the code.
You're right, human review and thorough design are a poor approximation of proving assumptions about your code. Yes bugs still exist. No you won't be able to prove the correctness of your code.
However, I can pretty confidently assume that malloc will work when I call it. I can pretty confidently assume that my thoroughly tested linked list will work when I call it. I can pretty confidently assume that following RAII will avoid most memory leaks.
Not all software needs meticulous careful human review. But I believe that the compounding cost of abstractions being lost and invariants being given up can be massive. I don't see any other way to attempt to maintain those other than human review or proven correctness.
I did suggest all code has bugs (up to some limit -- while I wasn't careful to specify this, as discussed above, there does exist an extraordinary level of caution and review that if used can approximate perfect bug-free code, as in your malloc example and in the example of NASA, but that standard is not currently applied to 99.9% of human-generated and human-reviewed code, and it doesn't need to be). I did not suggest anything else you said I suggested, so I'm not sure why you made those parts up.
"Not all software needs meticulous careful human review" is exactly the point. The question of exactly what software needs that kind of review is one whose answer I expect to change over the next 5-10 years. We are already at the point where it's so easy to produce small but highly non-trivial one-off applications that one needn't examine the code at all -- I completely agree with the above poster that we're rapidly discovering new examples of software development where output-verification is all you need, just like right now you don't hand-inspect the machine code generated by your compiler. The question is how far that will be able to go, and I don't think anybody really knows right now, except that we are not yet at the threshold. You keep bringing up examples where the stakes are "existential", but you're underestimating how much software development does not have anything close to existential stakes.
I agree that's remarkable, and I do expect a proliferation of LLM-assisted development in similar niches where verification is easy and correctness isn't critical. But I don't think most software developers today are in such niches.
Most enterprise software I use has serious defects. Professional CAD software for infrastructure is awful. Many are just incremental improvements piled upon software from the 1990s. Bugs last for decades because nobody can understand how the program works, so they just work on one more little VBA plugin at a time. Meanwhile, the capabilities of these programs have fallen completely behind game studios with no budget and no business plan. Where are the results of this human excellence and code quality process? There are tens of thousands of new CVEs every year from code hand-crafted by artisans on their very own MacBooks. How? Perhaps there is the tiny possibility that maybe code quality is mostly an aesthetic judgment that nobody can really define, and just maybe this effort is mostly spent on vague concepts like maintainability or preferential decisions instead of the basics: does it meet the specification? Is the performance getting better or worse?
This is the game changer for me: I don’t have to evaluate tens or hundreds of market options that fit my problem. I tell the machine to solve it, and if it works, then I’m happy. If it doesn’t I throw it away. All in a few minutes and for a few cents. Code is going the way of the disposable diaper, and, if you ever washed a cloth diaper you will know, that’s a good thing.
> I tell the machine to solve it, and if it works, then I’m happy. If it doesn’t I throw it away.
What happens when it seems to work, and you walk away happy, but discover three months later that your circular components don't line up because the LLM-written CAD software used an over-rounded PI = 3.14? I don't work in industrial design, but I faced a somewhat similar issue where an LLM-written component looked fine to everyone until final integration forced us to rewrite it almost entirely.
This is basically me at my job right now. My boss used Claude Code in his spare time to write a "proof of concept" Electron app. It mostly worked but had some weird edge case behaviors. Now it's handed off to me, and fixing those edge cases is requiring me to refactor basically every single thing Claude touched. Vast majority I'm just tossing and redoing from scratch.
The original code "looks" fine, and it even works pretty well, but an LLM cannot avoid critical oversights along the way, and is fundamentally designed to make its mistakes look as plausibly correct as possible. This makes correcting the problems down the line much more annoying (unless you can afford to live with the bugs and keep slapping on more band-aids, I guess).
Most people don't have a problem with using genai for stuff like throwaway UI's. That's not even remotely relevant to the criticisms. People reject having it forced down their throats by companies who are desperate to make us totally reliant on it to justify their insane investments. And people reject the evangelicals who claim that it's going to replace developers because it can spit out mostly working boilerplate.
It's like watching somebody argue that code linting is going to change the face of the world and the rebuttals to the skeptics are arguing that akshually code linting is quite useful....
I have found value for one-off tasks. I forget the exact situation, but I wanted to do some data transformation, something that would normally take me a half hour of awk/sed/bash or python scripting. AI spit it out right away.
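The specifics don't matter much; a hypothetical example of that kind of one-off (this one collapses a raw event log into per-user daily counts, with pandas and made-up file and column names) is about this size:

    # Hypothetical one-off transform: raw events CSV -> per-user daily counts.
    import pandas as pd

    df = pd.read_csv("raw_log.csv", parse_dates=["timestamp"])  # assumed columns
    out = (
        df.assign(day=df["timestamp"].dt.date)
          .groupby(["user_id", "day"])
          .size()
          .reset_index(name="events")
    )
    out.to_csv("events_per_user_per_day.csv", index=False)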
> You can vibe-code a throwaway UI for investigating some complex data in less than 30 minutes. The code quality doesn't matter, and it will make your life much easier.
I think the throwaway part is important here and people are missing it, particularly for non-programmers.
There's a lot of roles in the business world that would make great use of ephemeral little apps like this to do a specific task, then throw it away. Usually just running locally on someone's machine, or at most shared with a couple other folks in your department.
Code doesn't have to be good, hell it doesn't even have to be secure, and certainly doesn't need to look pretty. It just needs to work.
There's not enough engineering staff or time to turn every manager's pet Excel sheet project into a temporary app, so LLMs make perfect sense here.
I'd go as far to say more effort should be put into ephemeral apps as a use case for LLMs over focusing on trying to use them in areas where a more permanent, high quality solution is needed.
Improve them for non-developers.
Perhaps. But does it matter? There are a million tools to investigate complex data already. Are you suggesting it is more useful to develop a new tool from scratch, using LLM-type tools, than it is to use a mature tool for data analysis?
If you don't know how to analyze data, and flat out refuse to invest in learning the skill, then I guess that could be really useful. Those users are likely the ones most enthusiastic about AI. But are those users close to as productive as someone who learns a mature tool? Not even close.
Lots of people appreciate an LLM generating boilerplate code and establishing frameworks for their data structures. But that's code that probably shouldn't be there in the first place. Vibe coding a game can be done impressively quickly, but have you tried using a game construction kit? That's much faster still.
You should try telling management it’s throwaway
One thing people often don't realize or ignore: these LLMs are trained on the internet, the entire internet.
There's a shit-ton of bad and inefficient code on the internet. Lots of it. And it was used to train these LLMs as much as the good code.
In other words, the LLMs are great if you're OK with mediocrity at best. Mediocrity is occasionally good enough, but it can spell death for a company when key parts of it are mediocre.
I'm afraid a lot of the executives who fantasize about replacing humans with AI are going to have to learn this the hard way.
Except when your AI psychosis PM / manager sees your throwaway vibe-coded garbage and demands it gets shipped to customers.
It's infinitely worse when your PM / manager vibe-codes some disgusting garbage, sees that it kind of looks like a real thing that solves about half of the requirements (badly) and demands engineers ship that and "fix the few remaining bugs later".
I would say it is like that. No one HAS to use AI. But the shared goal is to get a change to the codebase to achieve a desired outcome. Some will outsource a significant part of that to AI, some won't.
And it's tricky, because I'm trying not to appeal to emotion despite being fascinated by how this tool has enabled me to do things in a short amount of time that would have taken me weeks of grinding to get to, and how it improves my communication with stakeholders. That feels world-changing. Specifically, my world and the day-to-day role I play when it comes to getting things done.
I think it is fine that it fell short of your expectations. It often does for me as well, but when it gets me 80% of the way there in less than a day's work, my mind is blown. It's an imperfect tool and, I'm sorry for saying this, but so are we. Treat its imperfections the same way you would a junior developer's: feedback, reframing, restrictions, and iteration.
> No one HAS to use AI.
Well… That's no longer true, is it?
My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
AI is already unavoidable.
> My partner (IT analyst) works for a company owned by a multinational big corporation, and she got told during a meeting with her manager that use of AI is going to become mandatory next year. That's going to be a thing across the board.
My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.
Nothing says "this product is useful" quite like forcing people to use it and punishing people who don't. If it was that good, there'd be organic demand to use it. People would be begging to use it, going around their boss's back to use it.
The fact that companies have to force you to use it with quotas and threats is damning.
> My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.
“Why don’t you just make the minimum 37 pieces of flAIr?”
Those kinds of reports seem to be a thing at all big tech corps now.
Yeah. Well. There are companies that require TPS reports, too.
It's mostly a sign leadership has lost reasoning capability if it's mandatory.
But no, reporting isn't necessarily the problem. There are plenty of places that use reporting to drive a conversation on what's broken, and why it's broken for their workflow, and then use that to drive improvement.
It's only a problem if the leadership stance is "Haha! We found underpants gnome step 2! Make underpants number go up, and we are geniuses". Sadly not as rare as one would hope, but still stupid.
> And have you called a large company for any reason lately? Could be your telco provider, your bank, public transport company, whatever. You call them, because online contact means haggling with an AI chatbot first to finally give up and shunt you over to an actual person who can help, and contact forms and e-mail have been killed off. Calling is not exactly as bad, but step one nowadays is 'please describe what you're calling for', where some LLM will try to parse that, fail miserably, and then shunt you to an actual person
All of this predates LLMs (what “AI” means today) becoming a useful product. All of this happened already with previous generations of “AI”.
It was just even shittier than the version we have today.
It was also shittier than the version we had before it (human receptionists).
This is what I always think of when I imagine how AI will change the world and daily life. Automation doesn't have to be better (for the customer, for the person using it, for society) in order to push out the alternatives. If the automation is cheap enough, it can be worse for everyone and still change everything. Those are the niches I'm most certain will be here to stay, because sometimes it hardly matters if it's any good.
> where some LLM will try to parse that, fail miserably, and then shunt you to an actual person.
If you're lucky. I've had LLMs that just repeatedly hang up on me when they obviously hit a dead end.
It isn't a universal thing. I have no doubt there is a job out there where that isn't a requirement. I think the issue is that the C-level folks are seeing how much more productive someone might be and making it a demand. That to me is the wrong approach. If you demonstrate and build interest, the adoption will happen.
As opposed to reaching, say, somebody in an offshored call center with an utterly undecipherable accent reading a script at you? Without any room for deviation?
AI's not exactly a step down from that.
> But the shared goal is to get a change to the codebase to achieve a desired outcome.
I'd argue that's not true. It's more of a stated goal. The actual goal is to achieve the desired outcome in a way that has manageable, understood side effects, and that can be maintained and built upon over time by all capable team members.
The difference between what business folks see as the "output" of software developers (code) and what (good) software developers actually deliver over time is significant. AI can definitely do the former. The latter is less clear. This is one of the fundamental disconnects in discussions about AI in software development.
In my personal use case, I work at a company that has SO MUCH process and documentation for coding standards. I made an AI agent that knows all that and used it to update legacy code to the new standard in a day. Something that would have taken weeks if not more. If your desire is manageable code, make that a requirement.
I'm going to say this next thing as someone with a lot of negative bias about corporations. I was laid off from Twitter when Elon bought the company and at a second company that was hemorrhaging users.
Our job isn't to write code, it's to make the machine do the thing. All the effort for clean, manageable, etc is purely in the interest of the programmer but at the end of the day, launching the feature that pulls in money is the point.
It's not just about coding standards. It's about, over time, having a team of people with a built-up set of knowledge about how things work and how they're expected to work. You don't get that by vibe coding and reviewing numerous PRs written by other people (or chatbots).
If everyone on your team is doing that, it's not long before huge chunks of your codebase are conceptually like stuff that was written a long time ago by people who left the company. Except those people may have actually known what they were doing. The AI chatbots are generating stuff that seems to plausibly work well enough based on however they were prompted.
There are intangible parts of software development that are difficult to measure but incredibly valuable beyond the code itself.
> Our job isn't to write code, it's to make the machine do the thing. All the effort for clean, manageable, etc is purely in the interest of the programmer but at the end of the day, launching the feature that pulls in money is the point.
This could be the vibe coder mantra. And it's true on day one. Once you've got reasonably complex software being maintained by one or more teams of developers who all need to be able to fix bugs and add features without breaking things, it's not quite as simple as "make the machine do the thing."
How did you verify that your AI agent performed the update correctly? I've experienced a number of cases where an AI agent made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it.
> made a change that seemed right at first glance, maybe even passed code review, but fell apart completely when it came time to build on top of it
Maybe I'm not understanding your point, but this is the kind of thing that happens in software teams all the time and is one of those "that's why they call it work" realities of the job.
If something "seems right/passed review/fell apart" then that's the reviewer's fault right? Which happens, all the time! Reviewers tend to fall back to tropes and "is there tests ok great" and whatever their hobbyhorses tend to be, ignoring others. It's ok because "at least it's getting reviewed" and the sausage gets made.
If AI slashed the amount of time to get a solution past review, it buys you time to retroactively fix too, and a good attitude when you tell it that PR 1234 is why we're in this mess.
> If something "seems right/passed review/fell apart" then that's the reviewer's fault right?
No, it's the author's fault. The point of a code review is not to ensure correctness, it is to improve code quality (correctness, maintainability, style consistency, reuse of existing functions, knowledge transfer, etc).
I mean, that's just not true when you're talking about varying levels of experience. Review is _very_ important with juniors, obviously. If you as sr eng let a junior put code in the codebase that messes up later, you share that blame for sure.
Unit tests, manual testing the final product, PR with two approvals needed (and one was from the most anal retentive reviewer at the company who is heavily invested in the changes I made), and QA.
>AI is certainly not reliable enough for me to jeopardize the quality of my work by using it heavily.
I mean this in sincerity, and not at all snarky, but - have you considered that you haven't used the tools correctly or effectively? I find that I can get what I need from chatbots (and refuse to call them AI until we have general AI just to be contrary) if I spend a couple of minutes considering constraints and being careful with my prompt language.
When I've come across people in my real life who say they get no value from chatbots, it's because they're asking poorly formed questions, or haven't thought through the problem entirely. Working with chatbots is like working with a very bright lab puppy. They're willing to do whatever you want, but they'll definitely piss on the floor unless you tell them not to.
Or am I entirely off base with your experience?
It would be helpful if you would relate your own bad experiences and how you overcame them. Leading off with "do it better" isn't very instructive. Unfortunately there's no useful training for much of anything in our industry, much less AI.
I prefer to use LLM as a sock puppet to filter out implausible options in my problem space and to help me recall how to do boilerplate things. Like you, I think, I also tend to write multi-paragraph prompts repeating myself and calling back on every aspect to continuously hone in on the true subject I am interested in.
I don't trust LLM's enough to operate on my behalf agentically yet. And, LLM is uncreative and hallucinatory as heck whenever it strays into novel territory, which makes it a dangerous tool.
> have you considered that you haven't used the tools correctly or effectively?
The problem is that this comes off just as tone-deaf as "you're holding it wrong." In my experience, when people promote AI, it's sold as just having a regular conversation and then the AI does the thing. And when that doesn't work, the promoter goes into system prompts, MCP, agent files, etc., and entire workflows that are required to get it to do the correct thing. It ends up feeling like you're being lied to, even if there's some benefit out there.
There's also the fact that all programming workflows are not the same. I've found some areas where AI works well, but a lot of my work it does not. Usually things that wouldn't show up in a simple Google search back before it was enshittified are pretty spotty.
I suspect AI appeals very strongly to a certain personality type who revels in all the details in getting a proper agentic coding environment bootstrapped for AI to run amok in, and then supervises/guides the results.
Then there’s people like me, who you’d probably term as an old soul, who looks at all that and says, “I have to change my workflow, my environment, and babysit it? It is faster to simply just do the work.” My relationship with tech is I like using as little as possible, and what I use needs to be predictable and do something for me. AI doesn’t always work for me.
Yes, this rings true. It took me over a month to actually get to at least 1x of my usual productivity with Claude Code. There is a ton of setup and a ton of things to learn and try to see what works, what to watch out for, and how to babysit it so it doesn't go off the rails (a quite heavy-handed approach works best for me). It's kind of like a shitty, but very fast and very knowledgeable, junior developer. At this moment it still maybe isn't "worth it" for a lot of devs if productivity (and developer ergonomics) is the only goal, but it is clear to me that this is where the industry is heading, and I think every dev will eventually have to get on board. These tools really just started to be somewhat decent this year. I'm 100% sure that in a year or two it will be the default for everyone, in a way that you simply won't be able to compete without it at all. It would be like using a shovel instead of an excavator. Remember, right now is the worst it'll ever be.
> In my experience, when people promote AI, its sold as just having a regular conversation and then the AI does thing.
This is almost the complete opposite of my experience. I hear expressions about improvements and optimism for the future, but almost all of the discussion from active people productivly using AI is about identifying the limits and seeing what benefits you can find within those limits.
They are not useless and they are also not a panacea. It feels like a lot of people consider those the only available options.
AI is okay (not great) at generating low- to mid-skill code. If you are working in a high-skill software domain that requires pervasive state-of-the-art or first-principles implementation then AI produces consistently terrible code. It frequently is flatly incorrect about esoteric technical details that really matter.
It can't reason from first principles and there isn't training data for a lot of state-of-the-art computer science and code implementations. Nothing you can prompt will make it produce non-naive output because it doesn't have that capability.
AI works for a lot of things because, if we are honest, AI generated slop is replacing human generated slop. But not all software is slop and there are software domains where slop is not even an option.
I think it's more a continuation of IDE versus pure editor.
More precisely:
On one side, it's the "tools that build up critical mass" philosophy. AI firmly resides here.
On the other, it's the "all you need is brain and plain text" philosophy. We don't see much AI in this camp.
One thing I learned is that you should never underestimate the "all you need is brain and plain text" camp. That philosophy survived many, many "fatal blows" and has come up on top several times. It has one unique feature: resilience to bloat, something that the current smart tools camp is obviously overlooking.
I'm probably one of the people that would say AI (at least LLMs) isn't all its cracked up to be and even I have examples where it has been useful to me.
I think the feeling stems from the exaggeration of the value it provides combined with a large number of internal corporate LLMs being absolute trash.
The overvaluation's effects are visible everywhere, from the stock market to the price of RAM to the cost of energy, as well as in IP theft issues, etc. AI has taken over and yet it still feels like just a really good fuzzy search. Like, yeah, I can search something 10x faster than before but might get a bad answer every now and then.
Yeah its been useful (so have many other things). No it's not worth building trillion dollar data centers for. I would be happier if the spend went towards manufacturing or semiconductor fabs.
Lol, you made me think: my power bill has gone up, but I didn't get a pay rise for my increased productivity.
Similar experience. I think it's become an identity politics concept. To those who consider themselves to be anti AI, the concept of the tool having any use is haram.
It feels awkward living in the "LLMs are a useful tool for some tasks" experience. I suspect this is because the two tribes are the loudest.
I see LLM's as kinda the new hotness in IDEs. And some people will use vi forever.
Right, this is what I can’t quite understand. A lot of HN folks appear to have been burned by e.g. horrible corporate or business ideas from non-technical people who don’t understand AI; that is completely understandable. What I never understand is the population of coders who don’t see any value in coding agents or are aggressively against them, or people who deride LLMs as failing to be able to do X (or hallucinate, etc.) and are therefore useless and everything is AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital and competition and pressure means the train is not slowing down. Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs was in fact absolutely true…
... but maybe not in the way that these CEOs had hoped.[0]
Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug reports instead of a clear and concise explanation.
I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.
[0] https://github.com/ocaml/ocaml/pull/14369
> busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers
that is a real issue and yet a normal problem and so has an obvious response.
oh wow that PR
> What I never understand is the population of coders that don’t see any value in coding agents or are aggressively against them, or people that deride LLMs as failing to be able to do X (or hallucinate etc) and are therefore useless and every thing is AI Slop, without recognizing that what we can do today is almost unrecognizeable from the world of 3 years ago.
I don't recognize that because it isn't true. I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to make claims that the tech has improved a ton, but based on my experience trying to use it those claims are completely baseless.
> I try the LLMs every now and then, and they still make the same stupid hallucinations that ChatGPT did on day 1.
One of the tests I sometimes do of LLMs is a geometry puzzle:
They all used to get this wrong all the time. Now the best ones sometimes don't. (That said, the only one to succeed just as I write this comment was DeepSeek; the first I saw succeed was one of ChatGPT's models, but that's now back to the usual error they all used to make.)

Anecdotes are of course a bad way to study this kind of thing.
Unfortunately, so are the benchmarks, because the models have quickly saturated most of them, including traditional IQ tests (on the plus side, this has demonstrated that IQ tests are definitely a learnable skill, as LLMs lose 40-50 IQ points when going from public to private IQ tests) and stuff like the maths olympiad.
Right now, AFAICT the only open benchmarks are the METR time horizon metric, the ARC-AGI family of tests, and the "make me an SVG of ${…}" stuff inspired by Simon Willison's pelican on a bike.
Out of interest, was your intended answer "where you started, facing east"?
FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer. On request, it also gave me a Mathematica program which (after I fixed some trivial exceptions due to errors in units) informs me that using the ITRF00 datum the actual answer is 0.0177593 degrees north and 0.168379 west of where you started (about 11.7 miles away from the starting point) and your rotation is 89.98 degrees rather than 90.
(ChatGPT 5.1 Thinking, for me, get the wrong answer because it correctly gets near the South Pole and then follows a line of latitude 200 times round the South Pole for the second leg, which strikes me as a flatly incorrect interpretation of the words "move forward along the surface of the earth". Was that the "usual error they all used to make"?)
> Out of interest, was your intended answer "where you started, facing east"?
Or anything close to it so long as the logic is right, yes. I care about the reasoning failure, not the small difference between the exact quarter-circumferences of these great circles and 10,000 km. (Not that it really matters, but now you've said the answer, this test becomes even less reliable than it already was.)
> FWIW, Claude Opus 4.5 gets this right for me, assuming that is the intended answer.
Like I said, now the best ones sometimes don't [always get it wrong].
For me yesterday, Claude (albeit Sonnet 4.5, because my testing is cheap) avoided the south pole issue, but then got the third leg wrong and ended up at the north pole. A while back ChatGPT 5 (I looked the result up) got the answer right; yesterday GPT-5-thinking-mini (auto-selected by the system) got it wrong the same way as you report on the south pole, but then also got the equator wrong and ended up near the north pole.
"Never" to "unreliable success" is still an improvement.
Yeah, I'm pretty sure that's correct. Just whipped this up, using the WGS-84 datum.
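That code isn't reproduced here, but a minimal sketch of the same check could look like the following; pyproj, the start point, the initial heading, and the right turns are my assumptions rather than the original puzzle text:

    # Sketch of the three-leg walk on the WGS-84 ellipsoid: go 10,000 km,
    # turn 90 degrees, repeat.  Start point, heading, and turn direction are
    # assumptions for illustration.
    from pyproj import Geod

    geod = Geod(ellps="WGS84")

    lat, lon = 0.0, 0.0   # assumed start: on the equator at the prime meridian
    azimuth = 180.0       # assumed initial heading: due south
    leg = 10_000_000.0    # 10,000 km in metres

    for i in range(3):
        # Direct geodesic problem for one leg.
        lon, lat, back_az = geod.fwd(lon, lat, azimuth, leg)
        heading = (back_az + 180.0) % 360.0  # forward heading at the endpoint
        # Turn 90 degrees right between legs; keep the heading after the last one.
        azimuth = (heading + 90.0) % 360.0 if i < 2 else heading

    print(f"lat={lat:.4f}  lon={lon:.4f}  heading={azimuth:.2f} deg")
    # On a perfect sphere with exact quarter-circumference legs you end up right
    # where you started, rotated 90 degrees; on the ellipsoid you miss by a
    # handful of miles, hence the spheroid-vs-sphere question below.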
Running this yields a point slightly off the start. Surely the discrepancy is down to spheroid vs sphere, yeah?

This fascinates me. Just observing, but because it hasn't worked for you, everyone else must be lying? (I'm assuming that's what you mean by baseless.)
How does that bridge get built? I can provide tangible real-life examples, but I've gotten pushback on that in other online conversations.
My boss has been passing off Claude generated code and documentation to me all year. It is consistently garbage. It consistently hallucinates. I consistently have to rewrite most, if not all, of what I'm handed.
I do also try and use Claude Code for certain tasks. More often than not, i regret it, but I've started to zero in on tasks it's helpful with (configuration and debugging, not so much coding).
But it's very easy then for me to hear people saying that AI gives them so much useful code, and for me to assume that they are like my boss: not examining that code carefully, or not holding their output to particularly high standards, or aren't responsible for the maintenance and thus don't need to care. That doesn't mean they're lying, but it doesn't mean they're right.
"Claude Code" by itself is not specific enough; which model are we talking about?
> it hasn't worked for you, everyone else must be lying?
Well, some non-zero amount of you are probably very financially invested in AI, so lying is not out of the question
Or you simply have blinders on because of your financial investments. After all, emotional investment often follows financial investment
Or, you're just not as good as you think you are. Maybe you're talking to people who are much better at building software than you are, and they find the stuff the AI builds does not impress them, while you are not as skilled so you are impressed by it.
There are lots of reasons someone might disagree without thinking everyone else is lying
What have you tried? How much time have you spent? Using AI is its own skill set, separate from programming.
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of the value of the tools may find them somewhat useful, but are quite wary of ripping up the workflows we've built for ourselves over decade(s)(+) in favor of something that might be 10-20% more useful, but could be taken away or charged greater fees or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works, I know will always be there (because it's open source, etc), even if it means I'm slightly less productive over the next X amount of time otherwise.
What do you imagine a plausible scenario would be for your tools to be taken away or to “collapse in functionality”? I would say Claude right now has probably made worse code and wasted more time than if I had coded things myself, but that's because this is like the first few hundred days of this. Open-weight models are also worse, but they will never go away and are improving steadily as well. I am all for people doing whatever works for them; I just don’t get the negativity or the skepticism when you look at the progress over what has been almost zero time. It’s crappy now in many respects, but it’s like saying “my car is slow” in the one millisecond after I floor the gas pedal.
My understanding is that all the big AI companies are currently offering services at a loss, doing the classic Silicon Valley playbook of burning investor cash to get big, and then hoping to make a profit later. So any service you depend on could crash out of the race, and if one emerges as a victorious monopoly and you rely on them, they can charge you almost whatever they like.
To my mind, the 'only just started' argument is wearing off. It's software, it moves fast anyway, and all the giants of the tech world have been feverishly throwing money at AI for the last couple of years. I don't buy that we're still just at the beginning of some huge exponential improvement.
My understanding is that they make a loss overall due to the spending on training new models, and that the API costs are profit-making if considered in isolation. That said, this is based on guesstimates from the hosting costs of open-weight models, owing to a lack of financial transparency everywhere for the secret-weights models.
> that the API costs are profit making if considered in isolation.
no, they are currently losing money on inference too
> What would you imagine a plausible scenario would possibly be that your tools would be taken away or “collapse in functionality”?
Simple. The company providing the tool suddenly needs actual earnings. Therefore, they need to raise prices. They also need users to spend more tokens, so they will make the tool respond in a way that requires more refinement. After all, the latter is exactly what happened with Google search.
At this point, that is the pretty normal software cycle: attract a crowd by being free or cheap, then lock features behind a paywall. Then simultaneously raise prices more and more while making the product worse.
This literally NEEDS to happen, because these companies do not have any other path to profitability. So, it will happen at some point.
Sure, but you’re forgetting that competition exists. If Anthropic's investors suddenly say “enough” and demand positive cash flow, it wouldn’t be that hard; everyone is capturing users for flywheels and spending capex on model improvements, because if they don’t they are guaranteed to lose.
It’s definitely going to be crappy. Remember Google in 2003 with relevant results and no endless SEO, or Amazon reviews being reliable, or Uber being simple and cheap, etc. Once the growth phase ends, monetization begins and the experience declines, but this is guard-railed by the fact that there are many players.
Considering what I described is how tech companies actually function and have functioned in the past, theoretical competition won't help.
They are competing themselves into massive unprofitability. Eventually they will die or do the above in cooperation. Maybe there will be a minor scandal about it, but that sort of collusion is not prosecuted or seriously investigated when done by big companies.
So, it will happen exactly as it always happens with tech.
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there but just like Dot Com, Tulips, subprime mortgages, and even Sir Isaac Newton's failings with the South Sea Company the financial side will fall.
This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.
Ok sure the bubble/non-bubble stuff, fine, but in terms of “things I’d like to be a part of” it’s hard to imagine a more transformative technology (not to again turn off the anti-hype crowd). But ok, say it’s 1997, you don’t like the valuations you see. But as a tech person you’re not excited by browsers, the internet, the possibilities? You don’t want to be a part of that even if it means a bubble pops? I also hear a lot of people argue “finances don’t make a lick of sense” but i don’t think things are that cut and dried and I don’t see this as obvious. I don’t think really many people know how things will evolve and what size a market correction or bubble would have.
What precisely about AI is transformative, compared to the internet? E-mail replaced so much of faxing, phoning and physical mail. Online shopping replaced going to stores and hoping they have what you want, and hoping it is in stock, and hoping it is a good price. It replaced travel agents to a significant degree and reoriented many industries. It was the vehicle that killed CDs and physical media in general.
With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
It's popping up in my music streams now and then, and I generally hate it. Mushy-mouthed fake vocals over fake instruments. It pops up online and aside from the occasional meme I hate it there too. It pops up all over blogs and emails and I profoundly hate it there, given that it encourages the actual author to silence themselves and replaces their thoughts with bland drivel.
Every single software product I use begs me to use their AI integration, and instead of "no" I'm given the option of "not now", despite me not needing it, and so I'm constantly being pestered about it by something.
It has, thus far, made nearly everything worse.
> With AI I can... generate slop. Sometimes that is helpful, but it isn't yet at the point where it's replacing anything for me aside from making google searches take a bit less time on things that I don't need a definitive answer for.
I think this is probably the disconnect, this seems so wildly different from my experience. Not only that, I’ll grant that there are a ton of limitations still but surely you’d concede that there has been an incredible amount of progress in a very short time? Like I can’t imagine someone who sits down with Claude like I do and gets up and says “this is crap and a fad and won’t go anywhere”.
As for generated content, I again agree with you and you’d be surprised to learn that _execs_ agree with you but look at models from 1, 2, 3 years ago and tell me you don’t see a frightening progression of quality. If you want to say “I’ll believe it when I see it” that’s fine but my god just look at the trajectory.
For AI slop text, once again agree, once again I think we all have to figure out how to use it, but it is great for e.g. helping me rewrite a wordy message quickly, making a paper or a doc more readable, combining my notes into something polished, etc, and it’s getting better and better and better.
So I disagree it has made everything worse but I definitely agree that it has made a lot of things worse and we have a lot of Pets.com ideas that are totally not viable today, but the point I think people are maybe missing (?) is that it’s not about where we are it’s about the velocity and the future. You may be terrified and nauseated by $1T in capex on AI infra, fine but what that tells you is the scale is going to grow even further _in addition_ to the methodological / algorithmic improvements to tackle things like continual learning, robustness, higher quality multimodal generation with e.g. true narrative consistency, etc etc etc. in 5 years I don’t think many people will think of “slop” so negatively
Where you see exponential growth in capability and value, I see the early stages of logarithmic growth.
A similar thing played out a bit with IoT and voice controlled systems like Alexa. They've got their places, but nobody needs or wants the Amazon Dash buttons, or for Alexa to do your shopping for you.
Setting an alarm or adding a note to a list is fine, remote monitoring is fine, but when it comes to things that really matter like spending money autonomously, it completely falls flat.
Long story short, I see a fad that will fall into the background of what people actually do, rather than becoming the medium that they do it by.
I could not disagree more but you are far from alone and I respect a lot of the reasons I’ve gathered why you and others have this belief
Maybe those people do different work than you do? Coding agents don’t work well in every scenario.
Yet people imply that because it doesn’t work in their scenario that it’s not good?
Most of the people against “AI” are not against it because they think it doesn’t work.
It’s because they know it works better every day and the people controlling it are gleefully fucking over the rest of the world because they can.
The plainly stated goal is TO ELIMINATE ALL HUMAN EMPLOYEES, with no plan for how those people will feed, clothe, or house themselves.
The reactions the author was getting were those of a horse talking to someone happily working for the glue factory.
I don't think you're qualified to speak for most of the people against AI.
My experience is the productivity gains are negative to neutral. Someone else basically wrote that the total "work" was simply being moved from one bucket to another. (I can't find the original link.)
Example: you might spend less time on initial development, but more time on code review and rework. That has been my personal experience.
The thing that changed my view on LLMs was solo traveling for 6 months after leaving Microsoft. There were a lot of points on the trip where I was in a lot of trouble (severe food sickness, stolen items, missed flights) where I don't know how I would have solved those problems without chatGPT helping.
This is one of the most depressing things I have ever read on Hacker News. You claim to have become so de-actualized as a human being that you cannot fulfill the most basic items of Maslow’s Hierarchy of Needs (food, health, personal security, shelter, transportation) without the aid of an LLM.
IDK, I got really sick in a foreign country, I wasn't sure how to get to the hospital, and I was alone in a hotel room. I don't really know how using ChatGPT to help me isn't actualizing.
We used to have Google Search and Google Maps, which solved this problem of finding information about symptoms and finding medical centers near you. An LLM doesn’t make anything better; it just confidently asserts things about medicine that may be wrong and always need to be verified against real sources anyway.
Well google search is a little nerfed since it went full ad revenue focused
If you are operating under the constraint that talking to strangers is impossible then I could see why ChatGPT feels like a godsend...
did you try asking at the reception desk?
Growing up in the internet age (I'm 28 now) it took me until well into my 20s to realize how many classes of problems can be solved in 30 seconds on a phone call vs hours on a computer.
The hotel owner eventually half carried me to the hospital because I got so weak from dehydration, though I'm glad I left my hotel room when I did; I had difficulty avoiding fainting.
Sounds like it was the hotel owner, not ChatGPT, who saved your ass in the end.
Rafael was the absolute best. He also made sure the hospital saw me right away since I was so weak. But once I was hooked up I used ChatGPT to scan the IVs, since I had no idea what they were pumping into me; it was all in Spanish.
I can't imagine being in this situation and thinking "I will ask ChatGPT" instead of "I will ask the people at the front desk of this hotel I'm staying at"
I saw a post on Reddit the other day where the user was posting screenshots from ChatGPT about how they were using ChatGPT as a “Human OS” and outsourcing all decisions and information to ChatGPT. It made me queasy.
So basically Manna, but for your life?
god forbid you outsource easy things to technology - that's how humanity has been progressing since forever. but sure, throw away your calculator and do it by hand if that makes you feel any better.
Well, if your calculator has a loose wire that sometimes flips a random bit somewhere, you might find that a slide rule that is consistently correct has a certain value.
Extremely uncharitable reading. Plausibly they were in a foreign country where they didn't speak any of the language and didn't know how anything worked. This kind of situation was never easy for anyone.
I have been in situations where either I or someone in my party was sick and needed medical care in a foreign country where I didn't speak the language. In all cases I used my brain to figure out a solution quickly without the aid of ChatGPT, and the trip continued on.
This falls in the category of life skills or maybe just "adulting." Sure, maybe ChatGPT can be considered a life skill, but you need others compiled into your brain to fall back on when it fails. If ChatGPT is the only skill you have, what do you do if your phone gets stolen?
What a strange thing to say... ChatGPT is not a skill, it's just a tool. It helps much faster and better than Google searching. Why the fuss about it?
Would you say the same to someone using Google?
"Sure, maybe Google can be considered a life skill, but you need others compiled into your brain to fall back on when it fails. If Google is the only skill you have, what do you do if your phone gets stolen?"
Your post is actually one of the most patronising things I have read. The person just used ChatGPT like Google to solve their problems and your reply is about Maslow's Hierarchy of Needs?
Relax
This is what people used to use Google for; I remember so many times between 2000-2020 that Google saved my bacon for exactly those things (travel plans, self-diagnosis, navigating local bureaucracies, etc.)
It's a sad commentary on the state of search results and the Internet now that ChatGPT is superior, particularly since pre-knowledge-panel/AI-overview Google was superior in several ways (not hallucinating, for one, and being able to triangulate multiple sources to tell the truth).
That's pretty sad tbh
Severe food sickness? I know WebMD rightly gets a lot of hate, but this is one thing it would be good for. Stolen items? Depending on the items and the place, possibly police. Missed flights? Customer service agent at the airport for your airline, or call the airline help line.
Well I got so weak I needed to go to the hospital, and that was tough.
Is it true that it's bad for learning new skills? My gut tells me it's useful as long as I don't use it to cheat the learning process and I mainly use it for things like follow up questions.
It is; it can be an enormous learning accelerator for new skills, for both adults and genuinely curious kids. The gap between low and high performers will explode. I can tell you that if I had had LLMs I would've finished schooling at least 25% quicker, while learning much more. When I say this on HN some are quick to point out the fallibility of LLMs, ignoring that the huge majority of human teachers are many times more fallible. Now this is a privileged place where many have been taught by what is indeed the global top 0.1% of teachers and professors, so it makes more sense that people would respond this way. Another source of these responses is simply fear.
In e.g. the US, it's a huge net negative because kids probably aren't taught these values and the required discipline. So the overwhelming majority does use it to cheat the learning process.
I can't tell you if this is the same inside e.g. China. I'm fairly sure it's not nearly as bad though as kids there derive much less benefit from cheating on homework/the learning process, as they're more singularly judged on standardized tests where AI is not available.
Fallibility isn't the problem. It's probably a net benefit for learning.
Promoting dependency is the problem. Replacing effort is the problem. Making self-discipline be a thing only for suckers is the problem.
I don't get this line of thinking. Never in my life have I heard the reasoning "replacing effort is the problem" when talking about children who are able to afford 24/7 brilliant private tutors. Having access to that has always been seen as an enormous privilege.
I learnt the most from bad teachers#, but only when motivated. I was forced to go away and really understand things rather than get a sufficient understanding from the teacher. I had to put much more effort in. Teachers don't replace effort, and I see no reason LLMs will change that. What they do, though, is reduce the time to find the relevant content, but I expect at some poorly defined cost.
# The truly good teachers were primarily motivation agents, providing enough content, but doing so in a way that meant I fully engaged.
I think what it comes down to, and where many people get confused, is separating the technology itself from how we use it. The technology itself is incredible for learning new skills, but at the same time it incentivizes people not to learn. Just because you have an LLM doesn't mean you can skip the hard parts of doing textbook exercises and thinking hard about what you are learning. It's a bit similar to passively watching youtube videos. You'd think that having all these amazing university lectures available on youtube makes people learn much faster, but in reality it makes people lazy, because they believe they can passively sit there, watch a video, do nothing else, and expect that to replace a classroom education. That's not how humans learn. But it's not because youtube videos or LLMs are bad learning tools, it's because people use them as a mental shortcut where they shouldn't.
I fully agree, but to be fair these chatbots hack our reward systems. They present a cost/benefit ratio where for much less effort than doing it ourselves we get a much better result than doing it ourselves (assuming this is a skill not yet learned). I think the analogy to calculators is a good one if you're careful with what you're considering: calculators did indeed make people worse at mental math, yet mental math can indeed be replaced with calculators for most people with no great loss. Chatbots are indeed making people worse at mental... well, everything. Thinking in general. I do not believe that thinking can be replaced with AI for most people with no great loss.
I found it useful for learning to write prose. There's nothing quite like instantaneous feedback when learning. The downside was that I hit the limit of the LLM's capabilities really quickly. They're just not that good at writing prose (overly flowery and often nonsensical).
LLMs were great for getting started though. If you've never tried writing before, then learning a few patterns goes a long way. ("He verbed, verbing a noun.")
My friends and I have always wondered as we've gotten older what's going to be the new tech that the younger generation seems to know and understand innately while the older generations remain clueless and always need help navigating (like computers/internet for my parents' generation and above). I am convinced that thing is AI.
Kids growing up today are using AI for everything, whether or not that's sanctioned, and whether it's ultimately helpful or harmful to their intellectual growth. I think the jury is still out on that. But I do remember growing up in the 90s, spending a lot of time on the computer, and older people would remark how I'd have no social skills, wouldn't be able to write cursive or do arithmetic in my head, wouldn't learn any real skills, etc. Turns out I did just fine, and now those same people always have to call me for help when they run into the smallest issue with technology.
I think a lot of people here are going to become roadkill if they refuse to learn how to use these new tools. I just built a web app in 3 weeks with only prompts to Claude Code, I didn't write a single line of code, and it works great. It's pretty basic, but probably would have taken me 3+ months instead of 3 weeks doing it the old fashioned way. If you tried it once a year ago and have written it off, a lot has changed since then and the tools continue to improve every month. I really think that eventually no one will be checking code just like hardly anyone checks the assembly output of a compiler anymore.
You have to understand how the context window works, how to establish guardrails so you're not wasting time repeating the same things over and over again, force it to check its own work with lots of tests, etc. It's really a game changer when you can just say in one prompt "write me an admin dashboard that displays users, sessions, and orders with a table and chart going back 30 days" or "wire up my site for google analytics, my tag code is XXXXXXX" and it just works.
The thing is, Claude Code is great for unimportant casual projects, and genuinely very bad at working in big, complex, established projects. The latter of course being the ones most people actually work on.
Well, either it's bad at it, or everyone on my team is bad at prompting. Given how dedicated my boss has been to using Claude for everything for the past year, and the output continuing to be garbage, I don't think it's a lack of effort on the team's part; I have to believe Claude just isn't good at my job.
As context size increases, AI becomes exponentially dumber. Most established software is far, FAR too large for AI. But small, greenfield projects are amazing for something like Claude Code.
This is why I argue that the impact of LLMs is in the tail. It's all the small to midsize shops that want something done but don't have money to hire a programmer. It's small tasks, like pushing data around, or writing a quick interface to help with day-to-day work in niche jobs and technical problems. It's the ability to quickly generate prototype logos and scripts for small-scale ad campaigns, to solve Nancy's Excel issue, etc. Big companies have big software and code stacks with tons of dependencies. Small shops have little project needs that solve significant issues facing their operations, but will unlikely ever become large enough that things like scaling, maintenance, and integration are a problem at all. It's a tail, but it's long in small to midsize businesses. In research labs, where I have personal experience, AI is rapidly making feasible more ambitious projects, quicker timelines, and better code, generally.
I was going to try having an AI agent analyze a well-established open source project. I was thinking of trying something like Bitcoin Core or an open-source JavaScript library, something that has had a lot of human eyes on it. To me, that seems like a good use case, as some of those projects can get pretty complex in what they're aiming to accomplish. Just the sheer amount of complexity involved in Bitcoin, for instance, would be a good candidate for having an AI agent explain the code to you as you're reviewing it. A lot of those projects are fairly well-written as they are, with the higher-level concepts being the more difficult thing to grasp.
Not attempting to claim anything against your company, but I've worked for enterprises where code bases were a complete mess and even the product itself didn't have a clear goal. That's likely not the ideal candidate for AI systems to augment.
I basically agree. OK: small focused models for specific use cases, small models like the new mistral-3-3B that I found today to be good at tool use, and thus for building narrow-ranged applications.
I have mostly been paid to work on AI projects since 1982, but I want to pull my hair out and scream over the big push in the USA to develop super-AGI. Such a waste of resources, and such a hit on a society that needs resources used for better purposes.
As a gamedev, there's nothing I hate more than AI concept art. It's always soulless. The best thing about games is there's no limit to human imagination, and you can make whatever you want. But when we leave the imagination stage to a computer then leave the final brushing up to humans, we're getting the order completely backwards. It's bonkers and just disgusting to me.
That said, game engine documentation is often pretty hard to navigate. Most of the best information is some YouTube video recorded by some savant 15 year old with a busted microphone. And you need to skim through 30 minutes of video until you find what you need. The biggest problem is not knowing what you don't know, so it's hard to know where to begin. There are a lot of things you may think you need to spend 2 days implementing, but the engine may have a single function and a couple built in settings to do it.
Where LLMs shine is that I can ask a dumb question about this stuff, and can be pointed in the right direction pretty quickly. The implementation it spits out is often awful (if not unusable), but I can ask a question and it'll name drop the specific function and setting names that'll save me a lot of work. And from there, I know what to look up and it's a clear path from there.
And gamedev is a very strong case of not needing a correct solution. You just need things to feel right for most cases. Games that are rough around the edges have character. So LLM assistance for implementation (not art) can be handy.
> [...] a large number of places where LLMs make apparently-effective tools that have negative long-term consequences (see: anything involving learning a new skill, [...]
Don't people learn from imperfect teachers all the time?
Yes, they do. In fact, imperfect teachers can sometimes induce more learning than more perfect ones. And that's what is insidious about learning from AI. It looks like something we've seen before, something where we know how to make it useful and take advantage even of the gaps and inadequacies.
AI can be effective for learning a new skill, but you have to be constantly on your guard to prevent it from hacking your brain and making you helpless and useless. AI isn't the parent holding your bicycle and giving you a push and letting go when you're ready. It's the welded-on training wheels that become larger and more structurally necessary until the bike can't roll forward at all without them. It feeds you the lie that all you need is the theory, you don't ever need to apply it because the AI will do that for you so don't worry your pretty little head over it. AI teaches you that if something requires effort, you're just not relying on the AI enough. The path to success goes only through AI, and those people who try to build their own skills without it are suckers because the AI can effortlessly create things 100x bigger and better and more complex.
Personally, I still believe that human + AI hybrids have enormous potential. It's just that using AI constantly pushes away from beneficial hybridization and towards dependency. You have to constantly fight against your innate impulses, because it hacks them to your detriment.
I'd actually like to see an AI trained to not give answers, but to search out the point where they get you 90% of the way there and then steadfastly refuse to give you the last 10%. An AI built with the goal not of producing artifacts or answers, but of producing learning and growth in the user. (Then again, I'd like to see the same thing in an educational system...)
> Personally, I still believe that human + AI hybrids have enormous potential. [...]
That was true in chess for a long time, but for at least the last 20 years or so, approximately any time the human deviates from what the AI suggests, it's a mistake.
Not sure if that's also category #2 or a new one, but also: Places where AI is at risk of effectively becoming a drug and being actively harmful for the user: Virtual friends/spouses, delusion-confirming sycophants, etc.
I also would like to see AI end up dying off except for a few niches, but I find myself using it more and more. It is not a productivity boost in the way I end up using it, interestingly. Actually I think it is actively harming my continued development, though that could just be me getting older, or perhaps massive anxiety from joblessness. Still, I can't help but ask it if everything I do is a good idea. Even in the SO era I would try to find a reference for every little choice I made to determine if it was a good or bad practice.
That honestly sounds like addiction.
> must be plausible, need not be accurate
This includes, IME, the initial stages of art creation (the planning, not generating, stage). It's kind of like having someone to bounce ideas off of at 3am. It's a convenient way of triggering your own brain to be inspired.
I also hoped it would crash and burn. The real value added usecases will remain. The overhyped crap won't.
But the shockwave will cause a huge recession, and all those investors that put up trillions will not take their losses. Rich people never get poorer. One way or another, us consumers will end up paying for their mistakes. Either through huge inflation, job losses, energy costs, service enshittification, whatever. We're already seeing the memory crisis having huge knock-on effects, with next year's phones being much more expensive. That's one of the ways we are going to be paying for this circus.
I really see value in it too, sure. But the amount of investment that goes into it is insane. It's not that valuable by far. LLMs are not good for everything, and the next big thing is still a big question mark. AI is dragged in by the hair into use cases where it doesn't belong. The same shit we saw with blockchains, but now at a world-crashing scale. It's very scary seeing so much insanity.
But anyway whatever I think doesn't matter. Whatever happens will happen.
Very new ex-MSFT here. I couldn’t relate more with your friend. That’s exactly what happened. I left Microsoft about 5 weeks ago and it’s been really hard to detox from that culture.
AI pushed down everywhere. Sometimes shitty AI that needed to be proven out at all costs because it was supposed to live up to the hype.
I was in one of those AI orgs, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.
Such pressure to use AI at all costs, as other fellows from Google mentioned, has been a secret ingredient of a bitter burnout. I'm in therapy and on medication now to recover from it.
(Fellow ex-msftie here too; but I left for a startup almost exactly 10 years ago, and I miss how that older culture is apparently gone).
What I don't understand is where the AI irrationality is coming from: the C-suite (still in B37?) are all incredibly smart people who must surely be aware of the damage this top-down policy is doing to morale, product quality, and to how the company is viewed by its own customers - and yet, they press on.
I'm not going to pretend things were being run perfectly when I was at MS: there were plenty of slow-motion mistakes playing-out right in front of us all[1] - and as I look back, yes, I was definitely frustrated at these clear obvious mistakes and their resultant unimaginable waste of human effort and capital investment.
Actually, come to think about it... maybe perhaps things really haven't changed as much? Clearly something neurotoxic got into the Talking Rain cans sometime around 2010-2011 - then was temporarily abated in 2014-2015; then came back twice as hard in 2022.
-------
[1]: Windows 8 and the Start Screen; the Surface RT; Visual Studio 2012 with SHOUTY MENUS and monochrome toolbar icons; the laggy and sluggish Office 2013; the crazy simultaneous development of entirely separate new reimplementations of the Office apps for iOS, Android, WinRT, and the web - all while ignoring the clear market demand for a cloud-y version of Active Directory without on-prem DCs (instead we got Entra, then InTune).
Best of luck with this transition. I had a really hard time when I left Microsoft. It took longer than I expected to feel better, but I do now.
Jesus, is it actually a thing to grieve after leaving a job? (doesn't sound like you were laid off)
Yes
Hey man- hang in there.
FWIW: I realized this year that there are whole cohorts of management people who have absolutely zero relationship with the words that they speak. Literal tabula rasas who convert their thoughts to new words with no attachment to past statements/goals.
Put another way: Liars exist and operate all around you in the top tier of the FAANGS rn.
Reminded me of: https://ludic.mataroa.blog/blog/brainwash-an-executive-today...
> AI-powered map
> none of it had anything to do with what I built. She talked about Copilot 365. And Microsoft AI. And every miserable AI tool she's forced to use at work. My product barely featured. Her reaction wasn't about me at all. It was about her entire environment.
She was given two context clues. AI. And maps. Maps work, which means all the information in an "AI-powered map" descriptor rests on the adjective.
The product website isn't convincing either. It's only in private beta, and the first example shows 'A scenic walking tour of Venice' as the desired trip. I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice, including all highlights people write and post about a lot on social media to show how great their life is. But if you asked anyone knowledgable about travel in that region, the counter questions would be 'Why Venice specifically? I thought you hated crowds — have you considered less crowded alternatives where you will be appreciated more as a tourist? Have you actually been to Italy at all?'.
LLMs are always going to give you the most plausible thing for your query, and will likely just rehash the same destinations from hundreds of listicles and status signalling social media posts.
She probably understood this from the minimal description given.
> I'll readily believe LLMs will gladly give you some sort of itinerary for walking in Venice
I tried this in Crotone in September. The suggested walking tour was shit. The facts weren't remarkable. The stops were stupid and stupidly laid out. The whole experience was dumb, and only redeemed because I was vacationing with a friend who founded one of the AI companies.
> if you asked anyone knowledgable about travel in that region, the counter questions would be 'Why Venice specifically?
In the region? Because it's a gorgeous city with beautiful architecture, history and festivals?
> In the region? Because it's a gorgeous city with beautiful architecture, history and festivals?
That would be a great answer to continue from. Would you come for the Biennale specifically? Do you care greatly about sustainability? Would you enjoy yourself more in a different gorgeous city without the mass-tourism problem if that meant you would feel more welcome? Is there a way you can visit Venice without contributing to the issue as much? Off-season perhaps?
Venice is unique, but there are a lot of gorgeous places in the region, from Verona to Trieste.
If it’s your first time going to Italy you absolutely should visit Venice. The crowds are unpleasant, but so what? Are you going to avoid Rome too? Only go to little provincial villages?
Why should you absolutely visit Venice? It's not just the crowds that are unpleasant, you are actively contributing to a problem.
No, you don't have to avoid Rome — it's not as bad as Venice, and can support more people — but plan ahead and don't just do a tour of all the 'must see' highlights. Look into the off season if you are a history buff with a hyperfocus on Rome — you won't be able to finish your list otherwise due to all the pointless waiting around.
And yes, visit provincial villages and eat in an authentic Italian restaurant where tourists are mostly other Italians. Experience the difference. But you are not limited to villages. Italy is huge, and there are a lot of cities with remarkable museums, world-renowned festivals, great cuisine, and where your money is more than welcome and your stay won't be marred by extreme crowds and pushy con artists in faux Roman gladiator gear.
> Bring up AI in a Seattle coffee shop now and people react like you're advocating asbestos.
I don't know who first uses the asbestos analogy, but it's 1000% on point.
I think Cory Doctorow says it best:
"AI is the asbestos we're shoveling into the walls of our society — and our descendants will be digging it out for generations."
I believe that's exactly the language to combat AI hype.
As a place with a high density of people with agency to influence the outcome, I think it's important for people here to acknowledge that much of what the negative people think is probably 100% true.
There will absolutely be some cases where AI is used well. But probably the larger fraction will be where AI does not give a better service, experience, or tool. It will be used to give a cheaper but shittier one. This will be a big win for the company or service implementing it, but it will suck for literally everybody else involved.
I really believe there's huge value in implementing AI pervasively. However it's going to be really hard work and probably take 5 years to do it well. We need to take an engineering and human centred approach and do it steadily and incrementally over time. The current semi-religious fervour about implementing it rapidly and recklessly is going to be very harmful in the longer term.
From someone who's mostly avoided the AI craze so far, I think it comes down to a combination of two things.
1) "AI" is in many ways like the unreliable coworker so many of us have had in the past - maybe someone who talked a good game in interviews, but after you'd worked with them for a while you realize that you have to double-check everything they do for stupid/careless problems. In the worst case, you also have to do some hand-holding as they ask you for help with things that they should know how to do. They can produce good output but they can't be trusted to produce good (or even marginal) output so they're a net time sink.
2) In a frightening number of companies right now, that problem coworker is the owner's or manager's relative and cannot be avoided.
So boom, there you go, bad coworkers and a toxic culture that not just protects but promotes them.
Instead of admitting you built the wrong thing you denigrate a friend and someone whom you admire. Instead of reconsidering the value of AI you immediately double down.
This is a product of hurt feelings and not solid logic.
I like AI to the extent that it can quickly solve well-worn problems in your environment, what I've taken to calling "embarrassingly solved problems", like "make an animation subsystem for my program". A Qt timeline is not hard, but it is tedious, so the AI can do it.
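For instance, a single fade-in with QPropertyAnimation is only a few lines (this sketch is purely illustrative, not from any real project; the widget, duration, and property are arbitrary choices), and an animation subsystem is largely many variations on this kind of boilerplate:

```
// Illustrative sketch: fade a widget in with Qt's QPropertyAnimation.
// Requires Qt Widgets; all values here are arbitrary examples.
#include <QApplication>
#include <QEasingCurve>
#include <QLabel>
#include <QPropertyAnimation>

int main(int argc, char *argv[]) {
    QApplication app(argc, argv);

    QLabel label("Hello, animation");
    label.resize(240, 120);
    label.show();

    // Animate the built-in windowOpacity property from fully
    // transparent to fully opaque over one second.
    auto *fade = new QPropertyAnimation(&label, "windowOpacity");
    fade->setDuration(1000);
    fade->setStartValue(0.0);
    fade->setEndValue(1.0);
    fade->setEasingCurve(QEasingCurve::InOutQuad);
    fade->start(QAbstractAnimation::DeleteWhenStopped);

    return app.exec();
}
```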
And it turns out that there are some embarrassingly solved problems, like rudimentary multiplayer games, that look more impressive than they really are when you get down to it.
More challenging prompts like "change the surface generation algorithm my program uses from Marching Cubes to Flying Edges", for which there are only a handful of toy examples, VTK's implementation, and the paper, result in an avalanche of shit. Wasted hours, quickly becoming wasted days.
I feel the same way about those embarrassingly solved problems! Though oftentimes the trick is knowing what to ask for. I remember grinding for weeks on a front-end bug, but once I realized what the problem was (not the exact bug, just what the general concept should be), Claude fixed it in 10 seconds.
Thanks for the post - it's work to write and synthesize, and I always appreciate it!
My first reaction was "replace 'AI' with the word 'Cloud'" ca 2012 at MS; what's novel here?
With that in mind, I'm not sure there is anything novel about how your friend is feeling or the organizational dynamics, or in fact how large corporations go after business opportunities; on those terms, I think your friends' feelings are a little boring, or at least don't give us any new market data.
In MS in that era, there was a massive gold rush inside the org to Cloud-ify everything and move to Azure - people who did well at that prospered, people who did not, ... often did not. This sort of internal marketplace is endemic, and probably a good thing at large tech companies - from the senior leadership side, seeing how employees vote with their feet is valuable - as is, often, the directional leadership you get from a Satya who has MUCH more information than someone on the ground in any mid-level role.
While I'm sure there were many naysayers about the Cloud in 2012, they were wrong, full stop. Azure is immensely valuable. It was right to dig in on it and compete with AWS.
I personally think Satya's got a really interesting hyper scaling strategy right now -- build out national-security-friendly datacenters all over the world -- and I think that's going to pay -- but I could be wrong, and his strategy might be much more sophisticated and diverse than that; either way, I'm pretty sure Seattleites who hate how AI has disrupted their orgs and changed power politics and winners and losers in-house will have to roll with the program over the next five years and figure out where they stand and what they want to work on.
It does feel like without a compelling AI product Microsoft isn't super differentiated. Maybe Satya is right that scale is a differentiation, but I don't think people are as trapped in an AI ecosystem as they were in Azure.
Their hyper scale data centers are super compelling. And they get OpenAI IP for some time. I don’t think we’ve really seen what they want to launch on the product side yet.
Satya mentioned recently that computer-use agents use something like 5x the Windows license time on Azure compared to a single person - they see a lotttt of inference growth coming, and it's multiplicative in that it uses their compute and Azure infra.
Lol. You don't think that Microsoft has _a_ compelling AI product? The new version of 365 Copilot is objectively compelling, even if it is a work in progress. And Github Copilot is also objectively compelling.
I don’t think anyone would choose GitHub copilot over Cursor
Moving to the Cloud proved to be a pretty nice moneymaker far faster and more concretely than AI has been for these companies. It's a fair comparison regarding corporate pushes but not anything more than that.
There has always been a lot of Microsoft hate, but now it's at a whole new level. Windows now really sucks; my new laptop is all Linux for the first time ever. I don't see why this company is still so valuable. Most people only use a browser now and some iOS apps, so there is no need for Windows or Microsoft (and of course Azure is never anyone's first choice). Steam makes the gamers happy to leave too.
They do seem to be collecting a lot of self inflicted Ls lately
Gaming.
wow — this hit me hard.
I live in Seattle, and got laid off from Microsoft as a PM in Jan of this year.
Tried in early 2024 to demonstrate how we could leverage smaller models (such as Mixtral) to improve documentation and tailor code samples for our auth libraries.
The usual “fiefdom” politics took over and the project never gained steam. I do feel like I was put in a certain “non-AI” category and my career stalled, even though I took the time to build AI-integrated prototypes and present them to leadership.
It’s hard to put on a smile and go through interviews right now. It feels like the hard-earned skills we bring to the table are being so hastily devalued, and for what exactly?
I'm glad it resonated. I've found a lot of people in Microsoft have some shared struggles right now. It's really hard to get excited about jobs after that, but you only need one job to be the right fit. It sounds like you were working on some great stuff and you should keep pursuing that interest in the meantime. You never know where it might lead you.
> But then I realized this was bigger than one conversation. Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building. But in Seattle? Instant hostility the moment they heard "AI."
So what's different between Seattle and San Francisco? Does Seattle have more employee-workers and San Francisco has more people hustling for their startup?
I assume Bali (being a vacation destination) is full of people who are wealthy enough to feel they're insulated from whatever will happen.
I live in Seattle now, and have lived in San Francisco as well.
Seattle has more “normal” people and the overall rhetoric about how life “should be” is in many ways resistant to tech. There’s a lot to like about the city, but it absolutely does not have a hustle culture. I’ve honestly found it depressing coming from the East Coast.
Tangent aside, my point is that Seattle has far more of a comparison ground of “you all are building shit that doesn’t make the world better, it just devalues the human”. I think LLMs have (some) strong use cases, but it is impossible to argue that some of the societal downsides we see aren't ripe for hatred - and Seattle will latch on to that in a heartbeat.
Edit: are -> aren't. Stupid autocorrect.
Western Washington is very much a "work to live" place, and in a lot of ways there's a feedback loop to ensure it stays that way: surrounded by fellow "work to live" folks who would far rather just get our work done well and head out to the mountains, forests, and seas, the hustle bros will usually leave within a few years. I've watched it happen with quite a number of type-A folks. Exceptions for folks who make it into certain orgs in Amazon or into startup leadership, those seem to be safe places for hustlers around here.
Anyway. I think you're spot on with the "you all are building shit that doesn't make the world better, it just devalues the human" vibe. Regardless of what employers in WA may force folks to build, that's the mentality here, and AI evangelists don't make many friends... nor did blockchain evangelists, or evangelists of any of the spin-off hype trains ("Web3", NFTs, etc). I guess the "cloud" hype train stuck here, but that happened before I moved out west.
Seattle has always been a second-mover when it comes to hype and reality distortion. There is a lot more echo chamber fervor (and, more importantly, lots of available FOMO money to burn) in SF around whatever the latest hotness is.
My SF friends think they have a shot at working at a company whose AI products are good (cursor, anthropic, etc.), so that removes a lot of the hopelessness.
Working for a month out of Bali was wonderful, it's mostly Australians and Dutch people working remotely. Especially those who ran their own businesses were super encouraging (though maybe that's just because entrepreneurs are more supportive of other entrepreneurs).
in the first paragraph, he drops a link to the startup he's working on:
> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.
this seems to me like pretty obvious engagement-bait / stealth marketing - write a provocative blog post that will get shared widely, and some fraction of those people will click through to see what the product is all about.
but, apparently it's working because this thread is currently at 400+ comments after 3 hours.
I’ve been really tempted to put up a “wall of baiters” for such cases.
'If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."'
It hits weirdly close to home. Our leadership has not technically mandated use, but 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which, in my head, somewhere between skeptic and evangelist, is dumb).
But the 'AI talent' part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
The full quote from that section is worth repeating here.
---------
"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI." "
------------
On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense. e.g. Force everyone to use the company's no-good AI tools so that they become good. However, not permitting employees outside of the "AI org" to fix things neatly nixes the gains you might see while incurring the full cost.
It sounds like MS's management, the same as many other tech corp's, has become caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).
I know of at least one bigco that will no longer hire anyone, period, who doesn't have at least 6 months of experience using genai to code and isn't enthusiastic about genai. No exceptions. I assume this is probably true of other companies too.
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally Google for a long period of time struggled to hire some people because they weren't an 'ideal culture fit'. i.e. you're trying to hire someone to fix Linux kernel bugs you hit in production, but they don't know enough about Java or Python to pass the interview gauntlet...
Like any tool, the longer you use it the better you learn where you can extract value from it and where you can't, where you can leverage it and where you shouldn't. Because your behaviour is linked to what you get out of the LLM, this can be quite individual in nature, and you have to learn to work with it through trial and error. But in the end engineers do appear to become more productive 'pairing' with an LLM, so it's no surprise companies are favouring LLM-savvy engineers.
> But in the end engineers do appear to become more productive 'pairing' with an LLM
Quite the opposite: LLMs reduce productivity, they don't increase it. They merely give the illusion of productivity because you can generate code real fast, but that isn't actually useful when you spend time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people use something which cripples them.
So far, for me, it's just an annoying tool that gets worse outcomes potentially faster than just doing it by hand.
It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes which you try to correct by arguing with it but then eventually just fix it yourself. At best it can get you 80% there.
I’m all for neurodivergent acceptance but it has caused monumentally obnoxious people like this to assume everyone else is the problem. A little self awareness would solve a lot of problems.
HN guidelines ask commenters to avoid name-calling. You can critique the article without slurs.
Exhibit B
I don't think the phenomenon is limited to Seattle.
It's not. I know some ex-Bay Area devs who are of the same mind, and I'm not too far off.
I think it's definitely stronger at MS, as my friend on the inside tells me, than at most places.
There are a lot of elements to it: profits at all costs, the greater economy, FOMO, and a resentment of engineers and technical people who have been practicing, for a long time, what I can only guess execs see as alchemy. They've decided that they are now done with that and that everyone must use the new sauce, because reasons. Sadly, until things like logon buttons disappear and customers get pissed, it won't self-correct.
I just wish we could present the best version of ourselves and, as long as deadlines are met, it'll all work out, but some have decided on scorched earth. I suppose it's a natural reaction to always want to be on the cutting edge, even before the cake has left the oven.
I think reading the room is required here. You and your friend can both be right at the same time. You want to build an AI-enabled app, and indeed there's plenty of opportunity for it, I'm sure. And your friend can hate what it's done to their job stability and the industry. Also, totally unrelated, but what is the meaning or etymology behind the app name Wanderfugl? I initially read it as Wanderfungl.
I "spoke" it to myself while reading, and instantly heard "Wonderfuckle".
It's wandering bird in Norwegian
There's a great non-AI point in this article - Seattle has great engineers. In pursuing startups, Seattle engineers are relatively unambitious compared to the Bay Area. By that I mean there's less "shooting for unicorns" and a comparatively more reserved startup culture and environment.
I'm not sure why. I don't think it's access to capital, but I'd love to hear thoughts.
one reason is that startup culture is cringe as hell
I'm being coarse, but like... it is though.
My pet theory is that most of the investor class in Seattle is ex-Microsoft and ex-Amazon. Neither Microsoft nor Amazon is really a big splashy unicorn. Amazon's greatest innovation (AWS) isn't even their original line of business and is now 'boring'. No doubt they've innovated all over their business in both little and big ways, but not splashy ways; hell, every time Amazon tries to splash they seem to fall on their ass more often than not (look at their various cancelled hardware lines, their game studios, etc. Alexa still chugs on, but she's not getting appreciably better for the end user over even the last 10 years).
Microsoft is the same, a generally very practical company just trying to do practical company stuff.
All the guys that made their bones, vested and rested, and now want to turn some of that windfall into investments, likely don't have the kind of risk tolerance it takes to fund a potential unicorn. All smart people I'm sure, smart enough to negotiate big windfalls from ms/az, but far less risk tolerant than a guy in SF who made their investment nest egg building some risky unicorn.
I'm not surprised you're getting bad reactions from people who aren't already bought-in. You're starting from a firm "I'm right! They're wrong!" with no attempt to understand the other side. I'm sure that comes across not just in your writing
I'm a former AI-hater and sceptic. I do B2B consultancy/development work for my clients.
I understand why people are irritated by this.
However, recently I tried the GitHub Copilot agent with VS Code using Claude Opus 4.5. It literally implemented, tested and fixed entire new features in minutes, that otherwise would have taken days or even weeks of routine repetitive work from me. All while mimicking style and patterns in my existing codebase which made me instantly understand exactly what it was doing. I found it to be an insane productivity boost and I can see how it might be affecting hiring processes in numerous industries, especially in software engineering space.
It's satisfying to hear that Microsoft engineers hate Microsoft's AI offerings as much as I do.
Visual Studio is great. IntelliSense is great. Nothing open-source works on our giant legacy C++ codebase. IntelliSense does.
Claude is great. Claude can't deal with millions of lines of C++.
You know what would be great? If Microsoft gave Claude the ability to semantic search the same way that I can with Ctrl-, in Visual Studio. You know what would be even better? If it could also set breakpoints and inspect stuff in the Debugger.
You know what Microsoft has done? Added a setting to Visual Studio where I can replace the IntelliSense auto-complete UI, that provides real information determined from semantic analysis of the codebase and allows me to cycle through a menu of possibilities, with an auto-complete UI that gives me a single suggestion of complete bullshit.
Can't you put the AI people and the Visual Studio people in a fucking room together? Figure out how LLMs can augment your already-really-good-before-AI product? How to leverage your existing products to let Claude do stuff that Claude Code can't do?
So, they were laid-off because they stubbornly resisted adopting AI? Remember those who were laid-off in the 90s because they resisted working at a computer because they hated computers? History repeating.
I don't think the root cause here is AI. It's the repeated pattern of resistance to massive technological change by system-level incentives. This story has happened again and again throughout recent history.
I expect it to settle out in a few years where: 1. The fiduciary duties of company shareholders will bring them to the point of stopping the chase of AI hype and instead deriving an understanding of whether it's driving real top-line value for their business or not. 2. Mid- to senior-career engineers will have no choice but to level up their AI skills to stay relevant in the modern workforce.
It's probably good if some portion of the engineering culture is irrationally against AI and, like, refuses to adopt it, sort of Amish style. There's probably still a ton of good work that can only be done if every aspect of a product/thing gets focused human attention, some of which might out-compete AI-aided work.
I think you hit the nail on the head there. There's absolutely nothing we can do with AI that we can't do without it. And the level of understanding of a large codebase that a solid group of engineers has is paramount to moving fast once the product is live.
> level of understanding of a large codebase that a solid group of engineers has is paramount to moving fast once the product is live.
Try hiring and retaining that solid group of engineers if you are a small/mid-sized company without FAANG-level resources to offer.
Seattle is the type of economy that is heavily threatened by AI. Desk jobs that Claude basically already knows how to do, and just needs to be integrated into the existing systems to have impact.
Most people in Seattle "tech" are middle management with no discernible skills other than organizational deckchair arrangement. It is a place to optimize for work-life balance, and not take risk - this is why the region, despite its technology density, has such a disproportionately small startup scene.
AI IS a huge threat to a place like this and I am not optimistic about the ability for people to adapt.
When companies mandate tools regardless of effectiveness and punish engineers for not using them, it's governance failure dressed as innovation.
The distinction between "real products" (solving actual problems) and "hype products" (exciting investors) reflects a pragmatic engineering perspective.
The situation seems less about AI itself and more about corporate dysfunction using AI as cover for broader organizational failures.
We have these weekly rah-rah AI meetings where we swap tips on what we've achieved with Copilot and Devin. Mostly crickets, but everyone is talking with lots of enthusiasm. It's starting to get silly now, though; most people can't even get the tools to do anything more useful than the trivial things we used to see on Stack Overflow.
Our (on-the-way-out) mayor likes it!
"I said, Imagine how cool would this be if we had like, a 10-foot wall. It’s interactive and it’s historical. And you could talk to Martin Luther King, and you could say, ‘Well, Dr, Martin Luther King, I’ve always wanted to meet you. What was your day like today? What did you have for breakfast?’ And he comes back and he talks to you right now."
The thing about dismissing AI in 2025 is that it's on par with dismissing the wearable computing group at MIT in the 1980s.
But admittedly, if one had tried to productize their stuff in the 1980s it would have been hilarious. So the rewards here are going to go to the people who read the right tea leaves and follow the right path to what's inevitable.
In the short term, a lot of not-so-smart people are going to lose a lot of money believing some of the ludicrous short-term claims. But when has that not been the case?
This is not the right time of year to pitch in Seattle. The days are short and the people are cranky. But if they want to keep hating on AI as a technology because of Microsoft and Amazon, let them, and build your AI technology somewhere else. San Francisco thinks the AGI is coming any day now so it all balances out, no?
I might have written in broad strokes but I was really annoyed. Like I want to get out of the Amazon/Microsoft bubble. And yeah everyone is cranky rn
The name "Wanderfugl" is wanderfully fugly.
Oddly, the screenshots in the article show the name as "Wanderfull".
> After a pause I tried to share how much better I've been feeling—how AI tools helped me learn faster, how much they accelerated my work on Wanderfugl. I didn't fully grok how tone deaf I was being though. She's drowning in resentment.
Here's the deal. Everyone I know who is infatuated with AI shares things AI told them with me, unsolicited, and it's always so amazingly garbage, but they don't see it or they apologize it away [1]. And this garbage is being shoved in my face from every angle --- my browser added it, my search engine added it, my desktop OS added it, my mobile OS added it, some of my banks are pushing it, AI comment slop is ruining discussion forums everywhere (even more than they already were, which is impressive!). In the meantime, AI is sucking up all the GPUs, all the RAM, and all the kWh.
If AI is actually working for you, great, but you're going to have to show it. Otherwise, I'm just going to go into my cave and come out in 5 years and hope things got better.
[1] Just a couple days ago, my spouse was complaining to her friend about a change that Facebook made, and her friend pasted an AI suggestion for how to fix it with like 7 steps that were all fabricated. That isn't helpful at all. It's even less helpful than if the friend just suggested to contact support and/or delete the facebook account.
>It's even less helpful than if the friend just suggested to contact support and/or delete the facebook account.
To be fair, pretty much all advice in life is less helpful than 'delete the facebook account'
I've recently found that it can be a useful substitute for Stack Overflow. It does occasionally make shit up, but Stack Overflow and forum searching also have a decently high miss rate, so that doesn't piss me off too much. And it's usually immediately obvious when a method doesn't exist, so it doesn't waste a lot of time for each incident.
Specifically, I was using Gemini to answer questions about Godot for C# (not GDScript or using the IDE, where documentation and forum support are stronger), and it was mostly quite good for that.
It's like porn: use it privately if you have to, but don't make it my problem.
speaking of caves...
I just picked up an old GameCube. It's refreshing to play purely offline content from an age without any AI art of any kind. Some games, like Animal Crossing, will break in 2031 though, so there's only a good 5 more years left to enjoy it.
Well, the Gamecube is probably fine, but the Dreamcast was thinking, so watch out :P
I know Animal Crossing is sensitive to the RTC, but could you set the clock back 28 years and go from there? You'll have the same days of the week and what not, just the year number will be wrong.
My previous software job was for a Seattle-based team within Amazon's customer support org.
I consider it divine intervention that I departed shortly before LLMs got big. I can't imagine the unholy machinations my former team has been tasked with working on since I left.
I believe that most headlines with the format "everybody thinks X" would be more honest if rewritten as "I believe X".
Or should I say... Everybody thinks that titles using the format "everybody thinks X" would be more honest if they instead said: "I believe X."
I'm surprised that nobody at the tech companies seems to realize basic psychology: The harder you try to force something on people, the less they want it.
He describes his startup as an AI-oriented map... to me that sounds amazing and totally up my alley. But then it's actually about trip planning... to me that's too constrained and specific. What I would love is a map-type experience that gives me an AI-type interface for interesting things in any given area that might be near me and worth checking out.
And not just for travel by the way... I love just exploring maps and seeing a place.. I'd love to learn more about a place kind of like a mesh between Wikipedia and a map and AI could help
So actually that's what I've been working on. It's been taking so long because I've been building my own AI-focused geocoder to do just that.
Look forward to trying it!
Was this written by AI? It sounds like the writing style of an elementary school student. Almost entirely made of really simple sentence structures, and for whatever reason I find it really annoying to read.
It has all the signs. Em-dashes. That distinct way of using short sentences to drive a point. Like this.
Whenever I see "everyone", and broad statements that try to paint an entire geography based on one company ("Microsoft"), I'm suspicious of the author's motives at worst, or just dismissive of the premise at best.
I see what the author is saying here, but they're painting with an overly broad brush. The whole "San Francisco still thinks it can change the world" also is annoying.
I am from the Seattle area, so I do take it a bit personally, but this isn't exactly my experience here.
> Instead, she reacted to it with a level of negativity I'd never seen her direct at me before.
A few months ago, a friend of mine showed me a poem she wrote for her newborn. Or more specifically, a poem she asked ChatGPT to write for her newborn.
I almost acted like this ex-Microsoft senior. Tbh if I didn't know it was for her own child, I would have acted this way.
I (thought that I) managed to ignore my opinions about whether writing poems is a good use of AI and steer the topic to baby formula milk instead.
I think we should be honest and consistent about losing our jobs to AI. For decades we justified automation by saying things like: "Sure, the toothpaste tube machine replaced 30 workers, but someone will need to maintain and operate it." And whenever someone pointed out that one mechanic doesn't replace those 30 lost jobs, everyone went quiet.
Now we're using the same logic again: "Well, you just need to learn to use the AI before someone else does."
And if anyone doubts that the world can move on without the software engineer, remember that it moved on just fine after eliminating the toothpaste tube fillers. The world kept turning, just a little colder and more indifferent each time another role disappeared.
Maybe instead of pretending this time is different, we should focus on writing the best epitaph we can.
The only clear applications for AI in software engineering are for throwaway code, which interestingly enough isn't used in software engineering at all, or for when you're researching how to do something, for which it's not as reliable as reading the docs.
They should focus more on data engineering/science and other similar fields, which are much more about that kind of throwaway, exploratory code, but since there are often no tests there, that's a bit too risky.
Most codebases are pretty proprietary and so out of distribution for the AI which causes poor performance and you really have to fight some of the training to use internal libraries and conventions.
Still useful, but certainly not PhD-level when it imports X, you remind it that its instructions are to use Y, it apologizes, imports Y, but then immediately imports X again.
So when your project gets cancelled for AI and you haven't gotten a raise while AI researchers in the same company are getting generational wealth - it does feel pretty bad.
According to demos, AI coding tools are allowing neophytes to instantly create working apps and websites with mere descriptions of what they want. According to devs, they're 10x as productive because certain time-consuming tasks are condensed like unit test writing, code reviews, and code refactors and clean-up. So we're to assume that in the age when the typical App Store offers a million apps we'll never be interested in, soon that number will be a billion.
In comes Wanderfugl. A tool for traveling that I will never need, where just trying to figure out what it does used more time than I wanted to spend on it. Now with AI, there will be several shiny new travel apps like Wanderfugl for you to learn and choose from literally every time you go on another vacation.
Wanderfugl may be wonderful, and an achievement. But the reaction of this Seattleite is "What's the point anymore?" This is why I am uninterested in the AI coding trend. It's just a part of a lot of new stuff I don't need.
this reads like an ad for your project
It reads like it's AI-edited, which is deliciously ironic.
(Protip: if you're going to use em—dashes—everywhere, either learn to use them appropriately, or be prepared to be blasted for AI—ification of your writing.)
My creative writing teacher in college drilled the em dash into me. I can’t really write without them now
I think the presence of em dashes is a very poor metric for determining if something is AI generated. I'm not sure why it's so popular.
For me it is that they are wrongly used in this piece. Em dashes as appositives have the feel of interruption—like this—and are to be used very sparingly. They're a big bump in the narrative's flow, and are to be used only when you want a big bump. Otherwise appositives should be set off with commas, when the appositive is critical to the narrative, or parentheses (for when it isn't). Clause changes are similar—the em dash is the biggest interruption. Colons have a sense of finality: you were building up to this: and now it is here. Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time. Like this. And so full stops should be your default clause splice when you're revising.
Having em-dashes everywhere—but each one or pair is used correctly—smacks of AI writing—AI has figured out how to use them, what they're for, and when they fit—but has not figured out how to revise text so that the overall flow of the text and overall density of them is correct—that is, low, because they're heavy emphasis—real interruptions.
(Also the quirky three-point bullet list with a three-point recitation at the end with bolded leadoffs to each bullet point and a final punchy closer sentence is totally an AI thing too.)
But, hey, I guess I fit the stereotype!—I'm in Seattle and I hate AI, too.
> Semicolons are for when you really can't break two clauses into two sentences with a full stop; a full stop is better most of the time.
IIRC (it's been a while) there are 2 cases where a semi-colon is acceptable. One is when connecting two closely-related independent clauses (i.e. they could be two complete sentences on their own, or joined by a conjunction). The other is when separating items in a list, when the items themselves contain commas.
OMG, beautifully described! (not sarcastic!)
Ironically, years ago I fell into the habit of using too many non-interrupting em dashes because people thought semicolons were pretentious.
But introductory rhetorical questions? As sentence fragments? There I draw the line.
Also, for sheer delightful perversity, I ran the above comment through Copilot/ChatGPT and asked it to revise, and this is what I got. Note the text structuring and how it has changed! (And how my punctuation games are gone, but we expected that.)
>>>
For me, the issue is that they’re misused in this piece. Em dashes used as appositives carry the feel of interruption—like this—and should be employed sparingly. They create a jarring bump in the narrative’s flow, and that bump should only appear when you want it. Otherwise, appositives belong with commas (when they’re integral to the sentence) or parentheses (when they’re not). Clause breaks follow the same logic: the em dash is the strongest interruption. Colons convey a sense of arrival—you’ve been building up to this: and now it’s here. Semicolons are for those rare cases when two clauses can’t quite stand alone as separate sentences; most of the time, a full stop is cleaner. Like this. Which is why full stops should be your default splice when revising.
Sprinkling em dashes everywhere—even if each one is technically correct—feels like AI writing. The system has learned what they are, how they work, and when they fit, but it hasn’t learned how to revise for overall flow or density. The result is too many dashes, when the right number should be low, because they’re heavy emphasis—true interruptions.
(And yes, the quirky three-point bullet list with bolded openers and a punchy closer at the end is another hallmark of AI prose.)
But hey, I guess I fit the stereotype—I’m in Seattle, and I hate AI too.
I think it's because it is difficult to actually add an em dash when writing with a keyboard (except on Macs, I've heard). So it's either they 1) memorized the em dash alt code, 2) had a keyboard shortcut for the key, or 3) are using the character map to insert it every time, all of which are a stretch for a random online post.
You just type hyphen twice in many programs... Or on mobile you hold hyphen for a moment and choose em dash. I don't use it, but it's very easy to use.
Related article posted here https://news.ycombinator.com/item?id=46133941 explains it: "Within the A.I.’s training data, the em dash is more likely to appear in texts that have been marked as well-formed, high-quality prose. A.I. works by statistics. If this punctuation mark appears with increased frequency in high-quality writing, then one way to produce your own high-quality writing is to absolutely drench it with the punctuation mark in question. So now, no matter where it’s coming from or why, millions of people recognize the em dash as a sign of zero-effort, low-quality algorithmic slop."
So the funny thing is em dashes have always been a great trick to help your writing flow better. I guess GPT-4o figured this out in RLHF and now it's everywhere
Ironic? The author is working on an AI project.
The irony is that AI writing style is pretty off-putting, and the story itself was about people being put off by the author's AI project.
You mean Wanderfugl???
An iconic name
Everyone in Detroit hates EVs.
AI for decades has been the word to mean "frontier capability which is not fully developed yet". It is not a pitch for end users. Perhaps your product produces quality code. Perhaps it produces highly novel trip itineraries. Say that, but don't say AI. The end user does not know the difference between a neural net and a for loop.
Basically everyone I know in engineering shares this resentment in some way, and the AI industry has itself to blame.
People are fed up and burned out from being forced to try useless AI tools by non-technical leaders who do not understand how LLMs work nor how they suck, and now resent anything related to AI. But for AI companies there is a perverse incentive to push AI on people until it finally works, because the winner of the AI arms race won't be the company that waits until they have a perfect, polished product.
I have myself had "fun" trying to discuss LLMs with non technical people, and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation. They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues. Combine that with how every "wow!" LLM example is actually just the LLM regurgitating a very common thing to write tutorials about, and people tend to over-estimate its abilities.
I use Claude multiple times a week because even though LLM-generated code is trash I am open to trying new tools, but my general experience is that Claude is unable to do anything well that I can't have my non-technical partner do. It has given me a sort of superiority complex where I immediately disregard the opinion of any developer who thinks it's a wonder tool, because clearly they don't have high standards for the work they were already doing.
I think most developers with any skill to their name agree. Looking at how Microsoft developers are handling the forced AI, they do seem desperate: https://news.ycombinator.com/item?id=44050152 even though they respond with the most "cope" answers I've ever read when confronted about how poorly it is going.
> and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation.
There are quite a few things they can do reasonably well - but they are mostly useful for experienced programmers/architects as a time saver. Working with an LLM for that often reminds me of when I had many young, inexperienced Indians to work with - the LLM comes up with the same nonsense, lies and excuses, but unlike the inexperienced humans I can insult it guilt-free, which also sometimes gets it back on track.
> They refuse to believe it, even though they do hit a wall with their vibe-coded project after a few months when claude stops generating miracles any more - they lack the experience with code to understand they are hitting maintainability issues.
For having an LLM operate on a complete code base there currently seems to be a hard limit of something like 10k-15k LOC, even with the models with the largest context windows - after that, if you want to continue using an LLM, you'll have to make it work only on a specific subsection of the project, and manually provide the required context.
Now the "getting to 10k LOC" _can_ be sped up significantly by using an LLM. Ideally you refactor the stupid parts along the way - which can be made a bit easier by building in sensible steps (which again requires experience). From my experiments, once you've finished that initial step you'll then spend roughly 4-5 times the amount of time you just spent with the LLM to make the code base actually maintainable. For my test projects, I roughly spent one day building it up and the rest of the week getting it maintainable. Fully manual would've taken me 2-3 weeks, so it saved time - but only because I do have experience with what I'm doing.
I think there's a lot of reason to what you are saying. The 4-5x amount of time to make the codebase readable resonates.
If I really wanted to go 100% LLM as a challenge I think I'd compartmentalize a lot and maybe rely on OpenAPI and other API description languages to reduce the complexity of what the LLM has to deal with when working on its current "compartment" (i.e. the frontend or backend). Claude.md also helps a lot.
I do believe in some time saving, but at the same time, almost every line of code I write usually requires some deliberate thought, and if the LLM makes that thought, I often have to correct it. If I use English to explain exactly what I want it is sometimes OK, but then that is basically the same effort. At least that's my empirical experience.
> almost every line of code I write usually requires some deliberate thought
That's probably the worst case for trying to use a LLM for coding.
A lot of the code it'll produce will be incorrect on the first try - so to avoid sitting through iterations of absolute garbage you want the LLM to be able to compile the code. I typically provide a makefile which compiles the code, and then runs a linter with a strict ruleset and warnings set to error, and allow it to run make without prompting - so the first version I get to see compiles, and doesn't cause lint to have a stroke.
Then I typically make it write tests, and include the tests in the build process - for "hey, add tests to this codebase" the LLM is performing no worse than your average cheap code monkey.
Both with the linter and with the tests you'll still need to check what it's doing, though - just like the cheap code monkey it may disable lint on specific lines of code with comments like "the linter is wrong", or may create stub tests - or even disable tests, and then claim the tests were always failing, and it wasn't due to the new code it wrote.
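To make that concrete, here's a minimal sketch of the kind of make gate I mean. The layout (src/*.c), the linter (clang-tidy), and the test runner script (run_tests.sh) are placeholders rather than anything from a real project; the point is just that the agent gets exactly one command, make, that compiles with warnings promoted to errors, lints with a strict ruleset, and runs the tests:

    # Single gate the agent is allowed to run unattended: "make"
    # builds, lints, and tests. Tool names below are placeholders --
    # swap in whatever compiler, linter, and test runner you use.
    CFLAGS := -Wall -Wextra -Werror        # warnings become errors
    SRCS   := $(wildcard src/*.c)
    OBJS   := $(SRCS:.c=.o)

    .PHONY: all build lint test

    all: build lint test

    build: app

    app: $(OBJS)
    	$(CC) $(CFLAGS) -o $@ $(OBJS)

    %.o: %.c
    	$(CC) $(CFLAGS) -c $< -o $@

    lint:
    	clang-tidy --warnings-as-errors='*' $(SRCS) -- $(CFLAGS)

    test: build
    	./run_tests.sh                     # hypothetical test runner

That way the first diff you actually review already compiles cleanly and passes lint; you still have to eyeball the tests themselves, since a disabled or stubbed-out test passes make just fine.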
I did some contract work for Microsoft a few years ago (2011-2013). It was striking how much pressure was put on you to dogfood Microsoft stuff at the expense of basically everything. I can only imagine what it must be like at the moment.
I think treating AI as the best possible field for everyone smart and capable is itself very narrow-minded and short-sighted. Some people just aren't interested in that field; why is that so hard to accept? The world still needs experts in other fields, even within computing.
Well written! I’m Seattle based (although at Google) and I think the mood is only slightly better than what you describe. But the general feeling that the company has no interest in engineering innovation is alive and well. Everything needs to be standardized and engineers get shuffled between products in a way that discourages domain knowledge.
Thanks!
Did he try any control topics?
It’s possible that no matter what he asked, the people of Seattle would respond negatively.
The reason is quite straightforward. LLMs excel at mapping tasks but suck at first principles reasoning and validation.
When you are working on the AI map app, you are mapping your new idea to code.
When people are working with legacy code and fixing bugs, they are employing reasoning and validation.
The problem is management doesn't allow the engineers to discern which is which and just stuff it down their throats.
Making no statement about the value or lack of value in AI itself:
When people talk about it like this (this author is hardly the only example) they sound like an evangelist proselytizing and it feels so weird to me.
This thing could basically read “people in Seattle don’t want to believe in God with me, people in San Francisco have faith though. I’m sad my friends in Seattle won’t be going to heaven.”
Interesting that this talks about people in tech who hate AI; it's true, tech seems actually fairly divided with respect to AI sentiment.
You know who's NOT divided? Everyone outside the tech/management world. Antipathy towards AI is extremely widespread.
And yet there are multiple posts ITT (obviously from tech-oriented people) proclaiming that large swaths of the non-tech world love AI.
An opinion I've personally never encountered in the wild.
I think they exist as a "market segment" (i.e, there are people out there who will use AI), but in terms of how people talk about it, sentiment is overwhelmingly negative in most circles. Especially folks in the arts and humanities.
The only non-technical people I know who are excited about AI, as a group, are administrator/manager/consultant types.
This isn’t just a Seattle thing, but I do think the outsized presence of specific employers there contributes to an outsized negativity around AI.
Look, good engineers just want to do good work. We want to use good tools to do good work, and I was an early proponent of using these tools in ways to help the business function better at PriorCo. But because I was on the wrong team (On-Prem), and because I didn’t use their chatbots constantly (I was already pitching agents before they were a defined thing, I just suck at vocabulary), I was ripe for being thrown out. That built a serious resentment towards the tooling for the actions of shitty humans.
I’m not alone in these feelings of resentment. There’s a lot of us, because instead of trusting engineers to do good work with good tools, a handful of rich fucks decided they knew technology better than the engineers building the fucking things.
Roughly a third of the engineers in the greater Seattle area work for Microsoft, so we needn't conjure up any strange quality of the local culture to explain this.
The problem with AI is that the media and the tech hype machine wants everyone to believe that it is more than a glorified randomized text generator. Yes, for many problems this is just what you need, but not to create reliable software. Somehow, they want everyone to go into a state of disbelief and agree that it is a superior intelligence or at least the clear sign of something of this sorts, and that we should stop everything we're doing right now to give more money and attention to this endeavor.
I've lived in Seattle my whole life, and have worked in tech for 12+ years now as a SWE.
I think the SEA and SF tech scenes are hard to differentiate perfectly in a HN comment. However, I think any "Seattle hates AI" sentiment has more to do with the incessant pushing of AI into all the tech spaces.
It's being claimed as the next major evolution of computing, while also being cited as reasons for layoffs. Sounds like a positive for some (rich people) and a negative for many other people.
It's being forced into new features of existing products, while adoption of said features is low. This feels like cult-like behavior where you must be in favor of AI in your products, or else you're considered a luddite.
I think the confusing thing to me is that things which are successful don't typically need to be touted so aggressively. I'm on the younger side and generally positive to developments in tech, but the spending and the CEO group-think around "AI all the things" doesn't sit well as being aligned with a naturally successful development. Also, maybe I'm just burned out on ads in podcasts for "is your workforce using Agentic AI to optimize ..."
textbook way to NOT roll out AI for your org. AI has genuine benefits to white collar workers, but they are not trained for the use-cases that would actually benefit them, nor are they trained in what the tech is actually good at. they are being punished for using the tools poorly (with no guidance on how to use them "good"), and when they use the tools well, they fear being laid off once an SOP for their AI workflows is written.
The article seems full of made up things. The "coworker" isn't a real person, some kind of "composite of people", I'm then curious if the "she" is simply used as "a random made up person".
Then it says: "Engineers don't try because they think they can't." They don't try AI is what I understand, but that contradicts the whole article, that every engineer in Seattle is actively using AI, even forced to.
Then it says: "now believes she's both unqualified for AI work" - why would they believe that? She supposedly has been using AI constantly and has not been part of those "laid off", so she must be great AI talent.
Finally it says: "now believes she's both unqualified for AI work and that AI isn't worth doing anyway. She's wrong on both counts, but the culture made sure she'd land there." Which is completely unsubstantiated and also coming from a person trying to grift us with their AI product which they want to promote and sell.
I don't know, it read like a shill article from a grifter.
> like building an AI product made me part of the problem.
I don't see how the author can believe that quitting their job to work on an AI startup is NOT contributing to the problem of "AI products being shoved down everyone's throats."
Except, of course, that their financial bottom line depends on not believing this.
The big revenue isn't going to come from improvements in coding, or writing better emails, or protein folding. It's going to come from more seductive and compelling ads (using all the data vacuumed up from your apps and your psychological profile).
Reminds me of this post from a week or so ago: https://news.ycombinator.com/item?id=46027290
Seattle sounds kinda nice now. AI fatigue is real. I just had to swap eye doctors because they changed their medical records to some AI powered bullshit and wanted me to re-enter all my info into the new system in order to check-in for my appointment. A website that when I looked at their EULA page redirected to an empty page, no clear mention of HIPAA anywhere on the website's other pages. The eye doctor seemed confused why I wanted to stop using them after ten years as a patient even after I pointed out the flaws. It's madness.
This sentiment is not only in Seattle.
> But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Yep, big orgs doing big org things. Don't miss it a bit.
Some massive bait in this article. Like come on author - do you seriously have these thoughts and beliefs?
> It felt like the culture wanted change.
>
> That world is gone.
Ummm source?
> This belief system—that AI is useless and that you're not good enough to work on it anyway
I actually don't know anyone with this belief system. I'm pretty slow on picking up a lot of AI tooling, but I was slow to pick up JS frameworks as well.
It's just smart to not immediately jump on a bandwagon when things are changing so fast because there is a good chance you're backing the wrong horse.
And by the way, you sound ridiculous when you call me a dinosaur just because I haven't started using a tool that didn't even exist 6 months ago. FOMO sales tactics don't work on everyone, sorry to break it to you.
When the singularity hits in who knows how many years from now, do you really think it's one of these llm wrapper products that's going to be the difference maker? Again, sorry to break it to you but that's a party you and I are not going to get invited to. 0% chance governments would actually allow true super intelligence as a direct to consumer product.
It’s like you saw all the evidence and drew the conclusion you were most comfortable with, despite what the evidence suggests.
I live in seattle and I love AI lol.
But yea, AI companies should pay into a UBI fund. The value of collective creative output of humanity should go back to the living not the select few.
UBI will never make financial sense.
UBI isn't about making financial sense it's about keeping the last traces of society duct taped together before it all collapses. Remove all pathways to a middle-class life and you're left with a populace on the precipice of violent revolts.
What I mean is that it simply would not work. The math doesn't add up. It would directly lead to Weimar levels of hyperinflation. Which is a far worse outcome.
I'm in Seattle at AWS and haven't encountered this attitude towards AI at all. All of my coworkers pretty much love using Cline and Kiro.
As I've said before: AI mandates, like RTO mandates, are just another way to "quiet fire" people, or at least "quiet renegotiate" their employment.
That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.
A well-intentioned AI mandate would either come with a) training and/or b) dedicated time to experiment and figuring out what works well for you. Instead what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we layoff even more people and move jobs overseas."
My cynical take is, this is an intentional strategy to continue culling headcount, except overindexing on people seen as unaligned with the AI future of the company.
> AI mandates, like RTO mandates, are just another way to "quiet fire" people
That's a recurring argument, and I don't believe it, especially in large tech companies. They have no problem doing multiple large non-quiet lay-offs, why would they need moustache-twirling level schemes to get people to quit.
I don't believe companies to be well intentioned, but the simplest explanation is often the best:
1. RTO is probably driven by people in power who either like to be in the office, believe being in the office is the most efficient way to work (whether that's true or not), or have financial stakes in having people occupy said offices.
2. The "AI" mandate is probably driven by people in power who either do see value in AI, think it's the most efficient way to work (whether that's true or not), have FOMO on AI, or have financial stakes in having people use it.
> They have no problem doing multiple large non-quiet lay-offs, why would they need moustache-twirling level schemes to get people to quit.
So the thing about all large layoffs is that there is actually some non-obvious calculus behind them.
One thing, for instance, is that typically in the time period soon after layoffs there is some increased attrition in the surviving employees, for a multitude of reasons. So if you lay off X people you actually end up with X + Y lower headcount shortly after. There are also considerations like regulations.
What this means is that planning layoffs has multiple moving parts:
1) The actual monetary amount to cut -- it all starts with $$$;
2) The absolute number of headcount that translates to;
3) The expected follow-on attrition rate;
4) The severance (if any) to offer;
5) The actual headcount to cut with a view of the attrition and severance;
6) Local labor regulations (e.g. WARN) and their impact, monetary or otherwise;
7) Consideration, impact on internal morale and future recruitment.
So it's a bit like tuning a dynamic system with several interacting variables at play.
Now the interesting bit of tea here is that in the past couple of years, the follow-on (and all other) attrition has absolutely plummeted, which has thrown the standard approaches all out of whack. So companies are struggling a bit to "tune" their layoffs and attrition.
I had an exec frankly tell me this after one of the earliest waves of layoffs a couple years ago, and I heard from others that this was happening across the industry. Sure enough, there have been more and more seemingly haphazard waves of layoffs and the absolute toxicity this has introduced into corporate culture.
Due to all this and the overall economy and labor market, employee power has severely weakened, so things like morale and future recruitment are also lower priorities.
Given all this calculus, a company can actually save quite some money (severance) and trouble if people quit by themselves, with minimal negative repercussions.
Not quite moustache-twirling but not quite savory either.
I got a thread on SomethingAwful gassed [1] because it was about an AI radio station app I was working on. People on that forum do not like AI.
I think some of the reasons that they gave were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires people use it with effort, which does seem to be the outlier case.
[1] basically the hall of shame for bad threads.
I live and work in Seattle and I don't hate AI. Further, I know people here that are just as overly excited about AI as its proponents on HN.
I've also heard complaints about the mandatory use of the tools in the office and the pageantry involved.
I've seen people in love with garbage they produced with AI.
I'm annoyed by the way they are being pushed in my face, but hate is really too strong. I've tried using them and gotten total garbage. I assume that's because my prompting sucks, since I know people who love the tools and have shared great output from them. Those people are a minority in my opinion.
Trying to over simplify the experiences of humanity is a fool's game.
something I haven't seen mentioned: the people who are AI's biggest proponents are distinctly unlikeable humans. Look at Elon Musk with Grok, it's disgusting. Look at Sam Altman, Alex Karp, look at Peter "I'm not sure humanity should continue" Thiel. These people are wildly misanthropic, and of course it gives the whole thing an unpleasant miasma.
This is making me gain significant respect for Seattle.
This person crafts quite the straw man!
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
> I don't know anyone who thinks AI is useless.
By my reading, there are several people on this discussion thread right now who think it (in the form of LLMs) is useless?
This person wrote a blog post admitting to tone-deafness in cheerleading AI and talking about all the ways AI hype has negatively impacted peoples' work environment. But then they wrap up by concluding that it's the anti-AI people that are the problem. That's a really weird conclusion to come to at the end of that blog post. My expectation was that the end result was "We should be measured and mindful with our advocacy, read the room, and avoid aggressively pushing AI in ways that negatively impact peoples' lives."
My 2¢... LLMs are kind of amazing for structured text output like code. I have a completely different experience using LLMs for assistance writing code (as a relative novice) than I do in literally every other avenue of life.
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code and tutorials, design and style guides, means that the output as regards software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
I'm experimenting with Gemini 3 and will try Opus 4.5 soon, but I've seen huge jumps doing EE for construction over the last batch of models.
I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which with a good harness should be really useful!)
Let me know what you've experienced. Not many construction EE on HN.
False.
- The entire community @ https://seattlefoundations.org
Are they good?
Tech professionals that depend on their work to survive and have not been thinking about capital vs labor are in delulu land.
AI is in the Radium phase of its world-changing discovery life cycle. It's fun and novel, so every corporate grifter in the world is cramming it into every product that they can, regardless of it making sense. The companies being the most reckless will soon develop a cough, if they haven't already.
"But in San Francisco, people still believe they can change the world-so sometimes they actually do."
For the better, or for the worse?
Always amazed to see people who don't hate AI.
The author has an unquestioning assumption that the only innovation possible is innovation with AI. That is genuinely weird. Even if one believes in AI, innovation in non-AI space should be possible, no?
Second, engineering and innovation are two different categories. Most of engineering is about ... making things work. Fixing bugs, refactoring fragile code, building new features people need or want. Maybe AI products would be hated less if they were just a little less about pretending to be an innovation and just a little more about making things work.
A sizable fraction of current AI results are wrong. The key to using AI successfully is imposing the costs of those errors on someone who can't fight back. Retail customers. Low-level employees. Non-paying users.
A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.
"everyone"? Clickbait.
I have some trouble reconciling the conclusion of this article with everything else it described. How is "Clearly my coworker just wasn't believing hard enough in AI, and it harms everyone!" the conclusion the author comes to?
It might just be an ESL issue on my end, but I seriously feel some huge dissonance between the explanations of "how the tech was made the main KPI, used to justify layoffs and forced in a way that hinders productivity", and the conclusion that seems to say "the real issue with those people complaining is that they just don't believe in AI".
I don't understand this article, it seems to explain all the reasons people in Seattle might have grievances, and then completely dismisses those to adopt the usual "you're using it wrong".
Is this article just a way to advertise for Wanderfugl? Because this reads like the usual "Okay your grievances are fine and all, but consider the following: it allows me to make a SaaS really fast!" that I became accustomed to seeing in HN discussions.
My esteem of Seattle area engineers compared to Silicon Valley engineers has just gone up.
I’m never leaving Seattle.
iykyk
Wanderfugl is a strange name for an "AI"-powered map. The Wandervogel movement was against industrialization and pro nature. I'm sure they would have looked down on iPhones and centralized "AI" that gives them instructions where to go.
Again a somewhat positive term (if you focus on "back to nature" and ignore the nationalist parts) is taken, assimilated and turned on its head.
I honestly expected this to be about sanctimonious lefties complaining about a single chatgpt query using an Olympic swimming pool worth of water, but it was actually about Seattle big tech workers hating it due to layoffs and botched internal implementations which is a much more valid reason to hate it.
My buddies still or until recently still at Amazon have definitely been feeling this same push. Internal culture there has been broken since the post covid layoffs, and layering "AI" over the layoffs leaves a bad taste.
Is this a sophisticated ad?
Tech company leadership sees AI as a shortcut to success. You know how in project planning meetings engineers are usually asked how they can pull in the schedule by x number of months? AI is now that thing. Obviously, this is a mistake.
The cult of AI maximalists aren't helping the situation.
There are a few clashing forces. One is the power of startups - what people love is what will prevail. It made Macs and iPhones grab market share back from "corporate" options like Windows and Palm Pilot. It's what keeps TikTok running.
An opposing force is corporate momentum. It's unfortunately true that people are beholden to what companies create. If there are only a few phones available, you will have to pick. If there are only so many shows streaming, you'll probably end up watching the less disgusting of the options.
They are clashing. The ppl's sentiment is "AI bad". But if tech keeps making it and pushing it long enough, ppl will get older, corporate initiatives will get sticky, and it will become ingrained. And once it's ingrained, it's gonna be here forever.
> Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building
Believe me, the same reflexive, critical, negative response is true for most of Europe too
> like building an AI product made me part of the problem.
It's not about their careers. It's about the injustice of the whole situation. Can you possibly perceive the injustice? That the thing they're pissed about is the injustice? You're part of the problem because you can't.
That's why it's not about whether the tools are good or bad. Most of them suck, also, but occasionally they don't--but that's not the point. The point is the injustice of having them shoved in your face; of having everything that could be doing good work pivot to AI instead; of everyone shamelessly bandwagoning it and ignoring everything else; etc.
> It's not about their careers.
That's the thing, though, it is about their careers.
It's not just that people are annoyed that someone spends years to decades learning their craft, and then someone else puts a prompt into a chatbot that spits out an app that mostly works, without understanding any of the code that they 'wrote'.
It's that the executives are positively giddy at the prospect that they can get rid of some number of their employees and the rest will use AI bots to pick up the slack. Humans need things like a desk and dental insurance and they fall unconscious for several hours every night. AI agents don't have to take lunch breaks or attend funerals or anything.
Most employees that have figured this out resent AI getting shoved into every facet of their jobs because they know exactly what the end goal is: that lots of jobs are going to be going away and nothing is going to replace them. And then what?
disagree completely. You're doing the thing I described: assuming it's all ultimately about personal benefit when they're telling you directly that it's not. The same people could trivially capitalize on the shifting climate and have a good career in the new world. But they'd still be pissed about it.
I'm one of these people. So is everyone I know. The grievance is moral, not utilitarian. I don't care about executives getting rid of people. I care that they're causing obviously stupid things to happen, based on their stupid delusions, making everyone's lives worse, and they're unaccountable for it. And in doing so they devalue all of the things I consider to be good about tech, like good software that works and solves real problems. Of course they always did that but it's especially bad now.
> You're doing the thing I described: assuming it's all ultimately about personal benefit when they're telling you directly that it's not.
It doesn't matter how much astroturf I read, I can see what's happening with my own eyes.
> The grievance is moral, not utilitarian.
Nope, it's both.
Businesses have no morals. (Most) people do. Everything that a business does is in service of the bottom line. They aren't pushing AI everywhere out of some desire to help humanity, they're doing it because they sunk a lot of resources into it and are trying to force an ROI.
There are a lot of people who have fully bought in to AI and think that it's more capable than it is. We just had a thread the other day where someone was using AI to vibe code an app, but managed to accidentally tell the LLM to delete the contents of his hard drive.
AI apologists insist that AI agents are a vital tool for doing more faster and handwave any criticism. It doesn't matter that AI agents consume an obscene amount of resources to do it, or that pretend developers are using it to write code they don't understand and can't test that they're shoving into production anyway. That's all fine because a loud fraction of senior developers are using it to bypass the 'boring parts' of writing programs to focus on the interesting bits.
I feel like this is a textbook example of how people talk past each other. There are people in this world who operate under personal utility maximization, and they think everyone else does also. Then there are people who are maximizing for justice: trying to do the most meaningful work themselves while being upset about injustices. Call it scrupulosity, maybe. Executives doing stupid pointless things to curry favor is massively unjust, so it's infuriating.
If you are a utilitarian person and you try to parse a scrupulous person according to your utilitarianism of course their actions and opinions will make no sense to you. They are not maximizing for utility, whatsoever, in any sense. They are maximizing for justice. And when injustices are perpetrated by people who are unaccountable, it creates anger and complaining. It's the most you can do. The goal is to get other people to also be mad and perhaps organize enough to do something about it. When you ignore them, when you fail to parse anything they say as about justice, then yes, you are part of the problem.
> like [being involved in creation of the problem] made me a part of the problem.
Yeah, that's weird. Why would anyone think that? /s
I live in seattle and I love AI lol.
My main problem is that at this point, the value of entire collective creative output of humanity should go to the living not the select few.
IMHO AI companies should pay into some kind of UBI fund / sovereign fund.
Time for capitalism to evolve, yo!
Author here if anyone has thoughts
Out of curiosity, is this piece just some content that you created in the hopes of boosting your company's mindshare?
I'm just really isolated right now, I've been building solo for a long time. I don't have anyone to share my thoughts with, which is something I used to really value at Microsoft.
Howdy! I personally don't really understand the "point" the article is trying to make. I mostly agree with your sentiment that AI can be useful. I too have seen a massive increase in productivity in my hobbies, thanks to LLMs.
As to the point of the article, is it just to say "People shouldn't hate LLMs"? My takeaway was more "This person's future isn't threatened directly so they just aren't understanding why people feel this way." but I also personally believe that, if the CEOs have their way, AI will threaten every job eventually.
So yeah I guess I'm just curious what the conclusion presented here is meant to be?
I guess in conclusion I'm saying that it's hard to build in Seattle, and that's really unfortunate.
I don't follow why it's hard to build in Seattle. Do you mean before this "AI summer" they struggled, or that with AI they have become too slow because they won't adopt it?
I was under the distinct impression that Seattle was somewhat divided over 'big tech', with many long-term residents resenting Microsoft and Amazon's impact on the city (and longing for the 'artsy and free-spirited' place it used to be). Do you think those non-techies are sympathetic to the Microsofties and Amazonians? This is a genuine question, as I've never lived in Seattle, but I visit often, and live in the PNW.
> Do you think those non-techies are sympathetic to the Microsofties and Amazonians?
As somebody who has lived in Seattle for over 20 years and spent about 1/3 of it working in big tech (but not either of those companies), no, I don't really think so. There is a lot of resentment, for the same reasons as everywhere else: a substantial big tech presence puts anyone who can't get on the train at a significant economic disadvantage.
It depends on how AI affects your economy.
If you are a writer or a painter or a developer - in a city as expensive as Seattle - you may feel a little threatened. Then it becomes the trickle-down effect: if I lose my job, I may not be able to pay for my dog walker, or my child care, or my hairdresser, or...
Are they sympathetic? It depends on how much they depend on those who are impacted. Everyone wants to get paid - but AI doesn't have kids to feed or diapers to buy.
They kind of are, though I think so many locals now work in big tech in some way that it's shifted a bit. I wish we could return to being a bit more artsy and free spirited
I've lived in the Seattle area most of my life and lived in San Francisco for a year.
SF embraces tech and in general (politics, etc) has a culture of being willing to try new things. Overall tech hostility is low, but the city becoming a testbed for projects like Waymo is possibly changing that. There is a continuous argument that their free-spirited culture has been cannibalized by tech.
Seattle feels like the complete opposite. Resistant to change, resistant to trying things, and if you say you work in tech you're now a "techbro" and met with eyerolls. This is in part because in Seattle if you are a "techbro" you work for one of the megacorps whereas in SF a "techbro" could be working for any number of cool startups.
As you mentioned, Seattle has also been taken over by said megacorps which has colored the impressions of everyone. When you have entire city blocks taken over by Microsoft/Amazon and the roads congested by them it definitely has some negative domino effects.
As an aside, on TV we in the Seattle area get ads about how much Amazon has been doing for the community. Definitely some PR campaign to keep local hostility low.
I'm sure the 5% employee tax in Seattle and the bill being introduced in Olympia will do more to smooth things over than some quirky blipvert will.
I think most people in Seattle know how economics works; the logic follows:

    while "techbro" don't work is true:
        if "techbro" debt > income:
            unless assets == 0:
                sellgighustle
            else:
                sellhousebeforeforeclosure
                nomoreseattleforyou("techbro")
            end
        else:
            "gigbot" isn't summoned and people don't get paid.
            "techbro" health-- due to high expense of COBRA.
            [etc...]
        end
    end
'How much they do for the community' - like trying to buy elections so we won't tax them, the same thing Boeing and Microsoft did. Anytime our local government gets a little uppity, suddenly these big corps are looking to move, like Boeing largely did. Remember Amazon HQ2; at least part of the reasoning behind that disaster was Seattleites asking, 'what the hell is Amazon doing for us besides driving up rents and snarling traffic?'
(... and exactly how is Boeing doing since it was forced to move away from 'engineering culture' by moving out of the city where its workforce was trained and was training the next generation? Oh yeah, planes are falling out of the sky and their software is pushing planes into the ground.)
It kinda seems like you're conflating Microsoft with Seattle in general. From the outside, what you say about Microsoft specifically seems to be 100% true: their leadership has gone fucking nuts and their irrational AI obsession is putting stifling pressure on leaf level employees. They seem convinced that their human workforce is now a temporary inconvenience. But is this representative of Seattle tech as a whole? I'm not sure. True, morale at Amazon is likely also suffering due to recent layoffs that were at least partly blamed on AI.
Anecdotally, I work at a different FAANMG+whatever company in Seattle that I feel has actually done a pretty good job with AI internally: providing tools that we aren't forced to use (i.e. they add selectable functionality without disrupting existing workflows), not tying ratings/comp to AI usage (seriously how fucking stupid are they over in Redmond?), and generally letting adoption proceed organically. The result is that people have room to experiment with it and actually use it where it adds real value, which is a nonzero but frankly much narrower slice than a lot of """technologists""" and """thought leaders""" are telling us.
Maybe since Microsoft and Amazon are the lion's share (are they?) of big tech employment in Seattle, your point stands. But I think you could present it with a bit of a broader view, though of course that would require more research on your part.
Also, I'd be shocked if there wasn't a serious groundswell of anti-AI sentiment in SF and everywhere else with a significant tech industry presence. I suspect you are suffering from a bit of bias due to running in differently-aligned circles in SF vs. Seattle.
I think probably the safest place to be right now emotionally is a smaller company. Something about the hype right now is making Microsoft/Amazon act worse. Be curious to hear what specifically your company is doing to give people agency.
> Be curious to hear what specifically your company is doing to give people agency.
Wrt. AI specifically, I guess we are simply a) not using AI as an excuse to lay off scores of employees (at least, not yet) and b) not squeezing the employees who remain with arbitrary requirements that they use shitty AI tools in their work. More generally, participation in design work and independent execution are encouraged at all levels. At least in my part of the company, there simply isn't the same kind of miserable, paranoid atmosphere I hear about at MS and Amazon these days. I am not aware of any rigidly enforced quota for PIPing people. Etc.
Generally, it feels like our leadership isn't afflicted with the same kind of desperate FOMO fever other SMEGMAs are suffering from. Of course, I don't mean to imply there haven't been layoffs in the post free money era, or that some people don't end up on shitty teams with bad managers who make them miserable, or that there isn't the usual corporate bullshit, etc.
I get the feeling that this is supposed to be about the economics of a fairly expensive city/state and that "six-figure salary", but you don't really call it out.
If it were about the technology, it would be no different than being a Java/C++ developer and being told that someone who does HTML and JavaScript is your equal, so pay them the same. It's not.
People get anxious when something may cause them to have to change - especially in terms of economics and the pressures that puts on people beyond just "adulting". But I don't really think you explained the why of their anxiety.
Pointing the finger at AI is like telling the Germans that all their problems are because of Jews without calling out why the Germans are feeling pressure from their problems in the first place.
Nope, no one does. This thread is devoid of opinion on the topic.
I think people just have a lot of frustration to get off their chest, which is fine.
Regarding "And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not."
I'm a customer, and an MS account manager once yelled at me for refusing to touch <latest newfangled vaporware from MS> with a ten foot pole. Sorry, burn me a dozen times; I don't have any appendages left to care. I seriously don't get Microsoft. I am still flabbergasted anytime anyone takes Microsoft seriously.
> MS account manager once yelled at me
Presumably the account manager is under a lot of pressure internally...
Do they repeatedly yell at you?
Do you know how your <vaporware> usage was measured - what metrics was the account manager supposed to improve?
He was trying to get people to use the Azure service (which I'll leave unnamed). I assume others like me did a trial or POC and immediately ran away screaming.
Would love to hear more anecdotes from former colleagues.
One fun one was that the leadership of Windows Update became obsessed with shipping AI models via Windows Update, but they can't safely ship files larger than 200 MB inside of an update.
I like that you shared the insight. It feels like you shared a secret with the world that is not so secret if you work at Microsoft (I guess this is less about the city).
I feel bad for people who work at dystopian places where you can't just do the job, try to get ahead etc. It is set up to make people fail and play politics.
I wonder if the company is dying slowly, but with AI hype and good old foundations keeping its stock price going.
Well, I think it's interesting how much what goes on inside the major employers affects Seattle. Like crappy behavior inside of Microsoft is felt outside of it.
from your post:
> Bring up AI in a Seattle coffee shop now and people react like you're advocating asbestos.
can you please share the methodology you used to reach this conclusion?
in other words - what is the sample size? how many Seattle coffee shops did you walk into and yell out "hey, what do people think about AI?" (or did you gather the data in a different way, such as by approaching individual people at the coffee shop?)
what is your control group? in other words, how many SF coffee shops did you visit and conduct the same experiment?
Out of curiosity, did you edit this with AI?
It has all the telltale signs: lots of em-dashes but also "punched up" paragraphs, a lot of them end with a zinger, e.g.
> Amazon folks are slightly more insulated, but not by much. The old Seattle deal—Amazon treats you poorly but pays you more—only masks the rot.
or
> Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.
Once or twice can be coincidence, but a full article of it reads a tiny bit like AI slop.
I wrote it by hand but I had an AI do some edits. I got the em dash drilled into me by my creative writing teacher in college.
I actually think your usage is pretty different from the usual ai style, if that means anything. More traditional?
I'm not sure why you needed it for edits though, since you seem good at writing generally.
"Grabbed lunch" is an awful phrase
Oh, and there's also "grok" just few paragraphs later!
It kind of is
In my opinion, the issue in AI is similar to the issue in self-driving cars. I think the last “five percent” of functionality for agents etc. will be much, much more difficult to nail down for production use, just like snowy weather and strange roads proved to be much more difficult for the self-driving car technology rollout. They got to 95% and assumed they were nearing completion, but it turned out there was even more work to be done to get to 100%. That’s kind of my take on all the AI hype. It’s going to take a lot more work to get the final five percent done.
the author makes the connection: people see AI as asbestos, shoved into everything by profit-hungry corps that don't care about what damage it will do in the long term.
Seattle has been screwed over so many times in the last 20 years that it's a shell of itself.
I love AI but I find Microsoft AI to be mostly useless. You'd think that anything called Copilot can do things for you, but most of the time it just gives you text answers. Even when it is in the context of the application it can't give you better answers than ChatGPT, Claude or Perplexity. What is the point of that?
Satya has completely wasted their early lead in AI. Google is now the leader.
I feel like a lot of people hate AI
Is it that everyone in Seattle hates AI, or that Seattle is the only place you know people well enough they’ll tell you the truth? The bar for that is much lower in Seattle too, compared to say, Japan. And the author seems tone deaf enough to not know the difference.
AI is such a blessing. I use it almost every day at work, and I've spent this evening getting a Bluetooth-to-USB mapper for a PS4 controller working by having ChatGPT write it for me, for a bigger project I'm working on. Yes, it's going to take some time to fully understand the code and adjust it to my own standards, but I've been playing a game for a few hours now and I feel zero latency and get plenty of controller rumble, which I'm having fun giving some extra power. It pretty much worked with the first 250 lines of C it spewed out.
What's gonna be super interesting is that I'm going to have an RPi Zero 2 power up my machine when I press the controller's PS button. That means I might need to solder and do some electrical voodoo that I've never tried. Crossing my fingers that the plan ChatGPT has come up with won't electrocute me.
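(For anyone curious what that last part roughly has to do: here's a minimal sketch, not the commenter's actual ChatGPT output, assuming the controller shows up as a Linux evdev device. The device path is a placeholder, and on the Pi the "power up the machine" step would mean pulsing a GPIO wired to the power-switch header rather than printing.)

    /* Minimal sketch: watch a Linux evdev device for the PS/guide button.
       The device path is a placeholder; find the real one under
       /proc/bus/input/devices or /dev/input/by-id/. Build with: cc ps_button.c */
    #include <fcntl.h>
    #include <linux/input.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *dev = "/dev/input/event0";   /* placeholder path */
        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct input_event ev;
        while (read(fd, &ev, sizeof ev) == (ssize_t)sizeof ev) {
            /* BTN_MODE is the "PS"/guide button on most gamepads */
            if (ev.type == EV_KEY && ev.code == BTN_MODE && ev.value == 1) {
                printf("PS button pressed\n");
                /* on an RPi Zero 2, this is where you'd drive a GPIO wired to
                   the target machine's power-switch header */
            }
        }
        close(fd);
        return 0;
    }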
Been meaning to visit Seattle, seems like my kinda place
Why did OP have to mention AI at all? Why not just ask for feedback on the solution itself and let anything else come up organically?
Big tech workers might be perceiving the writing on the wall sooner; there have already been some layoffs.
I also find that a lot of tech people surprisingly haven't spent as much time with AI over the past 3 years compared to other techs.
Reading some of these comments from fellow Seattleites, I'm really quite thankful for having the privilege of being able to completely ignore all of this noise.
There is zero push in my org to use any of these tools. I don't really use them at all but know some coworkers who do and that's fine. Sounds like this is a rare and lucky arrangement.
I'm stuck between feeling bad because this is my field–I spend most days worrying about not being able to pay my bills or get another job–and wanting to shake every last tech worker by the shoulders and yell "WAKE UP!" at them. If you are unhappy with what your employer is doing, because they have more power over you, you don't have to just sit there and take it. You can organize.
Of course, you could also go online and sulk, I suppose. There are more options between "ZIRP boomtimes lol jobs for everyone!" and "I got fired and replaced with ELIZA". But are tech workers willing to explore them? That's the question.
It just feels like it's in bad taste that we have the most money and privilege and employment left (despite all of the doom and gloom), and we're sitting around feeling sorry for ourselves. If not now, when? And if not us, who?
This submarines a few blame-the-AI-haters ideas while promoting some honest aspects of reality.
Hating AI in Portland too! :wave:
That kind of explains half of the comments on HN. When it's your industry that's being disrupted, suddenly a lot of people are on edge.
To the extent that Microsoft pushes their employees to use all their other shitty products, Copilot seems like just another one (it can't be more miserable/broken than SharePoint).
I don't know if anyone has been reading cover letters recently but it seems that people are prompting the LLMs with the same shit, dusting their hands and thinking "done" and what the reader then sees is the same repetitive, uncreative and instantly recognizable boilerplate.
The people prompting don't seem to realize what's coming out the other end is boilerplate dreck, and you've got to think - if you're replaceable with boilerplate dreck maybe your skills weren't all that, anyway?
The hate is justified. The hype is not.
I stopped reading after the third em dash. I'm sorry if you use them, but they are my AI copy/paste blog red flag these days
Finance and HR are supposed to demoralize parts of organizations asking for too many resources.
> My former coworker—the composite of three people for anonymity—now believes she's both unqualified for AI work and *that AI isn't worth doing anyway*. *She's wrong on both counts*, but the culture made sure she'd land there.
I'm not sure they're as wrong as these statements imply?
Do we think there's more or less crap out now with the advent and pervasiveness of AI? Not just from random CEOs pushing things top down, but even from ICs doing their own gig?
Getting a real Jonathan Coulton "Skullcrusher Mountain" vibe here from the author (where vibe has its traditional meaning)...
"I made this half-pony, half-monkey monster to please you
But I get the feeling that you don't like it
What's with all the screaming?
You like monkeys, you like ponies
Maybe you don't like monsters so much
Maybe I used too many monkeys
Isn't it enough to know that I ruined a pony
Making a gift for you?
Oh but we're all supposed to swoon over the author's ability to make ANOTHER AI powered mapping solution! Probably vibecoded and bloated too. Just what we need, obviously all the haters are wrong! /s
Honestly if it's using a swiss-army-knife framework it's already bloated.
I live in Seattle (well, a 20-minute ferry ride from Seattle) and I too hate AI. In fact I have a Kanji learning app which I am trying to push onto people, and I brand it as AI-free. No AI was used to develop it, no AI was used to write content, and no AI is there to “help you learn”.
When I see apps like Wanderfugl, I get the same sense of disgust as OP's ex-coworker. I don't want to try this app, I don't want to see it, just get it away from me.
I wonder if I'm the guy in the bubble or if all these people are in the bubble. Everyone I know is really enjoying using these tools. I wrote a comment yesterday about how much my life has improved https://news.ycombinator.com/item?id=46131280
But also, it's not just my own. My wife's a graphic designer. She uses AI all the time.
Honestly, this has been revolutionary for me for getting things done.
I recently returned to the world of education and it's _everywhere_. I feel for those people who hate LLMs because they've already lost the war.
I’m pretty sure the other 80% of non-tech Seattleites would love AI to take away the techies.
“I didn't fully grok how tone deaf I was being though.
[…]
Seattle has talent as good as anywhere. But in San Francisco, people still believe they can change the world—so sometimes they actually do.”
Nope, still completely fucking tone deaf.
I mean, how can you blame her for not being excited about yet another AI-powered planner?
> And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
This is revolting. Three years ago I’d have said this was a terrible Black Mirror plot.
Seattle hits a doom (helped by the gloom) loop every winter. This too shall pass.
It's almost like the hype of AI is massively ahead of the reality, and the people being directly squeezed by that dynamic don't like how it feels.
Negative? You mean realistic. You mean “refuse to tell lies.”
Just stop lying about AI. Thank you.
The author’s AI app looks like something that drains all the fun and challenge out of travel planning.
Unlike Seattle, Los Angeles has relatively few software engineers, but I still wouldn't utter "AI" at all here.
It's an infinite moving goalpost of hate. If it's an actor, a "creative", or a writer, AI is a monolithic doom; next it's theoretical public policy or the lack thereof; and if nothing about it affects them directly, then it's about the energy use and the environment.
Nobody is going to hear about what your AI does, so don't mention anything about AI unless you're trying to earn or raise money. It's a double life.
I was just about to post that this entire story could have been completely transposed to almost every conversation I've had in Los Angeles over the past year and a half. Looks like you beat me to it!
the only difference is that I don't have the conversation ha, I don't tell people about anything I do that's remotely close to that, rarely even mention anything in tech. I listen to enough other conversations to catch on to how it goes, very easy to get roped into an AI doomer conversation that's hard to get out of
This isn’t really a common-folk-vs-tech-bros story. It’s about one specific part of Seattle’s tech culture reacting to AI hype. People outside that circle often have very different incentives.
Literally everyone I know is sick of AI. Sick of it being crowbar'd into tools we already use and find value in. Sick of it being hyped at us as though it's a tech moment it simply isn't. Sick of companies playing at being forward thinking and new despite selling the same old shit but they've bolted a chatbot to it, so now it's "AI." Sick of integrations and products that just plain do not fucking work.
I wouldn't shit talk you to your face if you're making an AI thing. However I also understand the frustration and the exhaustion with it, and to be blunt, if a product advertises AI in it, I immediately do treat it more skeptically. If the features are opt-in, fine. If however it seems like the sort of thing that's going to start spamming me with Clippy-style "let our AI do your work for you!" popups whilst I'm trying to learn your fucking software, I will get aggravated extremely fast.
Oh, I will happily get in your face and tell you your AI garbage sucks. I'm not afraid of these people, and you shouldn't be, either. Bring back social pressure. We successfully shamed Google Glassholes into obscurity, we can do it again. This shit has infested entire operating systems now, all so someone can get another billion dollars, while the rest of us struggle to make rent. It's made my career miserable, for so many reasons. It's made my daily life miserable. I'm so sick and tired of it.
> "shamed Google Glassholes into obscurity"
Except it didn't stick? https://news.ycombinator.com/item?id=43088369
> This shit has infested entire operating systems now
Well, it's not the fault of a random person doing some project that may even be cool.
I'll certainly adjust my priors and start treating the person as probably an idiot. But if given evidence they are not, I'm interested in what they are doing.
The thing that stops me being outwardly hostile is that there are a minority, and it is a minor, minor minority, of applications for AI that are actually pretty interesting and useful. It's just catastrophically oversaturated with samey garbage that does nothing.
I'm all for shaming people who just link to ChatGPT and call their whatever thing AI powered. If you're actually doing some work though and doing something interesting, I'll hear you out.
AI the hype-beast product and the club for workers is a plague that I frankly hate.
AI the actual algorithms that generate code and analyze images is quite an interesting underlying tech.
Seattle is going to tax the fuck out of big-tech, for better or worse.
AI/LLMs provide an absolutely perfect excuse for halfwit managers and other fail-upwards types.
I hate the entire premise, even though some of it has been useful. At worst you're creating code and/or information that's just wrong and "can get someone killed" (metaphorically, but probably also literally), and you're creating absolutely unrealistic expectations.
Zuck said they'd be able to replace engineers with AI. Well, that tells you everything you need to know, doesn't it? With all of the scandals Facebook properties have had over the years. A real engineer/competent CEO wouldn't say that
Lots of creators (e.g., writers, illustrators, voice actors) hate "AI" too.
Not only because it's destroying creator jobs while also ripping off creators, but it's also producing shit that's offensively bad to professionals.
One thing that people in tech circles might not be aware of is that people outside of tech circles aren't thinking that tech workers are smart. They haven't thought that for a long time. They are generally thinking that tech workers are dimwit exploiter techbros, screwing over everyone. This started before "AI", but now "AI" (and tech billionaires backing certain political elements) has poured gasoline on the fire. Good luck getting dates with people from outside our field of employment. (You could try making your dating profile all about enjoying hiking and dabbling with your acoustic guitar, but they'll quickly know you're the enemy, as soon as you drive up in a Tesla, or as soon as you say "actually..." before launching into a libertarian economics spiel over coffee.)
You have a very cartoon villain image of tech workers in your head. While Tesla-driving libertarian techbros are certainly a thing, this doesn't even accurately represent the majority of employees in Big Tech, nevermind the industry as a whole.
What impression do you think non-tech workers have of tech workers these days?
While everybody else is ranting about AI, I'll rant about something else: trip planning apps. There have been literally thousands of attempts at this and AFAICT precisely zero have ever gotten any traction. There are two intractable problems in this space.
1) A third party app simply cannot compete with Google Maps on coverage, accuracy and being up to date. Yes, there are APIs you can use to access this, but they're expensive and limited, which leads us to the second problem:
2) You can't make money off them. Nobody will pay to use your app (because there's so much free competition), and the monetization opportunities are very limited. It's too late in the flow to sell flights, you can't compete with Booking etc for hotel search, and big ticket attractions don't pay commissions for referrals. That leaves you with referrals for tours, but people who pay for tours are not the ones trying to DIY their trip planning in the first place.
There just isn't much friction in having a few tabs open (maps, booking site, airline site, Google search) plus a notepad. The friction of searching for an app, downloading it, and then learning how to use it is higher.
So many products are like this - it sounds good on paper to consolidate a bunch of tasks in one place but it's not without costs and the benefit is just not very high.
That’s why I built all my geo infrastructure from scratch on top of OSM. There are still some issues, but for AI location grounding it outperforms Google Places, at $300/mo.
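(For anyone wondering what "OSM-based location grounding" looks like in practice, here's a minimal sketch assuming a Nominatim-style geocoding endpoint. The public nominatim.openstreetmap.org instance, the query, and the User-Agent string are illustrative only; the commenter apparently runs their own OSM import rather than hitting the rate-limited public service.)

    /* Minimal sketch: resolve a free-text place name against OSM data via a
       Nominatim-style geocoding endpoint. Build with: cc ground.c -lcurl */
    #include <curl/curl.h>
    #include <stdio.h>

    /* Dump the JSON response to stdout; a real client would parse out lat/lon. */
    static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata) {
        (void)userdata;
        fwrite(data, size, nmemb, stdout);
        return size * nmemb;
    }

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (!curl) return 1;

        /* limit=1 returns only the best match for the query */
        curl_easy_setopt(curl, CURLOPT_URL,
            "https://nominatim.openstreetmap.org/search?q=Pike+Place+Market+Seattle&format=json&limit=1");
        /* Nominatim's usage policy requires an identifying User-Agent */
        curl_easy_setopt(curl, CURLOPT_USERAGENT, "geo-grounding-sketch/0.1");
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

        CURLcode rc = curl_easy_perform(curl);
        if (rc != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }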
I use and pay for Wanderlog. Idk how their business is doing, but I love it as a user. They use an embedded Google Maps viewer for locations, so there is no problem for coverage.
> They use an embedded Google Maps viewer for locations
If they become popular they'll have to move to OSM, Google's steep charging for their Maps API at high usage that has brought companies to their knees is well known [1].
[1]: https://news.ycombinator.com/item?id=35089776
“They’ll have to” is a pretty strong statement assuming you don’t actually have insider information on their API spending.
For now, they do use Google Maps and I’m happy with it. If they stop and it’s no longer as useful to me, maybe I’ll stop using it.
Love Wanderlog!
But I use it as a glorified notes app to keep track of flights, reservations, rental cars, confirmation numbers, etc, in one place, not for trip planning.
They do have user generated lists to help find things to do, but yeah, I mainly use it for organizing ideas and building my final itinerary. It’s really nice for this. I can then also share the itinerary with everybody else on the trip.
It's just another business/service niche that is solved until the current Big Provider becomes Evil or goes under.
Similar to "made for everyone" social networks and video upload platforms.
But there are niches within trip planning where no one is solving the pain! For example Geocaching. I always dreamed of an easy way to plan Geocaching routes for a trip and find interesting caches along the way. Currently you have to filter them out and then eyeball the map for what seems to be nearby, even though there may not be any real roads there, or the cache may well be lost, or it can only be accessed at a specific time of day. (A rough sketch of that kind of filter is below.)
So... No one wants apps that are already solved + boring.
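(A minimal sketch of the "caches near my route" filter wished for above, assuming straight-line distance to route waypoints is a good enough first pass. All coordinates and the 15 km threshold are made-up example values; a real tool would also need to check actual roads and cache status, which is exactly the part nobody has solved.)

    /* Minimal sketch: keep a geocache if it lies within some detour distance
       of any waypoint on the planned route. Coordinates and the 15 km
       threshold are made-up example values. Build with: cc caches.c -lm */
    #include <math.h>
    #include <stdio.h>

    #define PI 3.14159265358979323846
    #define EARTH_RADIUS_KM 6371.0

    /* Great-circle distance between two lat/lon points given in degrees
       (haversine formula). */
    static double haversine_km(double lat1, double lon1, double lat2, double lon2) {
        double rad = PI / 180.0;
        double dlat = (lat2 - lat1) * rad, dlon = (lon2 - lon1) * rad;
        double a = sin(dlat / 2) * sin(dlat / 2) +
                   cos(lat1 * rad) * cos(lat2 * rad) * sin(dlon / 2) * sin(dlon / 2);
        return EARTH_RADIUS_KM * 2.0 * atan2(sqrt(a), sqrt(1.0 - a));
    }

    int main(void) {
        double route[][2] = {{47.6062, -122.3321},   /* example waypoint: Seattle */
                             {47.2529, -122.4443},   /* example waypoint: Tacoma */
                             {47.0379, -122.9007}};  /* example waypoint: Olympia */
        double cache_lat = 47.2393, cache_lon = -122.3571;  /* candidate cache */
        double max_detour_km = 15.0;
        unsigned n = sizeof route / sizeof route[0];

        for (unsigned i = 0; i < n; i++) {
            double d = haversine_km(route[i][0], route[i][1], cache_lat, cache_lon);
            if (d <= max_detour_km) {
                printf("Cache is %.1f km from waypoint %u: worth a stop\n", d, i);
                return 0;
            }
        }
        printf("Cache is too far off the route\n");
        return 0;
    }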
Good for them. It turns out, the common folk have more wisdom than tech bros with regard to AI.
Ah yes. The big tech employees of Amazon and Microsoft, the common folk.
This article is about how the tech bros in Seattle hate AI.
The article reports Microsoft SDEs complaining about Copilot and being forced to use it. It's "worse than competitors' tools."
No shit. But that's hardly everyone in Seattle. I'd imagine people at Amazon aren't upset about being forced to use Copilot, nor are Google folks.
Slop.
206dev here...
Oh yeah, call out a tech city and all the butt-hurt-ness comes out. Perfect example of "Rage Bait".
People here aren't hurt because of AI - people here are hurt because they learned they were just line items in a budget.
When interest rates went up in 2022/2023 and the cheap money went away, businesses had to pivot on taxes and costs while appeasing the shareholders.
Remember that time Satya went to a company-sponsored rich-people event, with Aerosmith or whoever playing, while announcing thousands of FTEs being laid off? Yeah, that...
If your job can be done by a very small shell script, why wasn't it done before?
Everyone who has been told AI is a panacea by executive leadership who barely understand it feels this way.
But AI takes the job of the average "Western" IT employee. Like in the meme, AI stands for "Actually Indians".