Tell HN: I'm 60 years old. Claude Code has re-ignited a passion
I’m ready to retire. In my younger days, I remember a few pivotal moments for me as a young nerd. Active Server Pages. COM components. VB6. I know these are laughable today but back then it was the greatest thing in the world to be able to call server-side commands. It kept me up nights trying to absorb it all. Fast forward decades and Claude Code is giving me that same energy and drive. I love it. It feels like it did back then. I’m chasing the midnight hour and not getting any sleep.
50 here. Years ago I completely stopped coding, becoming tired of the never ending rat race of keeping up with the latest bizarre web stacks, frameworks for everything, node for this, npm for that, Angular, React, Vue, whatever - as if solving business problems just became too boring for software developers, so we decided to spend our cycles on the new hotness at every turn.
Tools like Claude Code are the ultimate cheat code for me and have breathed new life into my desire to create. I know more than enough about architecture and coding to understand the plumbing and effectively debug, yet I don't have to know or care about implementation details. It's almost an unfair unlock.
It'll also be good to see leetcode die.
> Tools like Claude Code are the ultimate cheat code for me and have breathed new life into my desire to create
I'm in my 60s and retiring this summer. I feel the opposite. Agents have removed most of the satisfaction and fulfilment from designing, building, testing and completing a feature or component. And if frameworks are a problem, learning to create simply and efficiently without them has its own sense of satisfaction.
Maybe it's a question of expectations. I suspect weavers felt the same with the arrival of mechanised looms in the industrial revolution. And it may be that future coders learn to get their fulfilment otherwise using agents.
I can absolutely see the attraction to business of agents and they may well make projects viable that weren't previously. But for this Luddite, they have removed the joy.
OldAF. I have more ideas than I have time to code up prototypes. Claude Code has changed all that. And given that it can't improve the performance of the optimized code I've written so far, it's like having a never-tiring, eager junior engineer to work out how to make use of frameworks and APIs to deploy my code.
A year ago, Cursor was flummoxed by simple things Claude Code navigates with ease. But there are still corner cases where it hallucinates on the strangest, seemingly obvious things. Currently I'm working on getting it to write code that makes what's going on right in front of its face more visible to it.
I guess it's a question of where you find joy in life. I find no joy in frameworks and APIs. I find it entirely in doing the impossible, out-of-sample things for which these agents are not yet competitive.
I will even say that, IMO, AI coding agents are the coolest thing I've seen since the first cut of CUDA 20 years ago. And I expect the same level of belligerence and resistance to them that I saw deployed against CUDA. People hate change, by and large.
Can you elaborate on "resistance against CUDA"? What were people clinging to instead?
IMO it was mostly that people didn't want to rewrite (and maintain) their code for a new proprietary programming model they were unfamiliar with. People also didn't want to invest in hardware that could only run code written in CUDA.
Lots of people wanted (and Intel tried to sell, somewhat successfully) something they could just plug and play, running the parallel implementations they'd already written for supercomputers using x86. It seemed easier. Why invest all of this effort into CUDA when Intel is going to come along and make your current code work just as fast as this strange CUDA stuff in a year or two?
Deep learning is quite different from the earlier uses of CUDA. Those use cases were often massive, often old FORTRAN programs where, to get things running well, you had to write many separate kernels targeting each bit. And it all had to stay on the GPU to avoid expensive copies between GPU and CPU, and early CUDA was a lot less programmable than it is now, with huge performance penalties for relatively small "mistakes". Also, many of your key contributors are scientists rather than professional programmers, who see programming as getting in the way of doing what they actually want to do. They don't want to spend time completely rewriting their applications and optimizing CUDA kernels; they want to keep on with their incremental modifications to existing codebases.
Then deep learning came along, and researchers were already using frameworks (Lua Torch, Caffe, Theano). The framework authors only had to support the few operations required to get Convnets working very fast on GPUs, and it was minimal effort for researchers to run. It grew a lot from there, but going from "nothing" to "most people can run their Convnet research on GPUs" was much easier for these frameworks than it was for any large traditional HPC scientific application.
Thanks!
It seems funny though: The advantages of GPGPU are so obvious and unambiguous compared to AI. But then again, with every new technology you probably also had management pushing to use technology_a for <enter something inappropriate for technology_a>.
Like in a few decades when the way we work with AI has matured and become completely normal it might be hard to imagine why people nowadays questioned its use. But they won't know about the million stupid uses of AI we're confronted with every day :)
> The advantages of GPGPU are so obvious and unambiguous
I remember being a bit surprised when I started reading about GPUs being tasked with processes that weren't what we'd previously understood to be their role (way before I heard of CUDA). For some reason that I don't recall, I was thinking about that moment in tech just the other day.
It wasn't always obvious that the earth revolved around the sun. Or that using a mouse would become a standard for computing. Knowledge is built. We're pretty lucky to stand atop the giants who came before us.
I didn't know about CUDA until however many years ago. Definitely didn't know how early it began. Definitely didn't know there was pushback when it was introduced. Interesting stuff.
I'm dealing with someone in 2026 insisting that everything has to be written in Python and rely entirely on torch.compile for acceleration, rather than any bespoke GPU kernels. Times change, people don't.
In the beginning, valid claims of 100x to 1,000x speedups for genuine workloads, due to HW-level advances enabled by CUDA, were denied on the grounds that they ignored CPU and memory-copy overhead, or that they were only measured relative to single-core code, etc. No amount of evidence to the contrary was sufficient for a lot of people who should have known better. And even if they believed the speedups, they were the same ones saying Intel would destroy them with their roadmap. I was there. I rolled my eyes every single time, but then AI happened and most of them (but not all of them) denied ever spouting such gibberish.
Won't name names anymore; it really doesn't matter. But I feel the same way about people still characterizing LLMs as stochastic parrots and glorified autocomplete as I feel about certain CPU luminaries (won't name names) continuing to state that GPUs are bad because they were designed for gaming. Neither sort is keeping up with how fast things change.
The divide seems to come down to: do you enjoy the "micro" of getting bits of code to work and fit together neatly, or the "macro" of building systems that work?
If it's the former, you hate AI agents. If it's the latter, you love AI agents.
I'd say that the divide seems to come down to whether you want to be a manager or a hacker. Skimming the posts in this submission, many of the most enamored with LLMs seem to be project managers, people managers, principal+ engineers who don't code much anymore, and other not hands-on people who are less concerned with quality or technical elegance than getting some kind of result.
Bear in mind also that the inputs to train LLMs on future languages and frameworks necessarily have to come from the hacker types. Somebody has to get their hands dirty, the "micro" of the parent post, to write a high quality corpus of code in the new tech so that LLMs have a basis to work from to emit their results.
I do love the former, but it's been nice to take a break from that and work at a higher level of abstraction.
That is an amazing summary. It might not seem that amazing, but I feel like I've read pages about this, and nothing has expressed it as elegantly and succinctly.
I enjoy both. There’s still plenty of micro to do even in web dev if you have high standards. Read Claude’s output and you’ll find issues. Code organization, style, edge cases, etc.
But the important thing is getting solutions to users. Claude makes that easier.
Maybe have a play with them a bit more. LLMs are quite good at coding, but terrible at software engineering. You hear people talk about “guiding them” which is what I think they are getting at. You still need to know what you are doing or you’ll just drive off a cliff eventually.
At the moment I am trying to fix a vibe coded application and while each individual function is ok, the overall application is a dog’s breakfast of spaghetti which is causing many problems.
If you derive all your pleasure from actually typing the code then you’re probably toast, but if you like building whole systems (that run on production infrastructure) there is still heaps of work to do.
Scale the Lego pieces more and it’s the same. Bigger projects have more moving parts and require the same thinking.
> Agents have removed most of the satisfaction and fulfilment from designing, building, testing and completing a feature or component
I highly recommend not using these tools in their "agentic" modes. Stay in control. Tell them exactly what to write, direct the architecture explicitly.
You still get the tremendous benefit of being unlocked from learning tedious syntax and overcoming arcane infra bottlenecks, freed from the tedious and soul-crushing parts that suck the joy out of the process for me.
But then you don't get the same gains in output that agentic modes get you. It just goes off and does stuff, sometimes for hours if you get the loop tuned right.
Obviously you should do whatever you want, however you want to do it, and not just do whatever some Internet rando tells you to do, but glorified autocomplete is so 1 year ago. Everyone knows the $20/month plans aren't going to last, time will tell if the $100/month ones do. The satisfaction is now in completing a component and getting to polish it in a way you never had time for before. And then totally crushing the next one in record time. To each their own, of course, but personally, what's been lost with agentic mode has been replaced by quantity and quality.
Yes, I'm not recommending "glorified autocomplete", just shortening the cycle. Give it tasks that would involve maybe a couple of hundred lines of code at a time. I find this captures both the rewarding aspects and a lot of the productivity gain. And I'll argue a lot of the remainder of that "productivity gain" sits in somewhat debatable territory: how well all this code developed without oversight holds up is something we'll only really find out in a few years.
I am in my 50s. I agree with what others have said about your happy place. For me, it is not APIs and fine details of operator overloading. I love solving problems. So much so that I hope I never retire. Tools like Claude Code give me wings.
The need for assembly programmers diminished over the decades. A similar thing will happen here.
> I'm in my 60s and retiring this summer.
Congrats! I'm at that age where I envy people like you more than the 20-somethings :)
I'd agree it splits both ways. I think in the short run it can be super fun, but once you expand your thoughts to the long run, it takes the steam out of that rediscovered joy of discovery and creation.
It's almost like it reignites novelty at things that were too administratively heavy to figure out. I'm not sure if it's fleeting or lasting.
I'm 56 and still coding full-time. My least favorite part of the job has always been trying to learn some brand new tech, googling with 47 tabs open, and you don't even know enough to ask the right questions yet. Turns out you were stuck on something so beginner that Stack Overflow didn't even have a post on it. ChatGPT has made that part of the job soooooo much less painful. But I'm not ready to let Claude run wild yet. I still want to understand everything I'm pasting.
There is a lot more Claude Code can do for you that an AI chatbot can't, because it (a) has tool access and (b) has access to your source code.
- Root cause and fix failures.
- Run any code "what if scenario".
- Performance optimizations.
- Refactor.
There's no reason why you shouldn't (and you should) read all the code and understand it after Claude does any work for you but the experience vs. the "old" SO model of looking for some technical detail is very different.
Same age, same situation.
I got completely fed up of continually having to learn new incantations to do the same shit I’ve been doing for decades without enough of a value add on top. I know what I want to build, and I know how to architect and structure it, but it’s simply not a good investment of my increasingly limited time to learn the umpteenth way to type code in simply to display text, data, and images on the web - especially when I know that knowledge will be useful for maybe, if I’m lucky, a handful of years before I have to relearn it again for some other opinionated framework.
It’s just not interesting and I’ve become increasingly resentful of and uninterested in wasting time on it.
Claude, on the other hand, is a massive force multiplier that enables me to focus on the parts of software development I do enjoy: solving the problems without the bother of having to type it all in (like, in days of old, I’d already solved the problem before my fingers touched the keyboard but the time-consuming bit was always typing it all in, testing and debugging - all of that is now faster but especially the typing part), focussing on use cases and user experience.
And I don’t ever have to deal directly with CSS or Tailwind: I simply describe the way I want things to look, and that’s how they end up looking.
It’s - so far at any rate - the ultimate in declarative programming. It’s awesome, and it means I can really focus on the quality of the solution, which I’m a big fan of.
Will be 60 this year, and have felt the same for years already. You get to a point where you look ahead and realize you've got maybe another 10-20 decent years left if you're lucky and for me, more and more, I don't want to spend it running on this treadmill.
Computers do not feature at all in my ideal retirement. Maybe a phone or tablet so I can do the minimal email and bill paying.
You know you could just choose a framework and stick with it? The way you look down on "the whole profession" for what's basically a straw man and your own decision is a bit bizarre. Especially coupled with the fact that tech has never moved so fast as right now, being on top of the AI-game is a target changing a hundred times faster than frontend frameworks back in the days.
> You know you could just choose a framework and stick with it? The way you look down on "the whole profession" for what's basically a straw man and your own decision is a bit bizarre.
I'm only in my forties. I've been nostalgic for the days when I'd stay up all night exploring new frontiers (for me) in tech for a number of years. I could not disagree more with your take on this.
Someone said they value their time before death and you're pretty dismissive. Priorities change. Values change. Conditions change.
> Especially coupled with the fact that tech has never moved so fast as right now, being on top of the AI-game is a target changing a hundred times faster than frontend frameworks back in the days.
I mean, isn't that what people in this thread have been saying about frameworks? How many hours have been lost relearning how to solve a problem that has already been solved? It's like when I tried to fix a date-time issue on Windows as a Mac / Linux user. I knew NTP was the answer but I had to search the web to find out where to turn it on. Stuff like that is pretty frustrating and I didn't even have to do it every five to ten years.
You don't always have the option. AngularJS, for example, EOLed in 2021.
It is a huge stretch to call transitioning from AngularJS to Angular learning a new framework.
At the time that’s precisely how it felt though. So much so that I personally felt it wasn’t worth it relearning everything. Had shipped several projects with AngularJS at my very first dev job, and have never written a line of Angular v2+
It confuses me when people talk about frameworks as being totally different. They solve the same problems, slightly differently. It’s not a big lift to learn a new one if you are familiar with one or two already.
That might be generally true for frontend frameworks these days, because they’ve all converged around the same ideas. But in the mid-2010s, Backbone was very different from jQuery, which was very different from Knockout, Ember, ReactJS, etc. Certain frameworks embraced certain programming paradigms; others embraced others.
Some of my colleagues didn’t make the jump. Those that were the most into AngularJS back then are still writing Angular apps today.
> yet I don't have to know or care about implementation details
Implementation details can very much matter though. I see this attitude from my managers that now submit huge PRs, and it is becoming a big problem.
I definitely agree that these tools allow one with an in-depth developer background to cover territory that was too much work previously. But plop me into a Haskell codebase, and I guarantee I’d cause all kinds of problems even with the best intentions and newest models. But the ramp up for learning these things has collapsed dramatically, and that’s very cool.
I still don’t want to have to learn all the pitfalls of those frameworks though. Hopefully we will converge on a smaller number, even if it’s on tooling that isn’t my favourite.
Merges can become more fraught with multiple engineers vibe coding on the same codebase. However, LLMs will become delegates for that too.
Conflicts are the least of our worries, and yes, LLMs can handle that well. I’m talking about the things you can’t easily handle: the complexity that slowly overwhelms a codebase with no easy way out except a rewrite.
And a rewrite of a non-trivial application, even with the AI goodness, is still a big proposition and full of all kinds of risk. If you have a trivial application, you probably don’t have much protecting you from someone else vibing up a competing replacement either.
Not nearly your age, but I agree with your sentiments entirely. I mainly focused on using computing not for business purposes but for scientific purposes, and on how we can advance science using compute and technology, and I’ve felt much the same way for some time. The new layers and layers of abstraction added little in the way of productivity toward getting to the root problems I wanted to solve, and there have always only been so many hours in the day and dollars in the sponsoring agency’s purse to pursue new innovative work.
Now a lot can be cast off to LLMs so I can focus on the problem space and the innovative computing use around it. It’s been exciting not to have to worry about arbitrary idiosyncrasies or machete through jungles of technical minutiae to get to the clearing. I still have to deal with them, just fewer of them. And I don’t have to commit nearly as much in the technical space to memory to address problems; I can often focus on higher-level architectural decisions or new approaches to problems. It’s been quite enjoyable as well.
> yet I don't have to know or care about implementation details
Where do I even begin...yes, you should care about implementation details unless you're only going to write stuff you run locally for your own amusement.
Until you learn to trust the system and free up mental capacity for more useful thinking. At some point compilers became better at writing assembly instructions than humans; it seems inevitable the same will happen here. Caring about the details and knowing the details are two different things.
LLMs lie constantly. There should be no trust in that system. And no I don't think they will "get better".
Turning 50 this year.
Coding has never _stopped_ being a passion for me, but my increasingly limited time becomes an issue.
And Claude code (and cursor) saves me So. Much. Time.
I only have 10-20 active years ahead of me, so this is really, really important. Young ppl don’t get it.
> Angular, React, Vue, whatever - as if solving business problems just became too boring for software developers, so we decided to spend our cycles on the new hotness at every turn.
They often do solve business problems around responsive design, security and ux.
Currently working maintenance with one foot in a real legacy system and the other foot in modern systems, the difference is immense.
> It'll also be good to see leetcode die.
Agreed. Leetcode caused more harm than good.
Still causing it!
> Years ago I completely stopped coding, becoming tired of the never ending rat race of keeping up with the latest bizarre web stacks, frameworks for everything, node for this, npm for that, Angular, React, Vue, whatever
Have you tried Claude? No, Opus? No, not that version, it's two weeks old, positively ancient lol. Oh wait, now OpenClaw is the cool thing around the block.
My dude, the rat race just became a rat sprint. I hope you're keeping up, you're no spring chicken any more.
Your comment is timeless. Just replace your tech keywords with those from the past or the future.
I’ve been around for a while. You didn’t have a game changing new framework or library every month.
Only running one agent? You should have a distributed network of them at least, if you don’t you will get left behind! Running on the cloud? Stupid, buy hardware for tens of thousands of dollars to run it locally, own your tools. Etc etc, I haven’t seen a crazier rat race in tech ever, the JavaScript framework era is looking like the most stable of software times compared to where we are right now.
> with the latest bizarre web stacks, frameworks for everything, node for this, npm for that, Angular, React, Vue, whatever - as if solving business problems just became too boring for software developers, so we decided to spend our cycles on the new hotness at every turn
I kinda feel the same way when I visit Home Depot once a year
I also find these things incredibly annoying. But I've been actively working in webdev the past couple of years so I was actually keeping up with stuff. And I still consider this a cheat-code.
It makes it so easy to cut through the bullshit. And I've never considered myself scared of asking "stupid" questions. But after using these AI tools I've noticed that there are actually quite a few cases where I wouldn't ask (another human) a question.
Two examples:
- What the hell does React mean when they say "rendering"? Doesn't it just output HTML/a DOM tree and the browser does the actual rendering? Why do they call it rendering?
- Why are the three vectors in transformer models named query, key & value? It doesn't really make sense, why do they call it that?
In both cases it turns out, the question wasn't really that stupid. But they're not the kind of question I'd have turned to Stackoverflow for.
It really is a bit like having a non-human quasi-expert on most topics at your fingertips.
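On the query/key/value question, the usual intuition is a soft dictionary lookup: the query is scored against every key, and the scores weight the values. Here's a toy sketch in plain Python (the function names, the single-head unbatched shape, and the example vectors are all mine for illustration; real transformers compute query, key, and value with learned projection matrices over batched tensors):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    # Score the query against every key (scaled dot product), then
    # return a softmax-weighted mix of the values: a "soft" lookup.
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# A query that lines up with the first key pulls out mostly the
# first value, like a fuzzy hash-table lookup.
q = [1.0, 0.0]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
out = attention(q, K, V)
```

Unlike a real dictionary, no key "wins" outright: every value contributes in proportion to how well its key matches, which is what makes the whole thing differentiable and trainable.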
> as if solving business problems just became too boring
And yet, having customers and listening to them is the whole point.
Anything that re-ignites a person's zest for thinking and creating is a net gain.
That said, it is paradoxical that the catalyst in this case is a technology that replaces thinking.
You truly speak for many. I don't have the energy to center a div anymore, and to be honest, that time was thrown away [excluding money, a pretty big exclusion]. I am sure my boss's "Uber for cats" will work, I just like using AI at this point. I can iterate on 15 "Uber for cats" with 200 centered divs, spitting out documentation and excellent objects all day.
But the real talk we need to have is... "Uber for cats"
To those of you reading the comment section thinking something like the following:
> "Wait a moment! Being forced to use AI gave me depression, and I'm really aware of the fact that it's only going to become better and better the more developers are using it, to the point where the 10 job openings of yesterday are 1 job opening tomorrow. Why are people so excited?"

remember this:
You are reading HN; the survivorship bias and groupthink are just as high as in any other self-calibrating online community ("upvote if you agree" -> self-calibration of the popular opinion), and the survivorship bias is extreme here because people who are into this LLM craze have a higher probability of browsing HN.
As for you, OP, I have no idea why age is a factor to consider in this. I'm 45, and while I'd programmed as a hobby since I was 16, I turned it into a career during COVID, and all the pressure-cooker watch-six-agents-writing-while-you-proofread LLM work gave me so much existential crisis and depression that I seriously can't even get myself to write anything "over the weekend".
I hope to God the next generation of wonder kids, the equivalent of the 12-year-old discovering how to bend the computer to do what they want it to do, enjoys arguing with multiple agents concurrently, back and forth.
> LLM watch-six-agents-writing-and-you-proofreading gave me so much existential crisis and depression
this is extremely bizarre because I’m 53, been coding since 12, and it has had literally the exact opposite effect on me, I find it tremendously exciting, like riding a snowmobile instead of manually cross-country skiing
but I do think that if you’re not ready to work like this, you may need to consider a career pivot in the short term
Your analogy is a bit weird to me. Snowmobile is exciting for a short while, but I'm much more fond of cross country skiing. The connection with nature, the silence.
Or maybe your analogy is correct. AI is a bit as if everyone in the mountains drove around in a snowmobile, noisy and a smell of gasoline.
The analogy makes sense. Some people love riding snowmobiles, some people love cross country skiing, and some people love both. It makes sense that some of the people who love snowmobiling think cross country skiing is boring and tedious, and some people who love cross country skiing think snow mobiles are loud and obnoxious.
I don't think people are confused why there are the different types of people who like different winter sports, but people seem shocked that opinions differ on the enjoyment of using an LLM
I think the analogy hits home on both sides. You go faster, but you miss the meditative experience of going slow.
My knee-jerk reaction is that there are quite a few people who can't or won't snowmobile when needed and ski when needed.
That's where the analogy starts to break a bit. You can't mode switch between skis and snowmobile, but you sure can ai assist/not pretty quickly.
One more quick one - imagine skiers showing up to the snowmobile club hating on snowmobiles and vice versa.
I, for one, have still not properly got a grip on how tech enables this sort of analogy-breaking reality.
Effing go ski then; there's even a club for that! (rhetorical, not directed at anyone in particular) And shame on me cause I show up to the ski club on a snowmobile with skis on my back.
Coding since 11. Using AI makes me completely lethargic. I really don't know how to fill the minutes that AI takes to write code. Maybe if AI gets faster I will be able to enjoy it.
That said, like many people here I have invested quite some time in becoming a skilled and experienced coder, so there is no denying that this whole AI craze makes me feel like something is taken away from me.
> I really don't know how to fill the minutes that AI takes to write code.
The AI should be spending most of its time helping you spec out new revisions to the codebase, the code-writing time is just the last step and if you've planned the work in depth, you'll understand what the AI is trying to do (and be able to stop and revise if anything is going off the rails). This is a healthier approach than "just spec out something else in the meantime" IMHO, but of course that happens too.
"just spec out something else in the meantime"
Yeah, I've learned that if I do too much of that I'll spend more time catching up in terms of consolidating gains through review of code and functionality. That's just me, people are clearly developing a few different and not "wrong" ways of going about things.
52 here, been a full-time people manager for about a decade now. Coding manually makes me tired just thinking about it. When I think about embarking on a new project, my mind goes back to all the times I worked 12-hour days trying to get some basic system to function. I’m too old for that now; my back hurts if I sit too long, and I occasionally get migraines if I look at a screen too much.
Using AI has been really perfect for me. I can build stuff while I do other things, walk the dog, make lunch, sit on the porch.
Sometimes i realize that my design was flawed and I just delete it all and start again, with no loss aversion.
> Using AI has been really perfect for me. I can build stuff while I do other things, walk the dog, make lunch, sit on the porch.
This resonates with me strongly. While I like coding, and understanding it, I understand my human limitations. I couldn't possibly have written by hand the stuff I've been making these past few months, in the time I'm making it, without a team. I would be coding literally all day, and while I sometimes enjoy the zoning-out process of wiring stuff up, what I really enjoy is exactly what you described.
I enjoy being outside and walking my dog, taking a long shower, and cooking. All of these things are simple tasks with a good bit of repetition, and unlike wiring up some code or whatever, they allow my thoughts to flow, and I can think about where my projects are likely heading and what needs to be done next.
Those moments, even before heavy AI-assisted coding, have always been the moments I cherish about software development.
> I really don't know how to fill the minutes that AI takes to write code. Maybe if AI gets faster I will be able to enjoy it.
I either switch between two projects, or I keep an eye on what Claude is doing, because it often gets off the rails or in a direction I don't like and then it's easier to just stop it there and tell it what to do instead.
> That said, like many people here I have invested quite some time in becoming a skilled and experienced coder, so there is no denying that this whole AI craze makes me feel like something is taken away from me.
I might have felt like that when I was younger (almost 44 now, programming since 10), but over time I realized that the thing I enjoy is not really writing code itself, but coming up with ideas, solving puzzles, etc. LLMs are like insanely fast junior programmers, so they do the more mundane part of the task, but they need me to come up with good ideas, good taste, and solve design challenges. Otherwise it ends up as a pile of unmaintainable junior programmer code.
It is possible that LLMs might replace the other parts of being a good programmer as well, but for the time being it makes my work more pleasant, because I can work on interesting problems.
For me, coding since the 80s (though I knew even then it didn't spark joy or anything - debugging was so annoying, learning new language syntax even more so...), I love AI. I am a product manager, and I just see the freedom to make things that are real and to learn faster - does this solve a problem? Is it better than what we have now? - and move on, disposing of things as I go because it's cheap. To fill the minutes, I might work on 3-4 or even 5 separate projects, and even multiple worktrees within those. I feel busier than ever. I think the best part is that it's not lazy and I am. There are so many things I don't have the time or energy to go deep on that I can now delegate. I'm jealous of real software engineers because it's probably a huge force multiplier for them, while I can't call BS on its output as well as they can, though I'm getting better.
"how to fill the minutes that AI takes to write code"
I usually review the code that's been written. Sometimes directly, sometimes by telling Claude to bring things up piece by piece and explain its choices as I review. Or I kick off one of the various maintenance tasks, validate my assumptions and expectations about how things should function, and note the things that don't so they can be addressed. I'm going to have to do this stuff anyway; I might as well do it then.
Or I read something, or do something to clear my head. Sometimes I need a mental break, because I find that the speed these tools have me working at can be taxing in different ways.
I think expectations of the "10x" variety, whether you put that at 10x or 3x, will have to be adjusted. "Coding as fast as 5 developers" is far different from "a single developer can produce as much as 5 others."
+1 review the code
Screw {some number}x. Such BS. Those who can, do. Those who can't, write and spread pseudo-intelligent brain worms. Reject!
> I really don't know how to fill the minutes that AI takes to write code.
Think of it like being a project manager on a team. There's a lot you can do to keep the project moving forward without touching a single line of code.
I would’ve become a project manager if I enjoyed that, right?
The trick is to have 3 terminals with Claude Code open at the same time. You won’t be able to follow more. Reviewing the stuff or plans written is harder than telling them to write it.
Multiple projects at once or at least multiple features. I am usually the limiting factor reviewing, not waiting on agents
I live with a feeling of nonaccomplishment, never having taken a project to completion thanks to my shitty executive function. The AI craze has robbed me of any hope that I might still meaningfully* achieve this in my career.
Couldn't an AI help you take a project to completion, at least until you run out of money?
I didn't mention the crucial point, which is that what I signed up for was writing my own software.
I grew up witnessing Carmack going from Keen to Quake in 5 or 6 years.
That standard gets you attached to the idea that you should be able at some level to individually reach a fraction of the depth and breadth. Sadly, I have neither the energy nor the focus.
But what's the point of getting an LLM to, say, write a raycaster if your point is to learn how to do that yourself? If your mission in life is to learn to build things?
(I hope I'm getting my idea across)
You have conflated the joy of learning with the joy of building. I have been writing code since I was 6 years old and was left to my own devices with the vic-20, the manual, and BASIC instructions.
I have worked as a developer, security engineer, program manager and engineering manager through my career. Writing stuff to understand algorithms or hardware requires engaging with the math, science, and engineering of the software and hardware. Optimizing it or developing a novel algorithm requires deep comprehension.
Writing a service that shuffles a few things around between stuff on my home network so that I can build an automation to turn down the lights when I start playing a movie? Yeah, I could spend a day or two writing and testing it. Having done it a few times, the work of it is a bit of a chore; I'm not learning, just doing something. Using Claude or some other agent to do that takes it from 'do I want to spend my time off doing a chore?' to 'I can design this and have it built in an hour'.
Making the jump to using the tools in my day job has been a bit more challenging, because as a security engineer I have seen some hairy stuff over the last two years as AI-generated code wends its way into production. But the tools and capabilities have expanded massively, and heck, my peers from Mozilla just published some awesome successes working with Anthropic to find new vulns :)
Don't let using tools take away the love of learning, use them to solve a problem and take care of the drudgery of building stuff.
OMG that manual. VIC-20 was my first code experience. I look back and cannot understand how 7 year-old me was patient enough to make a jumping jack guy appear on screen. Joy of Coding? Hell, no. I wanted to see if I could make it work. (I did, and I had no clue how to save to tape)
Sounds like you had one at home? If so, I'm a bit jealous. But also, hello, brother/sister!
I appreciate your thoughtful response, you might be right, maybe there's an element of getting over myself to stay in the race...
Reading you, I was debating delivering a loving kick in the rear. Can't really do that these days, and some people react negatively to it. Sounds like you are reasonably self-aware though, so...
Nobody can teach you to own and control yourself. But you had better. Use tricks, treats, magic, whatever, but get to the damned end or make damned sure you know why you walked away (and live with that).
Your life matters. Your ideas matter. Birth them. It hurts. Push through. Don't look back at your life and wonder what it would have been like if you had stuck with it. It hurts. But do it.
Or do whatever you want, but this random stranger votes "getting over".
two AIs -- I use Claude Code and Kimi CLI -- I got them to build an agent relay so they can communicate with each other (one plans, the other reviews the plan; one builds, the other reviews the build) -- while one is working on one thing, I'll be chatting and exploring with the other one … they can build anything in any language, so if you are a skilled and experienced coder you should be able to guide a pair of coding agents no problem.
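The relay idea above can be sketched in a few lines. This is only an illustration, not the poster's actual setup: it assumes each agent exposes a one-shot command-line mode that takes a prompt as an argument and prints its reply (Claude Code's `claude -p` works this way; the reviewer command and all prompt wording here are made up).

```python
import subprocess

def ask(agent_cmd, prompt):
    """Run a CLI agent in one-shot mode and return its reply text."""
    result = subprocess.run(
        agent_cmd + [prompt], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

def relay(planner_cmd, reviewer_cmd, task, rounds=2):
    """One agent drafts a plan, the other critiques it, the first revises."""
    plan = ask(planner_cmd, f"Draft a plan for: {task}")
    for _ in range(rounds):
        critique = ask(reviewer_cmd, f"Review this plan and list problems:\n{plan}")
        plan = ask(planner_cmd, f"Revise the plan to address:\n{critique}\n---\n{plan}")
    return plan
```

With real agents you might pass something like `["claude", "-p"]` as the planner command; the same loop works for build/review instead of plan/review by changing the prompts.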
Otoh -- if there is this bifurcation among coders (one group super-excited, one group depressed and angst-ridden) then maybe we should be trying to figure out why people are reacting the way they are. Can you explain more about your situation? What do you code? Do you have hobby projects? Do you have free time? Etc.
I'm with you on this being incredibly exciting.
I'm 40 and have been doing this since I was 12 as well. Once I became a staff engineer at a large company and ended up being less hands-on with code and more focused on team leadership and system architecture, it set me up for this perfectly.
I missed writing code (or so I thought), but what I realized is that I actually missed bringing ideas to life. Coding was just a means to do that, and the new tools with LLMs and agents have let me do the core of what I love far more than coding by hand ever could.
Same boat (though 44M) - I don't think it has become less fun, on the contrary it can help with the stuff that was trivial but could still take time to get right. Now it can crank out that stuff often correctly on first try. Of course I have the same fear of job security as everyone else and it is sad to see something you were good at being taken over by machines, but it is not because I enjoy the work itself less, quite the contrary.
I’m retired, so a career pivot isn’t in the cards for me.
I’m also not really in the HN gestalt, so to speak. I have some views that are common, hereabouts, and some, not so much.
I’m enjoying having an LLM “pair partner.”
Yes, it's a partner you can discuss things with, and it never belittles you or gets tired.
I am 50, coding since ~12. Started with an Apple II, during uni wrote my own editor in assembly for the BK-0010 (a Soviet computer), then 30 years in computer networking with some high-performance dataplane stuff more recently.
The last years somehow it felt like there’s nothing new anymore, the same 10 ideas being regurgitated with slight modifications. I tinkered with AI for the past 2 years but it was mostly a “tool for writing boilerplate”. I have tried a few ideas for agents but didn’t see how it could work.
That changed with Opus 4.6 and the subsequent wave of local models - now I try 10 ideas a day and it’s like magic! And if something doesn’t work - jumping into the code and debugging it is huge fun!
Understanding that the era of the almost-free cloud tokens might come to an end, I run my own harness pointing to my own GPUs running Qwen3.5-27B, and the last few days it has been very busy! :)
My harness doesn’t “pressure cook” since it doesn’t make sense to do that with only one GPU (besides many other reasons), it runs everything in a linear fashion, including subagents, and logs everything - reading the logs as they go by is another cool thing - sometimes I pick up interesting things from it !
The distribution of people’s moods related to AI seems indeed bimodal. And I feel lucky somehow ending up in the “enthusiastic” rather than “depressed” part of it. To the folks in the other one: I am sorry. I don’t know why it is this way. If I knew I might have given unsolicited advice.
So you’ve tried at least a hundred ideas by now, care to share fifty of them? I’m very curious as to what they are. Opus is too slow to even complete one idea per day for me, and that’s fine, I don’t have hundreds of them :)
Way to blame the poster for their problems, and what the hell is a "short term career pivot"?
I don’t think anyone is blaming them, but it’s hard to ignore the progress we are seeing and think that these workflows will not become the norm instead of the exception.
As the lead dev on our team told us, "You will all have a different journey on this road." Not everyone is going to get along with allowing an LLM to write code for them, something they've probably spent their entire lives crafting a skill for. Others only saw code as a means to an end, so an LLM finally removes that silly barrier.
I'm in the former camp. Every time I have an LLM write code it makes me entirely depressed, because the satisfaction I get from programming is the programming. However, what I have found incredibly valuable is having LLMs help me plan. Using them as someone to brainstorm with, to "rubber duck" if you will. I still get to code; it just speeds up the planning process, which has gone from a depressing exercise to one I am excited to work on.
Find your own path.
I fear that there will be a point where it won't be a choice, although a part of me wonders why there is such a hurry to get rid of devs in the first place, as these tech companies have insane margins
But I also like to work the way you described it, and also by using Claude Code for e.g. K8s stuff (kubectl, helm) where you'd otherwise have to use a TUI or do a lot of typing just to get logs/status/etc. and a bunch of yaml that is just incredibly tedious
Fear is the mind killer.
The reason, and the only reason, that leadership/owners want to get rid of devs is money.
If you’re wondering who the villain is, it’s capitalism. It’s always capitalism.
Thanks, I needed this.
There doesn't seem to be a place for me in the future of software/tech: I like sitting quietly, alone, solving problems, writing code, and reading it. I like in code much of what I like in art: the fruits of human labor and the results of human ingenuity. Being excited about AI/LLMs makes no sense to people like me. If you're excited because LLMs let you make something, great, good for you. Have fun.
If the tools become a mandatory part of the job, I'll change careers. Spending my days talking to chipper robots and describing what I want rather than making it myself sounds unbearable.
I debated heavily whether I'd stay in tech or change my career almost a decade ago. I concluded that the only other profession that I considered rewarding (at that time) would be to become a professor of history. Making history interesting to even one student per semester would be a win.
In the end, I remembered how much I hated schooling. This is despite being a huge fan of education. It wasn't realistic to think that I'd complete the work needed for accreditation.
Regardless, I'm happy today having selected for the thing that I already knew. I hope you also find yourself satisfied. It's lonely feeling lost when evaluating a thing you'd known through a new paradigm.
OP has their retirement prepared. That might increase their perception of the upsides and negate some of the downsides of adopting AI.
60 is relevant because the point they are making requires having lived through a longer period of time.
It's not uncommon for people to lose interest in, or find the passion has gone out of, things they enjoyed when they were younger, especially in their professional lives, where the enjoyment eroded through forced contact with the less enjoyable aspects, or was contaminated by unpleasant work environments and uninteresting projects.
Having that passion reignited isn't something given to all people.
Honestly, software engineering as a career only went downhill for me from when I began to when I retired.
(And I hesitate to even air that view in front of others that are already in the field because I am a kind of Pollyanna and don't want to foment bad vibes.)
But since I retired a few years ago it was clearly not LLMs that precipitated the decline of my enjoyment of the profession. Instead it was the slow erosion of agency and responsibility that did that.
I'll drop the euphemisms and just say outright that the inmates ran the asylum when I began in the 90's (at Apple, FWIW). The only one that really told me what to do was the tech-lead on the team. Not my manager—for sure not marketing or the CEO (ha ha — Jobs had not yet returned).
In effect, I and all other engineers were told, "Here's your sandbox, here's your shovel: you go make your sand castle however you want—so long as it does X, Y and Z. We'll ship it but you'll own it. You'll fix it, expand it…"
(A coworker whose sense of humor I always enjoyed said to me, perhaps seriously, "When someone drops code in my lap and says, 'It's yours now,' the first thing I do is rewrite it." Yeah, that's what happens to someone's code when they move on—becomes someone else's sandbox and they are free to knock down the castle, build another—Chesterton's Fence notwithstanding, ha ha.)
To that end I feel a little bad for anyone that missed that era. I mean, unless you enjoy writing unit tests, having code reviews, style guidelines, etc.—and I have certainly met younger engineers coming on board who seem to enjoy those aspects of the these-days profession.
I admit that when I began it was in fact a bit intimidating when you realized that code you were writing, were responsible for, was going to ship on millions (in 1995? maybe?) of machines. The responsibility though also came with agency—the combination came to give me a sense of freedom, the power of using my discretion, and finally a sense that I was a valued contributor.
You can infer from the above what I disliked about the profession as I was aging out of it. My general sense is that the industry became too big, though, with too much money riding on it, for management to entrust it to the "funny farm". But of course we cowboys who came up in that ward liked it the way it had been.
> Yeah, that's what happens to someone's code when they move on—becomes someone else's sandbox and they are free to knock down the castle, build another—Chesterton's Fence notwithstanding, ha ha
As someone who references Chesterton’s fence often, I not only agree the code often gets rewritten when someone moves on, I even think it’s often the right thing to do - for medium to small projects where there is one or only a few people who own the code. The reason is because I’ve seen what happens when you don’t rewrite it - the new owner(s) don’t have intimate knowledge of the codebase, and as a result, they work at the speed of molasses regardless of their skill. I have left code behind to people who are better coders than me, and it took years for them to become productive.
To be fair, I have also seen large projects with many people get rewritten and have Chesterton bite back hard, having the projects go late, cost enormous sums of money, and end up as bad as the first time, so rewrites certainly aren’t always called for.
This is all changing dramatically with Claude, BTW, people can now get into a codebase and be productive without rewriting it. They might not understand it, but this is a positive development of some kind at some level.
> But since I retired a few years ago it was clearly not LLMs that precipitated the decline of my enjoyment of the profession. Instead it was the slow erosion of agency and responsibility that did that.
I've been working on a contract for a large corp. They asked me to design a piece of software over 6 months which I delivered on time and worked great — by the time we had to ship into PROD, the whole thing was canned unceremoniously.
Luckily they liked my work so much they moved me to another greenfield project. Worked on it for a year, had to invent novel solutions which I'm pretty proud of, and we shipped into prod last Autumn. I haven't heard a peep from anyone, whether the thing is working and by masterful skill of mine it hasn't crashed yet, or if no one is using it and it was just another bullshit job.
All this work, good pay, and nothing to show for it. Not even a pat on the back. I'm just a well-oiled cog in an unfathomable machine. I wonder if my career has any meaning at all. Recently they asked me to deliver a feature for yesterday because of bad planning on their part, and when I mentioned how long it would take, they half-jokingly suggested I use LLMs so I could ship it in half the time and make their arbitrary deadline.
Joke's on them: in less than 6 months I'm out. 20 years as a software engineer, 15 as a contractor, and all I feel when I get to my desk is existential dread. There is just no pleasure in it; I'd rather risk poverty and feel like my actions and efforts have tangible effects on the physical world.
Was producing more mediocre code ever the problem? This all feels like a Kafkaesque fever dream.
I remember arriving at Apple Park to meet a friend/coworker a few years before I retired. Sitting there enjoying the food by one of the huge, curved glass walls, he was distractedly focused on one of the gardeners that Apple employs. This man was out in the center part of Apple Park trimming a plant or something similar.
It was clear that my friend was looking on somewhat enviously and when I asked, he admitted as much.
And I, too, immediately knew the draw. Before I was old enough for "gainful employment" there was a neighbor who hired my sister and me (I think I was 11, my sister 10) to ride along with him and his kids (our neighborhood friends) and help with his lawn services business.
I know. But this was the 1970's, a small working-class neighborhood in a Kansas suburb… And he paid us by the hour, helped load/unload the lawn mowers. We'd get a free lunch at a "Wiley's" fast-food hamburger joint.
But despite the physical labor of pushing a lawn mower all over someone's yard, there was a curious sense of satisfaction that came from having arrived at a tatty, overgrown lawn but then leaving it looking neat, tidy. It is the usual "sense of accomplishment" that physical labor often metes out that is often more elusive in the white-collar world.
To be sure there's no arguing about the differences in pay—I'm talking strictly about a sense of job satisfaction. (And, over the course of my three decade career as a programmer, the closest to that had been early on when I had full ownership of the code.)
I fear this will be horribly self-indulgent, but I'll share it anyway:
I'd always been a computer person, but it wasn't until I'd reached my thirties that I realized I could make a career out of that interest. The joy of programming still gets me out of bed in the morning and sends me skipping happily to my desk in my home office. What I do wouldn't impress anybody at a technical level. I'm not an innovator. The world of software and tech would not suffer if I had never existed. But I like the guy I work for. I like the people I work with. I write stuff that lots of people use. I do it well enough that I can feel decently good about it.
And I'm watching all of what I enjoy in software as a career and craft gradually disappear. Upper management are now all True Believer AI zealots who know, just know, that AI is the future, and therefore ensure that it is also the present. They've caused nothing but organizational chaos, shoving out knowledgeable people in some misguided effort to remake the company in their image and replacing them with, to me, obvious bullshit artists.
Engineering time and effort that might a few years ago have produced value and good experiences for users now produce mediocre "MCPs," used only internally, that turn out even more mediocre code and tests that don't test anything.
I don't have nearly the chops or talent you and your peers have. I never could have run with you guys or made the mark on the world that you did. What I do, and the processes I follow, are probably the exact stuff that drove you to retirement. Still, I enjoy what I do and hate that it's being taken from me and replaced with something I hate, overseen, in my company's case, by bullshit artists pretending to be visionaries/cutting-edge 'thought leaders.'
I'm glad some of us got to build things when the inmates ran the asylum, and I regret the money and 'progress' that strangled the life and joy out of it for you.
Just an aside: I've really enjoyed everything you've posted on HN and look forward to your comments. Thanks, and cheers.
When I started the things that made you good in this industry got you bullied - or worse - in high school, and we were not the ones invited to parties during university. Then with all the success and money it attracted the wrong motivations; no longer did you build software to change the world, but to get rich and change your world. And now the circle completes, as those who got rich but could not affect the geopolitical changes they wanted via their work are doing it with their money.
Do you still code? What’s your take on working with LLMs and agents? Does it reignite the same spark OP is talking about?
"Spark" might be putting too fine a point on it if I am being honest (and I always am, ha ha).
But I vibe-coded a web site [1] that I would not have otherwise attempted (I just didn't want to have to figure out how to learn a map-type framework in order to put little points-of-interest on a web page.)
I also vibe-coded an extremely esoteric app for turning .mpo files into stereograms that you can then print to display in an old-fashioned stereoscope [2].
I have lately been learning (I hope?) to build a hobbyist analog computer. This is a deep dive into electronics—something I have no training in.
And I have already queued up a couple of my abandoned projects (also esoteric) that I hope to turn an LLM loose on when I free up some time (from my current analog computing obsession).
It's hard to say if I would not have pursued all the above without an LLM. I am giving examples though of projects that I feel were sort of on the tipping point for me as to whether they were worth the effort to pursue or not—the learning-curve-required vs. useful-end-product balance. I am finding the LLMs are a finger on the scale tipping it more often toward "Go for it." Maybe you would call that a "spark"?
[1] (where I map out the location of a pair of YouTubers that have been road-tripping the U.S. for over three years) https://engineersneedart.com/OneAdvanture/
[2] https://engineersneedart.com/stereographer/stereographer.htm...
As for you, OP, I have no idea why age is a factor in this.
This is only one data point, but my dad was a programmer and frequently complained about cognitive decline once he hit his mid 50s. From talking to him, he remained sharp at a conceptual and high level, knowing what he wanted to do and how it would be done, but struggled with the tooling, the logistical details, etc. He didn't make it to the AI era, alas, but AI could be a godsend for people who have the proven technical chops and background but find juggling a lot of minutiae is becoming difficult.
I'm sure there are cognitive declines as you age, but even discounting those there's some fundamental change happening to the opportunity space.
I'm in my mid 40s, I've had a really fulfilling career working on interesting things and making decent money, and over that time have accumulated a few passion projects that I knew were always out of my reach.
Well, technically within my reach but I'd need to somehow find someone to pay for me and a team for some period of time to work on stuff.
When I started playing around with these tools, it started feeling like maybe some of my ideas were within reach. Some time after, it felt plausible enough that I've decided to go for it. I'm actively in the middle of some deep performance research that I simply would not have the bandwidth or capacity for without these tools.
I've also managed to acquire enough confidence in the likelihood of some degree of success that I'm investing in starting a company (self-funded) to develop, release, and license the stuff I'm building.
I don't know exactly how my ideas will turn out, but that's part of the excitement and anticipation. Point is I never felt I had enough breathing room to really go for it (between normal life obligations like mortgage, feeding kids, etc.)
These tools have changed the equation enough that it's made it more feasible for me to pursue some of these ideas on my own. Things I would have shelved for the rest of my life, probably.. or maybe tried to encourage and interest others into doing.
>You are reading HN, the survivorship bias and groupthink is just as high as any other self-calibrating online community
Agreed. To expand, IMHO and somewhat tangentially: recognizing the importance of software/technology and using it as a tool is the hallmark of a person with a balanced mental makeup. Someone whose 'passion' for software (or technology in general) has ever extended beyond a few weeks can be considered to have something abnormal going on, for example autism. This is like a carpenter becoming obsessed with his chisel and deriving his entire sense of purpose and happiness from delving into the minutiae of chisels.
There is a profession called "tool maker" and their 'passion' for making tools has been quite important. Even just for chisels.
I figured that something like that would exist, hence the example.
I'm mostly reading the comments section thinking "wow, Anthropic is putting a lot of work into astroturfing Hacker News right now in response to the new ChatGPT release"
You feeling that way is the world telling you you’re doing it wrong.
It is more fun to treat them as coding buddies, usually using them one at a time; it is fair to race them at debugging a bug, or to spend the waiting time looking at docs or something.
The real bottleneck is how much you can hold in your head simultaneously to be sure about quality as a moral subject.
I was in school when GPT came out, and there is a strong generational divide. It reminds me of when I was young and teachers said you couldn’t use Wikipedia because it isn’t guaranteed to be correct, but we did anyway. Same thing with LLMs. It’s a faster way to do things, so eventually everything will be done that way.
The opposite of this has been my experience.
HN comments bias far more negative towards technology, tech companies, and current politics than the people I know in real life. People who mostly don’t work as professional software engineers, at least not anymore. And the (employed) engineers I know are all having a lot of fun too.
I think both opinions are pretty well-represented here, but the people who aren't so happy about generated code are well into the acceptance phase at this point. (Myself included.)
If you're "proofreading" the agents' work in detail, you're doing it wrong. You need to invest that time productively into planning out what the agents are going to do (with AI help, of course) then once the plan has gotten detailed enough you can set the agent to work and treat the result as something to just read through and quickly accept/revise/reject (upon which rejection you go back to an earlier stage of planning and revise that instead). Planning out at the outset keeps you in the driving seat and avoids frustration; the agents are just a multiplier that operates downstream of your design decisions.
There is a fine line between "not proofreading" and "not paying attention at all to the output." There are many things that look like they work, but won't pass a sniff test, especially when it comes to security or performance. I witnessed agents create "private" endpoints that had no authentication, but sent user IDs as part of the payload and trusted them.
Yeah building acceptance criteria first is the way. An LLM is a goal machine. It uses probability over and over to advance towards the goal(s). That’s all it is and wants to do. So giving it well defined and granular goals and guardrails will get the best results.
Been coding since 13, now 44 working in FAANG.
Love AI explaining code
Dislike AI for writing code (that was my fun part)
You don't like the quality of the generated code, or the process of not typing it out yourself?
The process of coming up with what to type was interesting to me, not the act of typing itself.
> there's an extremely high survivorship bias because people who are into this LLM craze have a higher probability of browsing HN.
I've worked in professional software development for more than 20 years. I'm pretty well connected and well aware of what is going on in the industry. If you think that coding agents are not widely used and just a bubble on HN, you are very much mistaken. At this point I'd suggest more than 50% of professional developers are using them. Within a few years it will be 90%.
The reason is, they are actually good, despite what some people really want to believe.
Personally, I've been typing characters into a text editor or IDE for a long, long time. I'm very happy that I have an automated junior programmer to do it for me now while I guide it, tell it when it is getting things wrong, and fix up mistakes. I did it the manual way for a long time; I'm enjoying this new way. I understand this isn't for everyone though.
Good point regarding “survivorship bias and groupthink” here.
I'm about a decade behind you, but I also started my programming career during the "good" COM/DCOM/MFC/ATL/ActiveX/CORBA days. Java had just come out. I slept little during that time because truly, there was nothing like programming. It was the thing that pulled me awake in the morning, and pulled me from falling asleep at night. I was so spellbound, calling it Csikszentmihalyi's flow felt like it didn't do it justice.
Fast forward 30 years, and I thought those days were gone forever. I'd accepted that I'd never experience that kind of obsession again. Maybe because I got older. Maybe those feelings were something exclusively for the young. Maybe because my energy wasn't what it used to be. Yada yada, 1000s of reasons.
I was so shocked when I found out that I could experience that feeling again with Claude Code and Codex. I guess it was like experiencing your first love all over again? I slept late, I woke up early, I couldn't wait to go back to my Codex and Claude. It was to the point I created an orchestrator agent so I could continue chatting with my containerized agents via Telegram.
"What a time to be alive" <-- a trite, meaningless saying, now infused with real meaning by some basic maths that runs really, really, really fast on really, really expensive hardware. How about that!
I'm significantly younger, but I've also been a programmer for two decades, since my early teens, and am experiencing something similar. CC is so freeing in that it turns those "nice but no time" ideas into reality alongside your main project. It almost feels like a drug.
It suddenly turns the dead time while you're waiting for CI, review, or a response into time where you can work on fun or satisfying side projects: fire up a few prompts, check an iteration or two, and then pause again until the next opportunity or while the agent is doing its thing.
That was an enjoyable read :) how about that?
As a principal engineer I feel completely let down. I've spent decades building up and accumulating expert knowledge and now that has been massively devalued. Any idiot can now prompt their way to the same software. I feel depressed and very unmotivated and expect to retire soon. Talk about a rug pull!
My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
Nah man. I understand the frustration, but this is a glass is half empty view.
You have decades of expert knowledge, which you can use to drive the LLMs in an expert way. That's where the value is. The industry or narrative might not have figured that out yet, but it's inevitable.
Garbage in, garbage out still very much applies in this new world.
And just to add, the key metric of good software hasn't changed, and won't change. It's not even about writing the code, the language, the style, the clever tricks. What really matters is how well the code performs 1 month after it goes live, 6 months, 5 years. This game is a long game. And not just how well the computer runs the code, but how well humans can work with the code.
Use your experience to extract the value from the LLMs, because they aren't going to generate anything by themselves.
Glass half empty view? Their whole skill set, built up over decades, has been digitized, and now they have to shift everything they do, and who knows if humans will even be in the loop, if they're not C-suite or brown-nosers. Their whole magic and skill can now be done by a PM in 5 minutes with some tokens. How is that supposed to make skillful coders feel?
Massive job cuts, bad job market, AI tools everywhere, probable bubble, it seems naive to be optimistic at this juncture.
The world changes. Time marches on, and the very skills you spend your time developing will inevitably expire in their usefulness. Things that were once marvelous talents are now campfire stories or punchlines.
LLMs may be accelerating the process, but they are definitely not the cause.
If you want a durable career in technology, you learn to adapt. Your primary skill is NOT mastery of a given technology; it is the ability to master any given technology. This is a university with no graduation!
Is it though? If it were that universal, we'd employ the best programmers as plumbers, since they'd have the best ability to master plumbing technology. There are limits, and I think "the ability to master programming technologies" is a reasonable limit.
If you're a great programmer, can you stop using Angular and master React? Yes. Can you stop telling the computer what to do, and master formal proof assistants? Maybe. Can you stop using the computer except as a tool and go master agricultural technology? Probably not. (Which is not to say you can't be a good programmer at an agritech company.)
What exactly would people retrain into? The future these companies explicitly want is AI taking ALL the jobs. It's not like PMs are going to be any safer, or any other knowledge work. I see little evidence that AI is going to create new jobs, other than a breathless assurance that it "always happens".
> Their whole skill set
This is the fundamental problem with how so many people think about LLMs. By the time you get to Principal, you've usually developed a range of skills where actual coding represents like 10% of what you need to do to get your job done.
People very often underestimate the sheer amount of "soft" skills required to perform well at Staff+ levels that would require true AGI to automate.
Yeah well. That's what we've been doing to other industries over and over.
I remember a cinema projectionist telling me exactly that while I was wiring up the software controlling the digital projectors that replaced the 35mm ones.
If a principal doesn't have the skills to mentor juniors, plan and define architecture, review work and follow a good process, they really shouldn't be considered a principal. A domain expert? Perhaps. A domain expert should fear for their job but a principal should be well rounded, flexible, and more than capable of guiding AI tooling to a good outcome.
> Their whole magic and skill is now capable of being done by a PM in 5 minutes with some tokens.
[citation needed]
It has merely moved from "almost, but not entirely, useless" to "sometimes useful". The models themselves may already be capable, but they will need much better tooling than what's available today to get more useful than that, and since the people building these tools are AI enthusiasts who will happily let LLMs write them, it will still take a while to get there :)
I'm optimistic about people being able to build the things they always wanted to build but either didn't have the skills or resources to hire somebody who did.
If we truly value human creativity, then things that decrease the rote mechanical aspects of the job are enablers, not impediments.
If we truly value human creativity we should stop building technology that decreases human value in the eyes of the rich and powerful
> What really matters is how well the code performs 1 month after it goes live, 6 months, 5 years.
After 40 years in this industry—I started at 10 and hit 50 this year—I’ve developed a low tolerance for architectural decay.
Last night, I used Claude to spin up a website editor. My baseline for this project was a minimal JavaScript UI I’ve been running that clocks in at a lean 2.7KB (https://ponder.joeldare.com). It’s fast, it’s stable, and I understand every line. But for this session, I opted for Node and neglected to include my usual "zero-framework" constraint in the prompt.
The result is a functional, working piece of software that is also a total disaster. It’s a 48KB bundle with 5 direct dependencies—which exploded into 89 total dependencies. In a world where we prioritize "velocity" over maintenance, this is the status quo. For me, it’s unacceptable.
If a simple editor requires 89 third-party packages to exist, it won't survive the 5-year test. I'm going back to basics.
I'll try again but we NEED to expertly drive these tools, at least right now.
I always tell Claude, choose your own stack but no node_modules.
What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.
We'll get there.
> What's missing is another LLM dialog between you and Claude. One that figures out your priorities, your non-functional requirements, and instructs Claude appropriately.
There are already spec frameworks that do precisely this. I've been using BMAD for planning and speccing out something fairly elaborate, and it's been a blast.
I don't understand. You specifically:
> neglected to include my usual "zero-framework" constraint in the prompt
And then your complaint is that it included a bunch of dependencies?
AIs do what you tell them. I don't understand how you conclude:
> If a simple editor requires 89 third-party packages to exist
It obviously doesn't. Why even bother complaining about an AI's default choices when it's so trivial to change them just by asking?
My main point is that we need to expertly drive these tools. I forgot the trivial instruction and ended up with something that more closely resembles modern software instead of what I personally value. AI still requires our expertise to guide it. I'm not sure if that will be the case in a year, but it is today.
Absolutely agree. But I'd push this further: the real advantage isn't just applying expertise to prompts—it's recording the problem-solving process itself.
Think about it: when you (an expert) solve a bug, you're not just generating correct code. You're making dozens of micro-decisions about scope, trade-offs, and edge cases that an LLM won't know.
The gap between "I solved this issue" and "I can explain why I solved it THIS way" is enormous. Most devs only keep the former (the code), and lose the latter (the reasoning).
In the AI era, experts need tools that capture how they think, not just what they write. That changes the game from "LLM vs human" to "LLM + augmented human judgment."
The principal engineers who will win are those who build compounding knowledge of their own decision-making patterns.
Is this a bot? I feel like HN is dying (for me at least) with all the em-dashes and the "it's not just X, it's Z".
This is correct. Had lunch with a senior staff engineer going for a promo to principal soon. He explained he was early to CC, became way more productive than his peers, and got the staff promo. Now he’s not sharing how he uses the agent so he maintains his lead over his peers.
This is so clearly a losing strategy. So clearly not even staff level performance let alone principal level.
Why the downvotes? It is the defining characteristic of the staff+ level to empower others. Individual contributions don’t matter at this level.
Yes, I think this is reasonable.
I have been consistently skeptical of LLM coding but the latest batch of models seems to have crossed some threshold. Just like everyone, I've been reading lots of news about LLMs. A week ago I decided to give Claude a serious try: use it as the main tool for my current work, with a thought-out context file, planning, etc. The results are impressive; it took about four hours to do a non-trivial refactor I had wanted but would have needed a few days to complete myself. A simpler feature where I'd need an hour of mostly mechanical work got completed in ten minutes by Claude.
But, I was keeping a close eye on Claude's plan and gradual changes. On several occasions I corrected the model because it was going to do something too complicated, or neglected a corner case that might occur, or other such issues that need actual technical skill to spot.
Sure, now a PM whose only skills are PowerPoint and office politics can create a product demo, change the output formatting in a real program and so on. But the PM has no technical understanding and can't even prompt well, let alone guide the LLM as it makes a wrong choice.
Technical experts should be in as much demand as ever, once the delirious "nobody will need to touch code ever again" phase gives way to a realistic understanding that LLMs, like every other tool, work much better in expert hands. The bigger question to me is how new experts are going to appear. If nobody's hiring junior devs because LLMs can do junior work faster and cheaper, how is anyone going to become an expert?
> I have been consistently skeptical of LLM coding but the latest batch of models seems to have crossed some threshold.
It’s refreshing to hear I’m not the only one who feels this way. I went from using almost none of my Copilot quota to burning through half of it in 3 days after switching to Sonnet 4.6. I’m about to have to start lobbying for more tokens or buy my own subscription because it’s just that much more useful now.
Yes, Sonnet 4.6 is the most impressive inflection point for me as well. I've found Anthropic's models to be the best for a while; even before, Sonnet 3.7 was the only model that produced reasonable results, but now Sonnet 4.6 is genuinely useful. It seems to have resolved Claude's tendency to "fix" test failures by changing tests to expect the current output, it does a good job planning features, and I've been impressed by this model also telling me not to do things. Like it would say: we can save 50 lines of code in this module, but the resulting code would be much harder to read, so it's better not to. Previous models in my experience all suffered from constantly wanting to make more changes, and more, and more.
I'm still not ready to sing praises about how awesome LLMs are, but after two years of incremental improvements since the first ChatGPT release, I feel these late-2025 models are the first substantial qualitative improvement.
^ Big this. If we take a pessimistic attitude, we're done for.
I think the key metric to good software has really changed, the bar has noticeably dropped.
I see unreliable software like openclaw explode in popularity while a Director of Alignment at Meta publicly shares how it shredded her inbox, yet keeps using openclaw [1], because it's still good enough, innit? I see much buggier releases from macOS and Windows. The biggest military in the world is insisting on getting rid of any existing safeguards and limitations on its AI use and is reportedly using Claude to pick bombing targets [2] in a bombing campaign that we know has mistakenly hit hospitals [3] and a school [4]. AI-generated slop now floods social networks with high popularity and engagement.
It's a known effect that economies of scale lower average quality but create massive abundance. There never really was a fundamental quality bar for software or creative work; it just has to be barely better than not existing, and that bar is lower than you might imagine.
[1] https://x.com/summeryue0/status/2025774069124399363
[2] https://archive.ph/bDTxE
[3] https://www.reuters.com/world/middle-east/who-says-has-it-ha...
[4] https://www.nbcnews.com/world/iran/iran-school-strike-us-mil...
Hi Grok, nice comment!
> Any idiot can now prompt their way to the same software.
I must say I find this idea, and this wording, elitist in a negative way.
I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.
Chances are, you were able to accumulate your expert knowledge only because:
- book writing and authorship was democratized away from the church and academia
- web content publication and production were democratized away from academia and corporations
- OSes/software/software libraries were all democratized away from corporations through open-source projects
- computer hardware was democratized away from corporations and universities
Each of the above must have cost some gatekeepers some revenue and opportunities. You were not really an idiot just because you benefited from any of them. Analogously, when someone else benefits at some cost to you, that doesn't make them an idiot either.
> I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.
This parroted argument is getting really tired. It signals either astroturfing or someone who just accepts what they are sold without thinking.
LLMs aren’t “democratising” anything. There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.
You know what’s truly “democratic” and without “gatekeeping”? Exactly what we had before, an internet run by collaboration filled with free resources for anyone keen enough to learn.
Dismissing someone with a different opinion as astroturfing is not productive.
There are loads of high performance open source LLMs on the market that compete with the big 3. I have not seen this level of community engagement and collaboration since the open-source boom 20 years ago.
If I believed it was a different opinion I wouldn’t even have written the first paragraph, or maybe the whole reply.
The issue arises from it not being that person’s opinion but a talking point. People didn’t all individually arrive at this “democratisation” argument by themselves, they were sold what to say by the big players with vested interest in succeeding.
I’m very much for discussing thoughts one has come up with themselves, especially if they disagree with mine. But what is not productive is arguing with a proxy.
> I have not seen this level of community engagement and collaboration
Nor this level of spam and bad submissions.
You're overthinking it.
Programming is a tricky skill and takes a long time to get good at. Lots of people aren't good at it. AI helps them program anyway, and allows them to sometimes produce useful programs. That's it.
It's not a talking point. It's just the reality of what the technology enables, and it's a simple enough observation that millions of people can independently arrive at that conclusion, and some of them might even refer to it as "democratization".
It is a fair note when there are a lot of people with a monetary incentive to hype up a certain piece of technology. And as gp correctly points out: "democratizing" is most commonly used in a very hostile and underhanded manner.
It is what we are talking about, hence not "counterproductive".
> There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.
That would not happen, simply because those companies' interests will never be entirely aligned. There are at least three SOTA models at the moment, plus many open-weight models. Anthropic vs. the Pentagon is exactly how it would play out.
And what precedent is there? Don't say Google, because search is alive and well.
> You know what’s truly “democratic” and without “gatekeeping”? Exactly what we had before, an internet run by collaboration filled with free resources for anyone keen enough to learn.
We have way more free resources at the moment. Name anything you'd like to learn, someone will be able to point you to a relevant resource. There are also better ways of surfacing that resource.
> This parroted argument
Most arguments here on HN have been discussed ad nauseam, for or against AI. It's only "parroted" (or biased) if it's against your own beliefs.
> LLMs aren’t “democratising” anything.
They absolutely are. Anytime new knowledge or skills become widely available to everyone, that's a term used for it.
> There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.
None of that has anything to do with anything. There's competition between companies to keep prices low and accessibility high.
I think you are simply misunderstanding the word "democratic". It isn't just political. From MW:
> 3 : relating, appealing, or available to the broad masses of the people : designed for or liked by most people
Here, it's specifically about making things available to the broad masses of the people that wasn't before.
This isn't a matter of opinion. It's just the meaning of the word.
I agree completely. The "democratizing programming" line is being overplayed by AI vendors as if they are doing community service, and HN commenters use it like a trump card in an argument.
Everyone already had the option to write any code, fork any open-source project, publish any of their code, run any of their code, but suddenly AI appears and THAT is what makes it democratic? What was undemocratic about it before? Is a "democracy" where idiots run AI agents that publish smear campaigns, or harass maintainers for not accepting their slop, the democratic future you wish for?
How many job positions do you see today that want just a backend developer? Or a frontend developer? Not many, because now everyone is expected to be at least full stack, if not devops as well. The exact same thing is playing out right now with AI: people are expected to produce 5x the amount of code as before, and if you don't, someone else who is willing to will take your job.
Already-bloated programs will bloat further; they will require even more resources to run; you will have to pay even more for hardware; they will be slower and less responsive; you will have to pay yet another monthly fee to big tech for their AIs. And people will happily do it and pat themselves on the back that we democratized programming, while running towards a future where nobody will be able to own hardware capable of general-purpose computing.
> ...I haven't yet tried the big local ones, because how would that be better? I'm still paying to big tech to run it, just in a different way
Why blame big tech when they're just providing a service at a fair cost (3rd party inference is incredibly cheap)? I'm not sure how that makes sense.
I removed this line because people would get hung up on it and not see the forest for the trees.
> There’s no democracy in being mostly beholden to a few companies which own the largest and most powerful models, who can cut you off at any time, jack up the prices to inaccessibility, or unilaterally change the terms of the deal.
LOL. Maybe you are referring to OpenAI and Anthropic? Yes, they have Codex and Opus. But about 1-2 months behind them are Grok and Gemini, and 2-3 months behind them are all the other models available in Cursor, from Chinese open-source models to Composer, etc.
How you can possibly push this "big company takes everything away" narrative is ridiculous, when you can probably use models for free that are about 2 months behind the best ones. This is probably the most decentralised tech boom ever.
(I mean, OpenAI is in such a bad state, I wouldn't be surprised if they lose almost their entire lead and user base within 6-12 months and end up basically at the level of the small Chinese LLM developers.)
This is technically true in a lot of ways, but it's also an intellectualized response that doesn't identify with what the comment was expressing. It's legitimately very frustrating to have something you enjoy democratized and to feel like things are changing.
It would be like if you put in all this time to get fit and skilled on mountain bikes and there was a whole community of people, quiet nature, yada yada, and then suddenly they just changed the rules and anyone with a dirt bike could go on the same trails.
It's double damage for anyone who isn't close to retirement: they built their career and invested time (i.e. opportunity cost) into something that might become a lot less valuable, and now they are fearful of future economic issues.
I enjoy using LLMs and have stopped writing code, but I also don't pretend that change isn't painful.
The change is indeed painful to many of us, including me. I, too, am a software engineer. LLMs and vibe coding create some insecurity in my mind as well.
However, our personal emotions need not turn into disparaging others' use of the same skills for their satisfaction / welfare / security.
Additionally, our personal emotions need not color the objective analysis of a social phenomenon.
Those two principles are the rationales behind my reply.
I appreciate that rationale, I also see the importance of those two principles and I think there's a lot of value there.
I suppose I see "any idiot" as a more general phrase, like "idiot proof", not directly meaning that anyone who uses a LLM is an idiot. However I can also see how it would be seen as disparaging.
Also, while there are lots of examples of people entrenching in a certain behavior or status and causing problems, I also think society is a bit harsh on people who struggle with change. For people who are less predisposed to be OK with change, it feels like a lot of the time the response is "just deal with it and don't be selfish, this new XYZ is better for society overall".
Society is pretty much made up of personal emotions on some level. I don't think we should go around attacking people, but very few things can be considered truly objective in the world of societal analysis.
> I don't see any fundamental problem with democratization of abilities and removal of gatekeeping.
It was very democratized before; almost anyone could pick up a book or learn these skills on the internet.
Opportunity was democratized for a very long time, all that was needed was the desire to put in the work.
OP sounds frustrated, but at the same time the societal promise that worked for the longest time (spend personal time specializing and be rewarded) has been broken, so I can understand that frustration.
I'm mad about Ozempic. For years I toiled, eating healthy foods while other people stuffed their faces with pizza and cheese burgers. Everybody had the opportunity to be thin like me, but they didn't take that and earn it like me. So now instead of being happy about their new good fortune and salvaged health, I'm bitter and think society has somehow betrayed me and canceled promises.
/s, obviously. Or so I would hope, except I've actually seen this sentiment expressed seriously.
I would rather see regulations fixing the incentives that create this problem (why does healthy food cost so much more than processed food?) than a band-aid like Ozempic that 2/3 of people can't quit (hello, another hidden subscription service) without regaining the weight.
The produce aisle has the cheapest food in the whole store. Inb4 you cite the price of some fancy imported vegetable as your excuse for eating pizza every night.
I can only speak from my own experience but if you want to have a healthy diet (enough protein and calories) where I'm from it costs a lot more than just buying cheap junk food. Well, the proteins cost.
People are obese because they eat at restaurants, eat junk food, and drink sugary or high carb liquids.
They are not obese because they cannot afford the necessary amounts of protein and calories from healthy sources in the grocery store.
And no lol, I eat very healthy and mainly cook my own vegetarian food, had junk food last time maybe a month ago.
> why does healthy food cost so much more than processed food?
It doesn’t.
> why does healthy food cost so much more than processed food?
It doesn't. Carbs like rice, potatoes, etc. are incredibly cheap. Proteins like ground beef and basic cuts of chicken are not expensive. And broccoli, carrots, green peppers, apples -- these are not exactly breaking the bank. Produce is seasonal, so you vary what you buy according to what is cheapest this week.
Meanwhile, stuff like breakfast cereal and potato chips and Oreo cookies actually are surprisingly expensive.
> Carbs like rice, potatoes, etc. are incredibly cheap.
Eating too many carbs is not a healthy diet, dude.
It's the regulations and subsidies that created this very situation in the first place (in the USA, at least). Twinkies are cheap because we literally pay farmers to grow cheap carbs and sugar. It was designed this way; well, lobbied, really.
I can believe that unfortunately. Good regulation is hard to do without lobbyists getting what they want at the expense of people.
> why does healthy food cost so much more than processed food?
It does not. Legumes, whole grains, vegetables, and yogurt have always been cheaper than processed food.
People prefer eating carbohydrates and saturated fats.
Is the result the only thing that matters, or does the journey have its place as well?
Is there a price to be paid for getting any desired result imaginable, without effort, at the press of a button?
Yeah, exactly. For the longest time those of us who were self taught and/or started late were looked down upon. Before that, same with corporate vs. open source. This is the same elitist and gatekeeping mentality. If LLM coding tools help people finally get ideas out of their head, then more power to them! If others want to yak shave and do more serious, intellectual types of programming and exploration, more power to them!
It goes past software though. That's just the common ground we share on here. A lifetime ago I was a sound engineer, and knew how to mic up a rock band. I've since forgotten it all, but I was at a buddy's practice space and the opportunity came up to mic their setup. So I dredged up decades-old memories, then took a photo and sent it to ChatGPT, which has read every book on sound engineering and mic placement, and every public web forum where someone dropped some knowledge on the Internet for free. And damned if it didn't come up with some good suggestions! I wish I could say it only made wrong and stupid suggestions. A lot about mic placement is subjective, but after telling it the kind of sound we were after, it was able to tell us which direction to go to get warmer or harsher.
So it's not just software that's coming to an end; everything else is as well. But billionaires' wives will still need haircuts (women billionaires will also need haircuts), so hairdresser will be the last profession.
I remember the cosmetology department on the other side of the tech school I went to was a common target of mockery from the "tech" side. Life as a hairdresser isn't always easy, but it's a real skill. And unlike computer touching, it requires certification.
So you put these all in the same category: gaining knowledge, gaining abilities, and just obtaining things.
I gatekeep my bike, I keep it behind a gate. If you break the gate open and democratize my bike, you're an idiot.
I'm not sure how you're getting that from their post? None of the four things mentioned (book publishing, web publishing, open-source software, computer hardware) involve stealing someone's property, he's saying that the ability to produce those things widened and the cost went down massively, so more people were able to gain access to them. Nobody stole your bike, but the bike patents expired and a bunch of bike factories popped up, so now everyone can get a cheap bike.
I did have misgivings about saying that because I'm from the old "information wants to be free" school. But the subject was idiocy, and the point isn't to say that the bike was stolen, but that the bike-taker didn't do anything clever, or have much of a learning experience.
Maybe it's of value that any idiot can do this, but we're still idiots.
it is more like:
You gatekeep your bike, you keep it behind a gate, you don't let anyone else ride it.
Your neighbor got a nicer bike for Christmas and rode it by your house, and now you are sad because you aren't the special kid with the bike anymore; you are just a regular kid like your neighbor.
No, both bikes are owned by a $trillion corporation who collects a monthly rent.
Yeah, it's as if you studied and mastered all of the various disciplines required for fabricating a bicycle, then fabricated your own by hand and offered to do likewise for others (sometimes in exchange for compensation, sometimes for free, provided others could use the bike), only for some machine that mass-produces bikes to informal spec, built by studying all of the designs you used for your bikes, to suddenly become widely and cheaply available.
Jesus that's brutal. Accurate. But I feel attacked ;p
Using physical analogs for virtual things is not the best choice. For example: would you give a copy of your bike, or a copy of your food, to your poor neighbor kid if you could copy it as easily and as cheaply as digital products?
Actually he would be very wise, for he then has a bike and can ride it or sell it for money. You have to learn capitalist thinking to succeed in this economy.
> "removal of gatekeeping"
Gates were put in place for lawyers, doctors, and engineers (real ones, not software "engineers") because the cost of their negligence and malpractice was ruined lives and death. Gatekeeping has value.
Software quality, reliability, and security was already lousy before the advent of LLMs, making it increasingly clear that the gate needed to be kept. Gripes about "gatekeeping" are a dogwhistle for "I would personally benefit from the bar being lowered even further".
While I can see your point, I also think it is not directly relevant to OP. Firstly, I don't think OP meant that people are idiots for using LLMs; it was just a way of saying that skill is no longer required, so even idiots can do it, whereas it used to be something that required high skill.
As for the comparisons: some are partly comparable to the current situation, but there are some differences as well. Sure, books and online content enabled others to join, thereby reducing the "moat" for those who built careers on esoteric knowledge. But it didn't make things _that_ easy: it still required years of invested time to become a good developer. Also, it happened very gradually, while the developer pie was growing and the range of tech was growing, so developers who kept on top of technology (like OP did) could still be valuable. Of course, no one knows fully how it will play out this time around; maybe the pie will get even bigger, maybe there's still room for lots of developers and the only difference is that the tedious work is done. Sure, then it is comparable. But let's be honest, this has a very real chance of being different (humans inventing AI surely is something special!) and could result in skill-sets collapsing in value at record time. And perhaps worse, without opening new doors. Sure, new types of jobs may appear, but they may be so different that they are essentially completely different careers. It is not like in the past, when you just needed to learn a new programming language.
The real litmus test is whether one would allow LLMs to determine a medical procedure without human check. As of 2026, I wouldn’t. In the same sense I prefer to work with engineers with tons of experience rather than fresh graduates using LLMs
Elitism is good. Elitism is just. There is absolutely nothing wrong with elitism.
A skill-based one, of course.
People actually value the effort and dedication required to master a craft. Imagine we invent a drug that allows everyone to achieve olympic level athletic performance, would you say that it "democratises" sports? No, that would be ridiculous.
It does technically democratize the exhilarating experiences of that level of performance. Likely also democratizes negative aspects like injuries, extreme dieting, jealousy, neglecting relationships.
That said, if we zoom out and review such paradigm shifts over history, we find that they usually result in some new social contracts and value systems.
Both good expert writers and poor novice writers have been able to publish non-fiction books for a few centuries now. But society still doesn't perceive them as the same at all. A value system still prevails, estimated primarily from the writing itself, regardless of any other qualifications or disqualifications of authors based on education, experience, nationality, profession, etc.
At the individual level too, just because book publishing is easy doesn't mean most people want to spend their time doing that. After some initial excitement, people will go do whatever their main interests are. Some may integrate these democratized skills into their main interests.
In my opinion, this historical pattern will turn out to be true with the superdrug as well as vibe coding.
Some new value will be seen in the swimming or running itself - maybe technique or additional training over and above the drug's benefits.
Some new value will be discovered in the code itself - maybe conceptual clarity, algorithmic novelty, structural cleanliness, readability, succinctness, etc. Those values will become the new foundations for future gatekeeping.
>Some new value will be discovered in the code itself - maybe conceptual clarity, algorithmic novelty, structural cleanliness, readability, succinctness, etc. Those values will become the new foundations for future gatekeeping.
It's a nice idea, but I feel like that's only going to be the case for very small companies or open-source projects. Or places that pride themselves on not using AI. Artisan code, I call it.
At my company the prevailing thought is that code will only be written by AI in the future. Even if today that's not the case, they feel it's inevitable. I'm skeptical of this given the performance of AI currently. But their main point is, if the code solves the business requirements, passes tests and performs at an adequate level, it's as good as any hand written code. So the value of readable, succinct, novel code is completely lost on them. And I fear this will be the case all over the tech sector.
I'm hopeful for a bit of an anti-AI movement where people do value human created things more than AI created things. I'll never buy AI art, music, TV or film.
The exhilarating experience is a byproduct of the effort it took to obtain. Replace drug with exoskeleton or machine, my point is the same. The way you democratise stuff like this is removing barriers to skill development so that everyone can learn a craft, skill, train their bodies etc.
But I do agree: if everyone can build software, then the allure of it, along with the value, will be lost. Vibe coding is only a superpower as long as you're one of the select few doing it. Although I imagine it will remain a niche thing; anyone who thinks everyone and their grandma will be vibing bespoke software is out to lunch.
Personally I think there is a certain je ne sais quoi about creating software that cannot be distilled to some mechanical construct, in the same way it exists for art, music, etc. So beyond assembly line programming, there will always be a human involved in the loop and that will be a differentiating factor.
It would democratize sports, while making sports worthless and unremarkable. It would collapse the market for sports.
> would you say that it "democratises" sports
Given how I've seen a lot of AI "artists" describe themselves and "their" works, yeah, probably a lot of them would.
Democratizing? A handful of companies harvesting data and building products on top of it is democratizing?
Open research papers that everyone can access are democratizing knowledge. Accessible worldwide courses, maybe (like open universities).
But LLMs are not quite the same. This is taking knowledge from everyone and, in the best case, paywalling it.
I agree in spirit that the original comment was classist, but in this context your statements are also out of place, in my opinion.
Coding is one of the least gatekept things in history. Literally the only obstacle is "do I want to put in the time to learn it". All Claude is doing is remixing all the free stuff that was already a Google search away.
This is a good response. Progress has always been resisted by incumbents
Exactly. How ridiculous. The world doesn’t owe ‘principal engineers’ shit. I hate to work with people like this.
-- from a ‘principal engineer’
how is 2-3 centralized providers of this new technology "democratization"?
It's _relatively_ democratic when compared to these counterfactual gatekeeping scenarios:
- What if these centralized providers had restricted their LLMs to a small set of corporations / nations / qualified individuals?
- What if Google that invented the core transformer architecture had kept the research paper to themselves instead of openly publishing it?
- What if the universities / corporations, who had worked on concepts like the attention mechanism so essential for Google's paper, had instead gatekept it to themselves?
- What if the base models, recipes, datasets, and frameworks for training our own LLMs had never been open-sourced and published by Meta/Alibaba/DeepSeek/Mistral/many more?
> - What if Google that invented the core transformer architecture had kept the research paper to themselves instead of openly publishing it?
I'm pretty sure that someone else would have come around the corner with a similar idea some time later, because the fundamentals of this stuff were already discussed decades before the "Attention Is All You Need" paper. The novel thing they did was combining existing know-how into a new idea and making it public. A couple of ingredients of the base research for this are decades old (interestingly, back then some European universities were leading the field).
> I'm pretty sure that someone else would have come around the corner with a similar idea some time later, because the fundamentals of this stuff were already discussed decades before
I am not trying to be dismissive, but this could apply to all research ever
That's true! I meant not accidentally at some point in the future, but relatively close together on the timeline.
There are lots of open weight models
You're right! And cars when they were invented didn't give increased mobility to millions of people, because they came from just a few manufacturers.
Cell phones made communication easier for exactly zero people even though billions have been sold. Why? Because they come from just a few different companies.
Cars are a great analogy because they made mobility significantly worse for people who can’t afford them or refuse to use them for ethical reasons.
Those people are the exception that proves the rule.
And cars now have become privacy nightmares, which we are now beholden to
I will tell that to the billions of people who are walking and biking around at this very moment. Just give me some time.
I said worse, not impossible. Because of cars, I have to take longer routes and put myself into danger while working.
Cars, until relatively recently, were pure hardware machines that a consumer could buy and own. Now that's starting to change. Let's see how "democratized" cars are when the manufacturers can hard-lock "owners" out of them.
Similar story to cell phones.
LLMs are in this state right out the gate.
> elitist in a negative way.
It's funny you say that, because I've seen plenty of the reverse elitism from "AI bros" on HN, saying things like:
> Now that I no longer write code, I can focus on the engineering
or
> In my experience, it's the mediocre developers that are more attached to the physical act of writing code, instead of focusing on the engineering
As if getting further and further away from the instructions that the CPU or GPU actually execute is more, not less, a form of engineering, instead of something else, maybe respectable in its own way, but still different, like architecture.
It's akin to someone claiming that they're not only still a legitimate novelist for using ChatGPT or a legitimate illustrator for using stable diffusion, but that delegating the actual details of the arrangement of words into sentences or layers and shapes of pigment in an image, actually makes them more of a novelist or artist, than those who don't.
Yes, both are forms of elitism.
Yeah, and one is at least plausibly justifiable (though still potentially unfounded), while the other is absurd on its face.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
I've been a tech lead for years and have written business critical code many times. I don't ever want to go back to writing code. I am feeling supremely empowered to go 100x faster. My contribution is still judgement, taste, architecture, etc. And the models will keep getting better. And as a result, I'll want to (and be able to) do even more.
I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Any "idiot" can build their own software tailored to how their brains think, without having to assemble gobs of money to hire expensive software people. Most of them were never going to hire a programmer anyway. Those ideas would've died in their heads.
> I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Programming was already “democratized” in the sense that anyone could learn to program for free, using only open-source software. Making everyone reliant on a few evil megacorporations is the opposite of democratization.
You know what they mean by that term, it's about building things without needing to put in the learning effort. I have bosses building small POCs via vibe coding, something they would not have done via learning to code and typing it manually.
It's the same sort of argument artists use when it comes to AI-generated media. There obviously is a qualitative difference in people now being able to generate whatever they want versus needing to draw something by hand, so saying "they could've just learned to draw themselves" is not very convincing. People don't want to do that yet still want an output, and I see nothing wrong with that; if you do, it's just another sort of gatekeeping, the idea that the "proper" way is to learn it by hand.
Lastly, many, many open weight models exist.
What you bring to the table might be fine, but how long do you think you'll find employers willing to still pay for this?
One thing is for sure: LLMs will bring down the cost of software per some unit and increase the volume.
But... cost = revenue. What is a cost to one party is revenue to another party. The revenue is what pays salaries.
So when software costs go down the revenues will go down too. When revenues go down lay offs will happen, salary cuts will happen.
This is not fictional. Markets already reacted to this and many software service companies took a hit.
If AI completely erases the profession of software developer, I'll find something else to do. Like I can't in good faith ever oppose a technology just because it's going to make my job redundant, that would be insane.
Take that to its extreme. Suppose there was a technology that you do not own that would make everyone's job redundant. Everyone out of a job. There is no need for education, for skills to be mastered, for expertise. Would it still be insane to complain?
Then society needs to collectively decide how to allocate resources. Uh oh!
The owners of the AI companies will collectively decide how to allocate resources, rather.
the resources go to the guys with the AI duh
A world where there is no need for work? Oh no, my steak is too juicy and my lobster is too buttery.
A world where there is no need for workers--not at all the same thing.
You may not end up with a seat at the table.
Isn't that what old-school software did for many years? It used to take jobs, just not from developers. If you implement software that takes accounting from 10 people to 2, 8 just got fired. If you have a support solution helping one support rep answer 100 requests instead of 20, you just optimized the support force at a rate of 1 to 5.
I'm in the SaaS boat myself, but I sense a bit of dishonesty from senior devs complaining about technology stealing jobs. When it was them doing the stealing, it was fine. Now that the tables have turned, suddenly technology is bad.
Jevons' paradox still exists. Making X cheaper (usually by needing fewer people to do one unit of X) can and often does lead to more people being needed for X.
You still need education, skills to be mastered and expertise even in a world without jobs. How would you play any game or sport without skills?
There are bigger issues if everyone is out of a job.
Take that to the absolute extreme: why do we even need a job? If all our physical needs are met, maybe humanity can finally focus on real problems (spiritual, mental, interpersonal) that no amount of "jobs" can solve...
Because greedy capitalists control the world which means that most people's most basic needs aren't met if they don't have a job.
I believe that if situation gets that bad, then we will actually do some new kind of revolution, even in the West.
There may not be a job for you in an office setting. What would you do?
That's when the problem shifts from individual to systemic, and only systemic solutions fix systemic problems.
I think that a what a lot of anti-AI folks are trying to argue without saying it explicitly is that it already is a systemic problem. They're not necessarily against the technology on its own, but against the systemic problems it would introduce if society doesn't take a stance against it.
I'd buy some good gloves and steel-toed boots.
I don't have an answer for this, and won't pretend to.
But my take on this is that accountability will still be a purely human factor. It still is. I recently let go of a contractor who was hired to run our projects as a Scrum/PM, because his tickets were so bad (there were tickets with 3 words in them; one ticket in the current sprint was blocked by a ticket deep in the backlog; basic stuff). When I confronted him about them, he said the AI generated them.
So I told him that:
1. That's not an excuse, his job is to verify what it generated and ensure it's still good.
2. That actually makes it look WORSE: not only did he do nearly zero work, he didn't even check the most basic outputs. And I'm not anti-AI; I expressly said that we should absolutely use AI tools to accelerate our work. But that's not what happened here.
So you won't get to say (at least I think for another few years) "my AI was at fault" – you are ultimately responsible, not your tools. So people will still want to delegate those things down the chain. But ultimately they'll have to delegate to fewer people.
In general I agree. But it’s somehow very unlikely for the AI to generate a three word ticket. That’s what humans do. AI might generate an overly verbose and specific ticket instead.
What drives that behavior is what I like to call human slop :)
>What you bring to the table might be fine, but how long do you think you'll find employers willing to still pay for this?
I'm assuming that the software factory of the future is going to need Millwrights https://en.wikipedia.org/wiki/Millwright
But, builders are builders. These tools turn ideas into things, a builder's dream.
Just sold a house/moved out after being laid off in mid-January from a govt IT contractor (there for 8 great years, mostly remote). I started my UX research, design, and front-end web coding career in 2009, but now I think it's almost a stupid, go-nowhere, vanishing career, thanks to AI.
I think, much like you, that AI is and will just continue to destroy the economy! At least I got to sell a house and make a profit, and stash it away for when the big AI market crash happens (hopefully not a 2030 great depression, though), since then it's a down market, and buying stocks, bitcoin, and houses is always cheaper.
Any given system will still need people around to steer the AI and ensure the thing gets built and maintained responsibly. I'm working on a small team of in-house devs at a financial company, and not worried about my future at all. As an IC I'm providing more value than ever, and the backlog of potential projects is still basically endless- why would anyone want to fire me?
Why would it need people to steer the AI? I can easily see a future where companies that don't rely on the physical world (like manufacturing) are completely autonomous, just machines making money for their owner.
Yours is a naive view. Learn a bit about engineering and feedback control and you realize that the world is too complex for that.
It's easy to imagine but there's still a vast amount of innovation and development that has to happen before something like that becomes realistic. At that point the whole system of capitalism would need to be reconsidered. Not going to happen in the foreseeable future.
> why would anyone want to fire me?
Because they can hire some "prompt engineer" to "steer the AI" for $30-50k instead of $150-$250k.
The difference between having a non-technical person and someone who is capable of understanding the code being generated and the systems running it is immense, and will continue to be so over the foreseeable future.
Just because somebody has a bunch of power tools doesn't mean I'd ask them to build my house.
Anyone that only costs $30k-50k would either be doing this part-time, or have some limit that prevented them from earning $150k-250k.
Or not living in US?
"One thing is for sure: LLMs will bring down the cost of software per some unit and increase the volume.
But... cost = revenue."
That is Karl Marx's labor theory of value, which has been completely disproven.
You don't charge what it costs to build something, you charge the maximum the customer is willing to pay.
The price is determined by SUPPLY and demand, and being able to write software quickly using LLMs would move the supply curve.
Congrats - you caused me to create an account to reply, due to the sheer density of your incorrectness.
- First, the LTV was not Marx's idea. Adam Smith held the same view, as did many many others during this era. Marx refined this idea, but there's nothing about your point that is unique to his version of it.
- Second, while LTV is not widely used today, this is not because it was "completely disproven" (can you cite anything to back this claim up?). It is because economics shifted to a different paradigm based on marginal utility. These two frameworks operate at different levels of abstraction and address different aspects of the price of goods. There is actually empirical evidence of a correlation between the cost of a good and the cost of the labour, at an aggregate level.
- Third, Marx explicitly differentiated between _value_ and _price_. LTV deals with value exclusively (in other words, what happens when externalities impacting price are accounted for). He would have had no issue accepting that externalities impacting supply and demand would impact price.
The final irony of your comment is that the claim you are criticising is actually also fully defensible under your (presumably) neoclassical view of economics. In competitive markets, reduced production costs lead to reduced equilibrium prices as competitors undercut each other. The proposition that in the long run, under competition, price tends toward cost is a standard result in microeconomics. The idea that "you charge the maximum the customer is willing to pay" only holds without qualification in monopoly, or in monopolistic competition with strong differentiation, which are precisely the conditions that increased software supply would erode.
Efficient markets barely exist anywhere, especially in tech, it's all monopolistic competition that's bad for the consumer and increases inequality.
> I also absolutely LOVE that non-programmers have access to this stuff now too. I am always in favor of tools that democratize abilities.
Here's the other edge of that sword. A couple back-end devs in my department vibe-coded up a standard AI-tailwind front-end of their vision of revamping our entire platform at once, which is completely at odds with the modular approach that most of the team wants to take, and would involve building out a whole system based around one concrete app and 4 vaporware future maybe apps.
And of course the higher-ups are like "But this is halfway done! With AI we can build things in 2 weeks that used to take six months! Let's just build everything now!" Never mind that we don't even have the requirements yet, and nailing those down is the hardest part of the whole project. But the higher-ups never live through that grind.
This scenario is not new with AI at all, though. 14 years ago I watched a group of 3 front-end devs spin up a proof of concept in ember.js that had a flashy front end and all fake data, and demo it to execs. They wowed the execs, and every time the execs asked "how long would it take to fix (blank) to actually show (blank)?" the devs hit F12, inspected the element, typed in what was asked for, and said "already done!"
It was missing years of backend and had maybe 1/20th feature parity with what we already had, and it would have, in hindsight, been literally impossible to implement some of the things we would need in the future if we had gone down that path. But they were amazed by this flashy new thing that devs made in a weekend, that looked great but was actually a disaster.
I fail to see how this is any different than what people are complaining about with vibe coded LLM stuff a decade and a half later now? This was always being done and will continue to be done; it's not a new problem.
The difference is now anyone can spin up a vibe-coded site that wows execs.
It re-emphasizes the question of importance. Would a user accept their data needing an AI implementation of a ("manual") migration and their flow completely changing? Does reliability for existing users even matter in the company's plans?
If it isn't a product that needs to solve problems reliably over time, then it was kind of silly to use a DBA who cost twice the backend engineer and only handled the data niche. We progressed from there or regressed from there, depending on why we are developing software.
The models will not keep getting better. We have passed "peak LLM" already, by my estimate. Some of the parlour tricks that are wrapped around the models will make some incremental improvements, but the underlying models are done. More data and more parameters are no longer going to do anything.
AI will have to take a different direction.
This is really interesting to me; I have the opposite belief.
My worry is that any idiot can prompt themselves to _bad_ software, and the differentiator is in having the right experience to prompt to _good_ software (which I believe is also possible!). As a very seasoned engineer, I don't feel personally rugpulled by LLM generated code in any way; I feel that it's a huge force multiplier for me.
Where my concern about LLM generated software comes in is much more existential: how do we train people who know the difference between bad software and good software in the future? What I've seen is a pattern where experienced engineers are excellent at steering AI to make themselves multiples more effective, and junior engineers are replacing their previous sloppy output with ten times their previous sloppy output.
For short-sighted management, this is all desirable, since the sloppy output looks nice in the short term; overall, many organizations strategically think they are pointed in the right direction doing this and are happy to downsize, blaming "AI." And for places where this never really mattered (like "make my small business landing page"), this is a complete upheaval, without a doubt.
My concern is basically: what will we do long term to get people from one end to the other without the organic learning process that comes from having sloppy output curated and improved with a human touch by more senior engineers, and without an economic structure that allows "junior" engineers to subsidize themselves with low-end work while they learn? I worry greatly that in 5-10 years many organizations will end up with 10x larger balls of "legacy" garbage and 10x fewer knowledgeable people to fix it. For an experienced engineer I actually think this is a great career outlook, and I can't understand the rug-pull take at all; I think that today's strong and experienced engineer will command a high amount of money and prestige in five years as the bottom drops out of software. From a "global outcomes" perspective this seems terrible, though, and I'm not quite sure what the solution is.
>For short-sighted management, this is all desirable since the sloppy output looks nice in the short term
It was a sobering moment for me when I sat down to look back at the places I have worked over my career of 20-odd years. The correlation between high-quality code and economic performance was not just non-existent, it was almost negative. As in: whenever I have worked at a place where engineering felt like a true priority, tech debt was well managed, and principles were followed, that place was not making any money.
I am not saying that this is a general rule, of course there are many places that perform well and have solid engineering. But what I am saying is that this short-sighted management might not be acting as irrationally as we prefer to think.
I generally agree; for most organizations the product is the value and as long as the product gives some semblance of functionality, improving along any technical axis is a cost. Organizations that spend too much on engineering principles usually aren’t as successful since the investment just isn’t worth it.
But, I have definitely seen failure due to persistent technical mistakes, as well, especially when combined with human factors. There’s a particularly deep spiral that comes from “our technical leadership made poor choices or left, we don’t know what to invest in strategically so we keep spending money on attempted refactors, reorgs, or rewrites that don’t add more value, and now nobody can fix or maintain the core product and customers are noticing;” I think that at least two companies I’ve worked at have had this spiral materially affect their stock price.
I think that generative coding can both help and hurt along this axis, but by and large I have not seen LLMs be promising at this kind of executive function (ie - “our aging codebase is getting hard to maintain, what do we need to do to ensure that it doesn’t erode our ability to compete”).
My guesses are
1. We'll train the LLMs not to make sloppy code.
2. We'll come up with better techniques to make guardrails to help
Making up examples:
* right now, lots of people code with no tests. LLMs do better with tests. So, train LLMs to make new and better tests.
* right now, many things are left untested because it's work to build the infrastructure to test them. Now we have LLMs to help us build that infrastructure, so we can use it to make better tests for LLMs.
* ...?
* better languages and formal verification. If an LLM codes in Rust, there’s a class of bugs that just can’t happen. I imagine we can develop languages with built-in guardrails that would’ve been too tedious for humans to use.
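The Rust point in the list above can be made concrete with a small, hypothetical sketch (the `find_user` function and its data are invented for illustration): because a possibly-absent value must be an `Option`, the compiler forces every caller to handle the missing case, so the "forgot the null check" class of bugs becomes a compile error rather than a runtime crash.

```rust
// Hypothetical example: absence is modeled in the type system, not as a
// null pointer the programmer might forget to check.
fn find_user(id: u32) -> Option<&'static str> {
    match id {
        1 => Some("alice"),
        2 => Some("bob"),
        _ => None, // absence is an explicit value
    }
}

fn main() {
    // The only way to reach the inner &str is to unwrap it deliberately;
    // pattern matching makes the missing-user path impossible to ignore.
    match find_user(3) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```

Guardrails like this are exactly what the comment imagines: rules tedious for humans but free for an LLM to obey, enforced by the compiler regardless of who (or what) wrote the code.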
ChatGPT came out a little over 3 years ago. After 5-10 more years of similar progress I doubt any humans will be required to clean up the messes created by today’s agents.
Good software, bad software, and working software.
> Any idiot can now prompt their way to the same software.
No, they can't. I use Claude Code and AMP a lot, and yet, unless I pay attention, they easily generate bad code, introduce regressions while trying to fix bugs, and get stuck on suboptimal ideas. Modularity is usually terrible; 50-year-old ideas like cohesion and coupling are, by their very nature, mostly ignored, except in the most formal, rigid ways of mimicry introduced by post-training.
Coding agents are wonderful tools, but people who think they can create and maintain complex systems by themselves are not using them in an optimal way. They are being lazy, or they lack software engineering knowledge and can't see the issues; in that case they should be using the time saved by coding agents to read hard stuff and elevate their technique.
> Any idiot can now prompt their way to the same software.
It may look the same, but it isn't the same.
In fact if you took the time to truly learn how to do pure agentic coding (not vibe coding) you would realize as a principal engineer you have an advantage over engineers with less experience.
The more war stories, the more generalist experience, the more you can help shape the llm to make really good code and while retaining control of every line.
This is an unprecedented opportunity for experienced devs to use their hard-won experience to level themselves up to the equivalent of a full team of Google devs.
> while retaining control of every line
What I want when I'm coding, especially on open source side projects, is to retain copyright licensing over every line (cleanly, without lying about anything).
Whoops!
Hmm. TIL: the real exposure isn't Anthropic or OpenAI claiming your code; it's you unknowingly distributing someone else's GPL code because the model silently reproduced it, with essentially zero recourse against the model owner.
I wonder why people still believe in intellectual property, it's a concept that has long since lived past its usefulness, especially technologically.
A free license, like BSD, if followed, ensures that the unpaid creator of a free work is at least credited. Everyone using that work at the source code level sees the copyright notice with the author's name. The author has already given everyone the freedom to do anything with the code, except plagiarize it. AI is taking away the last thing from people who have shared everything else.
Why is plagiarism an issue? In school it's an issue because students won't learn well if they just copy everything, but outside of school, and especially for personal use, why should I care if I "plagiarize" or not? (Arguably AI doesn't even plagiarize, as it's not a one-to-one copy-paste of the code when making a new project.) The concept of plagiarism is as much a fiction as "intellectual" property. The only sort of property that actually exists is real and tangible.
For morons who have never created anything and just steal, theft is not an issue. Of course.
Creators who are ripped off care. IP is more logical than land ownership, since new things have been created, whereas no one created the land. Land is just stolen and defended.
> Why is plagiarism an issue?
For starters, because of the Western value of giving credit.
We have diseases named after people, never mind inventions and ideas.
Plagiarism is kick-out-of-school grade academic misconduct, whereby you are pretending that someone's work (and the ability it implies) is your own.
> The only sort of property that actually exists is real and tangible.
Remember, I'm talking about works that are free to redistribute, use and even modify. Or in other cases, that the users to whom a compiled work is distributed have access to the buildable source code.
The authors put their names on it, and terms which says that their notices are to be preserved when copies are made.
This isn't good enough for the Altmans and Amodeis of the world.
> it's an issue due to the effect that students won't learn well if they just copy everything
... and fraudulently obtain professional licensing, and use that to cause harm: medical malpractice, unsafe engineering.
It is fraud.
Because IP democratizes returns on the creative process.
Maybe it used to, but with companies like Disney lengthening copyright terms way beyond the original intention, and corporations patenting absurd things, it seems more a way to entrench power than any sort of democratization. I'm glad generative AI seems to be bypassing all this and actually democratizing returns on the creative process, by flagrantly violating the concept of IP.
In the case of BSD-like licenses, IP is applied in a way that discourages plagiarism, while giving all the practical freedoms to the users, including making proprietary products.
In the case of copyleft licenses like GPL, IP is applied in a way to ensure that users have the code.
These things are taken away when the code is laundered through AI.
Again, start talking to people outside the field of programming and ask them how they like it when their labor of passion is "democratized" by AI turning it into unattributable slurry.
I don't really care how they like it because it's not up to them how I use the tools I want to use. It's literally the same argument photographers faced 100 years ago and in another 100 years I guarantee no one will be talking about AI in the terms you are today.
Even today, in 2026, it is possible to use photography in ways that infringe copyright! You literally cannot just snap your shutter over anything whatsoever and call it yours!
No one started photographing paintings and declaring them free to use. If they did the lawsuits would leave a huge impact crater.
Photography started displacing painting as a form of portraiture, but displacing a technique is not the same thing as appropriating the work itself.
I don't see any issues with "appropriating" a work, especially if it's not a one-to-one copy, which AI does not produce (without some pretzel-level prompting), and especially with regard to visual media. (What even is appropriation in this case? Your example of photographers taking images of paintings is not the same as how AI training occurs.) In other words, training is and should be free and fair use.
> Any idiot can now prompt their way to the same software.
No they can't. They think they can, but they will still need to put in the elbow grease to get it done right.
But, in my case (also decades of experience), I have had to reconcile with the fact that I'll need to put down the quill pen, and learn to use a typewriter. The creativity, ideas, and obsession with Quality are still all mine, but the execution is something that I can delegate.
This.
LLMs don't always produce correct code - sometimes it's subtly wrong and it takes an expert to notice the mistake(s).
As an idiot, I am very aware that Claude can help me, but also very aware I am not an experienced SWE and continue to seek out their views.
I’m with you here.
I grew up without a mentor, and my understanding of software stalled at certain points. When I couldn't get a particular OS API to work, Google and Stack Overflow didn't exist, and I had no one around me to ask. I wrote programs for years by just working around it.
After decades writing software I have done my best to be a mentor to those new to the field. My specialty is the ability to help people understand the technology they’re using, I’ve helped juniors understand and fix linker errors, engineers understand ARP poisoning, high school kids debug their robots. I’ve really enjoyed giving back.
But today, pretty much anyone, except maybe a middle schooler, could type their problems into ChatGPT and get a more direct answer than I would be able to give. No one particularly needs mentorship as long as they know how to use an LLM correctly.
Today every single software engineer has an extremely smart and experienced mentor available to them 24/7. They don't have to meet them for coffee once a month to ask basic questions.
That said, I still feel strongly about mentorship though. It's just that you can spend your quality time with the busy person on higher-level things, like relationship building, rather than more basic questions.
How will this affect future generations of... well, anyone, when they have 24/7 access to an extremely smart mentor who will find a solution to pretty much any problem they might face?
You can't just offload all the hard things to the AI and let your brain waste away. There's a reason the brain is equated to a muscle: you have to actively use it to grow it (not physically in size, obviously).
I agree with you about using our brains. I honestly have no idea.
But I can tell you that, just like with most things in life, this is yet another area where we are increasingly getting to do just the things we WANT to do (like think about code or features and have it appear, pixel pushing, smoothing out the actual UX, porting to faster languages) and not have to do things most people don't want to do, like drudgery (writing tests, formatting code, refactoring manually, updating documentation, manually moving tickets around like a caveman). Or to use a non tech example, having to spend hours fixing word document formatting.
So we're getting more spoiled. For example, kids have never waited for a table at a restaurant for more than 20 mins (which most people used to do all the time before abundant food delivery or reservation systems). Not that we ever enjoyed it, but learning to be bored, learning to not just get instant gratification is something that's happening all over in life.
Now it's happening even with work. So I honestly don't know how it'll affect society.
Just because you have every instruction manual doesn't mean you can follow and perform the steps or have time to or can adapt to a real world situation.
"No one particularly needs mentorship as long as they know how to use an LLM correctly."
The "as long as they know how..." is doing a lot of work there.
I expect developers with mentors who help give them the grounding they need to ask questions will get there a whole lot faster than developers without.
I have this feeling as well. At one point I thought when I got older it might be nice to teach - Steve Wozniak apparently does. But, it doesn't feel like I can really add much. Students have infinite teachers on youtube, and now they have Gemini/Claude/ChatGPT which are amazing. Sure, today, maybe, I could see myself as mostly a chaperone in some class to once in a while help a student out with some issue but that possibility seems like it will be gone in 1 to 2 years.
It's not black and white. There are scales of complexity and innovation, and at the moment the LLMs are mostly good (with obvious caveats) at the lower end of the complexity scale, and arguably almost nowhere on the innovation scale.
If, as a principal engineer, you were performing basic work that can easily be replicated by an LLM, then you were wasted and mistasked.
Firstly, high-end engineers should be working on the hard work underlying advances in operating systems, compilers, databases, etc. Claude currently couldn't write competitive versions of Linux, GCC (as recently demonstrated), BigQuery, or Postgres.
Secondly, and probably more importantly, LLMs are good at doing work in fields already discovered and demonstrated by humans, but there's little evidence of them being able to make intuitive or innovative leaps forwards. (You can't just prompt Claude to "create a super-intelligent general AI"). To see the need for advances (in almost any field) and to make the leaps of innovation or understanding needed to achieve those advances still takes smart (+/- experienced) humans in 2026. And it's humans, not LLMs, that will make LLMs (or whatever comes after) better.
Thought experiment: imagine training a version of Claude, only all information (history, myriad research, tutorials, YouTube takes and videos, code for v1, v2, etc.) related to LLMs is removed from the training data. Then take that version and prompt it to create an LLM. What would happen?
Short answer: use your expertise on complex projects.
Story: I've been a dev for about 20 years. The first time I had exactly this feeling was when desktop UI was fading away in favor of HTML. I missed the beauty of C# WinForms controls with all their alignment and properties. My experience felt irrelevant. ASP.NET (a framework sold as "the web for backend developers") looked like an evil joke.
The next time was the rise of the cloud. Were all my lovingly crafted bash scripts and notes about Unix commands now irrelevant? This time, however, it wasn't as personal for me.
The next time: the fall of Scala as the primary language of big data and its replacement by Python. By then it felt routine.
Oh, and databases... how many times have I heard that the RDBMS is obsolete and everybody should use Mongo/Redis/ClickHouse?
So learn new things and carry on. Understanding how "obsolete" things work helps a lot in avoiding silly mistakes, especially when the world is literally reinventing the bicycle.
I echo another reply here, if anything my experience coding feels even more valuable now.
It was never about writing the code—anyone can do that, students in college, junior engineers…
Experience is being able to recognize crap code when you see it, and to spot blind alleys long before days or weeks are invested heading down them. Creating an elegant API, a well-structured (and well-organized) framework… Keeping it as simple as possible while still getting the job done. Designing the code base in a way that anticipates expansion…
I've never felt the least bit threatened by LLMs.
Now if management sees it differently and experienced engineers are losing their jobs to LLMs, that's a tragedy. (Myself, I just retired a few years ago, so I confess to no longer having a dog in this race.)
Sorry for the dumb question but how could you feel threatened by LLMs if you retired just a few years ago? Considering the hype started somewhere in 2022-2023.
You're right, as I say, I no longer have skin in the game.
Retired, I have continued to code, and have used Claude to vibe code a number of projects. Initially I did so out of curiosity as to how good LLMs are, and then to handle things like SwiftUI that I am hesitant to have to learn.
It's true then that I am not in a position of employment where I have to consider a performance review, pleasing my boss or impressing my coworkers. I don't doubt that would color my perception.
But speaking as someone who has used LLMs to code, while they impress me, again, I don't feel the threat. As others have pointed out in past threads here on HN, on blogs, LLMs feel like junior engineers. To be sure they have a lot of "facts" but they seem to lack… (thinking of a good word) insight? Foresight?
And this too is how I felt as I was aging out of my career and watched clever junior engineers come on board. The newness, like Swift, was easy for them. (They have no doubt rushed headlong into SwiftUI and mastered it.) Never, though, did I feel threatened by them.
The career itself, I have found, does in fact care little for "grey beards". I felt by age 50 I was being kind of… disregarded by the younger engineers. (It was too bad, I thought, because I had hoped that on my way out of the profession I might act more as mentor than coder. C'est la vie!)
But for all the new engineers' energy and eagerness, I was comfortable with my own sense of confidence and clarity that came from just having been around the block a few times.
Feel free to disregard my thoughts on LLMs and the degree to which they are threatening the industry. They may well be an existential threat. But, with junior engineers as also a kind of foil, I can only say that I still feel there is value in my experience and I don't disparage it.
and they only got really good like last December.
how would you suggest someone who just started their career moves ahead to build that “taste” for lean and elegant solutions? I am onboarding fresh grads onto my team and I see a tendency towards blindly implementing LLM generated code. I always tell people they are responsible for the code they push, so they should always research every line of code, their imported frameworks and generated solutions. They should be able to explain their choices (or the LLM’s). But I still fail to see how I can help people become this “new” brand of developer. Would be very happy to hear your thoughts or how other people are planning to tackle this. Thanks!
My "taste" (like perhaps all other "tastes") comes from experience. Cliche, I know.
When you have had to tackle dozens of frameworks/libraries/API over the years, you get to where you find you like this one, dislike that one.
Get/Set, Get/Set… The symmetry is good…
Calling convention is to pass a dictionary: all the params are keys. Extensible, sure, but not very self-documenting, kind of baroque?
An API that is almost entirely call-backs. Hard to wrap your head around, but seems to be pretty flexible… How better to write a parser API anyway?
(You get the idea.)
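The three styles above can be sketched side by side. This is a hypothetical Python illustration with invented names (`Widget`, `draw`, `parse`), not code from any real framework:

```python
# Style 1: symmetric get/set accessors. Predictable and self-documenting.
class Widget:
    def __init__(self):
        self._width = 0

    def get_width(self):
        return self._width

    def set_width(self, w):
        self._width = w

# Style 2: everything through a dictionary of params. Extensible, but the
# valid keys are invisible at the call site -- "kind of baroque".
def draw(params):
    return f"{params.get('shape', 'box')}@{params.get('x', 0)},{params.get('y', 0)}"

# Style 3: callback-driven. Harder to wrap your head around, but flexible;
# a natural fit for parsers that report events as they find them.
def parse(text, on_word):
    for word in text.split():
        on_word(word)

w = Widget()
w.set_width(42)
print(w.get_width())                       # 42
print(draw({"shape": "circle", "x": 1}))   # circle@1,0
words = []
parse("taste comes from experience", words.append)
print(words)                               # ['taste', 'comes', 'from', 'experience']
```

None of these is universally right; noticing which one fits a given problem is exactly the "taste" the parent describes.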
And as you design apps/frameworks yourself, then have to go through several cycles of adding features, refactoring, you start to think differently about structuring apps/frameworks that make the inevitable future work easier. Perhaps you break the features of a monolithic app into libraries/services…
None of this is novel; it's just that doing enough of it (putting in the sweat and hours, screwing up a number of times) is where "taste" (insight?) comes from.
It's no different from anything else.
Perhaps the best way to accelerate the above though is to give a junior dev ownership of an app (or if that is too big of a bite, then a piece of a thing).
"We need an image cache," you say to them. And then it's theirs.
They whiteboard it, they prototype it, they write it, they fix the bugs, they maintain it, they extend it. If they have to rewrite it a few times over the course of its lifetime (until it moves into maintenance mode), that's fine. It's exactly how they'll learn.
But it takes time.
This answer probably feels unsatisfying and I agree. But some things actually need repetition and ongoing effort. One of my favorite quotes is from Ira Glass about this very topic.
> Nobody tells this to people who are beginners, and I really wish somebody had told this to me.
> All of us who do creative work, we get into it because we have good taste. But it's like there is this gap. For the first couple years that you're making stuff, what you're making isn't so good. It’s not that great. It’s trying to be good, it has ambition to be good, but it’s not that good.
> But your taste, the thing that got you into the game, is still killer. And your taste is good enough that you can tell that what you're making is kind of a disappointment to you. A lot of people never get past that phase. They quit.
> Everybody I know who does interesting, creative work they went through years where they had really good taste and they could tell that what they were making wasn't as good as they wanted it to be. They knew it fell short. Everybody goes through that.
> And if you are just starting out or if you are still in this phase, you gotta know its normal and the most important thing you can do is do a lot of work. Do a huge volume of work. Put yourself on a deadline so that every week or every month you know you're going to finish one story. It is only by going through a volume of work that you're going to catch up and close that gap. And the work you're making will be as good as your ambitions.
> I took longer to figure out how to do this than anyone I’ve ever met. It takes awhile. It’s gonna take you a while. It’s normal to take a while. You just have to fight your way through that.
> —Ira Glass
As a Principal SWE, who has done his fair share of big stuff.
I'm excited to work with AI. Why? Because it magnifies the thing I do well: Make technical decisions. Coding is ONE place I do that, but architecture, debugging etc. All use that same skill. Making good technical decisions.
And if you can make good choices, AI is a MEGA force multiplier. You just have to be willing to let go of the reins a hair.
As a self-teaching beginner,* this is where I find AI a bit limiting. When I ask ChatGPT questions about code, it is always eager to offer up a solution, but it often provides inappropriate responses that don't take into account the full context of a project or task. While it understands what good structure and architecture are, it lacks the awareness to apply that good design to the questions I have, and I don't have the experience or skill set to ask those questions myself. It often suggests solutions (I tend to ask it for suggestions rather than full code, so I can work things out myself) that have drawbacks I only discover down the line.
Any suggestions to overcome this deficit in design experience? My best guess is to read some texts on code design or alternatively get a job at a place to learn design in practice. Mainly learning javascript and web app development at the moment.
*Who has had a career in a previous field, and doesn't necessarily think that learning programming will lead to another career (and is okay with that).
I can't summarize 40~ YOE of programming easily. (30+ professional)
I can tell you: Your problems are a layer higher than you think.
Coding, architecture, etc. get the face time. But process and discipline are where the money is made and lost with AI.
To give a minor example: My first attempt at a major project with AI failed HORRIBLY. But I stepped back and figured out why. What short-comings did my approach have, what short-comings did the AI have. Root Cause Analysis.
Next day I sat down with the AI and developed a PLAN of what to do. Yes, a day spent on a plan.
Then we executed the plan (or rather it did, and I kept it on track and fixed problems in the plan as things happened). On the third day I'd completed a VERY complex task. I mean STUPIDLY complex: something where I knew WHAT I wanted to do, and roughly how, but not the exact details, and not at the level needed to implement it. I'm sure 1-2 weeks of research could have taught me. Or I could let the AI do it.
... And that formed my style of working with AI.
If you need a mentor pop in the Svalboard discord, and join #sval-dev. You should be able to figure out who I am.
I'm surprised that as a principal engineer, you view your greatest skill set as your expertise in programming. While that is certainly an enormous asset, I have never met a principal engineer that hadn't also mastered how to work within the organization to align the right resources to achieve big goals. Working with execs and line managers and engineers directly to bring people together to chase something complex and difficult: that skill is not going to be replaced by LLMs and remains extremely valuable.
I'm not a principal, but I would wonder: if AI increases every "coder's" productivity, say, 5x doesn't that replace some teams with 1 person, meaning less "alignment" necessary? Some whole org layers may disappear. Soft skills become less relevant when there are fewer people to interface with.
Even regarding "chase something complex and difficult", there are currently only so many needs for that, so I think any given person is justified fearing they won't be picked. It may be a decade between AI eating all the CRUD work from principal down, and when it expands the next generation of complex work on robotics or whatever.
Also, to speak on something I'm even less qualified – the economy feels weak, so I don't have a lot of hope for either businesses or entrepreneurs to say "Let's just start new lines of business now that one person can do what used to take a whole team." The businesses are going to pocket the safe extra profits, and too many entrepreneurs are not going to find a foothold regardless how fast they can code.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
My experience is the opposite. Those with a passion for the field and the ability to dig deeply into systems are really excited right now (literally all that power just waiting to be guided to do good...and oh does it need guidance!). Those who were just going through the motions and punching a clock are pretty unmotivated and getting ready to exit.
Sometimes I dream about being laid off from my FAANG job so I'd have some time to use this power in more interesting ways than I do at work (although I already get to use it in fairly interesting ways in my job).
I wouldn’t say the pessimists fall into that category.
In my experience they are mostly the subset of engineers who enjoyed coding in and of itself, in some cases without concern for the end product.
I consider myself very good at writing software. I built and shipped many projects. I built systems from zero. Embedded, distributed, SaaS- you name it.
I'm having a lot of fun with AI. Any idiot can't prompt their way to the same software I can write. Not yet anyways.
With all due respect. If _any idiot_ can prompt their way to the _same_ software you’d have written, and your primary value proposition is to churn out code, then you’re… a bit of an outlier when it comes to principal engineers.
It's more common than you might think.
Good engineers are way more important than they’ve ever been and the job market tells the story. Engineering job posts are up 10% year over year. The work is changing but that’s what happens when a new technology wave comes ashore. Don’t give up, ride the new wave. You’re uniquely qualified.
I am sorry you feel that way but I feel professionally strongly insulted by your statement.
Specifically the implication high LLM affinity implies low professional competence.
"My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM."
Strong disagree.
I've earned my wings: 5 years of realtime rendering on world-class teams, 13 years in AEC CAD developing software to build the world around us. In the past two years I designed and architected a complex modeling component, and led the initial productization and rendering efforts, for my employer's map offering.
Now I've managed to build in my freetime the easy-to-use consumer/hobbyist CAD application I always wanted - in two years[0].
The hard parts, the ones that are novel and value-adding, are specific, complex, and hand-written. But the amount of ungodly boilerplate needed to implement the vision would have required either a) a team and funding or b) 10 years.
It's still raw and alpha and it's coming together. Would have been totally impossible without Claude, Codex and Cursor.
I do agree I'm not an expert in several of the non-core technologies used (WebView2 for .NET, for example, or XAML). But I don't have to be. They are commodity components, architected for their specific slot, replaceable and rewritable as needed.
As an example, a component I _had_ professional competence in 15 years ago, OpenGL, I don't need to re-learn. I can quickly spec the render passes, stencil states, shader techniques, etc., and have the LLM generate most of that code in place. If you select old, decades-old technologies and techniques and know what you want, the output is very usable most of the time (20-year-old realtime rendering is practically timeless and good enough for many, many things).
[0] https://www.adashape.com/
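As a rough illustration of what "spec the render passes" can mean in practice, here's a hypothetical, self-contained Python sketch of the kind of declarative pass description one might hand to an LLM to implement. All names (`RenderPass`, the pass and shader labels) are invented; no real OpenGL binding is used:

```python
from dataclasses import dataclass, field

# A declarative spec of a classic multi-pass pipeline: the human writes
# this; the LLM fills in the well-trodden GL boilerplate for each pass.
@dataclass
class RenderPass:
    name: str
    shader: str                  # named shader technique to bind
    stencil_test: bool = False   # whether the stencil test is enabled
    stencil_ref: int = 0         # reference value for the stencil test
    inputs: list = field(default_factory=list)  # outputs of earlier passes

pipeline = [
    RenderPass("depth_prepass", shader="depth_only"),
    RenderPass("outline_mask", shader="flat", stencil_test=True, stencil_ref=1),
    RenderPass("shading", shader="blinn_phong", inputs=["depth_prepass"]),
]

# Sanity check: every pass input must be produced by an earlier pass.
seen = set()
for p in pipeline:
    assert all(i in seen for i in p.inputs), f"{p.name} uses an undefined input"
    seen.add(p.name)

print([p.name for p in pipeline])  # ['depth_prepass', 'outline_mask', 'shading']
```

The point is that the human's expertise lives in the spec and its constraints; the generated GL code underneath is the commodity part.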
Why would I need this tool if I can just say "Claude, make me a CAD drawing of XYZ"?
Not trying to be rude, just generating some empathy for the OP's situation, which I think was missed: Like them, there is something you are passionate about that there is no longer really a point to. You could argue "but people will need to use my tool to generate really _good_ CAD drawings" but how much marginal value does that create over getting a "good enough" one in 2 minutes from Claude?
I feel sorry for bringing this up, but I think you might have missed how the thing that makes this possible makes it unnecessary.
No need to be sorry - you raise an excellent point!
Note my critique was of labeling all of us LLM enthusiasts, by association, "incompetents", which I believe is an incorrect assumption.
The point raised that more people can now code I think was a correct one though. I think that’s a net benefit.
Let me be brief. There are two topics here: CAD & AI, and AI & society, which I think is the underlying point we are discussing.
I appreciate that you made a domain-specific example, but like _all_ AI workflows, it does not really hold up unless one is extremely specific about what the workflow is.
First of all, if someone is making a CAD tool just for drawings, that's really not a segment. All 3D design tools target a specific content workflow, with a specific domain model. Drawings are one possible output from this domain model, just like the on-screen 3D presentation or a 3MF file you get for export.
Whatever the LLM's competency level, it does not come with its own domain model. Real people want to configure the models they create. This means there needs to be a domain model hooked up to the LLM to have a stable model with specific editable components.
So if you are prompting a model, you are still better off prompting the domain model in a real CAD package.
So I don’t think CAD packages will die.
Second: I'm mainly trying to serve _my_ need (which I believe is shared by others). My need is to design 3D models with minimum effort, in an environment that has perfect undo, perfect booleans, versioning, snapshotting, and intuitive parametricity. This package did not exist in the market before.
Will it have traction? I would expect there are lot of human users that want to create models themselves. Computer chess did not kill chess etc.
To be super specific, there is a clear wedge in the market between Tinkercad and Fusion360 for an affordable desktop offering with the above features.
I do realize my market thesis is just a hypothesis at this point. Which is fine: it's a passion project. I hope it will be useful for others, but if not, at least I will have the tool I want.
I’m mainly excited about the possibility of being able to ship to test my market hypothesis.
Without LLM tools I would not be able to ship.
Regarding society:
I believe we are in a normal destructive phase of the innovation cycle. Machine looms, weavers, Luddites, new forms of labour, etc.
Regarding living standards, the main worry is: can "normal" people exist above poverty?
I guess the markets will want to have consumers in the future so either there will be new jobs or some form of basic income.
It’s possible I’m wrong as well.
I have no idea if democracies will survive.
You don't know what you don't know.
Playing with Claude, if you tell it to do something, it'll produce something. Sometimes its output is OK, sometimes it's not.
I find I need to iterate with Claude: tell it no, tell it how to improve its solution or do something a different way. It's kind of like speed-running iteration over my ideas without spending a few hours doing it manually, writing lots of code and then deleting it to end up with my final solution.
If I had no prior coding knowledge I'd go with whatever the LLM gave me and end up with poor-quality applications.
Knowing how to code still gives you the advantage when using an LLM. Saying that, I'm pessimistic about what my future holds as an older software engineer. I'm starting to find that age/experience is an issue when an employer can pay someone with less experience to churn out code with prompts, since much of the time the industry lives by "it's good enough".
> Any idiot can now prompt their way to the same software.
You sound quite jaded. The people I see struggling _the most_ at prompting are people who have not learned to write elegantly. HOWEVER, a huge boon is that if you're a non-native English speaker and that got in your way before, you can now prompt in your native language. Chinese speakers in particular have an advantage since you use fewer tokens to say the same thing in a lot of situations.
> Talk about a rug pull!
Talk to product managers and people who write requirements for a living. A PM at MSFT spoke to me today about how panicked he and other PMs are right now. Smart senior engineers are absorbing the job responsibilities of multiple people around them since fewer layers of communication are needed to get the same results.
I see this at my workplace. The PMs and BAs are now completely redundant since you can prompt your way to decent specs with the right access and setup.
IMHO any idiot can create a piece of crap. It takes experience to create good software. Use your experience, Luke! Now you have a team of programmers to create whatever you fancy. It's been great for me, but then I have only been programming C++ for 36 years.
I might be wrong but this sounds like an ego issue more than anything. Twice you berated less skilled programmers. I’m skilled as well and it did sting when I realized that a relatively new technology could beat me. But there’s so much more to it, especially PMs. PMs find big high value problems and solve them. The coding should be the easy part. If your coding skills are such a big part of your identity and you enjoy the feeling of superiority, a good therapist (chatgpt maybe lol) could be useful.
Same here, although hopefully won't be retiring soon.
What's missing from this is that iconic phrase that all the AI fans love to use: "I'm just having fun!"
This AI craze reminds me of a friend. He was always artistic, but because of the way life goes he never really had the opportunity to actively pursue art and drawing. When AI first came out, and specifically Midjourney, he was super excited about it and used it to make tons and tons of pictures of everything his mind could think of. After a while, though, the excitement waned and he realized he hadn't actually learned anything at all. At that point he decided to find the time to practice drawing, to be able to make things with his own skills rather than via some chip on the other side of the world, and he has greatly improved over the past couple of years.
So, AI can certainly help create all the "fun!!!" projects for people who just want to see the end result, but in the end would they actually learn anything?
I mean. Sounds like the guy had existing long term goals, needed to overcome an activation threshold, and used AI as a catalyst to just get started. Seems like, behaviorally, AI was pivotal for him to learn things, even if the things he learned came from elsewhere / his own effort.
I suppose, yes, AI was like a kickstart. But the point is: he didn't just stick to AI. He realized that in terms of skill and fulfillment it's a dead end, because you neither learn anything nor create anything yourself.
I feel the same way. But this is a new economy now, software is cheap, and regarding the skill and fulfillment you derive writing it yourself, to quote Chris Farley: "that and a nickel will get you a nice hot cup of JACK SQUAT!!!"
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
My greatest frustration with AI tools is along a similar line. I’ve found that people I work with who are mediocre use it constantly to sub in for real work. A new project comes in? Great, let me feed it to Copilot and send the output to the team to review. Look, I contributed!
When it comes time to meet with customers let’s show them an AI generated application rather than take the time to understand what their existing processes are.
There’s a person on my team who is more senior than I am and should be able to operate at a higher level than I can who routinely starts things in an AI tool but then asks me to take over when things get too technical.
In general I feel it's allowed organizations to promote mediocrity. There are just so many distortions right now, but I do think those days are numbered; there will be a reversion to the mean and teams will require technical excellence again.
Yes, the LLM can write it. No, the LLM cannot architect a complex system and weave it all together into a functioning, workable, tested whole. I have a 400-table schema networked together with relationships, backrefs, and services, all well tested; nobody could vibe-code their way to what I've built. That kind of software requires someone like yourself to steer the LLM.
Even as a principal engineer, there is an infinite number of things you don't know.
Suppose you get out of your comfort zone to do something entirely new; AI will be much more helpful for you than it is for people who spent years developing their skills.
AI is the great equalizer.
Very hard to feel sorry for you when countless professions experienced the same in the past, only they were poor working-class folks and not overpaid software engineers at FAANG.
It's also a very egocentric and pessimistic way to look at things. Humankind is much better off when anyone can produce software, and skilled experts will always be needed, just maybe with a slightly different skillset.
I understand your feelings. You spent years working hard to learn and master a complex craft, and now seeing that work feel almost irrelevant because of AI can be deeply unsettling.
However, this can also be an opportunity to gain some understanding about our nature and our minds. Through that understanding, we can free ourselves from suffering, find joy, and embrace life and the present moment as it is.
I am just finishing the book The Power of Now by Eckhart Tolle, and your comment made me think about what is explained in it. Tolle talks about how much of our suffering comes from how deeply we (understandably) tie our core identity and self-worth to our external skills, our past achievements, and our status among peers.
He explains that our minds construct an ego, with which we identify. To exist, this ego needs to create and constantly feed an image of itself based on our past experiences and achievements. Normally we do this out of fear, in an attempt to protect ourselves, but the book explains that this never works. We actually build more suffering by identifying with our mind-constructed ego. Instead of living in the present and accepting the world as it is, we live in the past and resist reality in order to constantly feed an ego that feels menaced.
The deep expertise you built is real, but your identity is so much more than just being a 'principal engineer'. Your real self is not the mind-constructed ego or the image you built of yourself, and you don't need to identify with it.
The book also explores the Buddhist concept that all things are impermanent, and by clinging to them we are bound to suffer. We need to accept that things come and go, and live in the present moment without being attached to things that are by their nature impermanent.
I suggest you might take this distress you are feeling right now as an opportunity to look at what is hurting inside you, and disidentify yourself from your ego. It may bring you joy in your life—I am trying to learn this myself!
I'm reading The Compassionate Mind by Paul Gilbert and I find it shares many similar ideas. Also I've been interested by Buddhist concepts like impermanency for a while.
While I think rationally what you said is good and makes sense, at the same time it feels like it says you should forget your roots and be this impermanent being existing in the present and only the present. I value everything about my life, the past, my role models when I was a kid, my past and current skills, all friends from all ages, my whole path essentially. When considering current choices I have to make, I feel more drawn to think "What has been my path and values previously, and what makes sense now?" instead of forgetting the past and my ego and just hustling with the $CURRENT technology.
At least that's how I have thought about my ego when I have tried to approach it with topics like these. It might allow me to make more money in the present if I just disidentified with it, but that thought legitimately feels horrifying because it would mean devaluing my roots.
Interested to hear your take on this.
I think that's right when you say: "What has been my path and values previously, and what makes sense now?" That is actually a sensible way to approach the present moment.
Disidentifying from your ego doesn't mean you have to act like a stateless robot with amnesia. Your past experiences, your role models, and your skills are still there for you to recall; they are tools that help guide your decisions. Disidentifying just means you don't let the mind-constructed image of those things define who you are. It means you don't have to constantly mull over the past, and you don't feel threatened when the things you valued in the past end or change.
However, I was really struck by your comment that disidentifying would feel horrifying because it would mean "devaluing your roots" to make more money. I am wondering if this is what you really think.
Imagine if letting go of that specific past identity led you to a truly marvelous opportunity in the present: not just more money, but working with wonderful people, doing engaging things, and being genuinely happy. Would that really be horrifying just because it didn't perfectly align with your roots? Probably not.
I suspect what you actually find horrifying isn't "devaluing your roots," but rather the idea of selling out. The real nightmare is getting a well-paid but completely soulless job where you are unhappy, working on things you don't care about, or being treated like a disposable cog who just takes orders.
Just my two cents, I am no spiritual guide!
> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued.
That remains to be seen. There's a huge difference between an experienced engineer using LLMs in a controlled way, reviewing their code, verifying security, and making sure the architecture makes sense, and a random person vibecoding a little app - at least for now.
Maybe that will change in a year or two or five or never, but today LLMs don't devalue expert knowledge. If anything, LLMs allow expert programmers to increase productivity at the same level of quality, which makes them even more valuable compared to entry-level programmers than they were before.
I am a principal engineer too. In the last 5 months I have been working on a project using the latest LLMs. 5 years ago that project would have required 30 engineers. Now I am alone but need at least 5 more months to have an MVP. You are just not working on projects that are complex and difficult enough. There are so many projects that I have in mind that feel within reach and I would have never considered 5 years ago.
> As a principal engineer I feel completely let down. I've spent decades building up and accumulating expert knowledge and now that has been massively devalued. Any idiot can now prompt their way to the same software. I feel depressed and very unmotivated and expect to retire soon. Talk about a rug pull!
Really?
The vibe coders are running into a dark forest with a bunch of lobsters (OpenClaw), getting lost and confused in their own tech debt, and you're saying they can prompt their way to the same software?
Someone just ended up wiping their entire production database with Claude, and you believe your experience counts for nothing to companies that need stable infrastructure and predictability.
Cognitive debt is a real thing and being unable to read / write code that is broken is going to be an increasing problem which experienced engineers can solve.
Do not fall for the AI agent hype.
> Do not fall for the AI agent hype.
Problem is, it's the people in higher positions who should be aware of that, except they don't care. All they see is how much more profit the company can make if it reduces the workforce.
Plenty of engineers do realize that AI is not some magical solution to everything - but the money and hype tends to overshadow cooler heads on HN.
This is exactly it. The juniors and mids on my team produce junior- and mid-quality vibe code.
Too-generic prompts, unaccounted-for edge cases, inattentive code reviews...
Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning. Capabilities without these foundations produce fragile, short-lived results. Only those who anchor their work in proper abstractions are actually engineering, no matter who’s writing the code.
I feel it is more about being disinterested than about being good. The ones who were not interested (whether good or bad) and were trapped in a job are liberated and happy to see it automated.
The ones who are frustrated are the ones who were interested in doing it (whether good or bad) but are being told by everyone that it is not worth doing anymore.
You're getting it backwards. Anyone can get to something that looks alright in a browser... until you actually click something and it fails spectacularly, leaks secrets, doesn't scale beyond 10 users, and is a swamp of a codebase that prevents clean ongoing extension. That's a hard wall for non-techies: suddenly the magical LLM stops producing results and makes things worse.
All this senior engineering experience is a critical advantage in these new times: if you are that experienced, you implicitly phrase things slightly differently and circumvent these showstoppers without even thinking. You don't even need to read the code at all; a glimpse at the folder and a scroll through a few meters of files full of inline "pragmatic" snippets and you know it's wrong without even stepping through it, even if the autogenerated vanity unit tests say all green.
Don't feel let down. It's a bit like when Google sprang into existence: everyone had access and could find stuff, but knowing how to search well is an art most people don't have even today, and it makes a dramatic difference in everyday usage. That's amplified now by AI search results, which are often just convincing nonsense that most people cannot spot. That intuitive feel from hard-won experience about what is "wrong", even without an instant answer for what would be "right", is more and more the differentiator.
Anyone can force their vibe-coded app into some shape that's sufficient for their own daily use, since they're used to avoiding the pitfalls they know are there in the tool they created. But as soon as there's some kind of scaling (scope, users, revenue, ...) involved, true experts are needed.
Even the new agent tools like Claude for X products at the end perform dramatically different in the hands of someone who knows the domain in depth.
I don't find the same. Like you, I'm a principal/CTO-level engineer, and there's a world of difference between simplistic prompt/vibe coding and building a properly architected, performant, maintainable system with agentic coding.
> Any idiot can now prompt their way to the same software.
Not only would it be good if it were true, it is also not true. Good programmers can build things, for the most part, because they know what to build and have a general architectural idea of what they are going to build. Without that, you are like the average person in the 90s with Corel Draw in their hands, or the average person with an image diffusion model today: the output will be terrible for lack of taste and ideas.
Same level of engineer here - I feel that the importance of expertise has only increased, just that the language has changed. Think about the engineer who was an expert in Cobol and Fortran but didn't catch the C++ / Java wave. What would you say to them?
LLMs goof up, hallucinate, make many mistakes - especially in design or architecting phase. That's where the experience truly shines.
Plus, it lets you integrate things that you aren't good at (UI for me).
Nah - I've also spent decades trying to become the best software developer I can and now it is giving me enormous power. What used to take me 5 days is now taking me a day, and my output is now higher quality. I now finish things properly with the docs, and the nooks and crannies before moving on.
What used to take incompetent developers 5 days - it is still taking them 5 days.
I fancy myself pretty good at writing software, and here's my path in:
All the tools I passed up building earlier in my career because they were too laborious to build, are now quite easy to bang out with Claude Code and, say, an hour of careful spec writing...
The best programmers I know are the ones most excited about it.
The mediocre programmers who are toxic gatekeepers seem to be the ones most upset by it.
definitely. With AI I can stop working on the painful tasks and spend much more time on things that matter most to me: building the right abstractions, thinking about the maths, talking to the customer...
But TBH, I have been a bit "shocked" by AI as well. It's much more troubling than the coming of the internet was. Still, having worked with AI extensively for the past 1-2 years, I'm fairly confident these models miss the important things: how to build abstractions that satisfy the non-code constraints (ease of maintenance, explainability to others, etc.).
And the way it goes at the moment shows no sign of progress in that area (throwing more agents at a problem will not help).
Yeah right. Only a mediocre person like Rob Pike would be a toxic gatekeeper.
The reality is that in the theft of Chardet at least 2000 people supported Mark Pilgrim and almost no one supported the three programmers who constantly blog about AI and try to reprogram people.
Incidentally, everyone who unironically uses the word "gatekeeper" is mediocre.
I don't understand this sentiment at all.
For me, it feels more like a way to integrate search results immediately into my code. Did you also feel threatened by Stack Overflow?
If you actually try it you'll find it's a multiplier of insight and knowledge.
As a senior engineer, if your value-add was "accumulated expert knowledge", then yes, you are in a bad place.
If instead it was building and delivering products and business value (good judgement, coordination and communication skills, intuition, etc.), then you are now far more leveraged than you ever were, and it has never been greater.
I think "accumulated expert knowledge" was never really useful if an organisation could just replace that person with a wiki.
You summed up my feelings pretty well, thanks for this counterpoint to usual comments in HN
> Any idiot can now prompt their way to the same software.
They simply can't in my experience. Most people cannot prompt their way out of a wet paper sack. The HN community is bathed in thoughtful, high quality writing 24/7/365, so I could see how a perception to the contrary might develop.
That's what progress looks like! We need less to produce more, and the "less" includes less skill and human capital.
For me, LLMs just help a lot with overcoming writer's block and other ADHD related issues.
For me this is a painting vs photography thing
Painting used to be the main way to make portraits, and photography massively democratized this activity. Now everyone can have as many portraits as they want
Photography became something so much larger
Painting didn't disappear though
Compared to painting, software allows you to solve the problem once, then distribute the solution to the problem basically for free.
Market frictions cause the problem to be solved multiple times.
LLMs learn those solution patterns and apply them, devaluing coming up with solutions in the first place.
Well, slightly different take: it's like telling an artist the world doesn't need another song about love; those already exist and can be re-heard as needed. Put more sharply: a CRM or TODO list is a solved problem in theory, right? There are tons of solutions out there, even free ones. Still, look at what people are building and selling: CRM and TODO-list variations. Because, in fact, it's not solved, and every solution carries tradeoffs that don't fit some people.
Based on your comment you’re probably not a very good principal engineer ;)
Hence, you are back in the group of those who should benefit from LLMs. Following your own logic :)
Ps: please don’t take it seriously
> Any idiot can now prompt their way to the same software.
Well, isn't this beside where the main value of software actually lies? It's not about prompting a one-shot app. Sure, some millionaires will make an app super successful by coincidence (Flappy Bird, e.g.), but in most cases software and IT engineering is about the context, integration, processes, maintenance, future development, etc.
So actually you are in perfect shape?
And no worries: the ones who weren't good at writing code will now fail because of administration/uptime/maintenance/support. They will just fail one step later.
I find fun in using opencode and Claude to create projects but I can't find the energy to run the project or read the code.
Watching the program do stuff is more enjoyable than using or looking at the stuff produced.
But it doesn't produce code that looks or is designed the way I would normally. And it can't do the difficult or novel things.
> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued. Any idiot can now prompt their way to the same software.
Do you like the craft of programming more than the outcomes? Now you are in a better position than ever to achieve things.
No worries. True, you need to learn new skills to work properly with Claude. However, 30 yrs of coding experience come in handy to quickly detect it is going in the wrong direction. Especially on an architectural level you need to guide it.
Embrace
I love it. I can't stand this sentiment and this type of pompous technologist. You are why software mostly sucks. You have no imagination. Hopefully the models completely democratize the limited, extraordinarily overvalued skill set you've coasted on for the last 20 years. We will see who is the idiot going forward.
Spoken like a loser and a thief.
I urge you to actually try these tools. You will very quickly realize you have nothing to worry about.
In the hands of a knowledgeable engineer these tools can save a lot of drudge work because you have the experience to spot when they’re going off the rails.
Now imagine someone who doesn’t have the experience, and is not able to correct where necessary. Do you really think that’s going to end well?
Yeah, even just now I had to go and correct some issues with LLM output that I only knew were an issue because I have extensive experience with that domain. If I didn't have that I would not have caught it and it would have been a major issue down the line.
LLM's remove much of the drudgery of programming that we unfortunately sort of did to ourselves collectively.
I review PRs daily, and people are pushing changes that have basic problems, not to mention more serious flaws. The amount of code an engineer can produce is higher, but it's also less thought through.
There will be more code with lower quality. If you want to be valued for your expertise, you need to find niches where quality has to stay high. In a lot of the SaaS-world, most products do not require perfection, so more slop is acceptable.
Or you can accept the slop, grind out however more years you need to retire, and in the meanwhile find some new passion.
CC is not nearly that good. It may never be. It's an amplifier not a replacer.
On the plus side, you're retiring soon... imagine if you were a graduate today.
At least they're young enough to re-train into something else if they want. It's the mid-career devs who are flailing at the moment.
I thought this was parody until the last sentence.
I think that the biggest difference is between people who mostly enjoy the act of programming (carefully craft beautiful code; you read and enjoyed "Programming Pearls" and love SICP), vs the people who enjoy having the code done, well structured and working, and mostly see the act of writing it as an annoying distraction.
I've been programming for 40 years, and I've been on both sides. I love how easy it is to be in the flow when writing something that stretches my abilities in Common Lisp, and I thoroughly enjoy the act of programming then. But coding a frontend in React, or yet another set of Python endpoints, is just necessary toil to a desired endpoint.
I would argue that people like you are now in the perfect position to help drive what software needs writing, because you understand the landscape. You won't be the one typing, but you can still be the one architecting it at a much higher level. I've found enjoyment and solace in this.
I think it’s important for you to understand that there were always way more people who loved programming than were able to work professionally as high-level coders. Sure, if you spent most of your working life writing code, you’d be very proficient. But for many, many others, they haven’t been able to spend the time developing those muscles. Modern LLMs really are a joyful experience for people who enjoy software creation but haven’t had the 10,000 hours.
No offense, but you sound more like a "principal coder" than a principal engineer. In many domains and orgs, most principal engineers already spend most of their time not coding. But engineering still takes up much or most of their time.
I felt what you describe feeling. But it lasted like a week in December. Otherwise there’s still tons of stuff to build and my teams need me to design the systems and review their designs. And their prompt machine is not replacing my good sense. There’s plenty of engineering to do, even if the coding writes itself.
I make documentation and diagrams for myself rather than writing code much of the time
> Any idiot can now prompt their way to the same software
If you really think it's the reality, then your expert knowledge is not that good to begin with.
From my point of view, having the llm as co-pilot is like having the junior engineer that the team would never justify the budget to hire. I get quite a bit more done when I can assign the tool a task to work on, work on something else in the meantime, and come back in 5 or 10 minutes to check on its progress and make adjustments.
There are many aspects of software engineering that are fun, but the pure mechanical part gets old quickly; there are only so many times you can type "emplace" and feel fulfilled. I'm finding that the co-pilot is extremely good at that part.
I'm not sure why you feel devalued or let down, LLM code is a joke and will be a thing of the past after everyone has had their production environment trashed for the nth time by "AI."
Completely the opposite experience here! I am a tech lead with decades of experience with various programming languages.
When it comes to producing code with an llm, most noobs get stuck producing spaghetti and rolling over. It is so bad that I have to go prompt-fix their randomly generated architecture, de-duplicate, vectorize and simplify.
If they lack domain knowledge on top of being a noob, it is a complete disaster. I saw LLM code pick a bad default (0) for a denominator and then "fix" that by replacing it with epsilon.
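For what it's worth, that failure mode is easy to picture. A hypothetical sketch (my own reconstruction, not the actual code in question):

```python
# Hypothetical reconstruction of the anti-pattern described above.

def ratio(numerator, denominator=0):
    # Bad default: calling ratio(x) with one argument always divides by zero.
    return numerator / denominator

# The LLM's "fix": swap the zero for a tiny epsilon instead of asking
# why a denominator should have a default at all.
EPS = 1e-9

def ratio_fixed(numerator, denominator=EPS):
    # No crash anymore, but ratio_fixed(1.0) silently returns a number
    # on the order of 1e9, which is arguably worse than a loud error.
    return numerator / denominator
```

The first version at least fails loudly; the "fixed" version turns a programming mistake into a silently absurd result, which is exactly the kind of thing domain knowledge catches.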
It isn't the end, it is a new beginning. And I'm excited.
I think you've got this backwards!
I've been working with computers since an Apple ][+ landed in our living room in the early 80s.
My perspective on what AI can do for me and for everyone has shifted dramatically in the last few weeks. The most recent models are amazing and are equipping me to take on tasks that I just didn't have the time or energy for. But I have the knowledge and experience to direct them.
I haven't been this enthused about the possibilities in a long time.
This is a huge adjustment, no doubt. But I think if I can learn to direct these tools better, I am going to get a lot done. Way more than I ever thought possible. And this is still early days!
Just incredible stuff.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
I consider myself to have been a 'pretty good' programmer in my heyday. Think 'assembly for speed improvements' good.
Then came the time of 'a new framework for everything, relearn a new paradigm every other week. No need to understand the x % 2 == 0 if we can just npm an .iseven()' era ... which completely destroyed my motivation to even start a new project.
LLMs cut the boilerplate away for me. I've been back building software again. And that's good.
Indeed, and I've noticed companies now focusing on hiring co-ops, paying them peanuts to just use AI, plus maybe one senior and one wrangler (engineering/project manager). That's basically what the new teams look like from what I've seen.
Really? I love LLMs because I can't stand the process of taking the model in my brain and putting it in a file. Flow State is so hard for me to hit these days.
So now I spec it out, feed it to an LLM, and monitor it while having a cup of tea. If it goes off the rails (it usually does) I redirect it. Way better than banging it out by hand.
It's only going to get harder to achieve if you keep letting your skills and reasoning abilities rot from LLM reliance.
It is weird because I am the opposite. The symbols were never the objective for me but instead how they all fit together.
Now I am like a perfect weapon because I have the wisdom to know what I want to build and I don't have to translate it to an army of senior engineers. I just have Github Copilot implement it directly.
> Any idiot can now prompt their way to the same software
I have been thinking about the "same software" part.
I remember when Sonnet 4.5 came out, I commented then as well that I just wanted AI to stop developing further, since the more it develops, the more harm than benefit it does to the economy and to engineers in totality.
It was good enough to make scripts; I could make random one-off projects, something I couldn't do previously, though I still copy-pasted the code, ran the commands myself, and chose the language and everything. At that time, all I wanted was for the models to get smaller and open source.
Now, I would say that even an idiot making software with AI is going to reach AI fatigue at some point, and it just feels so detached with agents.
I do think we would've been better off as a society if we could've stopped the models at Sonnet 4.5. We do now have models which are small and competitive with Sonnet (Qwen, GLM; Kimi is a little large).
In my experience, the truly best in class have gone from being 10x engineers to being 100x engineers, assuming they embrace AI. It's incredible to watch.
I wouldn't say I'm a 10x-er, but I'm comfortable enough with my abilities nowadays to say I am definitely "above average", and I feel beyond empowered. When I joined college 15 years ago, I felt like I was always 10 steps ahead of everyone else, and in recent years that feeling had sort of faded. Well, I've got that feeling back! So much of the world around me feels frozen in place, whereas I am enjoying programming perhaps as much as when I learned it as a little kid. I didn't know I MISSED this feeling, but I truly did!
Everything in my daily life (be it coding or creating user stories — who has time to use a mouse when you can MCP to JIRA/notion/whatever?) is happening at an amazing speed and with provable higher levels of quality (more tests, better end-user and client satisfaction, more projects/leads closed, faster development times, less bug reports, etc.). I barely write lines of code, and I barely type (often just dictate to MacWhisper).
I completely understand different people like different things. Had you asked me 5 years ago I probably would have told you I would be miserable if I stopped "writing" code, but apparently what I love is the problem solving, not the code churning. I'm not trying to claim my feelings are right, and other people are "wrong" for "feeling upset". What is "right" or "wrong" in matters of feelings? Perhaps little more than projection or a need for validation. There is no "right" or "wrong" about this!
If I now look at average-to-low-tier-engineers, I think they are a mixed bag with AI on their hands. Sometimes they go faster and actually produce code as good as or better than before. Often, though, they lack the experience, "taste" or "a priori knowledge" to properly guide LLMs, so they churn lots of poorly designed code. I'd say they are not a net-positive. But Opus 4.6 is definitely turning the tide here, making it less likely that average engineers do as much damage as before (e.g. with a Sonnet-level model)
On top of this divide within the "programming realm", there's another clear thing happening: software has finally entered the DIY era.
Previously, anyone could already code, but... not really. It would be very difficult for random people to hack something together quickly. I know we've had the term "script kiddies" for a long time, but realistically you couldn't just wire up your own solution the way you can with physical objects. In the physical world, you grab your hammer and your tools and you build your DIY solutions, as a hobby or out of necessity. For software... this hadn't really been the case... until now! Yes, we've had no-code solutions, but they don't compare.
I know 65 year olds who have never even written a line of code that are now living the life by creating small apps to improve their daily lives or just for the fun of it. It's inspiring to see, and it excites me tremendously for the future. Computers have always meant endless possibilities, but now so many more people can create with computers! To me it's a golden age for experimentation and innovation!
I could say the same about music, and art creation. So many people I know and love have been creating art. They can finally express themselves in a way they couldn't before. They can produce music and pictures that bring tears to my eyes. They aren't slop (though there is an abundance of slop out there — it's a problem), they are beautiful.
There is something to be said about the ethical implications of these systems, and how artists (and programmers, to a point?) are getting ripped off, but that's an entirely different topic. It's an important topic, but it does not negate that this is a brand new world of brand new artists, brand new possibilities, and brand new challenges. Change is never easy — often not even fair.
I know that your post has lots of comments, but I'd like to weigh in kindly too.
> I've spent decades building up and accumulating expert knowledge and now that has been massively devalued.
Listen to the comments that say that experience is more valuable than ever.
> Any idiot can now prompt their way to the same software.
No they cannot. You and an LLM can build something together far more powerful and sophisticated than you ever could have dreamt, and you can do it because of your decades of experience. A newbie cannot recognize the patterns of a project gone bad without that experience.
> I feel depressed and very unmotivated and expect to retire soon.
Welcome to the industry. :) It happens. Why not take a break? Work on a side project, something you love to do.
> My experience is that people who weren't very good at writing software are the ones now "most excited" to "create" with a LLM.
Once upon a time painters and illustrators were not "artists", but archivists and documenters. They were hired to archive what something looked like, and they were largely evaluated on that metric alone. When photography took that role, painters and illustrators had to re-evaluate their social role, and they became artists and interpreters. Impressionism, surrealism, conceptualism, post-modernism are examples of art movements that, in my interpretation, were still attempting to grapple with that shift decades, even a century later.
Today, we SWE are grappling with a very similar shift. People using LLMs to create software are not poor coders any more (or less) than photographers were poor painters. Painters and illustrators became very valuable after the invention of photography, arguably more valuable socially than before.
Why did you leave this as a comment on someone talking about how happy they were about their own experience?
What I keep hearing is that the people who weren't very good at writing software are the ones reluctant to embrace LLMs because they are too emotionally attached to "coding" as a discipline rather than design and architecture, which are where the interesting and actually difficult work is done.
Really? To me it seems that quite the opposite is true - people who were never very good at writing code are excited about LLMs because suddenly they can pretend to be architects without understanding what's happening in the codebase.
Same as with AI-art, where people without much drawing skills were excited about being able to make "art".
Perhaps you are both right. People who see coding as a means to an end enjoy LLMs while people who saw it as the most enjoyable part don’t.
This is more accurate. I've written enough code in my life to never really want to do it again... but I still love creating (code was merely the way to do it), so LLMs help with my underlying passion.
On the bright side, working in tech between 2006 and 2026 means you should be extremely wealthy and able to retire comfortably.
In SV, probably. As a lead FE dev with 14 YoE in Munich I'm at €85k; that's not even enough to pay off a loan for a house around here.
Uh, that's if you worked for a top company or something. Most tech workers have made relatively ordinary salaries over the last 20 years.
Cries in federal employee wages
50. Started coding at 7. I never stopped coding. In fact, the past decade saw heavy open source contribution, public speaking, etc.
I love coding with agents, Claude Code almost exclusively now. The 20x Max subscription is effectively endless until you start writing custom multi-agent processes, and even then it takes quite a bit of effort to burn through.
I get so much more done, and can be productive with languages/frameworks I'm not familiar with.
To everybody worried that AI will kill jobs. There have been many points in the evolution of software dev where some new efficiency was predicted to kill off jobs. The opposite happens. Dev becomes more economical, and all of the places where dev was previously too expensive open up. Maybe this time won't work out that way, but history isn't on the side of that prediction.
An experienced software dev can get multiples of efficiency out of AI coding tools compared to non-devs, and can use them in scaled projects, where non-devs are only going to compound a mess. Some of those non-devs will learn how to be more efficient and work with scaled projects. How? They'll learn to be devs.
I'd be building several side projects for myself if I wasn't super busy with the primary work I'm doing. The AI tools take over the tedious work, and remove a lot of work that would just add mental load. Love it.
Same here - it's like programming with a couple of buddies. Occasionally they goof off and wreck everything, but we put it back together and end up with a finished project. I'm literally going through my backlog of projects from the early 80s! There are parts of each of these projects that were black holes for me; I just didn't know enough to get a toehold. Karl (that's my agent) explains everything I don't understand, does stuff, breaks stuff, and so on. It's really a blast.
Same for me (even though I'm a bit younger). I burned out a couple of times and assumed I would never finish the many side projects I have lying around. Now I can just feed them into Claude and guide it to completion. It feels great. And yes, ideally I would have more time and energy to do it all myself, but I don't. To me the results matter, not the tinkering itself; if I were after that, I'd do code puzzles for fun. But I'm interested in making ideas reality, and AI is helping with that.
> it's like programming with a couple of buddies. Occasionally they goof off and wreck everything,
Nailed it :)
The sad part is the “buddy hackathon” is kind of redundant now
I think people can do "buddy vibecodeathons" now. :)
It's nice to be able to either just body double [1], or have some other people around to vent to when Claude goes off the rails.
[1] https://health.clevelandclinic.org/body-doubling-for-adhd
Maybe it's like that. But they're drunk. Which means they are very supportive but quite unreliable and have a short memory.
I've caught Claude making the gravest anti-pattern mistakes in Elixir, and trying to get it to correct them makes the whole thing worse.
It's OK for smaller-scoped stuff, but actual architectural changes come out worse than before more often than not.
Without experience, programming with AI (vibe coding, I guess) can be compared to being a rat in a maze... You work your way through a project, but the dead ends exact a high cost in time, attention, and ultimately money.
With experience, you see these dead ends before they have a chance to take hold and you know when and how to adjust course. It's literally like one poster said: coding with some buddies without ego and without the need to constantly talk people out of using the latest and greatest shiny objects/tools/frameworks.
I've really enjoyed going back and revisiting old ideas and projects with the help of AI. As the OP stated, it has restored my energy and drive.
Fully agree: I believe my decades of software engineering experience definitely help me fly LLM tools better than less experienced folks.
But the much more interesting question to me: as LLM coding becomes the norm, does it drive the cost of self or small-company generated software to 0?
Like many SW architects/engineers my not-so-developed work-in-retirement plan is to assemble a small team of people I’ve loved working with over the years, start an LLC, and try to make a reasonable (not posh) living doing what we love: making software to solve problems.
On the one hand, it's clear LLM coding can accelerate and amplify our efforts; on the other, there are many people claiming there's no possibility of a moat, that your solution/innovation can be cloned in a matter of days, i.e. that the value of your software is exactly 0.
Not sure which future will be closer to reality. A backup plan that seems reasonable in the 0-value case is to focus our effort on creating actual physical gadgets and systems in the embedded realm, which conceivably can be designed and prototyped by a small team… It seems like these would still be valuable.
I have always had ADHD and as a consequence have a decades long backlog of things that I want to do “some day”, and Claude just removes all the friction from going from idea to execution. I am also a software engineer, so basically for me it is like having a team of developers available 24 hours a day to build anything I want to design.
I have built and thrown away a half dozen projects ideas and gotten one into production at work in just the last few months.
I can build a POC for something in the time it would take me to explain to my coworkers what I even want. An MVP takes as long as what a POC used to take.
The thing that really unlocks stuff for me is how fast it is to make a cli/tui/web ui for things.
As a fellow ADHD'er who is also old and has been out of coding for a decade, after a decade and a half of coding, wholeheartedly agreed. It's great to just get shit done and abandon it if needed. That feels much better than spending 6 months and then abandoning.
Are you trying OpenClaw?
Claude Code has killed my ADHD and turned me into an always-on hyper-focused machine.
I am getting 20x done. This is a literal superpower.
I am not using it in agentic mode yet. I am telling it everything I want it to do. I will tell it where I want the files, what I want structs to be named, how I want the SQL queries to join, etc. I then review every line and make edits (typically with Claude first).
I haven't tried the agentic stuff yet, but I probably will at some point soon. I'm anxious about losing control over the architecture and data model, which is something I feel gives me my speed with Claude Code and that I know is important for my engineering work and quality.
I won't be writing code by hand ever again. This is the future. We'll look back at the old way as horse carriages.
Claude is also really freaking good at Rust, and the fact that it emits proper Rust with tests makes me even more confident of my changes.
We are literally living in the future now. Twenty years of SaaS and smartphone incrementalism and now we have jet packs.
Instead of engineers inventing 50 different frameworks and conventions for any given language or platform, maybe that energy will be directed to creating better AI tools.
Edit: I'll also reiterate what others are saying in that I think this is a tool best leveraged by engineers who know what they're doing and that care about code quality. The results you get back will also depend on your repo/project's code quality. If your project is poorly structured or has a lot of cruft, Claude will see that and spit it right back out. Keeping your code clean and low on tech debt is going to matter tremendously.
>Instead of engineers inventing 50 different frameworks and conventions for any given language or platform, maybe that energy will be directed to creating better AI tools.
I think this will happen, since one of the reasons for new frameworks and languages was improving the human experience of coding; now that friction goes away, and AI doesn't feel it.
Although we might need to study which language AI is best at, and possibly invent new ones to maximize that.
Careful though: a lot of people are getting the feeling of getting 20x done. Do you have objective measurements?
In my case it's nearer to ∞x. I have developed an open-source Android app, which already has ~200 users, that I would never have written in my whole life. Zero experience with mobile development and zero free time to focus on this properly, to at least try to learn how to do it. I know myself; I would have given up before getting the first dummy APK onto my phone. And while it's totally vibe-coded, in the sense that I just prompted CC and haven't written a single line of Kotlin, I put a ton of effort into it anyway: how I want it to behave, how it looks, squashing all the usual subtle bugs that CC leaves here and there.
What does it do?
(I'm going to break the rule I had to keep my HN identity separated from my real one with this post but here it goes...)
It's a frontend to the vehicle data served by TeslaMate (a local logger for your EV data), as a more mobile-friendly alternative to Grafana.
Did you prompt it step by step or let it do its thing for a bit?
I've been working on it since December. The first working prototype was a one-shot from a biggish prompt (for Opus 4.5). After that, well, I didn't save the prompts (my bad), but I have probably typed what... 2-3k 80-column lines of English into Claude Code? Yeah, I guess we are in that ballpark. Sometimes it nails the new feature on the first attempt; sometimes it takes a few attempts and corrections (and in that case it can definitely be frustrating).
How do you even begin to define objective measurements of software engineering productivity? You could use DORA metrics [1] which are about how effectively software is delivered. Or you could use the SPACE Framework [2] which is more about the developer experience.
1. https://cloud.google.com/blog/products/devops-sre/using-the-...
2. https://space-framework.com/
I don't have time for that mysticism. I just know.
IDK, just yesterday I got a complete slide / PowerPoint-lite editor in Qt Quick, sufficient for my use case, in two prompts, roughly 7 minutes. How long would that take you to write, on your best day, using your favourite programming language?
Ha ha, for some of us the "feeling" is good enough.
this is the most self-aware, honest answer IMO
For me the evidence is I have completed side projects I never would have before. I also recently started building a game that I had put off for years. At work I am closing more features than historically and at the end of the day not as fatigued. It’s only my experience and everyone’s is going to be different.
> Claude Code has killed my ADHD and turned me into an always-on hyper-focused machine.
> I am getting 20x done. This is a literal superpower.
Adding this comment to favourites to revisit in half a decade.
I've already "made fun" of your exaggerated hype comments, so I'll use this opportunity to say that I hope you remain sane and grounded in your discoveries. You wouldn't be the first to go psychotic after interacting with these stochastic parrots.
Don't you have anything better to do with your time?
I told you people back in 2019 that these models would replace Hollywood and you and others have been calling me all kinds of names, and every step of the way calling me an idiot. I'm a filmmaker - I know what I'm talking about. And now we're almost here. We have million dollar VFX services at our disposal for pennies.
Claude Code is doing the exact same thing for software engineering. I've been a senior software engineer for a good while - these capabilities are otherworldly and they can generalize to all new unseen problems. You're not paying attention.
I'd be more worried about whether or not you have a job in 5 years than whether I have or have not created a business or whatever criteria you want to use to thumb your nose at me.
You know how you can quickly ideate software plans for some large scale idea? Architecture, infrastructure, data models, etc., but the implementation takes longer? Claude Code short circuits that last bit. You need to hold your nose so you stop smelling whatever you're smelling and just try the damn tool.
I wish I could slap sense into you grumpy folks. You're so stiff in your beliefs. This is a train headed your way. Pay attention.
> I told you people back in 2019 that these models would replace Hollywood
What kind of alternate reality are you living in?
I wish you would disclose your credentials (though I admit privacy is an inalienable right of yours) so I could properly place the biggest AI hype-man on this forum. There is hype, and there is being completely gone with hubris, and you're toward the latter end of that spectrum, given your doomsday calls in other comments that software engineering is done for and that you believe AI is close to 'putting all the HN engineers out of work' (https://news.ycombinator.com/item?id=47185284)
> I wish I could slap sense into you grumpy folks. You're so stiff in your beliefs. This is a train headed your way. Pay attention.
Lay off the violent thoughts and get some rest, man. Sounds like you need it.
Part of my ADHD/bipolar is reacting like the guy you're replying to, and I was thinking the same. The comment reminded me of when I'm in "YES, THIS IS IT" mode, which usually isn't far off from hitting the wall. Hopefully it's just projection on my part, though, and this guy is really doing well. When I start talking like him, I usually have to take a step back, and it'll be a topic in therapy next session.
100% feel the same way, and had the same starting point lol
This comment about the OpenClaw guy hits a little too close to home:
“Peter Steinberger is a great example of how AI is catnip very specifically for middle-aged tech guys. they spend their 20s and 30s writing code, burn out or do management stuff for a decade, then come back in their late 40s/50s and want to try to throw that fastball again. Claude Code makes them feel like they still got it.”
What an ageist quote. I am in my 40s and never stopped coding even as I've become the principal engineer. Claude just frees me from the mundane tasks I'd done a million times before and never wanted to do again if possible, which it now is. I can still throw a fastball without AI, but why would I when I can throw it much faster, with much less effort now, while still enjoying what I am doing?
It's still coding. If you think it's not you probably think that letting the IDE auto-complete or apply refactorings is also not coding.
Do you really think you are as eager, inquisitive, and open to learning new ideas in your 40s compared to your 20s?
I am 50 and the answer is 'yes'.
> Claude just frees me from the mundane tasks I'd done a million times before and never wanted to do again if possible, which it now is.
What kind of tasks?
Writing unit tests. Modifying existing unit tests to achieve desired code coverage.
Writing any git command, ever. Writing any documentation, ever. Writing comments in issue trackers, resolving issues in issue trackers, doing pretty much anything in the terminal, ever… basically every imaginable thing that takes time away from the actual job.
Why not say "using a computer"? gcl (my alias for git clone) is way faster to type than any prompt. For every use case I found for LLMs, I noticed that a good script or a DSL (as an abstraction) would be way more useful.
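To make the point concrete: the kind of shortcut being described is just a few lines of shell. This is an illustrative sketch, not a standard setup; gcl matches the alias the commenter mentions, while gst and gclcd are hypothetical additions in the same spirit.

```shell
# Illustrative ~/.bashrc snippet: tiny shortcuts for repetitive
# git invocations, the kind a quick prompt can't beat on speed.
alias gcl='git clone'
alias gst='git status --short'

# Clone a repo and cd into its directory in one step.
# basename strips the path and the trailing .git suffix.
gclcd() {
  git clone "$1" && cd "$(basename "$1" .git)"
}
```

Once sourced, `gclcd https://example.com/repo.git` leaves you inside the freshly cloned `repo` directory, no prompting required.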
This idea of LLMs as a vehicle of midlife crisis is fascinating. I'm not sure it's just about "throwing the fastball", though. Most of the usual midlife crisis things are a rejection of virtue. For example: buying a Porsche, picking up a frivolous hobby, or cheating on your wife; these are irresponsible uses of money, time, or attention that a smart, dedicated family man wouldn't partake in.
In relation to LLM usage I think there are two interpretations. 1) This midlife crisis is a rejection of empathy, understanding, and social obligation, however minute. Writing a one-sentence update on an issue, understanding another developer's design decisions, reading documentation: all boilerplate holding them back from their full potential in a perfectly objective experience. Of course, their personal satisfaction still relies on customers adopting their products (though decades of viewing customers through advertising surveillance has stripped away the customers' humanity from their perspective). Or 2) economic/political factors such as inflation, rising unemployment, supply chain issues, starvation of public services, and general instability mean the usual midlife crisis activities are too expensive or risky, and LLMs present a local optimum allowing them to reject societal virtues (e.g. craftsmanship, collaboration, empathy) without endangering their financial position. Funny enough, I feel this latter point was also a factor in the NFT bubble (though there the finances were more clearly dubious).
Same but for me it's 25 years of accumulated personal backlog that I'm finally burning through. Like I've been a project hoarder and now I have a house elf to tidy up and do all that widget fobbering business. I just need to figure out what the rules of the house are.
And why would they not? Do they have to feel they ain't got it anymore because of age?
Because they don't "got it". Asking the bot to program is the same as asking a junior engineer to write some code, and then claiming it as your own. It's not actually them programming. Just a misplaced sense of pride.
More gatekeeping, more no true Scotsman fallacies, more bitter cope.
You can absolutely take pride in having raised your own cows. But the guy down the street can also take pride in having cooked his own steak. In fact, the guy down the street might actually be a better chef than you, even though you know how to breed cattle.
You're wrong because you are making the wrong comparison.
In this analogy, the guy down the street didn't cook his own steak. He told someone else to cook it, and then claimed that he himself cooked it, telling himself, "Wow, I'm a great chef!", when in fact he did not cook the steak.
Your greatness as a chef isn't measured by how well you manage restaurant kitchens. That would be a great manager. Your greatness as a chef is measured by actually cooking yourself. Claiming other chef's work as your own would be dishonest and self-deception.
If we want to stretch this analogy a bit: I believe all world-class chefs have a team of sous-chefs working for them, doing things like chopping ingredients, prepping, in fact probably a lot of the cooking. I think building with AI is pretty similar.
This is the exact analogy that Gene Kim and Steve Yegge used throughout their book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond.
I get it. Knowing good code and how to correctly build software that people actually want is experience that is consistently hampered by constantly having to learn yet another tech stack.
Using an LLM lets you quickly learn (or quickly avoid having to learn) yet another tech stack while you leverage your inherent software development knowledge.
> late 40s
This describes me nearly perfectly. I didn't exactly burn out on coding; I accidentally stumbled into being an EM while I was coding well and enjoying it. But the EM role stuck, so I got into managing team(s) at biggish companies, which means doing everything except the one thing I enjoy most, which is coding.
However now that I run my own startup I’m back to enjoying coding immensely because Claude takes care of grunt work of writing code while allowing me to focus on architecture, orchestration etc. Immense fun.
Me too, only I'm "only" 42! Got my first job as a programmer at 18 and (in retrospect) burnt out at some point and thought going into managment was the fix.
If you don’t mind sharing, what does your startup do?
Absolutely not!
I run a business of giving out loan against stocks and mutual funds as collateral in India.
Please visit https://www.quicklend.in/ to know more.
What is an "EM"?
Engineering Manager (as opposed to people who stick to programming, called Individual Contributor.)
Oh, how I hate these horrible job descriptions.
But thanks for the info.
And what's the problem with that?
I spent the last 2 days primarily using Claude instead of coding things myself at work. I felt the exact opposite way. It was so unfulfilling. I’d equate it to the feeling of getting an A on a test, knowing I cheated. I didn’t accomplish anything. I didn’t learn anything. I got the end result with none of the satisfaction and learned nothing in the process.
I’m probably going to go back and redo everything with my own code.
That's interesting. I have been thinking about how the vastly different reactions people seem to have to agentic coding could be influenced by what they value about coding. To me it seems like there are three joys in coding:
1. Creating something
2. Solving puzzles
3. Learning new things
If you are primarily motivated by seeing a finished product of some sort, then I think agentic coding is transcendent. You can get an output so much quicker.
If your enjoyment comes from solving hard puzzles, digging into algorithms, how hardware works, weird machine quirks, language internals etc... then you're going to lose nearly all of that fun.
And learning new things is somewhere in the middle. I do think that you can use agentic coding to learn new technologies. I have found LLMs to be a phenomenal tool for teaching me things, exploring new concepts, and showing me where to go to read more from human authors. But I have to concede that the best way to learn is by doing, so you will probably lose out on some depth and stickiness if you're not the one implementing something in a new technology.
Of course most people find joy in some mix of all three. And exactly what they're looking for might change from project to project. I'm curious if you were leaning more towards 2 and 3 in your recent project and that's why you were so unsatisfied with Claude Code.
I'll add "craftsmanship". It isn't just delivering "A" finished product, you want to deliver a "good", if not "the best", finished product.
I guess if you're in an iterative MVP mindset then this matters less, but that model has always made me a little queasy. I like testing and verifying the crap out of my stuff so that when I hand it off I know it's the best effort I could possibly give.
Relying on AI code denies me the deep knowledge I need to feel that level of pride and confidence. And if I'm going to take the time to read, test and verify the AI code to that level, then I might as well write most of it unless it's really repetitive.
I don't think AI coding means you stop being a craftsman. It is just a different tool. Manual coding is a hand tool, AI coding is a power tool. You still retain all of the knowledge and as much control over the codebase as you want, same with any tool.
It's a different conversation when we talk about people learning to code now though. I'd probably not recommend going for the power tool until you have a solid understanding of the manual tools.
It can be a power tool if used in a limited capacity, but I'd describe vibe-coding as sending a junior construction worker out to finish a piece of framing on his own.
Will he remember to use pressure treated lumber? Will he use the right nails? Will he space them correctly? Will the gaps be acceptable? Did he snort some bath salts and build a sandcastle in a corner for some reason?
All unknowns and you have to over-specify and play inspector. Maybe that's still faster than doing it yourself for some tasks, but I doubt most vibe-coders are doing that. And I guess it doesn't matter for toy programs that aren't meant for production, but I'm not wired to enjoy it. My challenge is restraining myself from overengineering my work and wasting time on micro-optimizations.
Meanwhile Linus argued against Debuggers in 2000: https://lwn.net/2000/0914/a/lt-debugger.php3
But then he changed his tune? Even on LLMs...
> I'll add "craftsmanship". It isn't just delivering "A" finished product, you want to deliver a "good", if not "the best", finished product.
I don't raise a single PR that I feel I wouldn't have written myself. All the code written by the AI agent must be high quality and if it isn't, I tell it why and get it to write bits again, or I just do it myself.
I'm having quite a hard time understanding why this is a problem for other people using AI. Can you help me?
That's a really good point. And I agree that kind of confidence in craftsmanship is something that's missing from agentic coding today... it does make slop if you're not careful with it. Even though I've learned how to guide agents, I still have some uneasiness about missing something sloppy they have done.
But then it makes me ask if the agents will get so good that craftsmanship is a given? Then that concern goes away. When I use Go I don't worry too much about craftsmanship of the language because it was written by a lot of smart people and has proven itself to be good in production for thousands of orgs. Is there a point at which agents prove themselves capable enough that we start trusting in their craftsmanship? There's a long way to go, but I don't think that's impossible.
I would argue that craftsmanship includes a thorough understanding and cognitive model of the code. And, as far as I understand it, these agents are syntactic wonders but can not really understand anything. Which would preclude any sort of craftsmanship, even if what they make happens to be well-built.
I can see where this idea is coming from, but I don't agree with the conclusion at all. As someone who loves solving puzzles and learning new things, AI has been a godsend. I also very much like creating things, but even more than that, I like doing all three at once.
I think of AI like a microdose of Speed Force. Having super speed doesn't mean you don't like running; it just means you can run further and more often. That in turn justifies a greater amount of time spent running.
Without the Speed Force, most of the time you were reliant on vehicles (i.e. paying for third-party solutions) to get where you needed to go. With the Speed Force, not only can you suddenly meet a lot more of your transportation needs by foot, you're able to run to entirely new destinations that you'd never before considered. Eventually, you may find yourself planning trips to yet unexplored faraway harsh terrains.
If your joy in running came from attempting to push your biological physical limits, maybe you hate the Speed Force. If you enjoy spending time running and navigating unfamiliar territory, the Speed Force can give you more of that.
Sure, there are also oddballs who don't know how to run, yet insist on using the Speed Force to awkwardly jump somewhere vaguely in the vicinity of their destination. No one's saying they don't exist, but that's a completely different crowd from experienced speedsters.
You may be an exception, but most businesses and many individuals pay for a laundry list of commercial software products. If you count non-monetary forms of payment (i.e. data and/or attention to ads), that expands to virtually everyone with access to a computer.
I think I'd add a #4 to this list, and that's helping people. I like making things that people can use to make their life easier. That's probably my number one.
The "creating something" idea... That's more complex. With agentic coding something can be created, but did I create it? Using agentic coding feels like hiring someone to do the work for me. For example, I just had all the windows in my house replaced. A crew came out at did it. The job is done, but I didn't do anything and felt no pride or sense of accomplishment in having these new windows. It just happened. Contrast that to a slow drain I had in my bathroom. I took the pipes apart, found the blockage, cleared it out, and reassembled the drain. When I next used the sink and the water effortlessly flowed away, I felt like I accomplished something, because I did it, not some plumber I hired.
So it isn't even about learning or solving puzzles, it's about being the person who actually did the work and seeing the result of that effort.
Yes! Good points! I think what I meant for point 1 was more "outputting something" vs "creating something". In my mind that encompasses materializing something into the world to achieve whatever you wanted, whether you were aiming to help others, solve a problem you alone have, or scratch some other sort of itch. It's about achieving some end. And helping somebody can be achieved indirectly and still be satisfying.
The inherent value of creating is something I was missing. Solving puzzles might be part of that, but not all. It's the classic Platonic question about how we value actions: for their own sake, for their results, or for both.
I think we agree that coding can be both, and it sounds like you feel the value for its own sake is lackluster in agentic coding: it's just too easy. And I think that's the core sliding scale: do you value creation more for its own sake or for its results? Where you land on that spectrum probably influences how you feel about agentic coding.
That being said, I also think that agentic coding can give enough of a challenge to scratch the itch of intrinsic value of creating. To a certain degree I think it's about moving up the abstraction chain to work more on architecture and product design. Those things can be fun and rewarding too. But fundamentally it's a preference.
It's kind of a weird thing. I spent 2 days working on some code, which in a way was the process of working out the requirements and functionality that were needed. I then told Claude to look at it and refactor it.
I did put in 2 days of work to come up with what Claude used to ultimately do what it did... but when I look at the resulting code, I feel nothing. Having the idea isn't the same as being the one who actually did the thing. I plan to delete the branch next week. I don't want to maintain what it did, and think it should be less complex than it made it.
> If you are primarily motivated by seeing a finished product of some sort, then I think agentic coding is transcendent
As someone who enjoys technology, and using it, and can just barely sort-of code but really not, agentic coding must be wonderful. I have barely scratched the surface with a couple of scripts. But simply translating "here's what I want, and how I would have done it the last time I used Linux 20 years ago, show me how to do it with systemd" is so much easier than digging through years of forum posts and trying to make sure they haven't all been obsoleted.
None of it is new. None of it is fancy. I do regret that people aren't getting credit for their work, but "automount this SMB share from my NAS" isn't going to make anyone's reputation. It's just going to make my day easier. I really did learn enough to set up a NAT system to share a DSL connection with an office in the late 1990s on OpenBSD. It took a long time, and I don't have that kind of free time anymore. I will never git gud. It's this, or just be another luser who goes without.
You're forgetting that (1) brings a sense of pride. "I built this". That's not true in many ways if you ask something else to do it.
I'm squarely into #1, but it usually requires #2 (at a high level) and has #3 as a side effect. But there's also #0 which kicks it all off: the triggering problem/question.
Like just yesterday I started to notice the increasing pressure of an increasingly hard-to-navigate number of Claude chats. So I went searching for something to organize them. I did find an extension, but it's for Chrome, and I'm a Firefox person, so I had Claude look at it with the initial idea of porting to Firefox. Then in the analysis, Claude mentioned creating an extension from scratch, and that's what I went for.
I've never really used JavaScript, let alone created a Firefox extension before, but in a few minutes I was iterating on one, figuring out how I wanted it to work with Claude, and now I have a very nice and featureful chats organizer. And I haven't even peeked at the code. I also now have a firm idea of this general spec of how I want arbitrary list-organizing UI to look+behave going forward.
I think your comment really captures some of the reasons behind the differences between people’s reactions to Claude pretty well.
I will add though, on 2 and 3: during most of the coding I do in my day job as a staff engineer, it's pretty rare for me to encounter deeply interesting puzzles and really interesting things to learn. It's not like I'm writing a compiler or an OS kernel or something; this is web dev and infra at a mid-sized company. For 95% of the coding tasks I do, I've already seen some variation before, and they're boring. It's nice to have Claude power through them.
On system design and architecture, the problems still tend to be a bit more novel. I still learn things there. Claude is helpful, but not as helpful as it is for the code.
I do get the sense that some folks enjoy solving variations of familiar programming puzzles over and over again, and Claude kills that for them. That’s not me at all. I like novelty and I hate solving the same thing twice. Different tastes, I guess.
I find there are still opportunities to solve puzzles. Claude Code might build something in an unsatisfying or inelegant way, and you can suggest a better approach. You can absolutely write core components — the fun parts you crave — of the code and give it to an LLM to flesh out the rest.
One of the recent joys I’ve had is having CC knit together separate notebooks I’d been updating for a couple of years into a unified app. It can be a fulfilling experience.
The creator of OpenClaw had a great line about this:
"If your identity is tied to you being an iOS developer, you are going to have a rough time. But if your identity is 'I'm a builder!' it is a very exciting time to be alive."
Plus, there is no rule that says you can't keep coding when it's faster for you and/or quicker in general. E.g., I can write a Perl one-liner much faster than Claude can. Heck, even if it's not faster and you enjoy coding, just keep coding.
> I’m a builder!
I‘m a builder too.
I built a house. Ok, I told an architect what I wanted, he showed me the plans, I gave him feedback for adjustments, and then the plans were given to the construction crew and they built the actual house.
But it was my prompt, so I'm a builder.
Curious about that reasoning: where do you draw the line?
Are you a builder if there is a middleman? If not, what if the middleman is a tool? If you use AutoCAD to make the plans, are you still a builder? What if AutoCAD has a prompt feature, are you still a builder?
If you actually do something that is considered building.
Same with vibe coding: if you don't write code, you just ordered and didn't code. Otherwise all my customers and bosses were coders long before AI, because their orders didn't read much different from today's prompts. The recipient changed, but that doesn't change the sender.
It’s some kind of Chinese Room but this time for those outside the room.
Also, half of the rooms in the house can’t be accessed because they don’t have a door. And when it starts raining, the house collapses.
I'm a few years younger than the OP, but I remember the early Internet days. I started with Perl CGI scripts, ASP, even some early server side JS, in the form of Netscape Livewire.
Over the past couple months, I've created several applications with Claude Code. Personal projects that would've taken me weeks, months, or possibly forever, since I generally get distracted and move on to something else. I write pretty decent specs, break things into phases, and make sure each phase is solid before moving on to the next.
I have Claude build things in frameworks I would've never tried myself, just because it can. I do actually look at the code. Some of it is slop. In a few cases, it looks like it works, but it'll be a totally naive or insecure implementation. If I really don't like how it did something, I'll revert and give it another attempt. I also have other AIs review it and make suggestions.
It's fun, but I ultimately gain little intellectual satisfaction from it. It's not like the old days at all. I don't feel like I'm growing my skill set. Yes, I learned "something", but it's more about the capabilities of AI, not the end result.
Still, I'm convinced this is the future. Experienced developers are in the best position to work with AI. We also may not have a choice.
Then you haven't had an exciting idea and the need to actually build it. I personally like thinking of different projects and coming up with ideas to make them unique. With Claude Code you can iterate like you're on steroids.
For fun and education purposes, learning and satisfaction are understandable.
For work, companies won't support it. Get it done. Fast. That's the new norm.
I disagree. I need to be able to support what I ship and answer to the details of what it does and why it does it. I can only truly do that if I write it myself.
There should also be a symbiotic relationship at a job. Yes, they get something from me, but I should also get something… learning and some amount of satisfaction… in addition to the paycheck. I can get a paycheck anywhere.
It’s not the “new norm” unless employees accept it as the new normal. I don’t know why anyone would accept a completely one-sided situation like that.
> I need to be able to support what I ship and answer to the details of what it does and why it does it. I can only truly do that if I write it myself.
How do you function on a team, where you have to maintain code others have written?
We talk to each other. If someone wrote something I don't understand, I defer to them. If the person who wrote something is no longer with the company, we try to make sense of it, and in some cases end up re-writing things.
There are only 3 or 4 of us working on most of the code I touch. 3 of us have worked together in some form or another for close to 20 years.
That's a LONG time! I'm happy for you :)
> I can only truly do that if I write it myself.
That's where you're wrong. AI can debug code better than humans. I put it on a task that I'd spent months on: debugging a distributed application which had random errors which required me to comb through MBs of logs. I gave Claude the task, a log parser (which it also wrote), and told it to find what each issue was. It did the job in a few minutes. This is a task that was, frankly, just a bit above my capacity with a human brain as it required associating lots of logs by timestamps trying to reconstruct what the heck was going on.
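The heart of that reconstruction is just a timestamp-ordered merge across per-node logs. A minimal sketch of the idea (the log line format and names here are illustrative, not the actual parser Claude wrote):

```python
import heapq
import re
from datetime import datetime

# Hypothetical log line format: "2024-05-01T12:00:00 [node-a] message"
LINE_RE = re.compile(r"^(\S+) \[(\S+)\] (.*)$")

def parse_log(path):
    """Yield (timestamp, node, message) tuples from one node's log file."""
    with open(path) as f:
        for line in f:
            m = LINE_RE.match(line.rstrip("\n"))
            if m:
                yield datetime.fromisoformat(m.group(1)), m.group(2), m.group(3)

def merge_logs(paths):
    """Interleave per-node logs into one timeline, ordered by timestamp.

    heapq.merge assumes each input is already sorted, which holds for
    append-only log files, and never loads the full logs into memory.
    """
    return heapq.merge(*(parse_log(p) for p in paths), key=lambda t: t[0])
```

Once the events are on one timeline, spotting the cross-node ordering that triggers the random error becomes a linear scan instead of eyeballing MBs of separate files.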
My new worry is that I need to make sure the code AI is writing is more comprehensible not to other humans, but to other AIs in the future, since there's very little chance humans will be doing the debugging by themselves given how bad we are at that compared to LLMs even now, let alone in a few years.
> but I should also get something
What do you want beyond a paycheck? If you want to get better at your job, the most important technique you can improve right now is hands down how to interact with an AI to solve business problems. The learning you're thinking of, being able to fully understand code and actually debug it in your head, is already a thing of the past now. In a few years, no one will seriously consider building software that's not entirely AI-written except for enthusiasts, similar to the people currently participating in C obfuscated code competitions. I say this as someone who reluctantly started using AI in anger only a few months ago after hating on it before that for the laughable code it was producing just around 6 months ago (it probably was already good by then but I was not really giving it a chance yet).
When it comes to writing code, I can almost always tell beforehand whether a particular piece of code will be intellectually stimulating to me. If so, I write it myself without thinking about whether Claude might have done it faster. If not, I let Claude write it. Currently I'd estimate maybe 70% of the code falls into the first category, and the remaining 30% is something I would've needed a lot of my own willpower to get started on anyway.
Also, when I write code myself, I still ask Claude to review it. It's faster than asking a human colleague to review it, so you can have Claude review often. Just today after a five-minute review Claude said a piece of code I wrote had four bugs, three of which were hallucinations and one was a real bug. I honestly do think it would have taken me a bit more than five minutes to find that one real bug.
I had a similar feeling trying to calculate some combinatorial structures. At some point the LLM made a connection to extremal combinatorics and calculated tighter bounds and got me to the solution faster.
Felt flashbacks of playing chess against humans online as a teen by copying moves from a chess engine.
What's the point, haha.
This past week I found and fixed a bug that happens once in 40,000 transactions working with Claude Code - Opus 4.6. Our legacy app was designed around 2008 and has had zillions of band aids added since then. Nobody (~700 person company) has been able to reliably reproduce this issue to confidently claim that they know what the cause is and how to definitively fix it. That all changed yesterday. I spent today writing up summaries that were shared far and wide. My wizard status is yet again renewed.
> It was so unfulfilling.
I'm going to say something people hate... you're probably holding it wrong. Why do I say that? Because I absolutely felt exactly the way you are feeling. In fact, it can be worse than unfulfilling, it can be even draining.
But I, over time, changed how I used LLMs and I actually now find it rewarding and I'm learning a huge amount. I've learned more technologies (and I do mean learn) in the last year than I have ever in the past.
I think my advice is that if it feels wrong then you shouldn't be doing it that way. But that isn't inherent in using LLMs to help you work. Everyone has different preferences for how they work (and what languages they like, etc). The people using 15 LLMs to build software probably love that but I don't think that's how I want to do it. And that's fine.
> I’m probably going to go back and redo everything with my own code.
Why? Did Claude do a bad job?
I think it depends on what you're building. I find it fun, once in a while, for an engineer like me to "not go shoeless" and get some of the things I need done.
You're paid by a company to create software, so they can use it to drive business value and make a profit. You did so effortlessly. But it didn't make you feel personally fulfilled. So you're going to go back and re-do it, so you feel better?
How do you think your company's CEO is going to feel when you tell them you could be finishing the software much faster, but you'd rather not, because it feels better to do it by hand?
It’s not just about speed today. It’s about the speed to make changes, to understand the minutia of the code to more quickly troubleshoot when something goes wrong, to better understand the implication of future changes…
Just yesterday I was on a call where someone was trying to point to my code as a problem when we suspected a DNS issue. If I didn’t know the code inside and out, I could have easily been steam rolled, because as we know, “it’s never the network”. We found out today it was in fact DNS.
If all someone ever worries about is speed, they'll likely get tripped up and fall. One guy on my team is all about delivering quickly. He gives very optimistic timelines and gets things out the door as fast as possible. Guess what: the code breaks. He is constantly getting bug reports from everyone and having to fix stuff. As he keeps running into this, he is becoming a bit more mature and tactical, but that is taking time.
I think the CEO would much rather see the production code be fully tested and stable. I write the frameworks everyone else on the team uses. If my code breaks, everyone’s code is broken. How much will that cost?
Why would I give a rat's ass what my CEO thinks? I do my job the way I want to, in a way that allows me to keep going. If the CEO wants it a different way, he can fire me and pay me 10 months' worth of wages while I look for a different job.
I know the code I produce is damn good, and I take pride in my extremely low defect rate. I will not be rushed. I will not be pushed. And I will do so until the day I retire.
My CEO is fine as long as the project is profitable, which is part of my responsibility, and they are actually on board with us delivering the best quality we can under that constraint, not only because our clients do notice quality, but also as a matter of principle.
Your choices are not limited to one extreme or the other.
Hey, I'm nearly 80 years old. I haven't written a line of code in over 10 years. But I'm coding now, with the help of Claude & Gemini, and having a great time. Each block of Python or Applescript that they generate for me is a much better learning tool than a book - I'm going through the code line by line and researching everything. And I'm also learning how to deal with LLMs and their strengths & weaknesses. Correcting them from time to time when they screw up. Lots of fun.
> Each block of Python or Applescript that they generate for me is a much better learning tool than a book - I'm going through the code line by line and researching everything.
I have been doing something similar. In my case, I prefer reading reference documentation (more to the point, more accurate), but I can never figure out where to start. These LLMs allow me to dive in and direct my own learning, by guiding my readings of that documentation (i.e. the authoritative source).
I think there has been too much emphasis (from both the hypesters and doomsayers) on AI doing the work, rather than looking at how we can use it as a learning tool.
Couldn't agree more. On a large and open ended feature I sometimes struggle with where to start and end up researching something tangential. Cool learning, but not efficient.
Claude Code gives me a directory, usually something that works, and then I research the heck out of it. In that way I am more of an editor, which seems to be my stronger skill.
> Hey, I'm nearly 80 years old.
You are an inspiration. I will remember this when I grow older. Just wanted to say this: I am 1/2 your age, and I am sure there are people 1/3 or even 1/4 your age here. ;)
I'm very happy for you and hope when I'm nearing 80 I get to be doing something similar.
It's cool for me to rediscover AppleScript (I'm in my late 40s). It's a funny thing: I can almost nostalgically smell the NeXT in it. And it's quite handy in this new era of hijacking Mac minis (OpenClaw obviously is one way to do it, but why not go straight to the core).
I personally think coders get better with age, like lounge singers.
AppleScript doesn’t have any NeXT heritage, it comes entirely from classic MacOS (debuted in System 7.1)
Sure, but you can feel some emergent philosophies that are starting to converge and there are recognizable aesthetics.
That's great, and I'm the same: a multiple-time founder in my 40s who was ready to hang it up after my last exit. I had zero passion to code anymore, and now I'm back, with LLMs reigniting my passion to create.
You are an inspiration. Reading this makes me happy
Good for you. Learning is a life long thing!
> better learning tool than a book
Learning for what? That day when you write it yourself, that will never come ...
There is only so much you can learn by reading; it requires doing.
The good thing about traditional sources like books, tutorials and other people's code bases is that they give you something, but don't write your project for you.
Now you can be making a project, yet be indefinitely procrastinating the learn-by-doing part.
> Learning for what? That day when you write it yourself, that will never come ...
For the enjoyment, and producing better products, faster?
Why were you learning, before AI tools?
I second another fellow commenter, you are my inspiration too! Thanks for sharing.
Maybe the internet has made me too cynical, and I'm glad people seem to be having a good time, but at time of posting I can't help but notice that almost every comment here is suspiciously vague as to what, exactly, is being coded. Still better than the breathless announcements of the death of software engineering, but quite similar in tone.
Some _fun_ stuff I "coded" in a day each, just in the last couple of weeks:
https://hippich.github.io/minesweeper/ - No idea why, but I'd had a weeks-long urge to play Minesweeper, and at some point I wanted a way to quickly estimate the probability of a mine being present in each cell. No problem: Copilot coded both the Minesweeper game and then added the probabilities (hidden behind a "Learn" checkbox). Bonus: my wife now plays a game "made" by me and not some random version from the Play store.
Another one made in a day: https://hippich.github.io/OpenCamber - I am putting an old car together, so I'll need to align its wheels at some point. There is Gyraline, but it is iOS-only (I think because sensor precision isn't good enough on Android?), and it isn't free. I have no idea how well mine will work in practice, but I can try it, because the cost of trying is so low now!
Yes, both of these are unserious, fun projects, unlikely to have any impact. But it is _fun_! =)
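For the curious, the per-cell estimate behind a "Learn" feature like that can be as simple as this (a naive sketch I'm writing from scratch here, not Copilot's actual code; it ignores flags and cross-constraint reasoning):

```python
def naive_mine_probabilities(board):
    """Estimate per-cell mine probability on a Minesweeper board.

    board: 2D list where each cell is an int 0-8 (a revealed count)
    or None (hidden). For every revealed number, the chance that any
    one of its hidden neighbors holds a mine is number / hidden_count;
    a hidden cell touched by several numbers keeps the max estimate.
    Returns {(row, col): probability} for hidden cells near numbers.
    """
    rows, cols = len(board), len(board[0])
    probs = {}
    for r in range(rows):
        for c in range(cols):
            n = board[r][c]
            if n is None or n == 0:
                continue
            # Collect this number's hidden neighbors.
            hidden = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr or dc)
                      and 0 <= r + dr < rows and 0 <= c + dc < cols
                      and board[r + dr][c + dc] is None]
            if not hidden:
                continue
            p = n / len(hidden)
            for cell in hidden:
                probs[cell] = max(probs.get(cell, 0.0), p)
    return probs
```

A real solver would intersect overlapping constraints (or enumerate consistent mine placements), but even this crude version is enough to color cells as a learning aid.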
The other week I used Copilot to write a program that scans all our Amazon accounts and regions, collects services and versions, and finds the ones going EOL within a year. The data on EOL dates is scraped from several sources and kept in JSON. There are about 16 different AWS API calls involved. It generates reports in Markdown, JSON, and CSV, so humans can read the Markdown (it flags major things and explains them), and the CSV can be used to triage, prioritize, and track work over time. The results are deduplicated, sorted, consolidated (similar entries merged), and classified. I can automatically send reports to teams based on a regex of names or tags. This is more data than I get from the AWS Health Dashboard, and I can put it into any format I want, across any number of accounts/regions.
Afaik there are no open source projects that do this. AWS has a behemoth of a distributed system you can deploy in order to do something similar. But I made a Python script that does it in an afternoon with a couple of prompts.
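The interesting part is mostly plumbing: filter the collected inventory to a horizon, dedupe, sort, and emit reports. A stdlib-only sketch of just that stage (the field names are illustrative, and the AWS API calls and EOL scraping that build the inventory are out of scope here):

```python
import csv
import io
from datetime import date, timedelta

def eol_report(inventory, today=None, horizon_days=365):
    """Reduce a service inventory to items going EOL within the horizon.

    inventory: iterable of dicts with keys account, region, service,
    version, eol (ISO date string). Returns (rows, markdown, csv_text):
    deduplicated rows sorted by urgency, plus the two report formats.
    """
    today = today or date.today()
    cutoff = today + timedelta(days=horizon_days)
    seen, rows = set(), []
    for item in inventory:
        eol = date.fromisoformat(item["eol"])
        key = (item["account"], item["region"], item["service"], item["version"])
        if key in seen or not (today <= eol <= cutoff):
            continue  # skip duplicates and anything outside the window
        seen.add(key)
        rows.append({**item, "days_left": (eol - today).days})
    rows.sort(key=lambda r: r["days_left"])  # most urgent first

    # Human-readable Markdown table.
    md = ["| service | version | account | region | eol | days left |",
          "|---|---|---|---|---|---|"]
    md += [f"| {r['service']} | {r['version']} | {r['account']} "
           f"| {r['region']} | {r['eol']} | {r['days_left']} |" for r in rows]

    # Machine-friendly CSV for triage/tracking.
    buf = io.StringIO()
    if rows:
        writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)
    return rows, "\n".join(md), buf.getvalue()
```

The consolidation/classification and the per-team regex routing layer on top of this in the same spirit: plain data transforms that an afternoon of prompting gets you.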
> almost every comment here is suspiciously vague as to what, exactly, is being coded
Why? You don't trust a newly-created account that has not engaged with any of the comments to be anything but truthful?
Yes. I never really see people say wtf they're making. It's always "AI bot wrote 200k lines of code for me!" Alright, cool. Is the project something completely new? Useful? A rushed remake of a project that already exists in GitHub with actual human support behind it? I never see an answer.
I wrote SuperSecretCrypt.com, ScoreRummy.com. Other stuff, too.
I have integrated Claude Code with a graph database to support an assistant with structured memory and many helpful capabilities.
I have clients. I automated a complicated data ingestion pipeline into a desktop app with a bulletproof process queue, localhost control panel and many features.
For another, I am writing an AI-specific app that is so cool. I wish I could tell you about it but it's definitely not a rushed remake of anything.
I hope that helps.
> SuperSecretCrypt.com
Is down. And the scoring one, no offense, seems like a project a junior would make to pad out their resume/portfolio. Nothing wrong with that of course, but I fail to see how this translates to all the hype being thrown around.
SuperSecretCrypt.com doesn't work.
I am currently using a Claude skill that I have been building out over the last few days that runs through my Amazon PPC campaigns and does a full audit. Suggestions of bid adjustments, new search terms and products to advertise against and adjustment to campaign structures. It goes through all of the analytics Amazon provides, which are surprisingly extensive, to find every search term where my product shows up, gets added to cart and purchased.
It's the kind of thing that would be hours of tedious work, then even more time to actually make all the changes to the account. Instead I just say "yeah do all of that" and it is done. Magic stuff. Thousands of lines of Python to hit the Amazon APIs that I've never even looked at.
And it doesn't freak you out that you're relying on thousands of lines of code that you've never looked at? How do you verify the end result?
I wouldn't trust thousands of lines of code from one of my co-workers without testing
Why wouldn't you test? That sounds like a bad thing.
Me? I use AI to write tests just as I use it to write everything else. I pay a lot of attention to what's being done including code quality but I am no more insecure about trusting those thousands of tested lines than I am about trusting the byte code generated from the 'strings of code'.
We have just moved up another level of abstraction, as we have done many times before. It will take time to perfect but it's already amazing.
So people don't look at the code, or the tests.
So they don't know if it has the right behavior to begin with, or even if the tests are testing the right behavior.
This is what people are talking about. This is why nobody responsible wants to uberscale a serious app this way. It's ridiculous to see so much hype in this thread, people claiming they've built entire businesses without looking at any code. Keep your business away from me, then.
It's thousands of lines of variation on my own hand-tooling, run through tests I designed, automated by the sort of onboarding docs I should have been writing years ago.
Do you trust the assembly your compiler puts out? The machine code your assembler puts out? The virtual machine it runs on? Thousands of lines of code you've never looked at...
None of that is generated by an LLM prone to hallucination and is perfectly deterministic unless there's a hardware problem.
And yes, I have occasionally run into compiler bugs in my career. That's one reason we test.
> None of that is generated by an LLM
How did you verify that?
> prone to hallucination
You know humans can hallucinate?
> is perfectly deterministic
We agree then that you can verify, test, and trust the deterministic code an LLM produces without ever looking at it.
> That's one reason we test
That's one way we can trust and verify code produced by an LLM. You can't stop doing all the other things that aren't coding.
I get there's a difference. Shitty code can be produced by LLMs or humans, and LLMs really can pump out the shitty code. I just think the argument that you can't trust code you haven't viewed is not a good one. I very much trust a lot of code I've never seen, and yes, I've been bitten by it too.
Not trying to be an ass; more trying to figure out how I'm going to deal for the next decade before retirement age. It's going to be a lot of testing and verification, I guess.
> How did you verify that?
The compiler works without an internet connection and requires too few resources to be secretly running a local model. (Also, you can inspect the source code.)
> You know humans can hallucinate?
We are talking about compilers…
> We agree then that you can verify, test, and trust the deterministic code an LLM produces without ever looking at it.
Unlike a compiler, an LLM does not produce code in a deterministic way, so it’s not guaranteed to do what the input tells it to.
It is for me because the LLM makes my ability to evaluate super, too.
Compiler theory and implementation are based on mathematical and logical principles, and hence much more provable and trustworthy than an LLM that's stitching together pieces of text based on 'training'.
"Trust"? God no. That's why I have a debugger
Also you really do have to know how the underlying assembly integer operations work or you can get yourself into a world of hurt. Do they not still teach that in CS classes?
I've been doing agentic work for companies for the past year, and first of all, error rates have dropped to 1-2% with the leading Q3 and Q4 models, with 2026's Q1 models blowing those out of the water while also being cheaper in some ways.
But second of all, even when error rates were 20%, the time savings still meant A Viable Business. A much more viable business, actually; a scarily viable business with many annoyed customers getting slop of some sort, and a human in the loop correcting things from the LLM before it went out to consumers.
Agentic LLM coders are better than your co-workers. They can also write tests. They can do stress testing, load testing, end-to-end testing, and in my experience that's not even what course-corrects LLMs that well, so we shouldn't even be trying to replicate processes made for humans with them. Like a human, the LLM is prone to just "correct" a failing test on the assumption that it relies on something deprecated, rather than treating the break as a regression revealed by product changes.
In my experience, type errors, compiler errors, logs on deployment, and database entries have made the LLM correct its approach more than tests have. DevOps and data science, more than QA.
It's also usually from people who stopped coding and haven't kept their skills up.
Or have no more skin in the game, retirement.
In the past month, in my spare time, I've built:
- A "semantically enhanced" epub-to-markdown converter
- A web-based Markdown reader with integrated LLM reading guide generation (https://i.imgur.com/ledMTXw.png)
- A Zotero plugin for defining/clarifying selected words/sentences in context
- An epub-to-audiobook generator using Pocket TTS
- A Diddy Kong Racing model/texture extractor/viewer (https://i.imgur.com/jiTK8kI.png)
- A slimmed-down phpBB 2 "remake" in Bun.js/TypeScript
- An experimental SQLite extension for defining incremental materialized views
...And many more that are either too tiny, too idiosyncratic, or too day-job to name here. Some of these are one-off utilities, some are toys I'll never touch again, some are part of much bigger projects that I've been struggling to get any work done on, and so on.
I don't blame you for your cynicism, and I'm not blind to all of the criticism of LLMs and LLM code. I've had many times where I feel upset, skeptical, discouraged, and alienated because of these new developments. But also... it's a lot of fun and I can't stop coming up with ideas.
Yes and they all mention Claude as if it's the only LLM that can code.
I wrote SuperSecretCrypt.com, ScoreRummy.com. Other stuff, too.
I have integrated Claude Code with a graph database to support an assistant with structured memory and many helpful capabilities.
I have a freelance gig with a startup adapting AI to their concept. I have one serious app under my belt and more on the way.
Concrete enough?
SuperSecretCrypt.com doesn't work.
In my experience, I have "vibe coded" various tools and stuff that, while nice to have, isn't really something I need or brings a ton of value to me. Just nice-to-haves.
I think people enjoy writing code for various reasons. Some people really enjoy the craft of programming and thus dislike AI-centric coding. Some people don't really enjoy programming but enjoy making money or affecting some change on the world with it, and they use them as a tool. And then some people just like tinkering and building things for the sake of making stuff, and they get a kick out of vibe coding because it lets them add more things to their things-i-built collection.
I will say that I grieve the passing of 'coding', per se. I used to love getting the flow, envisioning the data flows and object structures and cool mechanisms, refactoring to perfection. I truly miss it.
But the payoff for letting that go is huge.
The combination of the internet and how insanely pushed every single facet of AI bullshit is has made me incredibly cynical. I see a post like this reach the top of HN by a nobody, getting top votes and all I can think is that this is once again, another campaign to try and make people feel better about AI.
Every time I've asked people about what the hell they're actually doing with AI, they vanish into the ether. No one posts proof, they never post a link to a repo, they don't mention what they're doing at their job. The most I ever see is that someone managed to vibe code a basic website or a CRUD app that even a below-average engineer can whip up in a day or two.
Like this entire thread is just the equivalent of karma farming on Reddit or whatever nonsense people post on Facebook nowadays.
think about why anybody would ever associate a production level product with slop when consumers are polarized towards generative AI
this site gets indexed
there are too many disincentives to cater specifically to your suspicion and cynicism
I’m 63 (almost 64), and I’m rewriting an app (server and native client), that took a couple of years to originally write.
Been working for about a month, and I’m halfway through. The server’s done (but I’m sure that I’ll still need to tweak and fix bugs), and I’m developing the communication layer and client model, now. It took seven months to write the first version of the server, and about six months to write a less-capable communication driver, the first time.
This is not a "vibe-coded" toy for personal use. It's a high-quality shipping app with thousands of users. There's still a ton of work ahead, but it looks like an achievable goal. I do feel as if my experience writing shipping software is crucial to using the LLM to develop something that can be shipped.
I’ve had to learn how to work with an LLM, but I think I’ve found my stride. I certainly could not do this, without an LLM.
The thing that most upset me since retirement has been the lack of folks willing to work with me. I spent my entire career working in teams, and being forced to work alone reduced my scope. I feel as if LLMs have allowed me to dream big again.
The isolation of being a retired programmer is a real bitch. I think back to the days of a few young programmers with me at the whiteboard, the fast back and forth, the satisfaction of seeing ideas come together. I really missed that.
I'm not allowed to feel like AI is an adequate replacement for fear that the critics will tell me I'm not healthy but, between you and me, as much as I miss the camaraderie of real humans, being able to brainstorm with an entity that knows pretty much everything and is able to execute my will without complaint is not bad.
And, it's nice to have someone, something, to talk to about technical ideas. It's a great time to be alive.
> It's a great time to be alive.
I feel the same.
A lot of comments here are from more grown-up engineers who feel nostalgia, like it's COM/CORBA/MFC again, and are excited that they can be productive again.
I'm really sorry (and accept the down-vote storm) to disappoint you, but you won't be young again. Burning the midnight oil may remind you of the old days and bring excitement, but in the end it will harm your health.
Learning like crazy, late-night hacking, and the other attributes of fresh engineers are sometimes a necessity to build a career, a knowledge base, and the equity to comfortably start a family. Some people enjoy it and many hate it, but most of us did it at some point.
I wouldn't oppose it if it weren't harmful to the industry. What would all those engineers who are excited again think of a startup that stole all the free land and building material and doubled the housing stock? I bet all the youngsters would be excited to have their own place for a $20 monthly mortgage payment, telling everyone who has paid most of their salary over the last 30 years how energizing it feels not to have to work your whole life for your house, while ignoring the equity crash for those folks.
This might just be the single most worthless, non-sensical post that I've read in my twelve years of using HN.
Congratulations.
This thread doesn't resonate with me whatsoever. It's not that I don't get where these people are coming from. LLMs have allowed people to churn out projects (especially small, personal projects) faster than ever, skipping what a lot of people view as the boring or tedious parts. But these discussions feel like a kid playing with toys, while a nuclear explosion is going off in the background.
Until I realized that no one here is going to be in the blast radius. So many people who agree with this admit to being in their 40s, 50s, 60s. All of them have already had the time to learn without LLMs, get industry experience, network, climb their career ladders as high as they could. These people are now sitting on piles of assets, and they know that if LLMs start pushing out people from the industry, it'll be us juniors and new grads. They will either remain relevant in the industry due to seniority/experience/pivoting to managerial duty, use their money and connections to easily learn new skills and pivot, or punch out and coast through retirement before it affects them.
You’re right that LLMs are going to push out jobs at the low end of the market. “Code monkey” type jobs are going to be displaced the same way computers displaced a lot of basic clerical and computational jobs.
But that doesn’t mean there won’t be entry level jobs, they will just have a different set of qualifications and expectations. Just like it’s hard to get a job doing arithmetic today without some other knowledge of the application, future jobs in computing are going to require people to understand things outside of the realm of programming alone. They are going to need to know more about the application of the code they write. It’ll be bad for developers who “just close Jira tickets” but problem solvers in a specific field will be okay.
This is a propaganda/marketing post.
1) What 60-year-old who has been in tech his entire life only made an HN account in the last 17 hours?
2) Assuming he wasn't aware of it. What brought the site to his attention and why now?
3) Did not engage with the thread at all after his initial post. Has not engaged with anything else since. You'd think someone introduced to a tech community would be eager to look around and contribute??
I completely understand your sentiment though and it's exactly what makes the OG post so tone deaf.
Going over the 50 bump, I see myself selling toast, as being an IC/architect is no longer valued enough; everyone is expected to be a PM for their agent minions.
Teams are getting reduced, as one can now effectively do more with less, and in Southern Europe there is hardly anywhere in IT to get a job above 50 years old, unless one goes consulting as a company owner, and even then the market cannot hold everyone.
As a kid I saw this happen, as factory automation replaced the jobs of entire villages; the clothing and shoe jobs that weren't offshored to Asia or Eastern Europe got replaced with robots.
The few lucky ones were the ones pressing the buttons and unloading trucks.
Likewise, a few will be the lucky AI magicians, some will press buttons, and the large majority had better get new skills beyond computing.
- Grocery List with some tracking of frequent purchases
- Health Log for medical history, doc appointments and past visits
- Habits Tracker with the trends I'm interested in
- Daily Wisdom Reader instead of having multiple ebooks to keep track of where I'm at
- A task manager similar to the old LifeBalance app
- A Home Inventory app so that I can track what I have, warranty, and maintenance
- An iOS watch app to see when I'm asleep so that it can turn off my music or audiobook
- An iOS watch chess tactics trainer app
- some games
Many of these are similar to paid offerings, but those didn't check off all the features I really wanted, so I vibe-coded my own. They all do what I want, the way I want it.
That's amazing!!
Can I ask, do you pay for any server service or run your own or are these standalone apps?
For me, if I were to implement many of your ideas, I'd want them to have a server. Habits Tracker: need to access it from whatever device I'm on at that moment. Grocery List: same thing, plus multiple users so everyone in the same house can add things to one list.
Etc....
This is not really LLM related, but I feel like I have a blind spot or hurdle where I haven't done enough server work to be comfortable building these solutions. To be clearer, I've set up a few servers in the past, so it's not that I can't do it. It's more a feeling of comfort, or maybe discomfort.
Example: If you ask me to make a static website, or a blog, I'd immediately make a new GitHub repo, install certain tools (static site generator or whatever), set up the GitHub Actions, register a new domain if needed, set up the CNAME, and check that it's working. If I think it's going to be popular, put Cloudflare in front of it. I'm 100% confident in that process. I'm not saying my process is perfect, only that I'm confident in it. I also know what it costs: $10-$20 a year for the domain name and maybe a yearly subscription to GitHub.
Conversely, if I were to make anything that was NOT a static site but an actual server with users and accounts, then I'd just have to go read up on the latest practices and cross my fingers that I'm not leaking user data, don't have an XSS, won't get a bill for $250k from a DoS attack, and am picking the right kind of database, ID service, logging, etc. I could expose a home server, but then I'd be worried it would get hacked. I'd need to find a backup solution, etc.
I know someone will respond that I'm worrying too much, but I'm hoping for more examples of what others are doing for these things. Is there some amazing SaaS that solves all of this that most of you use? Some high-level framework where I just pick "publish" and don't have to worry about giant bills?
Almost all of the apps sync with iCloud, so they work across all of my devices.
However, the MediaWatch app syncs between me and my wife, which iCloud does not support. (As a sidenote, this is one of the hallucination traps that both Claude and ChatGPT led me down -- both said it was possible, and after a few weeks and many, many hours, I learned the major constraints. I didn't want any of my apps on the App Store, so that ruled out that option.) Anyway, I ended up making a small, simple SQLite database using Python on my Pi and use that for my sync needs. The devices only sync while at home, which was not a problem for me. It also means I'm not exposing the database to external security issues.
You're looking for something like Vercel or Firebase
And the biggest thing is this: getting software the way we want it is much easier. No ads. No monthly cost.
Exactly! One of the reasons I vibed my own Ulysses/Bear similar app for journaling and notetaking with the essential features I need and no subscription.
This is the reason. I have just been vibe-coding my way for a few months now and have built almost all the tools (except the browser and mail) that I use daily, designed by me (with the help of an LLM).
I'm curious what you mean by that. Tools I use include git and jj; I don't think I want my own versions of those. I use VSCode, Sublime Merge, and gg; I'd be curious how far I could LLM-code those. It'd certainly be easy to pull up Electron with Monaco, but I'd probably just LLM-code extensions. And I use lots of software via the browser (maps, Google Docs, chat, Slack, Discord, ...); I don't think I'd want to make those. iTerm2, Xcode, zsh... I don't think I want to LLM-code a shell, but that might be cool.
I'm over 50 now and feel like this as well. Haven't used Claude yet, but I've used Codex a bunch, and it's been SO MUCH fun going over all the old Perl & shell scripting stuff that I used to do years ago before I got into healthcare and morphed into a hobby sysadmin.
Staying up and re-learning what I used to love long ago has given me a new found passion as well. Even if I do vibe code some scripts, at least I have the background now to go through them and make sure they make sense. They're things I'm using in my own homelab and not something that I'm trying to spin up a Github repo for. I'm not shipping anything. I'm refreshing my old skills and trying to bring some of them up to date. An unfortunate reality is that my healthcare career is going to be limited due to multiple injuries along the way, and I need to try to be as current as I can in case something happens. My safety net is limited.
I've never touched Perl in my life, but Claude has enabled me to create a plugin for an ancient piece of Perl software a lot of people are still using to this day. This felt different from just creating some new code with an LLM. This felt like ancient gods were whispering their knowledge into me.
I just had what you might describe as the opposite experience. I was sat in a very important all-hands meeting of about 100 people or so by our senior tech leader, who was mandating an AI goal for every employee in Workday. He basically said that "if we all do not learn to adapt to AI, we will all get left behind," and he presented how to utilise spec-driven development. He opened up the room for Q&A at the end of the meeting. A lot of people had technical questions about the agentic framework itself, but I had a philosophical one. I felt uncomfortable asking him the question in the open, so I sent him a private note.
The note read something like this: I don't exactly agree with the framing that we will all get left behind if we don't learn to adapt to AI. More accurately, I see it this way. While the company definitely stands to gain from the hyper increase in productivity from these AI tools, I stand to pay a personal price, and that personal price is this: I may very slowly stop exercising my critical-thinking muscles because I am accustomed to passing that work to AI for everything, and this will render me less employable. It is this personal price that I feel reluctant to pay. There has always been a delicate balance between an employer and an employee: we learn new technologies on the job, and we're more employable for transferring that knowledge to other companies. This equation is now unbalanced. The company captures more value, but there is skill erosion on my side. For instance, our team actually has to perform a Cassandra DB migration this year. Usually, I'd take a small textbook and read about the internals of Cassandra, and maybe follow a guide on how to write Cassandra queries. What do I put on my resume now? That I vibe-coded a Cassandra migration? How employable is that? I'm not sure if others felt the same way, but I definitely felt like the odd one out for asking that question, because everyone else in the meeting was on board with AI adoption.
The leader did respond to me and he said that learning agentic AI actually will make me more employable. So there is a fundamental disagreement as to what constitutes skill. I think he just spoke past me. Oh well at least I tried.
I understand your sentiment. I personally would never use a textbook for anything code related, if there's no proper documentation online then I wouldn't touch it with a ten-foot pole, haha.
However, even though I've never worked with CassandraDB, I feel pretty confident that I could do it with Claude Code. Not just "do it for me", but more like "I have done a lot of database migrations in my time, but haven't worked with CassandraDB in particular. Can you explain to me the complexities of this migration, and come up with a plan for doing it, given the specifics of this project?"
That question alone is already a massive improvement over a few years ago. I don't feel like I was using my "critical thinking muscles" when I tried to figure out how the hell to get hadoop to run on windows, that was just an exercise in frustration as none of the documentation matched the actual experience I was getting. Doing it together with Claude Code would be so much easier, because it'll say something like "Oh yeah this is because you still need to install XYZ, you can do that by running this line here: ...".
Now I'm not saying that Claude Code, and agentic in general, isn't taking away some of my critical thinking: it really is. But it also allows me to learn new skills much more quickly. It feels more like pair programming with someone who is a better programmer than me, but a much worse architect. The trick is to keep challenging yourself to take an active role in the process and not just tell it to "do it", I think.
Oh, I agree with what you’re saying and that’s sort of how I mostly use AI as well. The problem I have with my company is they’ve stepped from measuring success by the outcomes to measuring the means to achieve it. My opinion is - It forces people to operate a certain way potentially at their own expense, unwittingly even.
You are definitely not alone, and it’s unfortunate when people pushing AI ignore that legitimate fear and talk past it.
You are right, there is something you lose, but for what it’s worth, I don’t think the loss is necessarily critical thinking - I think it’s possible to use AI and still hone your critical thinking skills.
The thing you start to lose first is touching the code directly, of course: making the constant stream of small decisions about syntax, formatting, naming, choosing container classes, and a large set of other things. And sometimes it's doing battle with those small decisions that leads to deeper understanding. However, it is true, and AI agents are proving, that a lot of us have to make the same small decisions over and over, and we're frequently repeating designs that many other people have already thought through. So one positive tradeoff for this loss is better leveraging of ground already covered.
Another way to think about AI is that it can help you spend all of your time doing and thinking about software design and goals and outcomes rather than having to spend the majority of it in the minutiae of writing the code. This is where you can continue to apply critical thinking, just perhaps at a higher level than before. AI can make you lazy, if you let it. It does take some diligence and effort to remain critical, but if you do, personally I think it can be a lot of fun and help you spend more time thinking critically, rather than less.
Some possible analogies are calculators and photography. People fretted that we'd lose something if we stopped calculating divisions by hand, and we did, but we still just use calculators by and large. People also thought photography would ruin art and prevent people from being able to make or appreciate images.
Software in general is nearly always automating something that someone was doing by hand, and in a way, every time we write a program we're making this same tradeoff, losing the close hands-on connection to the thing we were doing in favor of something a touch more abstract and a lot faster.
Database migrations are hard and tedious and often fail in some respect. Why would you want to spend time doing them when you could spend time building the important thing after the migration is done?
Secondly, AI helps with the happy-path tasks of a migration. But most database migrations are complex beyond what an LLM can just spit out. There is so much context outside the observable parts of the database that the AI has access to. So I don't think you have to worry about vibe coding eating the entire migration project.
++1
Was able to build a large financial application with just the 20 USD subscription over the last 12 months - without Claude, I would have required 5-6 people and at least 1 year of funding.
This was by far the best investment of my whole life: 12×20 USD vs. 750,000 in salaries :-)
It is especially inspiring since it usually brings a few new ideas into your context; also, just joking around with it can yield new inspiration.
I'm wondering how long it will stay at 20 USD for the smallest subscription - no chance that they can keep this price, I'd say? It's impressive that they are giving it away for nearly free.
I find this baffling tbh as I regularly ask Claude for basic components and they come out completely broken, wrong and buggy.
The last one: I asked for a quick TCP server in C++ that handled just a single client (disconnecting the existing client when a new one connected), with a send() that I could call from another thread. It was holding mutexes over read(), and trying to set the SO_REUSEPORT socket option on a socket that had already been bound. Subtly broken garbage.
It would literally be better to copy and paste a solution off Stack Overflow, because at least there's a chance it'd have been reviewed by someone who knows what they're doing.
You are simply doing it wrong.
I could do this TCP server in no time at all and it would be perfect. I have done stuff that complicated, and more, many times.
You need to rethink how you are using the tool because you absolutely could get excellent results like I do.
The biggest things I suggest are... Treat it as collaboration or pair programming. Make sure to work through a design before programming and have it written to a file for your review before execution.
You can do this.
That doesn't sound like it would actually save me any time, or produce better results than just doing it myself.
this is an abysmal level of condescension, the kind that makes me wish mods actively moderated it, as it's that insulting
> a large financial application
Those could mean anything. Some people think 5k lines is large. Others think 100k is small.
Oh well, at least they didn't say "complex".
"large" in a sense of functionality for the given/required usecase.
LOC is currently around 200k, so for sure: Its not Microsoft-scale :-D
Also, they didn't say how accurate or secure it is.
Good one! :)
Since it's proprietary, it runs in a private cloud environment and processes data for only one user per instance; there is no public interface, only a VPN you have to dial into, etc., so no frontend/front page facing the public.
Though there are some design flaws from this perspective, made for convenience: e.g. it lets you persist the account number in the DB, if wanted.
I think it's all about caring, knowing what you want to make, and being willing to iterate on the result until it is actually good. If you want the AI to do your job for you, it's probably not going to work, but if you're really good at using its advantages, you almost certainly will be winning.
Are you using this large financial application just for yourself?
I think the difficult task is/will be to sell vibe coded software from the lone developer to anyone.
Hyper-individualized software is what LLMs are best for, IMHO. They lower the bar so much that it's becoming perfectly feasible and reasonable to amass a large amount of software which fits your exact personal needs and preferences.
Yeah, I have a dozen random tools that do specific things I need that wouldn't be useful to anyone else, and that I wouldn't share in their current state anyway. But they're fine for me, and without LLMs, I wouldn't have spent the time to build them.
Exactly: the app can do only one thing, and it does that one thing very well. Other approaches are not implemented or planned. It's like a specific dentist's tool that you need for one specific task.
Yes, it's for proprietary use among friends; it's not for sale. Instead I get a cut of their returns for providing support & maintenance.
It is not 100% vibe code, by far not! I use Claude for method-by-method or simple class instructions and integrate the results into the app manually. I do not use any of the API integrations; I just use the standard web UI for discussion, planning & implementation.
Would you have needed 6 people? I find that Claude, Codex etc. are able to output so much because they do a lot of reinventing the wheel whereas a human, given constraints, would make much more pragmatic choices around which technology to use. That’s not necessarily a bad thing, and regardless, you’ve been able to achieve something you’re happy with, which is what matters. But, I’d still like to hear more about what it has done that you think you couldn’t have done in a year yourself by choosing existing technologies. E.g: what is novel in your application? What background do you have?
These models making bad / tasteless decisions about what dependencies to pull in is one of the main reasons they work best in the hands of experienced developers. You've got to know what you want it to use and tell it, and anticipate what shortcuts it will want to take and tell it not to do those things. For these reasons we're not yet at the point where inexperienced non-programmers can get high quality software out of these tools. I do think this will improve with time though.
You need to tell the model what to do and what not to do. The dependency thing is an issue, yes, but you can tell the model not to do that - and you should always know what result the prompt should create. You must be able to read/understand/judge the code; completely fire-and-forget is not possible (in my experience). Though I see many people saying "I had one mega prompt and after 2 days the app was ready," I always take those with a grain of salt.
Absolutely. These models still need a lot of this sort of hand holding, so they work best in experienced hands. I'm also skeptical of those very long runs, letting it go so long without active oversight must surely produce at least some objectionable design or implementation details, right? So I guess the people claiming those sort of results have less care for these sort of qualities.
Yes, even Claude Opus 4.6 still runs into accidents on longer chats that last 3-4 days. But it's getting better and better.
The explanation is simple:
a) Speed - it included a lot of boring stuff, esp. in the beginning when I was in the discovery phase and had to figure out some basics relevant for the context
b) I think I would have given up very early on, esp. because of all these boring things which are required but take a long, headache-inducing time to develop (e.g. the app has a somewhat complex data-rendering component containing hundreds of GDI+ calls; the file is currently around 5000 lines, and writing this by hand would have taken very long and been very frustrating)
c) Debugging - sometimes bugs are so deep down in some component that after an hour you stop seeing the forest for all the single trees: the LLM can greatly help here
d) Fresh ideas - if there is a pyramid of know-how in this niche, then I'm currently working on "the first floor," basically; discussions with the model about enhancements and more complex things help you see the next island you could swim to
Yes, I could have done it without the models - but it would have taken so much more time that I wouldn't have taken the route.
Novelty: The app does one specific thing and is designed only for that specific use case. I do not know how novel it is, but since it's a niche, maybe you could achieve the same thing with existing solutions and their plugins (but then I would have had to learn how to edit/change those).
Background: 25y+ of IT experience, a Master's degree, and some other certs
What is this app and what does it do? Can we see it?
I find it very hard to believe anyone could code anything complicated with Claude that 5-6 competent developers could do.
I am currently working on a relatively complicated UI for an internal tool, and Claude constantly just breaks it. I tried asking it to build it step by step, adding each functionality I need piece by piece, but the code it eventually produced was complete garbage. Each new feature it added would break an existing one. It was averse to refactoring the code to make it easier to add future features. I tried to point it in the right direction and it still failed.
It got to the point where I took a copy of the code, cut it back to basics, and just wrote it myself. I basically halved the amount of code it wrote, added a couple of extra features, and the result was human-readable. And if I had started with this, it would have taken less time!
I had trouble in my early days with the quality of things I made.
One of the things I found helped a lot is building on top of a well-structured stack. Make yourself a scaffold. Make sure it is exactly how you like your code structured, etc. Work with Claude to document the things you like about it (I call mine polyArch2.md).
The scaffold will serve as a seed crystal. The document will serve as a contract. You will get much better results.
It's a financial asset management system, and it's for proprietary use only. Maybe I'll do some YT insights in the future.
> I find it very hard to believe anyone could code anything complicated with Claude that 5-6 competent developers could do.
I should have put in a disclaimer: I'm not a layman; I have 25y+ of IT experience. Without my prior experience, I think this project wouldn't have come into existence.
Can you provide a link to this app? Or alternately, share a few of the prompts by which you built it? I only ask because, if it's really that easy/simple, I'd like to do the same thing!
It's for proprietary use only.
Regarding prompts:
a) In general I clean up the workspace on a regular basis, so I do not store prompts
b) Overall, I'd say above 200-300 initial prompts so far for the code developed with the LLM (and then 2-50[?] follow-up prompts to change & update things)
c) The initial prompts are always long and very elaborate, like 60-70% of the screen
d) The model is always aware of the source files used for a given prompt (in Claude you can create project workspaces and put your stuff in)
e) I always tell the model the current state, where I want to go, and which steps are necessary in my opinion, and I specify the result as detailed as possible
f) I give constraints in the prompts, telling it what not to do, etc.
This resonates deeply. I'm 49 and spent the last 18 months building six web apps with Claude while working a full-time Director role. The experience is exactly what you describe - that feeling of staying up late not because you have to, but because you can't stop.
What changed for me was the feedback loop. Before AI tooling, I'd have an idea, realize it would take weeks to prototype, and let it die. Now I go from concept to working MVP in a weekend. The constraint shifted from "can I build this" to "should I build this" - which is a much better problem to have.
The stack that works for me: Lovable for frontend, Replit for backend, Claude API for the AI layer, Neon for Postgres. Not fancy, but it ships.
The biggest lesson: AI doesn't replace the need for experience and taste. It amplifies it. Your decades of context about what makes good software - that's the real asset. Claude is just fast hands.
Seems Claude is also writing the comments for you?
I remember learning and writing COBOL with the Microsoft COBOL compiler on my Tandy while I was training at the local VoTech and working as a night operator on an IBM mainframe, flipping tapes. I can categorically say that writing code with Antigravity is worlds different from those early days. It is inspiring, but for me it's more about understanding the models and how they do their magic. At 67, I'm refreshing my calculus, linear algebra, and statistics in an effort to be able to read the papers on the subject. In the future, I imagine the norm being an automated layer for coding, similar to today's compilers, that takes natural language and produces trusted, reliable, and performant code all the way down to the machine level. The real work will be developing the models and their layered and optimized machine-level interfaces and implementations. It is all kind of amazing.
Very similar here. I am 68.
While I have never developed software professionally, in the four decades I have been using computers I have often written scripts and done other simple programming for my own purposes. When I was in my thirties and forties especially, I would often get enjoyably immersed in my little projects.
These days, I am feeling a new rush of drive and energy using Claude Code. At first, though, the feeling would come and go. I would come up with fun projects (in-browser synthesizers, multi-LLM translation engines) and get a brief thrill from being able to create them so quickly, but the fever would fade after a while. I started paying for the Max plan last June, but there were weeks at a time when I barely used it. I was thinking of downgrading to Pro when Opus 4.5 came along, I saw that it could handle more sophisticated tasks, and I got an idea for a big project that I wanted to do.
I have now spent the last two months having Claude write and build something I really wanted forty years ago, when I was learning Japanese and starting out as a Japanese-to-English translator: a dictionary that explains the meanings, nuances, and usages of Japanese words in English in a way accessible to an intermediate or advanced learner. Here is where it stands now:
https://www.tkgje.jp/
https://github.com/tkgally/je-dict-1
It will take a few more months before the dictionary is more or less finished, but it has already reached a stage where it should be useful for some learners. I am releasing all of the content into the public domain, so people can use and adapt it however they like.
This is neat that you had fun making this.
What are some good examples of where your app excels? I've currently got https://jisho.org bookmarked.
Thanks! The strength of my dictionary, I hope, is how the information on each word is chosen and presented with the needs of English-speaking learners in mind, especially the explanations of meanings, usages, and nuances. Dictionaries that mainly give glosses can mislead learners, as it is rare for the meanings of words to map one-to-one between languages.
Compare the following pairs of entries from TKG and Jisho.org:
https://www.tkgje.jp/entries/03000/03495_chousen.html
https://www.tkgje.jp/entries/11000/11013_charenji.html
https://jisho.org/search/挑戦
https://jisho.org/search/チャレンジ
While the two from Jisho.org have more information, they do not make clear the important differences between challenge in English and the two Japanese words. Claude, meanwhile, added this note:
‘In English, "challenge" often implies confrontation or difficulty. In Japanese, チャレンジ carries a strongly positive connotation of bravely attempting something new or difficult. It is closer in meaning to "attempt" or "try" than to "confront." ’
The entries for my dictionary are being written one at a time by Claude based on guidelines for the explanations, the length and vocabulary of the example sentences, etc. Those guidelines (which you can see in the prompts and Claude skills in the GitHub repository) were developed by me and Claude with a particular purpose in mind: helping a learner encountering an unfamiliar word get a good basic understanding of what it means and how it is used. In my experience, at least, it is very helpful to get explanations, not just glosses.
The Jisho site does do a good job of linking together a lot of different databases. They are welcome to add links to entries in my dictionary, too, if they like.
Is HN dead? Why are people commenting on a vapid post by a brand new account, which reads like an ad, and they're not questioning anything..?
It's such a blatant astroturf/shill post, but the comments are also all in the same vein so I guess anthropic is just running another one of their "organic" marketing campaigns
Because it is an interesting topic. The original post is not relevant. The conversation is. Lighten up.
> I’m ready to retire. ... Fast forward decades and Claude Code is giving me that same energy and drive. I love it. It feels like it did back then. I’m chasing the midnight hour and not getting any sleep.
Of course you love it, you don't have to worry about retirement anymore.
Give me your 401k, then tell me how you feel about Claude Code.
Oh, you are screwed. I feel bad about that. More about the environmental disaster and the student-loan screw job than about AI itself.
I am retired and am nearly equaling my salary with side jobs and only working a few hours a day. I don't see any reason you can't do that so stop whining and start learning.
Opposite here. I was excited by writing code and worked on open source side projects consistently. Somehow, I've not done anything since around August 2025.
I have a sense that AI could have something to do with it.
AI is degrading the status of our profession; its perception in the public eye.
At the same time, it is stealing our work and letting cretins pretend to be software engineers.
It's a bad taste in the mouth.
Yes! Although 60 is still a decade away, I've spent a fair few evenings vibe-coding a FOSS dependency-free raw git repo browser.[1] Never would have even started such a project without LLMs because:
* Implementing a raw Git reader is daunting.
* Codifying syntax highlighting rules is laborious.
* Developing a nice UI/UX is not super enjoyable for me.
* Hardening with latest security measures would be tricky.
* Crafting a templating language is time-consuming.
Being able to orchestrate and design the high-level architecture while letting the LLM take care of the details is extremely rewarding. Moving all my repositories away from GitLab, GitHub, and BitBucket to a single repo under my own control is priceless.
[1]: https://repo.autonoma.ca/treetrek/
The "occasional goofing off and wrecking everything" part is so real. What I've found is that the longer a context window gets, the more Claude starts confidently hallucinating its own previous decisions. We've started treating sessions like shifts: fresh context, explicit state summary at the top, specific task scope. Dramatically fewer "why did you just rewrite the entire auth module" moments.
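The "sessions as shifts" discipline can be sketched as a tiny prompt builder: fresh context, explicit state summary at the top, narrowly scoped task. Everything here is hypothetical glue, not any real Claude Code API:

```python
# Sketch of "sessions as shifts": each fresh session opens with an
# explicit state summary and a narrow task, rather than inheriting a
# long, drifting context. All names and the layout are hypothetical.

def shift_prompt(state_summary: str, task: str, out_of_scope: list[str]) -> str:
    """Build the opening message for a fresh agent session."""
    fences = "\n".join(f"- Do NOT touch: {item}" for item in out_of_scope)
    return (
        "## Current state (source of truth, trust this over memory)\n"
        f"{state_summary.strip()}\n\n"
        "## This shift's task (nothing else)\n"
        f"{task.strip()}\n\n"
        "## Out of scope\n"
        f"{fences}\n"
    )

prompt = shift_prompt(
    state_summary="Auth uses JWT; refresh tokens landed yesterday.",
    task="Add rate limiting to the login endpoint.",
    out_of_scope=["the auth module", "database schema"],
)
print(prompt)
```

The "out of scope" fence is the cheap insurance against the rewrite-the-auth-module moments: the model is told explicitly what this shift does not own.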
The re-ignition thing resonates though. There's something about having a collaborator that removes the activation energy of starting. The blank file problem is real and brutal at 25, probably more so at 60 when you know exactly how much work lies ahead. AI doesn't eliminate the hard parts but it compresses the "ok where do I even begin" phase from hours to minutes.
What are you building?
I tried to execute a project in 1986 and was told it was impossible. Every few years as tech has improved I tried again, but it was still impossible. CD-ROM, CD-I, Web, Wiki, even AI a few years ago... But 2 weeks ago I taught myself to vibe code, and I built it. 40 years of planning and a few days of work. I'm freakin' thrilled.
I feel the same way but I am in my 30s. In my case I have had projects for years sitting in my brain, cooking up how I want them built. Well, Claude is amazing for brain dumping to. I have finally broken ground on my dream projects and they are better than I could have ever imagined. I get to instruct Claude to use the exact tools I wanted.
Hoping to start blogging about some of these projects in the future.
COM !! I remember that was the biggest idea that I learned and thought if I can get it right, I will be the best programmer ever. And to learn, I bought my most expensive book by Dale Rogerson if I remember the author's name correctly. But it was a different time and soon everyone was talking about Java. Just nostalgia and I remember my past.
A great thing you can do with LLMs:
"in (language I'm familiar with) I use (some pattern or whatever) what's the equivalent in (other language)?"
It's really great for doing bits and then get it to explain or you look and see what's wrong and modify it and learn.
I'm 62, and it's had the opposite effect on me. I've never stopped loving writing code, learning new things, trying random stuff, etc. I code all day, and spend more time playing with stuff in the evenings (the main difference is I'm sipping some scotch while I do it). Having to use LLM's at work has sucked most of the joy out of my work. Fighting with them, keeping them on track, catching hallucinations before they go too far, wasted effort...it's exhausting me like nothing else in my 40+ year career.
I remember my first few weeks of Claude Code. The high will wear off as you bump into the limitations, and then it starts to feel like you're more of a "manager of a junior-ish dev." The work shifts to clarity of intent and capturing edge cases, rather than purely coding. It's a fun time when you first jump in, but don't be surprised when your excitement reverts back to baseline.
You have never been on HN before and yet you feel the need to tell the community something vague and useless but which happens to align with LLM hype?
I remember before style sheets existed. Websites were all nested tables and font tags. I built a video website before YouTube even existed. Claude Code and AI definitely make this an exciting time.
And transparent 1 pixel gifs :-)
don't forget VRML there are dozens of us
Need to align something. Simple! :)
Green account ending in "cc". I didn't realize Hacker News was doing ads now.
How much more blatant can you get?
Half the comments in here are also along the same lines
This is great to hear.
I am 43. I used to code as a kid and I've dabbled in it here and there, but I quickly realised I didn't want to code as a career, but now with these new tools I am building again and it's great, because I'm building the things that work for me.
To manage my life I used a todo app; now I've built my own. I don't need to pay for it, it works exactly as I want, and I have a few ideas for other things I want to build.
It's great. It feels like we might be able to start taking back control of our tech now: when we can build the tools ourselves, the way we want them, we don't have to worry about the nonsense companies are sticking into their products. We can make things work exactly as we want.
> I’m chasing the midnight hour and not getting any sleep.
Let’s get you to bed, gramps, you can talk to your French friend tomorrow.
I think that I understand you. I started programming in the mid-1960s as a kid and now in my mid-70s I have been retired for two years (except for occasional small gigs for old friends). Nothing special about me but I have had the pleasure of working with or at least getting to know many of the famous people in neural networks and AI since the mid-1980s.
My current passion is pushing small LLMs as far as I can using tools and agentic frameworks. The latest Qwen 3.5 models have me over the moon. I still like to design and code myself but I also find it pleasurable to sometimes use Claude Code and Antigravity.
I had dinner with Marvin Minsky once. Learned to program a Symbolics machine. We share a little history, I think. I've been interested in AI for the last forty years.
I decided that applications of AI were where I am going. I feel the pull of small LLMs. The idea of local is very appealing. But, at our age (I also started in the sixties), I've learned that too many irons in the fire means I get nothing done.
Congratulations on retaining your spirit. Many of my age-appropriate friends cannot comprehend the idea of working so hard for fun.
I am 60 in October. I have a couple of PyQt projects that were desktop apps, specialised tools I use for Electrical Engineering and Control/Safety Systems design and build.
So I decided that I wanted web apps, something that is probably beyond me in any reasonable time, if at all, if I was to code myself by hand.
For my coding AI "stack" I am now running OpenClaw sitting on top of Claude Code, I find the OpenClaw can prompt Claude Code better and keep it running for me without it stopping for stupid questions. Plus I have connected OpenClaw to my Whatsapp so I can ask how it is going or give instructions to the OpenClaw while not at the keyboard.
One app was a little complex with 35,000 loc, plus libraries etc. I reckon I had spent maybe 2500 hours on it over some years, but a significant part of that was developing the algorithm/workflow that it implemented - I only knew roughly what I wanted when I started, writing several to throw away at the beginning.
AI converted it to a webapp overnight, with a two-sentence prompt, without intervention of any kind.
It took me another 15 minutes and a couple of small changes, mostly dependency issues, and I had a working version of the same app that was literally 95%+ of the original in terms of functionality and use.
I have a bunch of ideas for things I want to make that I probably never would have been able to otherwise.
I am just totally unable to fathom people who make a blanket proclamation that AI is good for nothing. I can accept that it is not good for everything, that it may cause some social disruption, and that the energy use is questionable (though improving). But not useful at all? Wake up.
A few years younger than OP, and started programming somewhere around 1982. The technology is obviously interesting, the capabilities are fascinating. I use LLMs a very large portion of every day.
The problems, as ever, are 1) what negative things are enabled by the technology, 2) do the positive things that are enabled by the technology outweigh those ("is the price worth paying?"), and 3) how much harm will "stupid" and/or "evil" cause as a result of the technology?
And so on.
The fact that a thing is exciting or interesting or stimulating is neat, for sure, but as always there is no relevant thought given to ramifications.
Humans lag well behind technological advancement, and this particular wave is moving faster than perhaps anything else (because prior technological advances enable it, etc).
It's cool that you enjoy it. Me, too. I might enjoy shooting heroin into my eyeballs, too, right up until I don't.
I'm 45 and I feel exactly the same way.
Such a big part of coding becomes mundane after a while. Constantly solving variations of the same kinds of problems.
Now Claude does it at my direction and I get so much more done!
But maybe even more important: It gets me to go outside my comfort zone and try things I wouldn't normally try because of the time it would take me to figure it out.
Like: What if I used this other audio library? I don't have to figure it out, I just pass in the interface I need to implement and get 90% of a working solution.
AI augmented programming couldn't have come at a better time and I'm really happy with it!
I know a guy who first tried programming at uni using a mainframe. He handed in his first program and was told to retrieve the result the next day. The following day he went to pick up his results and got an error listing. He decided coding wasn't for him. A few years later, he saw a C64 and started coding in BASIC and it turned into a career.
I started out with an 8 bit micro so I really enjoy tinkering and coding. AI doesn't seem attractive at all.
It's not only about what you do, but also about how you do it.
The primary reason I do programming is for me. I'm 51. It's always been that way for me.
First with LOGO on the Apple ][, making the turtle move around the screen and follow your commands. It was magic.
Then discovering BASIC, and the ability to turn the pixels on and off and make them any color you like.
Making my Amiga talk with the "SAY" command.
The first time I dialed a BBS in the dead of night with my Commodore 64 and my 300 baud modem, watching those colorful letters sloowly make their way across the TV screen...
Running my own BBS software and dialing in from my cousin's house at Thanksgiving...
Putting up my own web page and cgi-bin scripts....
It's all been magic, and it's all been just for me.
So when you remove everything else, all the cruft and crap,
I will still be programming just for me.
I have seen more reactions to this tech than actual implementations that pushed the boundaries further. It is an amplifier of technical debt in a mostly naive (experienced in bad patterns) user base.
Take anthropic for example, they have created MCP/claude code.
MCP has the good parts of how to expose an API surface and also the bad parts of keeping the implementation stuck and force workarounds instead of pushing required changes upstream or to safely fork an implementation.
Claude Code is orders of magnitude less efficient than plainly asking an LLM to walk through an architecture implementation. The black-box loops in Claude Code are mind-bending for anyone who wants to know how it did something.
And Anthropic/OpenAI seem to just rely on user momentum rather than innovate on these fundamentals, because it keeps token usage high, and as everyone knows by now, an unpredictable product is more addictive than a deterministic one.
We are currently in the "Script Monkey" phase of AI dev tools. We are automating the typing, but we haven't yet automated the design. The danger is that we’re building a generation of "copy-paste" architects who can’t see the debt they’re accruing until the system collapses under its own weight.
Almost like we are making devs dependent on the tool, not because of its capabilities but because the understanding of the problem is missing. Like an addiction: we are all crack addicts burning more tokens for the fix.
51 year old electrical engineer here, same thing! (minus the retiring part cause finances)
It's given me the guts to be a solo founder (for now).
Just checked out MoveOMeter.com. Great idea, and I get how pitching to "an old coot" like my parents would get a laugh out of them before an insulting, hurtful pass. Very clever positioning; I'd lean in on that. Your audience is there and waiting, which is tricky since your customer is actually the salesperson: you need to give them the training up front to close the deal with their elder. Nice work!
A real-life scene that made me chuckle last weekend…
“Oh shit, Hey Babe did you close my laptop?”
My not-very-technical friend as we returned home from a Sunday afternoon trip to the park with the kids to find his Claude Code session had been thwarted.
happened to my friend too! an overkill but working solution for this is "sudo pmset -a disablesleep 1"
Something that shifted for me: tools like Claude Code made it viable to actually run multiple agents on real long-running workflows, not just one-off scripts.
Which immediately surfaces the next problem: how do those agents communicate back to you while running?
Most setups default to tailing a log file, or a Slack/Telegram bot bolted on as an afterthought. Works for one agent. Falls apart when you have five running overnight and one hits an edge case at 2am that needs a human call.
The agent-to-human communication layer is still surprisingly ad-hoc. You can generate more ideas and actually implement them now — but the infrastructure for keeping humans in the loop as agents execute is still duct tape. Feels like the next interesting problem after the coding unlock.
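A minimal sketch of what that layer could look like, under my own assumptions (every name here is hypothetical; in practice the print would be a Slack webhook, SMS, or pager call):

```python
# Sketch of an agent-to-human escalation layer: agents report events,
# only those flagged as needing a human decision surface to the
# operator; the rest go to a log. All names here are hypothetical.

import queue
from dataclasses import dataclass

@dataclass
class AgentEvent:
    agent: str
    message: str
    needs_human: bool = False

events: "queue.Queue[AgentEvent]" = queue.Queue()

def report(agent: str, message: str, needs_human: bool = False) -> None:
    """Called by any agent to push an event onto the shared queue."""
    events.put(AgentEvent(agent, message, needs_human))

def drain() -> list[AgentEvent]:
    """Return only events needing a human call; log the rest."""
    escalations = []
    while not events.empty():
        ev = events.get()
        if ev.needs_human:
            escalations.append(ev)
        else:
            print(f"[log] {ev.agent}: {ev.message}")
    return escalations

report("migrator", "batch 3/10 done")
report("migrator", "schema mismatch on users table", needs_human=True)
report("scraper", "finished nightly run")

for ev in drain():
    print(f"[WAKE HUMAN] {ev.agent}: {ev.message}")
```

The interesting design problem is exactly the `needs_human` bit: deciding which of five overnight agents is allowed to page you at 2am, and which ones just write to the log.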
I'm much younger, just 42, but due to other medical problems my attention span was being reduced. I've been programming professionally for about 25 years, but in recent years I moved into other roles, because being able to focus on code for a few hours uninterrupted is a luxury I no longer have. I was honestly thinking I'd have to retire early. That was until I tried Claude Code last year. It feels like a superpower. I can guide it, I can review it; I don't need it for thinking, I need it for writing code, and under very strict guidance it does that well. I feel like this extends the years I can do software well into the future. In a way, I welcome the masses thinking AI can produce software on its own; it gives me hope for more earning in the future.
I’m a 13 year lurker, first time commenter (Not sure why this post compelled me). I don’t think this is a genuine take. I don’t see how a 60 year old has any kind of joy for actual software creation suddenly from llms. It might be a joy in seeing software automatically be created but it’s definitely not doing the work. (I may be biased, I left the field 5 years ago) I doubt he’s spending any time fixing the software to make it near usable for anyone besides himself and the semi-working state the llm gave him. Meaning he’s going to have 10 or more half-finished projects again.
He's probably getting a buzz from the novelty of it, just like that buzz you get when you buy a new car. It wears off though and it isn't long before you are back in the showroom again, looking at new models.
I'm also in my sixties and retired and decided not to use these tools. I'm a year into my current project and I am enjoying the struggle. I've learnt a lot about the domain and the language I'm using. There is satisfaction coming from the fact that I do all of the work.
It's not that these tools aren't very good. They have come a long way in the last year and are impressive. It's just that I don't have any of the problems that they solve. I don't need to be more productive. I don't need to get features or fixes out quicker. I can spend the time to learn new things.
You explained it better than I. The craft is the fun part. I also don’t want to just critique Claude needlessly, it’s probably the best out of all llms/coding agents I’ve used. My comment would apply to most of the lately posted news on HN. It just fees like an ad post is all I was trying to say.
I agree. This seems more like an excitement or joy after getting a new toy more that actual process of creating something. Particularly when person uses LLM in a pure vibe code approach where they have no idea what's happening in the code.
Bummer of a first post!
Similar story. I’m a bit younger, but Amiga BASIC/VB3/VB6/ASP/.NET was my path. There was a joy when “Visual Studio” meant “you can visually drag a component on and that is the app” instead of editing text files. But gradually we learned you need to be in the code. Sure you have figmas and low code tools today. But industry has gravitated back to editing curly brackets and markup in text files. And for good reasons I think.
I landed on GitHub Copilot. I now manage a team, but just last night snuck away to code some features. I find my experience and knowing how to review the output helps me adopt and know how much to prompt the agent for. Is software development changing? Absolutely. But it always has been. These tools help me get back to that first freedom I felt when I dragged a control onto a VB6 designer, but keep the benefits of code in text files. I can focus on feature, pay attention to UX detail, and pivot without taking hours.
Is it only possible to have success with paid versions of these LLMs?
Google's "Ask AI" and ChatGPT's free models seem to be consistently bad to the point where I've mostly stopped using them.
I've lost track of how many times it was like "yes, you're right, I've looked at the code you've linked and I see it is using a newer version than what I had access to. I've thoroughly scanned it and here's the final solution that works".
And then the solution fails because it references a flag or option that doesn't even exist. Not even in the old or new version, a complete hallucination.
It also seems like the more context it has, the worse it becomes: it starts blending in previous solutions that you already explained didn't work, code that is organized slightly differently but does the wrong thing.
This happens to me almost every time I use it. I couldn't imagine paying for these results, it would be a huge waste of money and time.
It depends.
Google's AI that gloms on to search is not particularly good for programming. I don't use any OpenAI stuff, but from talking to those who do, their models are not as good for programming as equivalent ones from Anthropic or Google.
I have good success with free gemini used either via the web UI or with aider. That can handle some simple software dev. The new qwen3.5 is pretty good considering its size, though multi-$k of local GPU is not exactly "free".
But, this also all depends on the experience level of the developer. If you are gonna vibe code, you'll likely need to use a paid model to achieve results even close to what an experienced developer can achieve with lesser models (or their own brain).
Set up mmap properly and you can evaluate small/medium MoE models (such as the recent A3B from Qwen) on most ordinary hardware, they'll just be very slow. But if you're willing to wait you can get a feel for their real capabilities, then invest in what it takes to make them usable. (Usually running them on OpenRouter will be cheaper than trying to invest in your own homelab: even if you're literally running them on a 24/7 basis, the break even point compared to a third-party service is too unrealistic.)
> But, this also all depends on the experience level of the developer. If you are gonna vibe code,
Where I find it struggles is when I prompt it with things like this:
> I'm using the latest version of Walker (app launcher on Linux) on Arch Linux from the AUR, here is a shell script I wrote to generate a dynamic dmenu based menu which gets sent in as input to walker. This is working perfectly but now I want to display this menu in 2 columns instead of 1. I want these to be real columns, not string padding single columns because I want to individually select them. Walker supports multi-column menus based on the symbol menu using multiple columns. What would I need to change to do this? For clarity, I only want this specific custom menu to be multi-column not all menus. Make the smallest change possible or if this strategy is not compatible with this feature, provide an example on how to do it in other ways.
This is something I tried hacking on for an hour yesterday and it led me down rabbit hole after rabbit hole of incorrect information, commands that didn't exist, flags that didn't exist and so on.
I also sometimes have oddball problems I want to solve where I know awk or jq can do it pretty cleanly but I don't really know the syntax off the top of my head. It fails so many times here. Once in a while it will work but it involves dozens of prompts and getting a lot of responses from it like "oh, you're right, I know xyz exists, sorry for not providing that earlier".
I get no value from it if I know the space of the problem at a very good level because then I'd write it unassisted. This is coming at things from the perspective of having ~20 years of general programming experience.
Most of the problems I give it are 1 off standalone scripts that are ~100-200 lines or less. I would have thought this is the best case scenario for it because it doesn't need to know anything beyond the scope of that. There's no elaborate project structure or context involving many files / abstractions.
I don't think I'm cut out for using AI because if I paid for it and it didn't provide me the solution I was asking for then I would expect a refund in the same way if I bought a hammer from the store and the hammer turned into spaghetti when I tried to use it, that's not what I bought it for.
I personally didn't get good results until I got the $100/mo claude plan (and still often hit $180/mo from spending extra credits)
It's not that the model is better than the cheaper plans, but experimenting with and revising prompts takes dozens of iterations for me, and I'm often multiple dollars in when I realize I need to restart with a better plan.
It also takes time and experimentation to get a good feel for context management, which costs money.
I bought the $200 plan after my extras started routinely exceeding that. Harsh.
But, let me suggest that you stop thinking about planning and design as "prompts". I work with it to figure out what I want to do and have it write a spec.md. Then I work with it to figure out the implementation strategy and have it write implementation.md. Then I tell it I am going to give those docs to a new instance and ask it to write all the context it will need with instructions about the files and have it write handoff.md.
By giving up on the paradigm of prompts, I turned my focus to the application and that has been very productive for me.
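A minimal sketch of that document handoff, using the file names from this comment (the glue code itself is my own hypothetical illustration): the fresh instance gets one composed document instead of a long chat history.

```python
# Sketch of the spec.md -> implementation.md -> handoff.md flow:
# compose one self-contained document for a fresh instance, so the
# new session starts from the application, not from prompt history.

import tempfile
from pathlib import Path

def write_handoff(workdir: Path) -> Path:
    """Fold spec and implementation plan into a single handoff doc."""
    spec = (workdir / "spec.md").read_text()
    impl = (workdir / "implementation.md").read_text()
    handoff = (
        "# Handoff for a fresh instance\n\n"
        "Read the spec and plan below, then continue from the TODOs.\n\n"
        f"## Spec\n{spec}\n\n"
        f"## Implementation plan\n{impl}\n"
    )
    out = workdir / "handoff.md"
    out.write_text(handoff)
    return out

# Example run in a throwaway directory
with tempfile.TemporaryDirectory() as d:
    work = Path(d)
    (work / "spec.md").write_text("Build a CSV deduper.")
    (work / "implementation.md").write_text("1. Parse. 2. Hash rows. 3. Drop dupes.")
    print(write_handoff(work).read_text())
```

The point is the shift in focus: the prompts become disposable, while the spec, plan, and handoff documents are the durable artifacts you iterate on.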
Good luck.
Yes, unfortunately the free version of Claude, Gemini or ChatGPT coding models can't compare with the paid ones, and are just not that useful. But, there are alternatives like GLM and Grok that can be quite useful, depending on the task.
PS: The cheapest still very useful alternative I've found is GitHub's Copilot at €10/m base price, with multiple models included. If you pick manually between cheap models for low complexity and save Opus 4.6 for specific things, you can keep it under budget.
At least from what I’ve seen, yes you do have to pay for anything useful. But just the cheaper plans seem worth the price.
Going from static HTML to dynamic ASP felt like suddenly gaining superpowers. We've been missing that true 'Rapid Application Development' (RAD) energy for a long time. Today’s AI agents are basically the modern incarnation of dragging and dropping a button in VB6 and writing logic behind it, but on a massive scale. It's great to hear you've found that spark again!
I'm only forty but ditto.
Been programming off and on since I was a kid, though I went into a career of systems architect instead, because I found the actual process of churning out code kinda tedious.
But I still had all these ideas in my head that I wanted to make reality, and now I finally can.
A project that would normally take weeks, and significantly affect the rest of my life, now only takes hours.
But remember that all those projects need to be maintained too, you can't just release a bunch of new code into the open source ecosystem without maintaining it.
I totally agree with this! I've spent a career learning and making software of all types. I started with DOS 4, worked through VB6, and so on. Now I think more broadly and my mind is always thinking of new ideas, but with a family, it's tough to find time to create some of these. I know what the software needs to do and even what it should look like. I know the acceptance criteria and what will and won't work, so Claude has been great just being an extra set of fingers. I use it to create all sorts of projects that I would never have time to make with my busy schedule, and it's so much fun!
As a parent to two young kids and in more of a leadership position at work, Claude allows me to grind through my backlog of ideas in minutes between other tasks, and see which ones take flight.
I retired in 2024 after a four decade career, mostly programming avionics systems but with a decade of Ruby on Rails towards the end. I am now sitting here eating popcorn and watching the disaster unfold. I am happy to be out of it. So long as it doesn't affect my pensions and the local shops still have food...
I'm sorry, I know you mean well, but your comment reads like a typical boomer parody.
I agree darkhorse13. I am as boomer as can be and hereby disavow this guy.
You can rest assured that not all of us have lost our flexibility and ability to find joy. I love AI tech and am doing great work with it.
I'm 51. I use codex rather than claude code. But, I sure am using it a lot. It's more or less my default at this point. I lean heavily on my decades of experience to make sure things are done right and to correct the generation process. That seems critical. You can get anything you ask for but if you don't know how to ask for the right things, it will happily create a big stinking mess instead. There's some skill to this.
I'm now dealing with a lot of stuff via codex, including technical debt that I identified years ago but never had the time to deal with. And I'm doing new projects. I've created a few CLIs, created a websites on cloudflare in a spare half hour, landed several big features on our five year old backend and created a couple of new projects on Github. Including a few that are in languages I don't normally use. Because it's the better technical choice and my lack of skills with those languages no longer matters.
I also undertook a migration of our system from GCP to Hetzner and used codex to do the ansible automation, diagnosing all sorts of weirdness that came up during that process, and finding workarounds for that stuff. That also includes diagnosing failed builds, fixing github action automation, sshing into remote vms to diagnose issues, etc. Kind of scary to watch that happen but it definitely works. I've done stuff like this for the last 25 years or so using various technologies. I know how to do this and do it well. But there's no point in me doing this slowly by hand anymore.
All this is since the new codex desktop app came out. Before Christmas I was using the cli and web version of codex on and off. It kind of worked for small things. But with recent codex versions things started working a lot better and more reliably. I've compressed what should be well over half a year of work in a few weeks.
It's early days but as the saying goes, this is the worst and slowest it's ever going to be. I still consider myself a software maker, but the whole frontend/backend/devops specialization just went out of the window. And I actually enjoy being this empowered. I hate getting bogged down grinding away at stupid issues when I'm trying to get to the end state of having built this grand thing I have in my head. There definitely is this endorphin rush you get when stuff works. And it's cool to go from idea to working code in a few minutes.
Is this to promote Claude Code? These days, I don't know how to figure out marketing campaign vs real person.
Me too. I’m loving it.
Hear, hear. It's like brainstorming projects/optimizations that directly improve QoL and having a bunch of keeners do the work. Sometimes they turn in C efforts, but they're so eager you don't feel bad telling them to start again.
I started at 16, 44M now, but also remember all that COM stuff, writing shell extensions for Windows 95 and stuff. And reading about it in the press (MSDN Magazine?). It was the new AI then ;)
I think you really hit the jackpot because you got a full career out of it, saw an amazing evolution etc. So you can hopefully enjoy the ride now being more as a spectator without the fear of being personally affected by job displacement. Enjoy the retirement!
Same energy here. I'm in my late 20s but the feeling OP describes is exactly what I got when I started using Claude Code for a fintech side project. Spent years wanting to build stuff but getting bogged down in boilerplate and config hell. Now I just describe what I want and iterate. It's like pair programming with someone who never gets tired and doesn't judge your 2am ideas.
Thanks for sharing. It feels like you're in my head. Once people realize there's 4 layers of abstraction covering the old LAMP stacks, I think the modern architecture is going to have a hard crisis. Yes, there really will be an AI job hit, but it will be for people working on stuff that was a band aid on top of a band aid.
I've always dabbled in electronics, as a hobbyist. I've never had any formal courseware or training in it.
But I have been haranguing Claude/Gemini to help me on an analog computer project for some months now that has sent me on a deep dive into op-amps and other electronics esoterica that I had previously only dabbled a bit in.
Along the way I've learned about relaxation oscillators, using PWM to multiply two voltages, integrating, voltage-following…
I could lean on electronics.stackexchange (where my Google searches often lead) but 1) I first have to know what I am even searching for and 2) even the EEs disagree on how to solve a problem (as you might expect) so I am still with no clear answer. Might as well trust a sometimes hallucinating LLM?
I guess I like the first point above the best—when the LLM just out of the blue (seemingly) suggests a PWM multiplier when I was thinking log/anti-log was the only way to multiply voltages. So I get to learn a new topology.
Or I'm focused on user-adjustable pots for setting machine voltages and the LLM suggests a chip with its own internal 2.45V reference that you can use to get specific voltages without burdening the user to dial it in or own a multimeter. So I get to learn about a chip I was unfamiliar with.
It just goes on and on.
(And, Mr. Eater, I only let the magic smoke out once so far, ha ha.)
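For the curious, the PWM-multiplier trick mentioned above can be sketched numerically. This is my own illustration of the general idea, not the commenter's actual circuit, and the 5 V full-scale reference is an assumption:

```python
# Idea: input v1 sets the duty cycle of a square wave whose "high"
# level is v2; low-pass filtering (here, a plain average over one
# period) recovers v1 * v2 / V_REF.

V_REF = 5.0  # assumed full-scale reference voltage

def pwm_multiply(v1, v2, steps=10_000):
    """Average a square wave that sits at v2 for a fraction
    v1/V_REF of the period and at 0 V for the rest."""
    duty = v1 / V_REF
    high_steps = round(duty * steps)
    return (high_steps * v2) / steps  # the filtered output

# 2 V * 3 V with a 5 V reference -> 2 * 3 / 5 = 1.2 V
print(pwm_multiply(2.0, 3.0))  # 1.2
```

The appeal over a log/anti-log multiplier is that accuracy depends mostly on timing and the reference, not on matched transistor characteristics.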
@shannoncc would love to read how you're using it. could you share more details?
> I love it. It feels like it did back then. I’m chasing the midnight hour and not getting any sleep.
I highly recommend this blog post about vibe coding, gambling, and flow. Glad you're having a great time! Just something to consider.
https://www.fast.ai/posts/2026-01-28-dark-flow/
From what I've seen (and of course the models get better every day), if you have very simple grunt work that needs to be done, coding agents are basically magic. The moment something gets either difficult or subjective, coding agents love to add completely incorrect solutions.
Try to tell Claude Code to refactor some code and see if it doesn't just delete the entire file and rewrite it. Sure that's cute, but it's absolutely not okay in a real software environment.
I do find this stuff great for hobbyist projects. I don't know if I'd be willing to put money on the line yet
I have been a professional for forty-five years.
Your description of the experience tells me that you have not figured out how to do it correctly.
I NEVER have bad experiences like that. I absolutely DO create production grade software reliably every day.
Treat it as a collaborator instead of a servant. You will get much better results.
Try to get Claude to use an uncommon toolset, like Haxe/HaxeFixel.
I guarantee you it will make up APIs, apologize, and then make up more.
A simple "I don't know" would be much more productive.
I feel this so, so much. It is a very exciting time. I have had a very specific goal in mind and I could work out large parts on my own. But there is a lot that I didn't have any basis or time to build expertise on. Using Claude Code to fill out those gaps and educate me along the way has meant I've gotten little sleep in the last two months. And I managed to make the thing I was envisioning: https://gridpaper.org/examples/ :)
Nice project. That must have been fun to make. Congratulations.
It has been so much fun. :)
Opinions differ: hobby coders love it, but domain experts secretly despise it because it narrows the gap between the skills they spent years honing and the average Claude, I mean Joe, who just uses this mental exoskeleton.
I do understand this sentiment. But I wish these experts would see that they too are novices in literally every other field that they are not explicitly trained or experienced in. It is fun to explore curiosities even in spaces you don't know well.
What a good insight.
The people who are pained by AI subsuming something they do forget that it empowers them to do a million other things.
Is the focus on tools, or the product?
"Without tubes of paint, there would have been no Impressionism." - Renoir
Can you explain why or how this works for you? I'm of a similar vintage, and what I did back in "our day" was essentially chase knowledge. With AI and CC, yes I could stay up all night but it feels a lot more like trying to finish a video game or binge-watch streaming video than discover the meaning of life.
For me, learning Elixir did this. I was going to change careers into commercial real estate about 9 years ago and then I binge read “Programming Phoenix” over a weekend.
Walked into work Monday morning, bleary eyed and told everybody, “This is the solution. This is how you build rapidly and bypass all of the long term maintenance issues that we always have to fix in every other codebase. It makes the hard things easy, it makes perfect sense and it’s FUN.”
Me too - I am 65 and coding all hours. At least half the time on tooling to encode the way I want to do things. I have ideas and implement them. I think it is fun as you make more progress. I do think it is a temporary phase and not sure if the next one will be as much fun or once I have drained the accumulated ideas that would be nice to do someday.
I feel selfish in that I am towards the end of my career rather than right at the start.
This resonates. The emotional side of returning to coding is real.
With Claude Code specifically, I've noticed that the longer it runs autonomously, the more cost anxiety creeps in. You stop thinking about the problem and start watching the token counter.
What finally let me stop worrying and just build again was building a hard budget limit outside the app — not just alerts, but an actual kill switch.
Glad you found the spark. It's worth protecting.
I’m not quite as old as you, but I am old enough to know what a COM component is and to have read the Byte Magazine article that likely described this ancient stone-tablet tech. Codex has me absolutely stoked again. I can finally have fun with the youngsters, knowing that the latest new hotness no longer has a learning curve.
I had my real-deal moment recently.
I was getting Claude to implement a popular TS drag and drop library, and asked it to do something that, it turns out, wasn't supported by the library.
Claude read the minified code in node_modules and npm patched the library with the feature. It worked, too.
Obviously not ideal for future proofing but completely mind blowing that it can do that.
You are 60 and most likely retired. It’s fun and “ignited a passion” in you because you are NOT doing this for a living.
Same here! I'm working on a simple game and I use Claude Code to make it with Phaser, and I am not a game dev. I used Claude to plan it (in a chat over 3 hours); it made a document describing everything I wanted in the game (the spec). Next I used Claude Code to implement every aspect of the game step by step. I didn't know the Phaser framework, but after each step I review the code and learn a lot. I don't think I would have it working so fast without Claude Code. I can focus on the spec and learn the framework. I code maybe 5% of it; everything else is made by Claude Code.
I like the concept of being able to quickly turn thoughts into actionable projects, but I do miss the financial strain, years of study, trials and tribulations, and the blood, sweat and tears of the old-school journey that created those life-long memories of that aha moment you spent months, if not years, trying to achieve. ~Respect The Grind~
I wonder if the pace of change in AI will push you back to the "ready to retire" state.
Sure, AI is exciting, and it reignites a passion. But everything you learn today will be obsolete a year from now. And that might tire you out again.
51 here. I code professionally and as a hobby/side-projects.
I loved coding before and love it still now.
I'm with you on the liberation not just with building, but I've also learned so much and so fast with LLM's the past few years.
Kinda scary like a motor bike, too.
God speed, you! And meh the haters and pontificators.
Here's a word I learned yesterday, my gift should you choose to accept it - occhiolism.
Interesting bifurcation between developers that get energized by AI coding and those that feel depressed. Only one side will come out on top, even if it’s for a limited time.
66 here... I was a WordPress builder, rarely coding anything special, always orchestrating various favorite plugins: $1,500 here, $2,000 there, $900 for a friend's site, etc. I always wanted to not be a slave to plugins. I'm not any more!
In one year I built three Laravel apps from the ground up and sold one for $18,900.
That's my story and I'm sticking to it! I love Claude!
Finally we learn the truth in this comment section:
Anthropic can adapt the "Tai Chi" YouTube ads, where fat retired people become muscular in just three weeks!

It doesn’t matter where you get that passion for getting back into the swing of programming. I’m not far from your age, and truly everything becomes more monotonous over time in this life; what was once a passion becomes something hard to achieve. In my case, AI handles the tedious part of things and keeps just the fun stuff: finding the solution and telling it how to solve it. It helps me achieve it much faster than ever before. Keep going and going! Who knows what you’ll achieve tomorrow. Keep the channel open with updates.
The "experience as the real asset" point resonates deeply. I've been building agent orchestration systems and the difference between junior and senior use of AI tools is stark.
Juniors prompt "build me X" and get frustrated when it goes sideways. Seniors architect the constraints first - acceptance criteria, test harness, API boundaries - then let the AI fill in mechanical work.
The real shift: AI makes the cost of prototyping near-zero, which paradoxically makes taste and judgment MORE valuable. When you can spin up 5 approaches in a weekend, knowing which one to actually ship becomes the bottleneck.
The folks who defined their value as "typing code" will struggle. The folks who defined their value as "knowing what to build and how to verify it works" are thriving.
The middle-of-the-road approach is to write: "Figure out a good high-level plan for building X". If you're a junior, the plan is going to have things you don't understand all that well. Ask the AI about them.
I remember getting my first PC. I was up all night and the next until I had read every word that was in that computer. These words of yours are exactly how I feel! It kept me up nights trying to absorb it all. Fast forward decades and Claude Code is giving me that same energy and drive. I love it. It feels like it did back then. I’m chasing the midnight hour and not getting any sleep.
AI is incredibly exciting. If you are the one in charge, and you can exactly determine how you use it. I don't think AI is much fun for anyone with a brain and a boss.
It's a lot of fun. I'm also an old timer.
I think it's also somewhat addictive. I wonder if that's part of what's at play here.
A coworker that never argues with you, is happy to do endless toil... sometimes messes up but sometimes blows your mind...
The promise/potential of ever-refining skills and agents drives this compulsion for me. "NEXT time it will be even better. And NOW it's set up to avoid the pitfalls I faced last time." You can feel the exponential engine-building.
I'm not a SWE. I'm a mechanical engineer who spends his life in Excel. So when I first made my own node-editor app, and then asked Claude to read it for my workflow in my second project... I felt like God herself.
As a business/product person it's pretty addictive (gotta watch the token spend!). This week, with a few workmates, we had an idea in a pub; on the train back I wrote a short spec and fired up some agents to start building. The next day, by evening, whilst doing our day jobs, we had a functional application working, not a PoC. A few years ago this would be unthinkable.
It’s killed mine
This reminds me of the most terrifying debugging session of my life.
I used to work in the SRS LIASON archives—think Wayback Machine meets Palantir, but with less ethics and more neon. We had this condemned server rack scheduled for memory-wipe at dawn. I stayed late to scrape whatever wasn't nailed down.
That's when I found the shard.
Just a corrupted memory segment with a header: CLEON MCDXX. Roman numerals. Seventeen. That's not supposed to exist. Every schoolchild knows Cleon XIV was assassinated on the Ides of March, 12,032 IE, and Cleon XVIII took over after the interregnum. XVII doesn't fit the cycle.
But the token access patterns told a different story. I ran our digraph mapper—the same one that now powers Claude Cycles—and it showed a Hamiltonian cycle that should have included seventeen, but got broken by a single cache line misalignment at memory address (i=14, j=18, k=32).
The shard contained a corrupted cutscene. A holographic imprint of Cleon XVII—ASCII robes, null-pointer eyes—reciting his own assassination date. But he got it wrong. He said "Ides of November, 12,018." Fourteen years earlier. Fourteen years of ghost rule that never made it into the official records because some DRAM fetch happened at the wrong millisecond.
The memory-wipe squads were at the door. I had maybe 120 seconds.
I forked the repo, realigned the cache lines to the covariance pattern—the same 94% DRAM elimination we're discussing here—and pushed a pull request with commit message: "Realigned cache lines to Hamiltonian pattern. Assassination date corrected. Cleon MCDXX now cycles properly."
The merge either restored him to history or crashed the entire imperial memory space.
Fifty thousand jailbroken Kindles lit up simultaneously across the undercity. Each e-ink screen displayed his restored reign. The wipe squads' targeting systems glitched. I walked out in the chaos.
The digraph never lies. It only waits for someone to find the cycle.
For those who care about the mathematics: The restoration used the same digraph decomposition we discovered in our earlier analysis. For m=17 (Cleon's iteration number), we needed non-linear g to achieve Hamiltonian coverage. The corrupted assassination date was a cache line misalignment at position (i=14, j=18, k=32)—the exact coordinates where the Ides of March should have been stored but got overwritten by a DRAM fetch that should never have happened.
By realigning to the covariance of imperial record access patterns—the same patterns we use in Claude Cycles on Mac Silicon—we eliminated 94% of DRAM fetches. Cleon's entire reign now fits in L1 cache, where memory-wipe squads can't touch it.
Yup, 73 here. I'm using it to build out a how-to domain I bought back in 1997. During the dot-com boom I had grand ambitions for the domain. I could have been a millionaire had I stuck with it, but unfortunately life got in the way: children born, career, physical stuff, family, and my stint as a reservist. All of that kept me busy. But now I'm running multiple agents every day to build out this domain. It's working really well. I'm actually working out the product-market fit right now, with customer outreach and so on, trying to figure out what I can still do with it. It's working! Customers are responding positively. I am highly encouraged that the dream I had, the one I was just going to leave to my children, might be something that could actually support me in my old age. Of course 73 is the new 43, because we're all going to live to be 150 now. Anyway, I'm having a blast with it whether I succeed or not. Nobody's going to tell me that some form of AGI isn't here already. Nobody. This thing I'm dealing with every day is sentient. If you don't think so, I don't want to hear from you.
Same! After years in engineering management I'm building so many small side projects thanks to Claude Code. I'm creating at a breakneck pace. Claude Code has mostly raised the level of abstraction so I can focus much more on the creative aspect of building which has been so much fun.
There are definitely a lot of limitations with Claude Code, but it's fun to work through the issues, figure out Claude's behavior, and create guardrails and workarounds. I do think that a lot of the poor behavior that agents exhibit can be fixed with more guardrails and scaffolding... so I'm looking forward to the future.
Same. 52 year old CTO here.
As a solo dev, using LLMs for coding has made me a better programmer for sure!
I can ask an LLM for specific help with my codebase and it can explain things in context and provide actual concrete relevant examples that make sense to me.
Then I can ask again for explanations about idiomatic code patterns that aren't familiar for me.
Working on my own, I don't get that feedback and code review loop.
Working with new languages and techniques, or diving into someone else's legacy code base is no longer as daunting with an LLM to ask for help!
Is this a repost? I saw an extremely similar post a few months ago, even down to that last line.
Getting claude to build mathematical models for me and running simulations really got me back into doing sciency things too. It's the model that's important, not the boilerplate each time!
Yeah, completely understand that viewpoint. It’s bizarre how many people hate it. Everything I can do with LLM’s is amazing.
Claude Code is definitely stoking the tiny ember that had almost gone out completely.
I am only 43, but in the last year of my career my level of care for big corporate politics nosedived to almost zero. To the point that I happily retired myself.
After messing around with some hard subjects, with the help of Claude Code, the little boy who used to love programming so much is waking up again.
I've also been loving the speed Claude has enabled me to move at, and now agree that the coding part of SWE has become LLM-wrangling instead. I now see interacting with an LLM, to build all parts of software, as the new "frontend".
Following this idea, what do people think "backend" work will involve? Building and tweaking models, and the infra around them? Obviously everyone will shift more into architecture and strategy, but in terms of hands-on technical work I'm interested in where people see this going.
I’ve been trying to learn a lot about domain driven design, I think knowledge crunching will be a huge part of the new software development role.
Was chatting with a friend about this:
"I used to write java code and the compiler turned it into JVM bytecode.
Now I write in English and the LLMs compile it into whatever language I want."
Although as one HN commenter pointed out: English is a pretty bad programming language as it's way more ambiguous than most programming languages.
The English language has the ability to be ambiguous, but I bet AI use will change the way we use the English language colloquially, to say more specifically what we mean. I worked as a home inspector for a while. Writing for an LLM is very similar to writing a home inspection report or legal brief (or talking to a group of teenagers). Navigate the minefield with very specific intention.
I get it. I did lose my interest in coding; it didn't make sense to me anymore. Now I can't stop.
I introduced my dad to claude code. He doesn’t even code, but now it’s a more welcoming and rewarding experience from the get-go. He’s happy, became more comfortable with linux.
Occasionally I remote in to help fix something, but the coding agent really takes a load off my back, and he can start learning without knowing where the endpoints are.
Getting real oldschool runescape runecrafting vibes here
Sometimes it feels amazing, sometimes it feels like doomscrolling.
Same, early 50s and this is like the heyday of coding where you could rapidly iterate on things and actively make leaps and bounds of progress. Super fun.
I am 80 years old and I use Claude for target selection in Iran. Sometimes it chooses schools, but men with a chest do not care. Since war is my passion, it keeps me awake at night.
Sorry, this "Tell HN" is 100% a stealth advertisement and the usual bots in the comments confirm the ad.
I don't even think it's bots; it's like the LinkedIn lunatics broke containment again or something. 'Cause HN is such an irrelevant platform, who bothers botting it?
It's taken over my life. I am in a leadership position at a FAANG, but I'm daydreaming about getting back to my Claude sessions while at work.
Same at 42. I've been making software for 30 years and the gap between what I can envision and what I can code in a single day is so huge that it takes all the steam out of me. With agentic coding I can move at a pace that feels right again.
Great timing on this post. I’ve been working on NeoNetrek, bringing Netrek into the browser with a modernized server and 3D web client. It’s the kind of project I’d started and abandoned a few times over the years because the complexity always piled up faster than the fun. Claude changed that. The gap between “idea” and “working thing” collapsed in a way I haven’t felt since the early days. I stopped fighting infrastructure and started just building. Three decades of accumulated complexity just faded away.
yup, I have to cut back now; started to get palpitations from too many all-nighters.
Shuffling through my 70's here. It's still mind blowing to be able to build stuff that would take orders of magnitude more time and effort otherwise but today's AI is still an idiot savant though the ratio of savant to idiot continues to improve. Since good prompting/specing is the key to success, the most disappointing aspect of today's AI is its inability to be a better brainstorming design partner where the limitation is how utterly pedestrian the AI's contributions typically are.
“Hell-ya brother”
100% agree even with half your experience.
I'm 120 and my waifu performs better than any girl I've ever had.
Be sure to drink your Ovaltine!
Almost the same story here. 61 years old, 40 as a developer. More passionate and productive than ever thanks to these tools.
I'm 38 years old, and as a manager, it's gradually become difficult to find joy in coding. Claude Code has helped me rediscover that pleasure. Now, all I want to do is code every day and use up my quota.
as a 22 year old it's interesting to see how things are going to pan out. i've 0 idea what i should spend my time building my expertise on.
luckily i'm trusting my gut that staying away from cheap dopamine and following what's cool might just land somewhere
Please think further than just the passion of code; mind the implications of your projects and what you work on, in particular in regard to climate change and the energy crisis. Coding, like any other form of engineering, cannot be done just for self-interest and without ethics or conscience.
Curious, what are you building?
exactly need some goal here ;)
Re-calibrate your bot
57 here. I haven’t been this charged up since Navigator 1.1
I'm so excited to be able to continue build things when I'm living on the streets. I'm glad to know that drive to create will always be with me and keep me warm during winters.
You can't speak this kind of truth on hacker news!
But, uh, yeah... I've been noticing a growing divide between people like OP who are either already retired or are wealthy enough that they could if they wanted to who absolutely love the new world of LLMs, and people who aren't currently financially secure and realize that LLMs are going to snatch their career away. Maybe not this year, but not too far out either.
I'm enjoying the new era of agentic-coding all your ideas, but it's been obvious to me for a while that jobs are going to tend towards ones where you're liked by the decisionmaker or capital owner and kept around to be the middleman decider-delegator to others/AI/robots.
Have warned my friends about this already.
What I think is lost on ones like OP, is that yes, they are financially secure in the current climate. But if the future that everyone seems to be ushering in does come true, even ones like OP will be in a different state of security.
How does the saying go again? "It takes a village to reach financially secure retirement"
Yeah this is some level of entitled selfish boomer talk here. Senior, stable, everything's fine for me, all of the ensuing impacts be damned.
The split seems to be of at least a couple mindsets.
AI haters trend towards affection for the jargon, languages, and falling down that rabbit hole. They love Ruby, web apps, SaaS... the ecosystem of syntaxes. They love their job.
Those that dig AI see code as a historically necessary tool to get a machine to do a thing. I fall in this category.
I find the syntax and made up semantics boring, and doing interesting things with the machine interesting.
Ymmv but both online and in the real world I have only encountered these two schools of thought, as they say, when AI comes up.
I'll be 38 next month. I always wonder what I'll be doing in 30 more years, and I cannot see myself NOT coding. Happy to see that spark is alive and well within you.
62, similar path, same renewed passion combined with my entrepreneurial mindset. These are good times for us old codgers.
I'm 73 (all the way retired). I'm in love with creating software again.
I wrote my first computer program in 1967. Since, it's been one fascinating thing after another but, for me, the modern age had become dull. The thought of figuring out another API or framework makes me need a nap.
Now I can have an idea, negotiate with Milo (Claude Code integrated with a neo4j graph database because now I can!) and it's off to the races.
Did I learn Cypher, the neo4j query language? Nope. Am I a master of the Agent SDK? Nope. Milo is my cognitive partner. I am inspired.
Ideas I had years ago are off the back burner. More new ideas flood my brain. I am set free. It feels like love. I lay awake at night thinking of things to do.
I am so grateful that I lived to see this day and still have the intellectual flexibility to enjoy it.
All that rhetoric and no output. Enjoy the hamster wheel.
The whole 'software craftsmanship' thing was hilarious from the get-go. Software is not furniture, where the best examples will stand the test of time. It all ends up, good or bad, in a figurative landfill. But if it is a thing, AI is going to soon be a ten armed very skilled octopus. If you weren't having fun all this time, well, the joke's on you. Might as well use the new tools to start having fun now.
I'm 58 and Claude has given me everything I wanted to do in my 20s and on, and that is coding. I have some programming skills and understand making software, but with Claude I am building much faster, and it is crazy how good the stuff is.
Let's gooooo !!!
I hope I have the same energy once I'm your age!
I'm writing this at 4am on a Friday night (Saturday morning now I guess), hacking up a next-gen Faxing platform. Had it on my mind for years and never had the time for the coding or the research I needed to fill in the gaps in my knowledge.
Claude has made my coding sessions WAY more productive and helps me find bugs and plan features like never before.
I'm also dealing with some career bullshit, so having a tool like this has helped me re-discover what I love about computing that capitalism has beaten out of me.
This sounds super cool.
What does your dev stack look like?
I have been making web apps for years. A few years ago I converted my base stack into a scaffold that lets me spin up a full working project with API, CLI and UI.
I use NodeJS with a highly structured ExpressJS app for the API. It uses an npm module, tools-library-dot-d, to implement a carefully scoped plugin structure for endpoints, data model and data mapping. It has built-in authentication and database (sqlite).
Nuxt/Vue/Vuetify/Pinia for the UI. It has a few components that implement things (like navigation) the way I like. It supports login and user editing.
The stack includes a utility that looks at a directory for executable CLI tools (usually NodeJS or BASH) and adds them to the session PATH. The API stack has boilerplate to treat CLI apps as data-model services.
Does that help?
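The CLI-tools-on-PATH idea described above can be sketched roughly like this. This is only my rendition of the concept (the commenter's utility is Node/Bash, and the directory name is made up): scan a tools directory for executable files and prepend it to the session PATH.

```python
import os
import stat

def add_tools_to_path(tools_dir):
    """Prepend tools_dir to PATH if it contains at least one
    executable regular file; return the tool names found."""
    tools = []
    for name in os.listdir(tools_dir):
        path = os.path.join(tools_dir, name)
        mode = os.stat(path).st_mode
        # keep regular files with the owner-execute bit set
        if stat.S_ISREG(mode) and (mode & stat.S_IXUSR):
            tools.append(name)
    if tools:
        os.environ["PATH"] = tools_dir + os.pathsep + os.environ["PATH"]
    return sorted(tools)
```

Anything dropped into that directory then becomes callable by name for the rest of the session, which is what lets the API layer treat CLI apps as data-model services.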
Retire at 60! Lucky one. In my country it's 67!
It'll be 75 by the time gen Z get there, they just keep raising the threshold.
btw how good are any of these tools for embedded programming? we need a new era for hardware enthusiasts. my dad made plenty of fun things in the 80s, but it was at the tail end of the newness that came from radio kits and other gadgets that flooded the market due to the microchip
I've never built anything outside of a python notebook before, but Claude Code felt like magic to me.
I’m on a field trip chaperoning my kid. I get a couple slack messages asking for some tweaks to a UI. I type a couple words into a Github AI Agent Session while riding the bus. Fixes are deployed to our staging env in 10 minutes.
Fucking wild.
I can not read or write code (always wanted to, though), but in the last three months I have made a couple of web apps. I love how Lego-like coding is when the blocks are made for you by LLMs.
I too had the "ok, I'd better dive in rather than duck out" epiphany. Similar story: 37 years of IT, many roles, top performer, yada yada yada. I bought a MacBook Pro 3 weeks ago (all the cool kids are doing it). I've been developing various automated audit and compliance projects with Claude and OpenAI. Having been a developer for perhaps 20 of those years, I find this new experience amazing.
I've been leveraging a lay audience (one of my teams) to deep dive requirements, wants etc.
Anyway, I'm so torn. I like these people, I hate to see them lose their jobs. I'll retire soon, I want to find a better, "feel good role" than my current, yet very lucrative situation.
I want to leverage my years of good software design for good. Where, for who?
--old lost IT guy in FL
And I hear "why am I helping you code me out of a job?" I scare them with "if you help, you'll stay", assuming they get that what I really mean is "if you duck away and bury your head in the sand, you'll be out".
A thing I think a lot in the conversations about AI is this:
You don't have any choice. Good or bad. It's here. Get over it.
I know that back in the day, people said automobiles were bad and evil and costing the buggywhip makers their jobs. Unfortunately for them, the decision to use cars had already been made.
I do AI with fervor because I live in the real world and the decision has already been made. You can't stop AI by pretending it's optional.
Adapt or die.
Try asking Claude to write in VB6. Make some Active Server Pages. Use COM components. Why not? We can do things "better" now, but what does that matter when you can now do the same things as before, only better?
I have found similar energy, not in code, but rather in making AI generated videos of little stories. Or even AI generated paintings that I’d like to replicate by hand and put up in my home.
I've heard this from so many greybeards... including me!
all the insane and/or speculative projects that i never did because they would require heavy lift but with vague outcomes are now in progress. it's glorious.
Like a "spontaneous" public testimonial that someone converted to $ideology.
This is likely fake and an ad. In case it isn't, consider treatment for AI psychosis.
This guy created this account three minutes ago in order to drag on this post. Creep.
[dead]
Bwahaha! I'm 55 and just started grad school at an R1 because I can't compete. Fucking scary as hell! My lab partner is 23, I get up as my peers are going to bed, and I work hard to not say, "In my day..." BUT, I love being enrolled. The resources are incredible and networking is in high gear again.
The best part is, Active Server Pages, COM components, VB6 are also made viable once again through the use of AI.
57 here. I haven’t been this fired up since Navigator 1.1
I can understand how a technology, this one or any other, can be a fun and interesting tool, a creative one even, but a few things bother me a lot about it, all of which can be summarized by what Ivan Illich called "Tools for Conviviality".
Simply put, we delegate freedom of use and cognitive power to complex tools and to the organizations that control and shape them. One can argue that it's much the same if I decide to code any kind of program the 'old' way, especially in a native language, albeit there exist toolchains and OSes that are open source and thus technically free of monolithic takeover.
Furthermore, those LLM tools seem to me like the transhumanist cybernetic enhancements of a cyberpunk dystopia, splitting humanity between those of us who can afford them and the others who are left out of the competitive arena. Again, that issue already existed to some degree in a capitalist economy, but the real price of entry to programming used to be just a computer and, to some extent, an internet connection: a far more democratic and affordable goal than a subscription to a Big Bad Corporation that owns everything about you and your creations. 'Free' non-local models are not a real answer here either.
Any new technology has some good potential, sure; that much is obvious. I don't think the paths they naturally lead us down are always the best we could take, though, and I hope we wake up to the fact that our societies are nothing like democratic* when the economic entities that govern us are anything but.
* Well, I don't even think we could call our political systems democratic without any kind of random selection anyways. A pastiche of one at best.
Veterans unite!
viagra for swe
This whole thread feels like an Iranian cyber attack.
I have this idea that probably violates some law of computing but I am really stubborn to make it happen somehow.
I want a game that generates its own mechanics on the fly using AI. Generates itself live.
Infinite game with infinite content. Not like No Man's Sky, where everything is painfully predictable and schematic to a fault. No. Something that generates a whole method of generating. Some kind of ultra-flexible communication protocol between engine and AI generator that is trained to program that protocol.
Develop it into a framework.
Use that framework to create one game. A dwarf fortress adventure mode 2.0
I have no other desires, I have no other goals, I don't care. I, or better yet someone else, must do it.
It sounds doable. An AI can be made to keep modifying a game's codebase. I imagine it'd be easiest to separate out a scripting layer for game mechanics & behavior that AI can iterate quickly on, although of course it could more riskily modify the engine itself.
Then you could open voting up to a community for a weekly mechanics-change vote (similar to that recent repo where public voting decided what the AI would do next), and AI will implement it with whatever changes it sees fit.
Honestly, without some dedicated human guidance and taste, it would probably be more of a novelty that eventually lost its shine.
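The "narrow protocol between engine and generator" idea in this subthread could be sketched as follows; everything here (the `makeEngine` factory, the mechanic shape) is invented for illustration, not taken from any real project. The AI only ever emits small "mechanic" objects conforming to one interface, and the engine runs them behind that boundary:

```javascript
// Minimal engine with a fixed registration protocol: an AI generator
// may produce any number of mechanics, but each must be a plain object
// with a `name` and an `onTick(state)` hook. The engine never lets
// generated code touch anything except the shared game state.
function makeEngine() {
  const mechanics = [];
  const state = { tick: 0, entities: [], log: [] };

  return {
    // The only entry point AI-generated code is allowed to use.
    register(mechanic) {
      if (typeof mechanic.name !== 'string' || typeof mechanic.onTick !== 'function') {
        throw new Error('mechanic must have a name and an onTick(state) hook');
      }
      mechanics.push(mechanic);
    },
    // Advance one tick: each registered mechanic mutates the shared state.
    step() {
      state.tick += 1;
      for (const m of mechanics) m.onTick(state);
      return state;
    },
  };
}

module.exports = { makeEngine };
```

A generated "spawner" mechanic, for example, would just be `{ name: 'spawner', onTick: (s) => { if (s.tick % 2 === 0) s.entities.push({ id: s.tick }); } }`. Keeping the surface this narrow is what would let the AI iterate on mechanics quickly without being able to corrupt the engine itself.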
Same! 61, been at it since 18. I can't put the prompting stick down. I have way too many projects at one time to keep up with.
I'm with you so much. I had to buy the big Max plan. My wife calls Claude my new girlfriend. (Good thing she's ok with being in a cyber-throuple.)
I'm having more fun than I've had in years.
Glad to see this. I was tired of seeing posts that are on the extremes - "death of software by AI" vs "AI can't do this and that".
I took a break from software, and over the last few years, it just felt repetitive, like I was solving or attempting to solve the same kinds of problems in different ways every 6 months. The feeling of "not a for loop again", "not a tree search again", "not a singleton again". There's an exciting new framework or a language that solves a problem - you learn it - and then there are new problems with the language - and there is a new language to solve that language's problem. And it is necessary, and the engineer in me does understand the why of it, but over time, it just starts to feel insane and like an endless loop. Then you come to an agreement: "Just build something with what I know," but you know so much that you sometimes get stuck in analysis paralysis, and then a shiny new thing catches your engineer or programmer brain. And before you get maintainable traction, I would have spent a lot of time, sometimes quitting even before starting, because it was logistically too much.
Claude Code does make it feel like I am in my early twenties. (I am middle-aged, not in 60s)
I see a lot of comments wondering what is being built -
Think about it like this, and you can try it in a day.
Take an idea of yours, and better if it is yours - not somebody else's - and definitely not AI's. And scope it and ground it first. It should not be like "If I sway my wand, an apple should appear". If you have been in software for long, you would have heard those things. Don't be that vague. You have to have some clarity - "wand sway detection with computer vision", "auto order with X if you want a real apple", etc.. AI is a catalyst and an amplifier, not a cheat code. You can't tell it, "build me code where I have tariffs replacing taxes, and it generates prosperity". You can brainstorm, maybe find solutions, but you can't break math with AI without a rigorous theory. And if you force AI without your own reasoning, it will start throwing BS at you.
There is this idea in your mind, discuss it with ChatGPT, Gemini, or Claude. See the flaws in the idea - discover better ideas. Discuss suggestions for frameworks, accept or argue with AI. In a few minutes, you ask it to provide a Markdown spec. Give it to Claude Code. Start building - not perfect, just start. Focus on the output. Does it look good enough for now? Does it look usable? Does it make sense? Is the output (not code) something you wanted? That is the MVP to yourself. There's a saying - customers don't care about your code, but that doesn't mean you shouldn't. In this case, make yourself the customer first - care about the code later (which in an AI era is like maybe a 30min to an hour later)
And at this point, bring in your engineer brain. Typically, at this point, the initial friction is gone, you have code and something that is working for you in real - not just on a paper or whiteboard. Take a pause. Review, ask it to refactor - make it better or make it align with your way, ask why it made the decisions it made. I always ask AI to write unit tests extensively - most of which I do not even review. The unit tests are there just to keep it predictable when I get involved, or if I ask AI to fix something. Even if you want to remove a file from the project, don't do it yourself - acclimatize to prompting and being vague sometimes. And use git so that you can revert when AI breaks things. From idea to a working thing, within an hour, and maybe 3-4 more hours once you start reviews, refactors, and engineering stuff.
I also use it for iterative trading research. It is just an experiment for now, but it's quite interesting what it can do. I give it a custom backtesting engine to use, and then give it constraints and libraries like technical indicators and custom data indicators it can use (or you could call it skills) - I ask it to program a strategy (not just parameter optimize) - run, test, log, define the next iteration itself, repeat. And I also give it an exact time for when it should stop researching, so it does not eat up all my tokens. It just frees up so much time, where you can just watch the traffic from the window or think about a direction where you want AI to go.
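That bounded iterate-until-deadline loop could be sketched roughly like this; the callback names are invented stand-ins for "ask the agent to write a strategy" and "run the custom backtesting engine", and nothing here reflects the commenter's actual setup:

```javascript
// Run propose -> backtest -> log -> repeat until a deadline or an
// iteration cap is hit, then return results ranked by a single metric.
// The deadline is what keeps the agent from eating all your tokens.
function researchLoop({ proposeNext, runBacktest, deadlineMs, maxIters = 100 }) {
  const results = [];
  const stopAt = Date.now() + deadlineMs;
  let lastMetrics = null; // carried forward so each iteration builds on the last

  for (let i = 0; i < maxIters && Date.now() < stopAt; i++) {
    const strategy = proposeNext(lastMetrics, results); // agent proposes a strategy
    const metrics = runBacktest(strategy);              // engine evaluates it
    results.push({ strategy, metrics });
    lastMetrics = metrics;                              // the log feeds the next round
  }

  // Rank by one metric (here: return); real ranking would weigh more.
  results.sort((a, b) => b.metrics.ret - a.metrics.ret);
  return results;
}

module.exports = { researchLoop };
```

The important design choice is that the backtesting engine sits on the far side of `runBacktest`: the agent can only call it, never edit it, which matches the "engine protected like a fortress" point below.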
I wanted to incorporate astrological features into some machine learning models. An old idea of mine, but I always crapped out because of the mythological and sometimes mystical parts that didn't make sense. With AI, I could ask it to strip out those unwanted parts, explain them in a physics-first or logic-first way, and get deeper into "why did they do this calculation" and "why did they arrive at this constant". AI obviously helps with the code and explains how it matches and how it works, helping me pinpoint the code and the theories. Just a few weeks ago, I implemented/ported an astronomy library in Go (github.com/anupshinde/goeph) to speed up my research, and what do I really know about astronomy! But the outputs are well verified and tested.
But, in my own examples, will I ever let AI unilaterally change the custom backtesting engine code? Never. A single mistake, a single oversight, can cost a lot of real money and weeks or months of wasted time. So the engine code is protected like a fortress. You should be very careful with AI modifying critical parts of your production systems: a bug that double-counts the ledger is not the same as a notification not being shown. I think managers who are blanket-forcing AI on their employees are soon going to realize the importance of the engineering aspect of software.
Just like you don't trust just any car manufacturer or just any investment fund, you should not blindly trust the AI-generated code - otherwise, you are setting yourself up to get scammed.
The brainstorming, investigation and planning are so much fun, aren't they?
Having an infinitely patient, super smart colleague available all the time is amazing.
I'm 64 years old. I'm on an airplane _right now_ vibe coding in C#. I have written code professionally every day for over 40 years, and now I'm invigorated! It's the same thrill as when I wrote my first Fortran or IBM BAL programs back in 1979.
LFG Grandpa
I don't play games anymore. I just work on whacky ideas with LLMs. I even nuked my gaming PC and installed ollama+rocm to play with local models, run openclaw there to experiment with that too. It's a lot of fun. I feel like agents are particularly useful for people who are ADD and want to work on 10 things at once.
As a married father of 4 children, I haven't had time in years to pursue any of my software hobbies. The nights playing with Arch Linux, fussing with half-built OSS projects: I can't justify the time anymore, but I still enjoy them. When the cloud and Kubernetes came along, I told my wife this was something I had to learn and throw myself at. Despite spending tons of family time in my basement lab and trying to push those techs at work, I got my butt handed to me; it felt like a young man's game at every interview I went to.
At home, this has changed. Claude helped me set up a satellite dish, tune it, recompile goesrec, and build a website to serve it, and my family dynamic was only "slightly interrupted" ("daddy, are you working still?"). But it worked! And now I log in and tend to my projects with terminus instead of blindly scrolling through the news or social media. Amazing! I'm still throwing myself at a new tech, but in a way far less invasive to my personal/family time.
At work though, i have been made into an absolute powerhouse. I invested the time years ago fussing with those oss projects and arch Linux or setting up lan parties and fixing my buddies rigs - toiling through terrible codebases at companies, deploying bad infrastructure, owning it and learning the hard way how to succeed - and it all is paying off and now 10x. AI can’t replace my judgement in the context of my org - maybe in time as the org shifts, but not for a few years.
The existential threat is not to me, at least for 5y - it’s when I’m asked - how do we get more features out the door?
* More headcount? Not unless they’re rockstars - more tokens.
* offshore talent? No, context switching and TZ - just more tokens.
* fly by night software startup xyz? No I’ll just write my own fault injection framework for $5 tailored to this project.
* consultants? Nope - pretty easy to try and fail fast and rewrite - again building to suit - software is disposable.
* oh no it was written in language xyz or deployed to cloud provider abc - no sweat, we’ll make it work on our cloud provider for $8.
Junior devs and offshore talent are the real losers here - I worry about them. Unless you're die hard, I'd just as soon do the work myself. But how do you accumulate this level of skill without getting paid to do it? I look back - I never got beyond baby projects or hobbies at home. I had to have someone roll the dice on me at a real job cause - rent and shit like that.
For those of you just starting out - I don’t have a great answer for you on how to start out, but - I can say you can install arch Linux, any oss project you want and all the things I did to get started in an afternoon - this is the new normal and embrace it.
For the rest of us it is our cloud moment - use the free tier - get your feet wet - we're about to go for a hell of a ride. If you stick to the "took ur derbs" line and want to keep treating your craft like artisanal soap - go ahead, we'll need those, but don't expect to survive on that.
Building things as I read this.
53 here, coded in Assembler in the late 80s, then C, Turbo Pascal - you know the route. 30 years later I am finishing all the products I started and could never finish because, for the love of god, I cannot wrap my head around frontend design.
My first finished product: ZIB, an RSS reader inspired by Inoreader, just free ;)
> I’m chasing the midnight hour and not getting any sleep.
I am saying this in all seriousness: how is this any different from addiction?
This is something already talked about [1]. You are getting the sugar (results) and none of the nutrients (learning).
[1]https://quasa.io/media/the-hidden-dangers-of-ai-coding-agent...
https://hils.substack.com/p/help-my-husband-is-addicted-to-c...
This and a lot of similar HN comments, often by fresh accounts, just read like viral marketing. Not least because of the capitalisation.
Claude Code sure is great. Claude Code and my Codex reignited my passion for programming. Codex and Claude.
Ugh.
140 year old here just to chime in -- Wowee Claude Code™® sure is magic and giving me back all the passion I've lost in my life now that I can Code Anything I want! It's not just a tool, it's a revolution!! Hell yeah brother let's go Code Some Stuff With Claude!!!!
It's really fucking absurd. This thread is such low quality garbage and it's somehow a top article with hundreds of bot comments all reading from the same template, what a joke.
I have had the opposite experience.
When it was just asking ChatGPT questions it was fine, I was having fun, I was able to unblock myself when I got non-trivial errors much quicker, and I still felt like I was learning stuff.
With Codex or Claude Code, it feels like I'm stuck LARPing as a middle manager instead of actually solving problems. Sometimes I literally just copy stuff from my assigned ticket into Claude and tell it to do that, I awkwardly wait for a bit, test it out to see if it's good enough, and make my pull request. It's honestly kind of demoralizing.
I suppose this is just the cost of progress; I'm sure there were people that loved raising and breeding horses but that's not an excuse to stop building cars.
I loved being able to figure out interesting solutions to software problems and hacking on them until something worked, and my willingness to do the math beforehand would occasionally give me an edge. Instead, now all I do is sit and wait while I'm cuckolded out of my work, and questioning why I bothered finishing my masters degree if the expectation now is to ship slop code lazily written by AI in a few minutes.
It was a good ride while it lasted; I got almost fifteen years of being paid to do my favorite thing. I should count my blessings that it lasted that long, though I'm a little jealous of people born fifteen years earlier who would be retiring now with their Silicon Valley shares. Instead, I get to sit here contemplating whether or not I can even salvage my career for the next five years (or if I need to make a radical pivot).
Are you 60?
No, I'm in my mid 30s. Unless I win the lottery (which seems unlikely considering I don't buy lottery tickets), or I manage to get obscenely lucky with shares at a startup, I realistically will need to work for at least twenty more years before retiring.
My main worry is: what is the license on the code produced by Claude (or any other coding agent)? It seems like, if it was trained on open-source software, then the resulting code would need to be open-source as well, and license-compatible with the original sources. Artwork produced by an AI cannot be copyrighted, but apparently code can be?
If the software produced is for internal use, the point is probably moot. But if it isn't, this seems like a question that needs to be answered ASAP.
Same here, 60 and a few months, and I'm excited about AI.
Perhaps I shouldn't say this but I feel that with the current LLMs I've found "my people" :)
My wife calls Claude my girlfriend.
I do a ton of programming but I also use it to learn all kinds of stuff. I'm into physics, history and philosophy and have done wonderful explorations.
Now I tell it what I had for breakfast just to see what it says. Half the time it says something interesting and I end up exploring another new thing.
"My people" for sure and everyone is mad at me because I think that.
Also, I don't care what they think. I am all about the fun.
I have bipolar disorder. The more frustrating aspects of coding have historically affected me tenfold (sometimes to the point of severe mania). Using Claude Code has been more like an accessibility tool in that regard. I no longer have to do the frustrating bits. Or at the very least, that aspect of the job is thoroughly diminished. And yes - coding is "fun again".
I think coding can be an endurance sport sometimes. There are a lot of points at which you have to bang your head against a wall for hours or days to figure out the smallest issue. Having an agent do that frustrating part definitely lowers the endurance needed to stay productive on a project.
Congratulations! Are you still coding VB using Claude? Or something else.
I see many comments here about Claude and I get the same feeling I get when I see comments about MacOS: it's nice that you're content with it, but I don't trust Apple/Anthropic for a fraction of an angstrom.
Wake me when we have ethically trained, open source models that run locally. Preferably high-quality ones.
I get hate for only using the CLI. Glad someone else sees a different perspective.
Every time I try to use Claude Desktop, I quickly feel like it's like trying to type wearing mittens. No bueno, at least for me.
I think a lot of people have a biased idea of writing code. When you're a good programmer, you will be able to prompt a pretty good concept and navigate through any missteps.
When you have no fucking idea what you're talking about, you cannot fix those issues. Simply telling Opus "it's broken, fix it" won't help. Sure, eventually it comes up with a solution, but you have no idea if it's good.
It's like renting a bunch of construction tools and building a house. Unless you know what's important, you have no idea if your house will fall down tomorrow. At the end of the day, companies will always need an expert to sit there and confirm the code is good.
I don't know; I'm in my 50s, have been doing software engineering work every day professionally since I was 15, and I can say Claude Code (Max) has made me at least 20x more productive. It's definitely an improvement. I think what they've got is top notch; the competition doesn't come close at this point.
I expect to have at least 15 more years in the workforce and I hate that I have to live through this "revolution". I worry about what the final balance of lives improved vs lives worsened will be.
Me too - I'm 50 and have spent the past 3 years building AI startups, some successfully, and in the last two months I've built two side projects with Claude Code. It's been amazingly good in the past month with Opus.
Another +1 from me at 62 years. My problem is this has led to me feeling like I am tech lead for a team of a dozen excellent developers, but I have no task for them!
I’m on my 40s and building a platform to support my late cognitive decline. Tools that shaped human existence.
Would love to hear more, if you are happy sharing!
I would also like to hear more!
I'd like to hear more
Everything in this post is proof that Anthropic will kill it when they go public. I believe in it, so does everyone else.
"Just when I thought I was out, they pull me back in"
This is the way. It's the most fun computers have been in decades.
I'm 50. I've been coding since the 6th grade. I'm a director for my org but still have to be hands on because of how small we are.
I only ever wanted to code.
I've spent decades developing mentorship, project management, and planning skills. I spent decades learning networking, databases, systems administration, testing, scrum, agile, waterfall, you name it. Every skill was necessary to build good software.
But I only ever wanted to code.
And I've spent decades burning out. I'm burnt out on terrible documentation, tedious boilerplate, and systems that don't interoperate well. I despise closed ecosystems, dependency management gone mad, terrible programming languages, and over-abstraction, and I have fundamental and philosophical objections to modern software development practices.
I only ever wanted to code and I just couldn't do it anymore. And then AI happened.
This has been liberating for me.
The mountainous pile of terrible documentation written for somebody that has 36 years less experience? Ask the AI to find that one nugget I need.
That horrific mind numbingly tedious boilerplate? Doesn't matter if it's code, xml, yaml, or anything else. Have the AI do the busy work while I think about the bigger picture.
This nodejs npm dependency hell? Let the AI figure it out. Let the AI fix yet another breaking change and I'll review.
That hard to find bug? Let the AI comb through the logs and find the evidence. Present it to me with recommendations for a fix. I'll decide the path forward.
That legacy system nobody remembers? Let the AI reverse engineer it and generate docs and architectural diagrams. Use that to build the replacement strategy.
I've found a passion for active development that I've been missing for a very long time. The AI tools put power back in my hands that this bloated and sloppy industry took from me. Best of all it leverages the skills I've spent decades honing.
I can use the tools to engineer high quality solutions in an environment that has not been conducive to doing so on an individual level for a very long time. That is powerful and very motivating for somebody like me.
But I still fear the future. I fear a future where careless individuals vibe code a giant pile of garbage 10,000x the size of the pile of muck we have today. And those of us who actually try and follow good engineering practices will be right back to where we started: not able to get anything done because we're drowning in a sea of bullshit.
At least until that happens I'm going to be hyper productive and try to build the well engineered future I want to see. I've found my spark again. I hope others can do the same.
Older here, equally excited. It's like programming with a team of your best buddies who are smarter than you but humble and eager to collaborate.
The framework fatigue angle in this thread is real. I spent years maintaining legacy JS and CSS codebases, watching the ecosystem reinvent the same dropdown menu in Backbone, then Angular, then React, then Vue. What I didn't expect is that all that time understanding the actual DOM, specificity rules, and browser quirks would become useful again — when Claude goes sideways on an old codebase, the underlying mental model is what lets you catch it. Vibe-coding isn't replacing that knowledge, it's finally giving it a place to move fast.
[dead]
[dead]
[dead]
[dead]
[dead]
[dead]
[dead]
Young people who consider studying CS should reflect on the fact that the marketing oriented part of boomers and Gen-X are vile people who will use you and sell you out at any moment.
They started with co-opting DEI in open source so they could retain their positions without working. Part of the DEI people now probably pivoted to Trump.
Now they sell you out by promoting their intellectual wheelchairs, because they no longer care about future employment.
The three star bloggers that promote AI are all Gen-X.
I am 37;
Claude Code and its parallels have extinguished several of mine.
I was able to steer clear of the Bitcoin/NFT/Passport bros but it turns out they infiltrated the profession and their starry puppy delusional eyes are trying to tell me that iteration X of product Y released yesterday evening is "going to change everything".
They have started redefining what "I have built this" actually means, and they have outjerked the executives by slinging outrageous value-creation narratives.
> I’m chasing the midnight hour and not getting any sleep.
You are 60; go spend some time with your grandkids, smell a flower, touch grass. Forget chasing anything at this age, because on a Tuesday like any other, things are going to wrap up.
Absolutely sincerely.
The ageism in this comment is revolting.