Ask HN: Am I getting old, or is working with AI juniors becoming a nightmare?

31 points by MichaelRazum 9 hours ago

This is already the second time I’ve observed this. People coming from highly respected universities are doing everything with AI. It’s even hard to argue with them, since it’s all cross-checked with ChatGPT and similar tools.

The picture of software development also looks completely different. Code that used to be readable in a few lines becomes 100 lines—overblown because, well, code is cheap. Now, I could argue that it makes things unreadable and so on, but honestly, who cares? Right? The AI can fix it if it breaks...
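To make the bloat concrete, here is a contrived toy example (invented for illustration, not from any real codebase) of the same logic written both ways:

```python
# What a human might write: three readable lines.
def total_price(items):
    return sum(item["price"] * item["qty"] for item in items)

# The AI-flavored version of the same thing: layers of ceremony
# that add length without adding safety the caller actually needs.
def calculate_total_price_of_items(items):
    """Calculate the total price of a list of item dictionaries."""
    if items is None:
        raise ValueError("items must not be None")
    if not isinstance(items, list):
        raise TypeError("items must be a list")
    total = 0.0
    for index, item in enumerate(items):
        if not isinstance(item, dict):
            raise TypeError(f"item at index {index} must be a dict")
        price = item.get("price", 0)
        qty = item.get("qty", 0)
        subtotal = price * qty
        total += subtotal
    return total
```

Both return the same answer; one of them is a code review problem.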

So what do you guys think? Is this the future? Maybe the skill to focus on is orchestrating AI, and if you don’t do that, you become a legacy developer—someone with COBOL-like skills—still needed, but from the past millennium.

Daedren 8 hours ago

It's a problem. Seniors with AI perform far better because they have the skills and experience to properly review the LLM's plans and outputs.

Juniors don't have that skillset yet, but they're being pushed to use AI because their peers are using it. Where do you draw the line?

What will happen when the current senior developers start retiring? What will happen when a new technology shows up for which LLMs have no human-written code to train on? Will pure LLM reasoning and generated agent skills be enough to bridge the gap?

These are all very interesting questions about the future of the development process.

  • freedomben 2 hours ago

    Indeed, great (though scary) questions to ponder. There are two possibilities I see:

    1. AI gets better enough fast enough that by the time the senior people are retiring, it won't matter anyway

    2. Software becomes mostly unreadable and nobody really understands how it works, but the AI is good enough that this is ok

    Both are hard for me to imagine right now, but if you'd asked me five years ago whether AI would ever be good enough to commit to my codebase, I would have said, "I really doubt it". Yet here we are: AI code is sometimes better than handwritten code (depending on the person, of course).

    Would love to hear others' thoughts on these as well.

baCist 8 hours ago

I think all of this points to a dark future. And this can be argued based on how AI works.

AI systems look at code on the internet that was written by humans. This is smart, clean code. And they learn from it. What they produce — unreadable spaghetti code — is the maximum they can squeeze out of the best code written by humans.

In the near future, AI-generated code will flood the internet, and AI will start training on its own code. On the other hand, juniors will forget how to write good code.

And when these two factors come together in the near future, I honestly don’t know what will happen to the industry.

  • tracker1 2 hours ago

    But it's not all smart, clean, good code... I've seen AI repeatedly make the same kinds of errors and misinterpretations that I would expect from a human working on something. I find that more time spent on planning, (pre)documentation, and testing, even some TDD, helps a lot.
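    The TDD angle in practice: write the test first, then make "make this pass" the prompt. A minimal sketch (function name and cases invented for illustration):

```python
import re

# The test a human writes first; the agent's prompt becomes
# "make test_slugify pass" instead of a vague feature request.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Leading and trailing  ") == "leading-and-trailing"
    assert slugify("already-slugged") == "already-slugged"

# What the agent would then be asked to produce (hypothetical
# reference implementation, just to show the shape of the loop).
def slugify(text: str) -> str:
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumeric runs
    return text.strip("-")

test_slugify()  # the spec is executable, so "done" is unambiguous
```

    The point isn't the slug function; it's that the edge cases the model tends to fumble are pinned down before any generation happens.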

    I agree that AI-generated code will really start to piss in the pool, so to speak. I'm not sure the models will get better without a lot of hand curation and signals of what is good vs. bad vs. popular code. They emphatically are not the same.

  • pseudocomposer 5 hours ago

    We’ve had a looming crisis for decades of young people increasingly not understanding a lot of the fundamentals of mathematical logic. And I think treating LLMs (which are amazing tools) as “AI,” and having them play this type of role, is the final step toward a lot of unrecoverable self-destruction.

    We need to remember that the core of what “logic” is can be understood by every human mind, and that it’s our individual responsibility to endeavor to build this understanding, not delegate or hand-wave it. For all of human history, delegating/hand-waving away basic logic that can be understood by actuarial/engineering types has never gone well in the long term.

    • tim333 2 hours ago

      Do most young people need to understand the fundamentals of mathematical logic? We seem to get by without that.

      Even at the presidential level, today:

      >RFK Jr claims basic math rules don’t apply to White House https://www.independent.co.uk/tv/news/rfk-jr-math-percentage...

      • tracker1 2 hours ago

        You do if what you are implementing requires it. Beyond this, if you don't understand the code the AI agent outputted, you shouldn't let other people run it in production.

      • estimator7292 an hour ago

        Yes, and the country is destroying itself because those in power lack basic reasoning skills.

  • tim333 2 hours ago

    The AIs seem to be getting better faster than the training-on-its-own-code thing becomes a problem. Dunno about the juniors. Maybe they'll become 'prompt engineers'?

  • MichaelRazum 7 hours ago

    Not sure, tbh. The labs creating the AI definitely know what they are doing, and it's incredible. I would just argue that AI will only get better in the future.

    • sdevonoes 3 hours ago

      They are interested in money and ads. We cannot expect anything good from OpenAI, Anthropic, Meta, or Google.

      We had a couple of decades of brilliant engineers working for FAANG. What did we get as a result? Just crap: Twitter, Instagram, YouTube, Facebook. Imagine all those brilliant minds working on something meaningful instead.

      Same goes for LLMs

    • dieselgate 3 hours ago

      Respectfully, I am starting to find "AI will become only better in the future" to be a cheap and empty statement. Optimism is good but it does not take into consideration the tremendous nuance of the topic and current thread.

MarcelinoGMX3C 5 hours ago

MichaelRazum, you're hitting on something crucial many of us in the trenches are seeing. The "code is cheap" mentality, as you call it, leads to bloated, unreadable code. As baCist points out, if AI starts training on its own generated code, we're headed for a real problem with quality degradation.

I've found experienced developers leverage AI as a force multiplier because they can scrutinize the output, unlike juniors who often just paste and move on. The real skill is becoming an AI orchestrator, prompting effectively, and critically validating the output. Otherwise, if you're just a wrapper for AI, then yes, you become the "legacy developer" you mention because you're adding no critical thinking or value.

  • tracker1 2 hours ago

    My own repeated analogy is that it's been a lot like managing/leading a few foreign dev teams on a project. You have to document a lot more and have really well-defined tasks, and you have to be diligent about follow-up and QA/QC. The real difference is that you are getting results in minutes instead of days.

    I can't imagine the people using many agents in parallel are actually even checking the fitness of the output they are generating, let alone the design, structure and quality of the code itself.

drrob 3 hours ago

We strictly don't use agentic development, so it's not so much a problem for us. Copying and pasting from LLMs is about the height of our AI use, aside from the AI auto-complete in Visual Studio, and any new starters are made aware during the interview process that agentic dev isn't permitted, so we cut it off at the source.

  • MichaelRazum 3 hours ago

    Nice! Although if you have a new, young team, things look different. Also, to be fair, agentic dev might just be the right answer for the future.

decasteve 8 hours ago

Reviewing code becomes more arduous. Not only are the pull requests more bloated, the developer who pushed them doesn't always understand the implications of their changes. It's harder to maintain and track down bugs. I spend way too much time explaining AI generated code to the developer who "wrote" it.

  • MichaelRazum 7 hours ago

    Agree. A review is always a knowledge update/exchange, and for juniors a learning experience. If the code is AI generated, it's just not worth the time.

clintmcmahon 4 hours ago

It's still crucial for senior level people to review and scrutinize code generated by Jr and AI developers.

There's always been the need to verify the code matches the business requirement, right? It used to be when you asked someone why they wrote the code the way they did, they'd tell you they thought it was the right way because X or Y. But with AI they can respond saying they actually don't know why they wrote it a certain way. That's just what ChatGPT or Claude told them to do. So, that's the nightmare part that people are experiencing.

Code reviews are important and software architecture skills are just as important now.

  • gdulli 3 hours ago

    It's still crucial to keep your hands on the wheel and be able to take over from a self driving car within seconds. But we know this isn't happening and won't happen. The nature of the invention nurtures the behavior to use it unsafely.

moezd 3 hours ago

Ask AI to ruthlessly reduce cognitive debt, purge unnecessarily defensive code and be extremely pragmatic about what you want to build. If an AI junior is building you Vault when you just asked for a secret rotator script, he's just showing off. Gently pull him from the clouds, since this is also within the JD of a senior engineer.
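Concretely, this can be baked into a standing rules file so you don't have to repeat it per prompt (e.g. CLAUDE.md / .cursorrules, depending on your tooling; the wording below is only a sketch to adapt, not canonical):

```text
- Prefer the smallest change that satisfies the stated requirement.
- Do not validate inputs already guaranteed by the caller or the type system.
- No try/except around code that cannot realistically fail; let errors surface.
- Build the script that was asked for, not the framework it could become.
- Ask before adding a dependency or a new abstraction layer.
```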

  • cableshaft 2 hours ago

    The defensive code is getting to me (it's adding a ton of bloat) and I'm trying to fight it but I'm not sure how best to word it. What I've attempted hasn't worked too well so far.

    How do you get it to not add so much unnecessarily defensive code?

yodsanklai 5 hours ago

> People coming from highly respected universities are doing everything with AI

Nowadays, everybody is doing everything with AI, young and old alike. It's very hard to justify not doing it. That being said, you can produce good code with AI, if you know what it should look like and spend the time to prompt and iterate.

  • davidhaymond 2 hours ago

    I realize I am an extreme outlier, but I have not yet once used AI in my software development job.

    • yodsanklai 2 hours ago

      I'm forced to do so at work. I don't like it, but no doubt I'm more productive with it. Certainly not 10x like leadership claims, but it helps.

kf 7 hours ago

Yes, absolutely, if you don't use AI in coding you will be a legacy developer sooner rather than later.

Everyone seriously doing it has a bunch of agents in a corporate-like structure doing code reviews. The bad AI code shows up when someone is just using a single instance of Claude or ChatGPT; when you have 50 agents competing to write the best code from a single prompt, it hits differently.

kpbogdan 9 hours ago

Yea, the development process is changing rapidly now. We are in a transitional period. I have no idea where we will end up, but it will be a different place from where we were even a year ago.

sfmz 6 hours ago

Meta/Google/Anthropic report that 75%+ of coding is now AI. For every engineer orchestrating AIs, X will be let go -- but at what ratio? 3:1? 5:1? 10:1? Seems like it's at least 3.

  • orphea 2 hours ago

    Of course Anthropic is going to report whatever they want to sell more shovels. This metric from an AI provider is not interesting.

  • MichaelRazum 6 hours ago

    Actually, I'm not worried about unemployment. This is an awesome thing called technological progress.

    PS: Compare Assembly with Python -- for sure the ratio is more than 10x. Still, we need far more devs now compared to the early days. For me the question is what the future software dev looks like (if the job still exists).

truemotive 3 hours ago

Yes. It's even more frustrating when you land in an office full of them.

  • MichaelRazum 3 hours ago

    And tbh it gets things done very quickly. So it is also very hard to argue for a different coding style.

coldtea 8 hours ago

>So what do you guys think? Is this the future?

Yes. The future is quickly produced slop. Future LLMs will train on it too, getting even more sloppy. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.

davidajackson 4 hours ago

Many here will be sad, but there will be a day when writing code is seen as being as antiquated as using a slide rule. It is coming.

damnitbuilds 8 hours ago

There seems to be a disconnect, with some people claiming they don't write code any more, only specs, and me trying to get Copilot to fix a stupid sizing bug in our layout engine and it Not Getting It.

Is this because the guys claiming success are working in popular, well-known, more limited areas like JavaScript in web pages, while the people outside those, with more complex systems, don't get the same results?

I also note that most of the "Don't code any more" guys have AI tools of their own to promote...

  • gdulli 3 hours ago

    Don't forget how many people here (and elsewhere, but especially here) need you to think this stuff works better than it does because they're selling it or otherwise benefit from its success.

  • drrob 2 hours ago

    Indeed, there's quite the echo-chamber of agentic encouragement going on, but the overwhelming feeling is that everyone's shilling and no one's buying.

  • nazgu1 7 hours ago

    In my opinion these guys just don't give a sh** about "stupid sizing bugs". Those who care about how their software behaves and looks realize after a while that most AI claims are a scam.

    • rtmx 2 hours ago

      TBH, I'd say we were there long before LLMs came to 'help'. The software world had been in a dreadful state for a decade or so, maybe longer: devs get powerful machines unlike normal users, everyone is 'just doing their job', software is tested in isolated environments, so nobody minds installing a couple of their own 'background services in NodeJS' on a user's machine. Not a big deal, yeah? And so on...

      At the same time, I see the future being brighter with the help of these coding LLMs. I personally had not been building software for years, focusing on management-like work, and serious coding during 'free time' was just too heavy a lift; you need time to sleep, eat, and do some IRL things too...

      Now, having experience in building software and caring about what I create and why, I can work far more quickly with LLMs, and it opens possibilities I could only dream of before. What used to mean getting a few spare millions and hiring a team to build something now means paying $20 to Cursor/Claude and spending a few days guiding it as if it were a team of junior outsourced devs. It's painful sometimes, but if you really know what you're doing and why, it works. And nothing stops you from tweaking pixels once the majority of the work is done; you'll even have the will to, as opposed to writing it all yourself and spending all your mental energy on routine stuff.

      So... if people learn to use this hammer properly, I suppose the future might be brighter than the past. And those who actually care, but didn't have time for the things they're passionate about, can now do them on their own.

  • MichaelRazum 7 hours ago

    Maybe try Claude. Also, people are orchestrating AI, for example with ralph. I think it is possible to write pretty decent, test-driven code with AI.

  • eudamoniac 4 hours ago

    > Is this because the guys claiming success are working in popular, known, more limited areas like Javascript in web pages

    Nope because this is all I do and the AI doesn't do it right either

  • foldr 4 hours ago

    AI tools can certainly fail to fix bugs, but if you’re consistently finding them of minimal use for debugging, I’d say that you’re either working in a fairly niche domain or that you’re maybe not fully exploiting the capabilities of the tool.