baCist 5 hours ago [-]
I think all of this has a dark future. And this can be argued based on how AI works.
AI systems look at code on the internet that was written by humans. This is smart, clean code. And they learn from it. What they produce — unreadable spaghetti code — is the maximum they can squeeze out of the best code written by humans.
In the near future, AI-generated code will flood the internet, and AI will start training on its own code. On the other hand, juniors will forget how to write good code.
And when these two factors come together in the near future, I honestly don’t know what will happen to the industry.
pseudocomposer 2 hours ago [-]
We’ve had a looming crisis for decades of young people increasingly not understanding a lot of the fundamentals of mathematical logic. And I think treating LLMs (which are amazing tools) as “AI,” and having it play this type of role, is the final step towards a lot of unrecoverable self-destruction.
We need to remember that the core of what “logic” is can be understood by every human mind, and that it’s our individual responsibility to endeavor to build this understanding, not delegate or hand-wave it. For all of human history, delegating/hand-waving away basic logic that can be understood by actuarial/engineering types has never gone well in the long term.
MichaelRazum 4 hours ago [-]
Not sure, tbh. The labs creating the AI definitely know what they are doing, and it's incredible. I'd just argue that the AI will only become better in the future.
dieselgate 39 seconds ago [-]
Respectfully, I am starting to find that "AI will only become better in the future" is a cheap and empty statement. Optimism is good, but it doesn't take into account the tremendous nuance of the topic and the current thread.
sdevonoes 18 minutes ago [-]
They are interested in money and ads. We cannot expect anything good from openai, anthropic, meta, google.
We had a couple of decades of brilliant engineers working for faang. What did we get as a result? Just crap: twitter, instagram, youtube, facebook. Imagine all those brilliant minds working on something meaningful instead.
Same goes for LLMs
truemotive 14 minutes ago [-]
Yes. It's even more frustrating when you land in an office full of them.
clintmcmahon 1 hour ago [-]
It's still crucial for senior level people to review and scrutinize code generated by Jr and AI developers.
There's always been the need to verify the code matches the business requirement, right? It used to be when you asked someone why they wrote the code the way they did, they'd tell you they thought it was the right way because X or Y. But with AI they can respond saying they actually don't know why they wrote it a certain way. That's just what ChatGPT or Claude told them to do. So, that's the nightmare part that people are experiencing.
Code reviews are important and software architecture skills are just as important now.
MarcelinoGMX3C 2 hours ago [-]
MichaelRazum, you're hitting on something crucial many of us in the trenches are seeing. The "code is cheap" mentality, as you call it, leads to bloated, unreadable code. As baCist points out, if AI starts training on its own generated code, we're headed for a real problem with quality degradation.
I've found experienced developers leverage AI as a force multiplier because they can scrutinize the output, unlike juniors who often just paste and move on. The real skill is becoming an AI orchestrator, prompting effectively, and critically validating the output. Otherwise, if you're just a wrapper for AI, then yes, you become the "legacy developer" you mention because you're adding no critical thinking or value.
Daedren 5 hours ago [-]
It's a problem. Seniors with AI perform far better because they have the skills and experience to properly review the LLM's plans and outputs.
Juniors don't have that skillset yet, but they're being pushed to use AI because their peers are using it. Where do you draw the line?
What will happen when the current senior developers start retiring? What will happen when a new technology shows up that LLMs don't have human-written code to be trained on? Will pure LLM reasoning and generated agent skills be enough to bridge the gap?
These are all very interesting questions about the future of the development process.
yodsanklai 2 hours ago [-]
> People coming from highly respected universities are doing everything with AI
Nowadays, everybody is doing everything with AI, young and old alike. It's very hard to justify not doing it. That being said, you can produce good code with AI, if you know what it should look like and spend the time to prompt and iterate.
davidajackson 1 hour ago [-]
Many here will be sad, but there will be a day when writing code is considered as antiquated as using a slide rule. It is coming.
kf 4 hours ago [-]
Yes, absolutely, if you don't use AI in coding you will be a legacy developer sooner rather than later.
Everyone doing this seriously has a bunch of agents in a corporate-like structure doing code reviews. The bad AI code comes from someone using a single instance of Claude or ChatGPT; when you have 50 agents competing to write the best code from a single prompt, it hits differently.
kpbogdan 6 hours ago [-]
Yea, the development process is changing rapidly now.
We are in this transitional period.
I have no idea where we will end up, but it will be a different place from where we were even a year ago.
sfmz 3 hours ago [-]
Meta/Google/Anthropic report that 75%+ of coding is now AI. For every engineer orchestrating AIs, X will be let go -- but at what ratio? 3:1? 5:1? 10:1? Seems like it's at least 3.
MichaelRazum 2 hours ago [-]
Actually, I'm not worried about unemployment. This is an awesome development -- it's called technological progress.
PS: Compare Assembly with Python -- the ratio is surely more than 10x. Still, we need far more devs than in the early days. For me the question is what the future software dev looks like (if the job still exists).
decasteve 5 hours ago [-]
Reviewing code becomes more arduous. Not only are the pull requests more bloated, the developer who pushed them doesn't always understand the implications of their changes. It's harder to maintain and track down bugs. I spend way too much time explaining AI generated code to the developer who "wrote" it.
MichaelRazum 4 hours ago [-]
Agree. A review is always a knowledge exchange, and for juniors a learning experience. If the code is AI generated, it's just not worth the time.
coldtea 5 hours ago [-]
>So what do you guys think? Is this the future?
Yes. The future is quickly produced slop. Future LLMs will train on it too, getting even sloppier. And "fresh out of uni" juniors and "outsourced my work to AI" seniors won't know any better.
damnitbuilds 5 hours ago [-]
There seems to be a disconnect, with some people claiming they don't write code any more, only specs, and me trying to get Copilot to fix a stupid sizing bug in our layout engine and it Not Getting It.
Is this because the guys claiming success are working in popular, known, more limited areas like Javascript in web pages, and the people outside those, with more complex systems, don't get the same results ?
I also note that most of the "Don't code any more" guys have AI tools of their own to promote...
MichaelRazum 4 hours ago [-]
Maybe try Claude. People are also orchestrating AI, for example with Ralph. I think it is possible to write pretty decent, test-driven code with AI.
nazgu1 4 hours ago [-]
In my opinion these guys just don't give a sh** about "stupid sizing bugs". Those who care about how their software behaves and looks realise after a while that most AI claims are a scam.
foldr 54 minutes ago [-]
AI tools can certainly fail to fix bugs, but if you’re consistently finding them of minimal use for debugging, I’d say that you’re either working in a fairly niche domain or that you’re maybe not fully exploiting the capabilities of the tool.
eudamoniac 58 minutes ago [-]
> Is this because the guys claiming success are working in popular, known, more limited areas like Javascript in web pages
Nope, because this is all I do, and the AI doesn't do it right either.