distalx 17 hours ago [-]
This article is spot on. I'm feeling the exact same way watching the industry aggressively promote the idea that it's safe to deploy unverified code just because an AI wrote the tests.
We are playing with fire. If we keep treating "I don't read the code I ship" as a feature rather than a liability, it's going to cause a massive, real-world disaster. The resulting regulation will be so heavy that software engineering will end up needing a Bar Council or Medical Board just to ship a basic feature. We're cheering for a trend that is going to regulate us into a corner.
anonzzzies 14 hours ago [-]
But people's code also causes real-world disasters; most human programmers are terrible, never held accountable (they usually left a while ago), cannot read or comprehend code (either), and cannot write tests (either). Only in an echo chamber like HN can you believe that the majority of human programmers are any good, or better than a 1-bit 7B model; they are not. Go out into the real world; most people are really, really bad at what they do, including programmers.
expedition32 10 hours ago [-]
AI is coded by people.
We have not reached the state in which AI creates AI.
anonzzzies 8 hours ago [-]
A few of the very smartest, highest-paid people in the world, yes. The rest... well... I did not use universal quantifiers in my original post.
bad_username 18 hours ago [-]
> It’s just incapable of the thing that makes a real architect valuable: saying “no.”
I have had Claude tell me many times "not really, here's a better way" when asked "hey Claude, is this a good idea?". Granted, Claude will just do it if I simply tell it to, but that's by design. LLMs, at least today, are capable of pushing back if suitably prompted.
AugSun 19 hours ago [-]
"The craft still matters" - maybe, but nobody is paying for it anymore. So, let that Jenga tower wobble ...
matstech 16 hours ago [-]
That's an incredibly thorough analysis. And if we wanted to be even pickier: why on earth would we want to form so many opinions about the world using a single LLM?
eieiyo 15 hours ago [-]
> I’m not saying don’t use AI agents. I use Claude Code every day.
It shows, man. You even had it write this article for you.