I can agree with all that. It brings to mind Future Shock, a book from 1970. Seems prophetic. (Ironic, since I was just saying we can’t predict the future.)
The documentary based on the book will seem silly in some ways now. I’m sure Toffler got a lot wrong.
While it may be true that jobs for current college grads are fucked, the only jobs AI is actually taking at the moment are low-impact blibbety-blab jobs like internal corporate nonsense and marketing materials that no one really reads anyway. I guess maybe translation and tech writing might be impacted too. But I’m sure those LLMs have to be highly supervised in any capacity where being wrong actually matters to your bottom line.
All the stuff about it taking programmers’ jobs is way overblown at the moment. Companies are using AI as an excuse for layoffs that may or may not have needed to happen anyway and the AI-bubble machine is amplifying it to the moon.
The FAANGs hired way too many programmers as part of their growth story to Wall St. and to drain the talent pool for potential startup competitors. All the minor-league FAANGs followed suit because that’s just what you do.
I’m not saying it won’t get there eventually, but right now LLMs are a useful tool for programmers, not a replacement for even junior programmers.
Apparently, AI agents are mostly marketing nonsense at the moment even for customer service, which should be their wheelhouse, much less letting them run wild and write code.
My experience the last few months is exactly the opposite, at least for programming. I’ve been trying to get this project done and have been pounding on the agentic coding CLI things that have come out this year, and they are intensely good, so good you’d be a fool not to be using one of these right now. Simply put, I know what I’m doing and I’m at least 25% more productive, which is insane. Looking over my old posts ITT makes me laugh. Programming is absolutely fked as a career in the near-to-medium term, and again, these agentic coding agents aren’t even a year old and they’re this good.
But also, the conventional wisdom of “they aren’t taking jobs, overemployed people are getting laid off” may be true for right now or the next year or two, but there are many people whose entire job is basically “I have to generate and deliver this report to my boss every day, every week.” With what these coding agents can easily do right now, there’s zero reason someone can’t closely establish parameters for exactly what an AI will need to do to deliver that exact report and make it work today, and those people will no longer need to be employed.
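To make that concrete, a lot of those report jobs reduce to pulling rows from somewhere and formatting them the same way every day. A minimal sketch of the rote part, with hypothetical data and field names (the real job would query a database or spreadsheet export, and an agent would wire that up):

```python
from datetime import date

# Hypothetical sales records; in the real job these would come from a
# database query or an exported spreadsheet.
SALES = [
    {"region": "East", "units": 120, "revenue": 5400.0},
    {"region": "West", "units": 95, "revenue": 4125.5},
]

def build_daily_report(records, for_date):
    """Assemble the same report a human would paste into an email every day."""
    total_units = sum(r["units"] for r in records)
    total_revenue = sum(r["revenue"] for r in records)
    lines = [f"Daily sales report for {for_date.isoformat()}"]
    for r in records:
        lines.append(f"  {r['region']}: {r['units']} units, ${r['revenue']:,.2f}")
    lines.append(f"Total: {total_units} units, ${total_revenue:,.2f}")
    return "\n".join(lines)

report = build_daily_report(SALES, date(2025, 1, 6))
print(report)
```

Once the parameters are nailed down, scheduling this and emailing the result is exactly the kind of glue code an agent bangs out in one prompt.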
How many of those people do you think we have in the world, right now?
FWIW - I’m basing my opinions on agents on the back-and-forth on HackerNews, so your personal experience probably matters more.
I just know that for the kind of programming I do, I spend about 10% of my time coding, if that, and 90% of the time thinking about things an LLM (at least in their current state) would never be able to fathom - like how does this thing I need to code relate to the other pieces of this app, does its data structure make sense to talk to our other apps, how do I expect this app to evolve in the future and how should I code based on that, should I rely on this AWS feature that seems to be kind of a neglected stepchild, and about a million other aspects like that that I can’t even remember right now.
And it would be pretty similar for even a junior developer where I work now, or any of the corporate jobs I’ve had before this. We almost never get self-contained greenfield apps that aren’t going to be a building block for something bigger. We have to think very carefully about every aspect of the app. For me anyway, Claude Code just wouldn’t be that useful for the 5-10% of the time I spend actually typing out code.
LLMs have been insanely helpful in researching how to do some weird thing with AWS, or tailwind, or bash, or selenium, or any other tech I have to work with that I just want to get something done and not have to become an expert to do it. But even then, about 10-20% of the time it sends me down a wrong path that is really frustrating and eats up a lot of time. Overall it’s still worth it, but the hallucinations drag down that productivity a lot.
I agree people who write up TPS reports all day are fucked.
My limited use of ChatGPT for fixing my SQL mimics your experience. I imagine that AI is more likely to increase productivity, which will reduce the need for more developers; that isn’t quite the same as saying it is going to “replace” developers, but the outcome is probably the same: fewer openings. The problems will start when people who knew how to code without AI start retiring and are replaced with folks who have used it as a crutch and don’t understand why the prompts were written the way they were. Maybe by then AIs will be better at working with more ambiguous prompts?
One possibility of where this ultimately leads is to where AI code is just a big ever-evolving organic blob that is indecipherable to humans and eventually gets it right 99+% of the time.
Right now LLMs are mimicking a computer language that was optimized for human cognition. LLMs may not even need a language. Just give them machine code and endless cycles of training until they get to an acceptable level of expected outputs. This may never work for medical devices or bank software. But for most applications, it might be good enough.
This is not true. They are absolutely capable of making massive connections across code (context), figuring things out, planning ahead, anticipating what might happen — all of these things they can do right now, and they can uncover things you completely didn’t think about. They are not perfect, but they’re far beyond the limits you’re describing.
I’m also writing 10% of the code that is being produced in my greenfield project, btw, but I’m (we’re) kicking out feature after feature that is being done right, because I have an agentic AI flow that not only writes code virtually exactly the way I would manually (after many, many prompts to update the AGENTS.md file), but that I’m confident in due to the review flow itself. It just works fine, and as far as I can tell, it’s not the Max Power way at all.
Can’t remember how to multiquote, but my main one is OpenAI’s Codex CLI, with Claude Code almost as much; the extra veracity you get from having two frontier products that can review each other is probably worth it. And I use the Gemini free tier for extra review sometimes. Oh, and I have VS Code Copilot for its AI autocomplete. So I found the business model, guys: drain Lake Mead.
You should try using the AI like a sparring partner rather than an assistant. Co create by asking it to criticize your own thoughts and identify your blind spots while you come up with the ideas. Through this method you can use it to get to the same answers for these type of questions quicker than just thinking through them on your own. Software architecture is not some incomprehensibly complex domain that AI can’t understand.
Lots of different ways. Codex and Gemini have built-in slash commands for code reviews that are really good; Claude doesn’t (its built-in review somehow only works on actual PRs), but you get the same thing with plain text, i.e. “review what’s staged in git vs main, pr-style” is fine. But a) the GitHub @codex review command, which is completely separate and has separate token buckets from normal use, is REALLY good and finds deep logic problems not even related to the PR you’re working on, and b) the real value is that you can have them review each other’s reviews, because they are good at and bad at different things. A typical pattern is “hey, I received this review, can you look into this, verify it, and sketch out a plan to resolve it if you think this is a real problem. review: ctrl+V” and off we go.
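That cross-review handoff is basically a prompt template you reuse. A minimal sketch of the pattern (the wording is illustrative, not any tool’s API — you’d paste the result into whichever agent is doing the verification):

```python
def cross_review_prompt(review_text: str) -> str:
    """Wrap one agent's review so a second agent can verify it,
    following the 'review the review' pattern described above."""
    return (
        "I received this code review from another agent. "
        "Look into each point, verify whether it is a real problem, "
        "and sketch a plan to resolve anything that is.\n\n"
        f"review:\n{review_text}"
    )

# Example: feeding a (hypothetical) finding from agent A to agent B.
prompt = cross_review_prompt("Possible race condition in cache invalidation.")
print(prompt)
```

The point of routing the review through a different frontier model is that their blind spots don’t fully overlap, so a bogus finding from one tends to get caught by the other.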
So I do a lot of this for every major PR, and since one PR can take my entire week of Codex code-review tokens, I do one big PR a week; it finds a ton of crap and it’s great. Hot tipz from an OpenAI Wrapped 2025 top-5% user, yeah.
Jobs in entry level or low paid white collar categories like data entry, customer service rep, medical coding, etc are in trouble. I think with software development and other higher end, higher paid white collar jobs, there’s a lot of potential for the productivity gains and reduced cost of programming or other outputs to create more demand and more jobs than eliminate them.
For IaC, which I spend a lot of time in, and especially Terraform, I find the tools mentioned here lacking. A constantly shifting version matrix (Terraform client, provider version, cloud resource versions) can quickly confuse a coding agent. I do find them useful for spitting out tons of (mostly correct) boilerplate, and for big refactors involving moving a lot of files around that I normally couldn’t do without a lot of tedium and a day of my week set aside. I’d say we probably save a new hire because of it on average every 2 years, like in the beforetimes.
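One thing that seems to help with the shifting version matrix (this is my own workaround, not something the agents do for you) is pinning versions explicitly in the repo, so the agent can read the matrix from the code instead of guessing which provider schema it’s targeting:

```hcl
terraform {
  # Pin the Terraform client range so the agent (and CI) can't drift.
  required_version = ">= 1.7.0, < 2.0.0"

  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Pin the provider so generated boilerplate targets a known schema.
      version = "~> 5.60"
    }
  }
}
```

The version numbers above are placeholders; the point is that an unpinned repo gives the agent nothing to anchor its boilerplate against.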
For the coding-coding that I do: yeah, at least for what I write, it trivializes it. I’m glad I never much cared for programming as a profession, because I suspect that if your focus were code writing and that alone, you’d have reason to worry a bit.
I don’t know what it means for the industry. I just need 20 more years. I doubt I will be completely replaced. The job market got flooded with people looking to glue React components together and pull down 200k TC right out of school; I expect that will become difficult to make a career from. My job’s a lot more like suzzer’s than anyone else’s here, I think, and I suspect those roles will be fine, but I just don’t like to think about what will happen to the broader job market and what the ghouls on top will think they can get away with.
I’m shifting slowly into more security ops stuff because I think AI is creating big opportunities there and likely will be terrible at it for a long time.