Yeah, I’m pretty much an enthusiast and optimist when it comes to this technology but I would not trust any of the AI browsers. That’s just access to way too much stuff and prompt injection is still a real problem. Eventually people will have hidden prompts that exploit these browsers to access your email and drain your bank account.
No one to blame for not having good backups but yourself.
I guess I wouldn’t say I’d never use them, but I would never use them for mainstream web browsing. The AI web browsers seem like what RPA should have been when consultants were pushing those tools 8-10 years ago
My take: I am an AI bear but I am also a fool.
If AI can replace just 2% of the workforce, that is one or two trillion dollars of annual revenue. That money gets split up between the vendors and customers, but that’s a lot of cash. Is that enough to justify the current investment cycle? I don’t know. I do know that the tech I use these days, and that is very minimal, doesn’t seem to be as reliable as it was a year or two ago.
My broker app has this glitch when you put in a “buy to cover” order. If I have a short position I want to close, I can click “buy to cover all” and it automatically fills in the ticket to close the position. It’s not a new feature; it’s always been there as I recall. So you hit “buy to cover all”, market, review order, place order. It’s a two-second process. But about two months ago I was going through this progression and I kept getting an error message. I reviewed the ticket, everything looked good, but still got an error message. Checked the position monitor, yep, still there. Couldn’t make it work. I had to call customer service to get it resolved, and it was just a matter of the plus and minus sign on the position size not both being highlighted. Just this kind of really small random thing not working properly. And it’s still there, still not fixed, and that is super annoying. I don’t have any indication whether this is related to AI or not. Did AI fill in some sloppy code that no one is really checking up on? But I instinctively want to blame it on AI because I am a foolish AI bear.
The value I’ve derived from ChatGPT and a little Codex is in code review. If I give it enough project context and what I’m trying to do, it is extremely good at catching the subtle small mistakes that slip in when you’re already doing the work of 2-3 people, the kind of mistakes that would probably get flagged by an ambitious junior. With Claude Code I get a lot of flexibility to let loose smart automation that makes this even easier, which is basically what I’ve pitched. I mostly work on security automation stuff, so there are terrifying footguns everywhere, and AI has made me feel more confident given how many mistakes it can catch before my testing or monitoring does.
Agents are a complete security nightmare though, operationally. I’ve been trying to work on properly sandboxing them, but so have a lot of smarter people, and I still see oopsie stories everywhere.
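For what it’s worth, the baseline I keep coming back to is never letting an agent’s tool calls touch the real environment directly. A minimal sketch of that idea (the allowlist and the `run_tool` helper are my own hypothetical names, not from any particular agent framework):

```python
import subprocess
import tempfile

# Hypothetical allowlist: the agent may only invoke these binaries.
ALLOWED = {"echo", "ls", "cat"}

def run_tool(argv, timeout=5):
    """Run an agent-requested command with a few blunt guardrails:
    an allowlisted binary, a near-empty environment (so no API keys
    or tokens leak in), a throwaway working directory, and a hard
    timeout so a runaway process can't hang the agent loop."""
    if not argv or argv[0] not in ALLOWED:
        raise PermissionError(f"binary not allowlisted: {argv[:1]}")
    with tempfile.TemporaryDirectory() as scratch:
        return subprocess.run(
            argv,
            cwd=scratch,                         # can't see the real project tree
            env={"PATH": "/usr/bin:/bin"},       # only PATH, no inherited secrets
            capture_output=True,
            text=True,
            timeout=timeout,
        )

result = run_tool(["echo", "hello"])
print(result.stdout.strip())  # prints: hello
```

This obviously isn’t a real sandbox (no namespace/seccomp isolation, no network controls), just the cheapest layer of the onion; the point is that every tool call goes through one chokepoint you can audit.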
I seem to see a version of the following progression a lot:
- People who use this garbage are dumb
- My work is way beyond these stupid machines, but sure, I occasionally use it for the most menial tasks
- It’s mostly garbage, but I do so much and/or work at such a high level, that it’s fine for catching my mistakes
- I use it for some stuff, but it’s just a faster version of things I already know how to do. I’m still the one doing the real thinking.
- …
who here is saying anything like that? or are you speaking generally? do you use these tools or manage anything important with them? sorry i know you’re a big important smart man
It’s a tool that will make the good developers even better and make the shitty ones think they’re better, but probably just increase the volume of shitty code they crank out, with mistakes that are more subtle. You can get some big gains from the high performers, but those gains will be offset by the slop that dipshits generate with it, which will need to be fixed by a shrinking pool of people who know what they’re doing.
claude having a bad day, dang. almost makes me feel bad for it
https://x.com/andyayrey/status/2015977558882533687?s=46&t=hUTQWHj9NQWf8Y8RgMv1TA
good thing it is just a toaster and has no feelings…
heh, yes, there are always people who seem surprised that you can prompt an AI that has read every sci-fi version of an AI to act like an AI that’s struggling to be an AI.
i really haven’t worked with too many developers who i considered to be outright bad. one was a junior who was just super slow and once said “why test anything? just wait for a bug and then we will fix it”, and the other was someone who changed jobs every year and seemed to barely know the language they were working with. both were hired without a technical interview by a very lazy manager. everyone else has been fine.
What if I told you this school has no teachers, and students learn academics through an AI tutor, on AI apps that give each student a one-to-one personalized, mastery-based academic journey that guarantees success?
There’s no homework and no textbooks — just software that students use each morning to learn, with human “guides” for motivation and classroom support.
While I think this is a terrible idea for public education because we aren’t close to having the necessary controls on AI for this, AI guided learning is hot in the corporate world. Employees crave development and actually getting good at using LLMs to create a development plan for yourself is one way to turn the tide back a bit on how AI is putting your job/career at risk.
I don’t want to be Pollyannaish about it though; of course that is fraught with problems. AI sycophancy might create some major Dunning-Kruger employees who only think they’re valuable because AI told them so. Also, companies are certainly looking at this as a way to make all training self-directed, in the same way that they have made a ton of other functions like HR and Finance more self-directed, and those changes just made workers’ lives worse to save a few bucks.
I’m halfway through Dario Amodei’s (CEO of Anthropic) new essay and it’s one of the craziest things I’ve ever read. Highly recommend.
Still at the beginning and I’m already convinced the only thing that can stop a bad guy with a future AI is a good guy with a better future AI
Yeah he talks about that idea more specifically later in the essay.
It’s interesting because Amodei clearly thinks the downside risks of AI could be catastrophic, and he posits many realistic examples of how those risks could manifest. But then he seems somewhat optimistic that we’ll be able to defend against those risks which, I dunno, the essay is not really convincing me.
Like in a vacuum some of the defenses are impressive and I’m truly amazed by what they’re doing. But the fact that we’re needing this level of defense is not a good sign imo. I kept thinking about the “swiss cheese” problem that comes up with airplane crashes where we have lots of systems in place to prevent disaster but every once in a while something slips through all the holes. Not sure why that won’t happen with AI. When we’re talking about millions of deaths or worse, we don’t have the luxury of getting it wrong and learning from mistakes.
duuuude this is amazing
https://x.com/googledeepmind/status/2016919756440240479?s=46&t=hUTQWHj9NQWf8Y8RgMv1TA
“I can imagine, as Sagan did in Contact, that this same story plays out on thousands of worlds. A species gains sentience, learns to use tools, begins the exponential ascent of technology, faces the crises of industrialization and nuclear weapons, and if it survives those, confronts the hardest and final challenge when it learns how to shape sand into machines that think. Whether we survive that test and go on to build the beautiful society described in Machines of Loving Grace, or succumb to slavery and destruction, will depend on our character and our determination as a species, our spirit and our soul.”
Ah. Well. Nevertheless.
Yeah I skimmed this, idk. Liberal use of “CCP” made me suspicious. I set a remind-me-in-3-years email to see how well it holds up; I’ll check back.