The Coming Hackastrophe
For years, cybersecurity experts have been warning about the chaos that highly capable hacking bots could usher in. … Claude Mythos Preview appears to represent not an incremental change but the beginning of a paradigm shift. … Perhaps more concerning than the reported capabilities of Mythos Preview is that other companies are not far behind. (More)
Finding bugs was also hard, so the worst flaws stayed hidden, sometimes for decades. It wasn’t a great system. But the difficulty on both sides created a kind of détente that held. Now, thanks to new A.I. tools, anyone can write code. Soon, bad actors could use those same tools to find out what’s wrong with code. The détente is over. (more)
Use strong passwords that are unique across every site, preferably through a trusted password manager. Better yet, when a site offers a passkey, take it. … For accounts without passkeys, use an authenticator app for two-factor authentication, not text messages. Always keep all your software up to date, and uninstall unnecessary apps. (more)
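The authenticator apps mentioned above generate time-based one-time passwords (TOTP, RFC 6238): the server and the app share a secret, and both compute a short code from an HMAC over the current 30-second time window. A minimal sketch, using only the Python standard library:

```python
import hmac
import hashlib
import struct

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    """Compute a TOTP code per RFC 6238 (HMAC-SHA1 variant)."""
    # Count of completed time steps since the Unix epoch, as 8-byte big-endian.
    counter = struct.pack(">Q", timestamp // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at time 59
# falls in the second 30-second window and yields the 6-digit code "287082".
print(totp(b"12345678901234567890", 59))
```

This is why authenticator codes beat SMS: the shared secret never travels over the phone network, so a SIM-swap attacker who can intercept your texts still cannot compute the code.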
OK, I’m a few weeks late to this party, but not too late to give many of you the news: we may soon face a period (a few years?) of greatly reduced software availability.
For many decades, we have known how to write pretty secure software. It takes a bit longer, and security considerations must be central to early design efforts, but it is possible. However, developers have usually been in too much of a rush to market to do this. So most software systems today are riddled with security holes. What has saved them so far is that it takes humans a lot of work to find and exploit such holes.
However, there now exist powerful AI systems that are far better at finding and exploiting such holes. Soon (within a year or two?) many AI firms will have such tools, and they will spread to become widely available. Yes, such AI systems can also work to patch such holes, but computer security experts tell me that in insecure systems it is much easier to find and exploit holes than to patch them. Attack beats defense.
Software firms would then more eagerly rewrite their code to use more secure designs, and AI could help them do this. But that takes time, and as there isn’t much secure software out there now, AI hasn’t had big datasets from which to learn how to do this well. So it will take some time to replace weak software with strong.
So there may soon be a period, starting within a few years and maybe lasting a few years, when most actual software systems can be cheaply hacked. This will leave software firms vulnerable to ransomware, and make customers wary of using their products. Customers, firms, and app stores will respond by cutting back on what software systems they offer, and by simplifying them by dropping many features.
As our world has come to rely on software for a great many things, it seems quite concerning that we might soon have to make do with substantially less software. How vulnerable are crucial systems like electricity, cars, traffic lights, voting systems, and payment systems? I don’t think we know. Beware the coming Hackastrophe.
Note: such an event would likely make the public much more willing to regulate AI.
s/reduce/reduced. Our firm is making an emergency plan to keep the business going assuming all Internet-connected PCs may be down for months. We are putting some machines aside offline.
I've been looking forward to this for a while. Historically many companies have been lax about security but AI hacking tools – especially ones that can run locally – will change that.
We could also do a lot for security if we decriminalized certain forms of hacking. It's hard to get a politician to understand that such activities make us stronger, not weaker. An analogy is TSA metal detectors at airports: You want red-team agents trying their best to find gaps and sneak weapons through.