7 Comments

Thanks. I think I understand. To summarize, the crucial difference seems to be that security design has to contend with adversarial inputs while normal program design has to deal only with neutral inputs.


Security is like certain sorts of correctness, and unlike most other features, in that it's a universal property rather than an existential one.

"There should be a way for a user to export their data to CSV" is an existential requirement, that can be fulfilled in an existing system by adding and integrating a module.

"There should not be any sequence of inputs that crashes the program" is a universal requirement (or negated existential, if you prefer), and code nearly anywhere in the system could potentially violate it. These kinds of properties get harder to check or to add as the system gets bigger.

In a distributed system, security properties for each component tend to be of the form "No [communication from other nodes] can cause me to do [undesirable or restricted behavior] unless they [prove it's authorized]." Showing that the program can't be convinced to do the thing through unorthodox means (e.g. buffer overruns) requires wide-ranging scrutiny and possible changes to the program itself. Providing appropriate and verifiable proofs of authorization requires accommodation from the network protocol (so everyone agrees on how to send the proofs) and from all other distributed nodes (so they actually send the necessary and sufficient authorization proofs), so it's a very costly change at the level of the network even if authorization can be a self-contained module in each network component.
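
As one illustration of that "prove it's authorized" shape, here is a minimal sketch in Python assuming a pre-shared key and HMAC-based proofs; the names (SHARED_KEY, handle_request, release_location) are hypothetical and not part of any real protocol:

```python
# Sketch: refuse the restricted behavior unless the caller supplies a valid
# proof of authorization (here, an HMAC over the request with a shared key).
import hmac
import hashlib

SHARED_KEY = b"per-peer secret provisioned out of band"  # assumption for the sketch

def release_location(message: bytes) -> str:
    # Hypothetical restricted behavior.
    return "location data for " + message.decode(errors="replace")

def handle_request(message: bytes, proof: bytes) -> str:
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, proof):
        raise PermissionError("no valid authorization proof")
    return release_location(message)
```

Even this toy version needs both sides to agree on how the proof is computed and attached, which is the protocol-level cost described above; and it does nothing about the other half of the problem, namely being tricked into the behavior through some path that bypasses handle_request entirely.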

In the case of the cell carriers, they formulated the wrong security property when they created the system: something like "Only actual cell carriers can connect to the SS7 network," instead of "Don't release information about a phone to anyone but the carrier it belongs to." Fixing that now requires a change to the protocol (difficult or impossible to do piecemeal) and to all the different pieces of software that constitute the network.


Why is security so different from other aspects of system design?


Using fixed security standards during development would help a lot, although often that's not even the problem; more often it's something really stupid and simple, like not installing an update or not guarding against SQL injection. The fixes are out there and manageable, but they don't get done because that might mean the CEO only gets a $5 million bonus instead of $6 million, and we can't have that...
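
For the SQL injection case specifically, the fix really is simple and well known: use parameterized queries instead of string concatenation. A minimal sketch with Python's built-in sqlite3 module (the users table and find_user function are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user(name: str):
    # Vulnerable version (don't do this): f"SELECT * FROM users WHERE name = '{name}'"
    # lets the input rewrite the query. The placeholder keeps input as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user("alice"))               # [('alice', 'alice@example.com')]
print(find_user("alice' OR '1'='1"))    # [] instead of dumping the table
```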

Security is usually not a priority, at least as long as nothing major has gone wrong yet, and when something does go wrong, the people who made security "not a priority" are rarely the ones who suffer the consequences.


I completely agree.


"So they usually ignore security at first, and then later address it as an afterthought"

Much of computer science research shows this approach to be self-defeating. Unlike many other aspects of program design, security is highly coupled to the underlying data structures, algorithms, and protocols used throughout the system. Considerations like UI, persistence, and even platform can usually be abstracted away into loosely coupled sub-components, implemented quickly at first and upgraded in future iterations.

This doesn't work for security because it's so tightly integrated into core program design. A system not designed with security in mind from day one will likely never be secure. This is the problem with SS7: it's an attempt to hoist a data network on top of the POTS.

There are some clear advantages to hacking existing networks into taking on new roles. It avoids the switching costs and path dependency of trying to build new networks from scratch. But security's always going to suffer in these instances.


100% watertight security is of course impossible (especially since you can't secure against social engineering), but there are trillions of tonnes of low-hanging fruit: most "hacks" are simple, known exploits, and many could be fixed just by installing free updates or by no longer using a 10-year-old version of IE. It's left hanging simply because executives barely know where the on/off button of a computer is, and because executive bonus packages and PR for the firm get higher priority. To top it off, businesses have successfully sued employees who warned the public when data was stolen.
