Security Has Costs

Technical systems are often insecure, in that they allow unauthorized access and control. Strong security is usually feasible if designed in carefully from the start, but systems are usually built fast and on the cheap. So they tend to ignore security at first and address it later as an afterthought, which becomes a crude ongoing struggle to patch holes as fast as new ones are made or discovered.

The more complex a system is, the more other systems it must adapt to, the more organizations share it, and the more it is pushed to the edge of technical or financial feasibility, the more likely it is that its security is full of holes.

A dramatic example of this is cell phone security. Most anyone in the world can use your cell phone to find out where your phone is, and hence where you are. And there’s not much anyone is going to do about this anytime soon. From today’s Post:

The tracking technology takes advantage of the lax security of SS7, a global network that cellular carriers use to communicate with one another when directing calls, texts and Internet data.

The system was built decades ago, when only a few large carriers controlled the bulk of global phone traffic. Now thousands of companies use SS7 to provide services to billions of phones and other mobile devices, security experts say. All of these companies have access to the network and can send queries to other companies on the SS7 system, making the entire network more vulnerable to exploitation. Any one of these companies could share its access with others, including makers of surveillance systems.

The tracking systems use queries sent over the SS7 network to ask carriers what cell tower a customer has used most recently. Carriers configure their systems to transmit such information only to trusted companies that need it to direct calls or other telecommunications services to customers. But the protections against unintended access are weak and easily defeated. …

Carriers can attempt to block these SS7 queries but rarely do so successfully, experts say, amid the massive data exchanges coursing through global telecommunications networks. P1 Security, a research firm in Paris, has been testing one query commonly used for surveillance, called an “Any Time Interrogation” query, that prompts a carrier to report the location of an individual customer. Of the carriers tested so far, 75 percent responded to “Any Time Interrogation” queries by providing location data on their customers. …
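The blocking the article describes amounts to filtering which peers a carrier will answer. As a minimal sketch (all names and message fields here are illustrative, not actual SS7 formats), the rule is an allowlist check on location queries:

```python
# Hypothetical sketch of the kind of filtering rule a carrier could apply:
# answer "Any Time Interrogation" (ATI) location queries only when they
# come from an allowlisted roaming partner. Field names are made up.

TRUSTED_PARTNERS = {"partner-carrier-a", "partner-carrier-b"}

def should_answer_ati(query: dict) -> bool:
    """Return True only if a location query comes from a trusted peer."""
    if query.get("operation") != "anyTimeInterrogation":
        return True  # not a location query; out of scope for this rule
    return query.get("origin") in TRUSTED_PARTNERS

# A query from an unknown surveillance vendor is refused:
print(should_answer_ati({"operation": "anyTimeInterrogation",
                         "origin": "unknown-vendor"}))  # False
```

As the article notes, the hard part is not the rule itself but applying it reliably amid massive traffic volumes without breaking legitimate call routing.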

The GSMA, a London-based trade group that represents carriers and equipment manufacturers, said it was not aware of the existence of tracking systems that use SS7 queries, but it acknowledged serious security issues with the network, which is slated to be gradually replaced over the next decade because of a growing list of security and technical shortcomings.

As some carriers tightened their defenses, surveillance industry researchers developed even more effective ways to collect data from SS7 networks. The advanced systems now being marketed offer more-precise location information on targets and are harder for carriers to detect or defeat.

Telecommunications experts say networks have become so complex that implementing new security measures to defend against these surveillance systems could cost billions of dollars and hurt the functioning of basic services, such as routing calls, texts and Internet to customers. “These systems are massive. And they’re running close to capacity all the time, and to make changes to how they interact with hundreds or thousands of phones is really risky.” …

Companies that market SS7 tracking systems recommend using them in tandem with “IMSI catchers,” increasingly common surveillance devices that use cellular signals collected directly from the air to intercept calls and Internet traffic, send fake texts, install spyware on a phone, and determine precise locations. IMSI catchers … can home in on somebody a mile or two away but are useless if a target’s general location is not known. SS7 tracking systems solve that problem by locating the general area of a target so that IMSI catchers can be deployed effectively. (more)


    100% watertight security is of course impossible (especially since you can’t secure against social hacking), but there are trillions of tonnes of low-hanging fruit: most “hacks” are simple, known exploits, and many could be fixed just by installing free updates or by not using a 10-year-old version of IE. That fruit is left hanging simply because executives barely know where the on/off button of a computer is, and because executive bonus packages and PR for the firm get higher priority. To top it off, businesses have successfully sued employees who warned the public when data was stolen.

  • Doug

    “So they usually ignore security at first, and then later address it as an afterthought”

    Much computer science research shows this approach to be self-defeating. Unlike many other aspects of program design, security is highly coupled to the underlying data structures, algorithms and protocols used throughout the system. Considerations like UI, persistence, and even platform can usually be abstracted away into loosely coupled sub-components, implemented quickly at first and upgraded in later iterations.

    This doesn’t work for security because it’s so tightly integrated into core program design. A system not designed with security in mind from day one will likely never be secure. This is the problem with SS7: it’s an attempt to hoist a data network on top of the POTS.

    There are some clear advantages to hacking existing networks into taking on new roles. It avoids the switching costs and path dependency of trying to build new networks from scratch. But security’s always going to suffer in these instances.

    • RobinHanson

      I completely agree.

    • IMASBA

      Using fixed security standards during development would help a lot, although often that’s not even the problem; more often it’s something really stupid and simple, like not installing an update or not guarding against SQL injection. The fixes are out there and manageable but don’t get done, because that might mean the CEO only gets a $5 million bonus instead of $6 million, and we can’t have that…
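      The SQL-injection case mentioned above really is that simple to fix. A minimal sketch with Python’s `sqlite3` (table and data invented for illustration):

```python
import sqlite3

# Minimal sketch of the SQL-injection fix: parameterize, don't concatenate.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, secret TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "alice' OR '1'='1"  # classic injection attempt

# Vulnerable: string concatenation lets the input rewrite the query.
leaked = con.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'").fetchall()

# Safe: a parameterized query treats the input as data, not as SQL.
safe = con.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)).fetchall()

print(leaked)  # [('s3cret',)] -- the injection succeeded
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```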

      Security is usually not a priority, well, at least so long as nothing major has gone wrong yet, and when it does, the people who made security “not a priority” are rarely the ones who suffer the consequences.

    • Sid K

      Why is security so different from other aspects of system design?

      • DL

        Security is like certain sorts of correctness, and unlike most other features, in that it’s a universal property rather than an existential one.

        “There should be a way for a user to export their data to CSV” is an existential requirement, one that can be fulfilled in an existing system by adding and integrating a module.

        “There should not be any sequence of inputs that crashes the program” is a universal requirement (or negated existential, if you prefer), and code nearly anywhere in the system could potentially violate it. These kinds of properties get harder to check or to add as the system gets bigger.
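        The contrast can be sketched in a few lines (the functions here are invented for illustration). The existential requirement is met by adding one self-contained function; the universal one can only be probed by quantifying over inputs, here with crude random testing:

```python
import csv
import io
import random
import string

# Existential requirement: "there should be a way to export to CSV".
# Satisfied locally, by adding one self-contained function.
def export_csv(rows):
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

# Universal requirement: "no input crashes the parser".
# No single module satisfies this; we can only probe it over many inputs.
def parse_record(text):
    key, _, value = text.partition("=")  # total: never raises on any string
    return {key.strip(): value.strip()}

for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=20))
    parse_record(junk)  # must never raise, whatever junk we throw at it

print(export_csv([["a", "b"]]))  # "a,b" plus a CSV line terminator
```

Random testing only ever shows the presence of a crash, never its absence, which is exactly why universal properties get harder to establish as systems grow.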

        In a distributed system, security properties for each component tend to be of the form “No [communication from other nodes] can cause me to do [undesirable or restricted behavior] unless they [prove it’s authorized].” Showing that the program can’t be convinced to do the thing through unorthodox means (e.g. buffer overruns) requires wide-ranging scrutiny and possible changes to the program itself. Providing appropriate and verifiable proofs of authorization requires accommodation from the network protocol (so everyone agrees on how to send the proofs) and from all other distributed nodes (so they actually send the necessary and sufficient authorization proofs), so it’s a very costly change at the level of the network even if authorization can be a self-contained module in each network component.
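        The “prove it’s authorized” step can be sketched with a shared-key MAC, the simplest form such a proof can take (key handling and message format are simplified for illustration; a real protocol would need key distribution, replay protection, and agreement from every node):

```python
import hashlib
import hmac

# Sketch: each query must carry a MAC computed with a key shared only
# with parties entitled to the answer. Names here are illustrative.
SHARED_KEY = b"per-peer secret"

def sign(query: bytes) -> bytes:
    return hmac.new(SHARED_KEY, query, hashlib.sha256).digest()

def handle(query: bytes, proof: bytes) -> str:
    # compare_digest avoids leaking information through timing differences
    if not hmac.compare_digest(proof, sign(query)):
        return "refused: no valid authorization proof"
    return "location data for " + query.decode()

q = b"locate:+15551234567"
print(handle(q, sign(q)))    # an authorized peer gets an answer
print(handle(q, b"forged"))  # anyone else is refused
```

The module itself is small; the costly part, as noted above, is that every node and the protocol between them must agree to send and check these proofs.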

        In the case of the cell carriers, they formulated the wrong security property when they created the system: something like “Only actual cell carriers can connect to the SS7 network,” instead of “Don’t release information about a phone to anyone but the carrier it belongs to.” Fixing that now requires a change to the protocol (difficult or impossible to do piecemeal) and to all the different pieces of software that constitute the network.

      • Sid K

        Thanks. I think I understand. To summarize, the crucial difference seems to be that security design has to contend with adversarial inputs while normal program design has to deal only with neutral inputs.