Tag Archives: Privacy

Subpoena Futures

A subpoena … is a writ issued by a government agency, most often a court, to compel testimony by a witness or production of evidence under a penalty for failure. … party being subpoenaed has the right to object to the issuance of the subpoena, if it is for an improper purpose, such as subpoenaing records that have no relevance to the proceedings, or subpoenaing persons who would have no evidence to present, or subpoenaing records or testimony that is confidential or privileged. (More)

Parties may obtain discovery regarding any nonprivileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case, considering the importance of the issues at stake in the action, the amount in controversy, the parties’ relative access to relevant information, the parties’ resources, the importance of the discovery in resolving the issues, and whether the burden or expense of the proposed discovery outweighs its likely benefit. (More)

Exceptions are quite limited: self-incrimination, illegally-obtained info, and privileges of spouses, priests, doctors, diplomats, and lawyers. The remarkable fact is that the law has little general respect for privacy. Unless you can invoke one of these specific privileges, you must publicly report to the court any info that it thinks sufficiently relevant to a current court case. You simply have no general right to or expectation of privacy re stuff a court wants to know. Courts don’t even compensate you for your costs to collect evidence or appear to testify.

And yet. Consider what I wrote March 5:

The straightforward legal remedy for [pandemic] externalities is to let people sue others for infecting them. In the past this remedy has seemed inadequate for two reasons:

1. It has often been expensive and hard to learn and prove who infected who, and
2. … most folks just can’t pay large legal debts.
The vouching system directly solves (2), … And the key to (1) is ensuring that the right info is collected and saved.

First, consider some new rules that would limit people’s freedoms in some ways. Imagine that people were required to keep an RFID tag (or visible QR code) on their person when not at home, and also to save a sample of their spit or skin once a week. Then phones could remember a history of the tags of people near that phone, and lawsuits could subpoena to get surveillance records of possible infection events, and to see if spit/skin samples on nearby dates contain a particular pathogen, and its genetic code if present. We might also adopt a gambled lawsuit system to make it easier to sue for small harms. (More)
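As an aside, the phone-side logging in the quote above could look roughly like this minimal sketch, where the tag IDs, dates, and the two-hour matching window are all illustrative assumptions:

```python
from collections import namedtuple
from datetime import datetime, timedelta

# Hypothetical sketch: a phone keeps a log of (tag_id, timestamp) sightings,
# which a court could later subpoena to reconstruct possible infection events.
Sighting = namedtuple("Sighting", ["tag_id", "seen_at"])

class ProximityLog:
    def __init__(self):
        self.sightings = []

    def record(self, tag_id, seen_at):
        self.sightings.append(Sighting(tag_id, seen_at))

    def contacts_near(self, when, window=timedelta(hours=2)):
        """Tags seen within `window` of a suspected infection time."""
        return sorted({s.tag_id for s in self.sightings
                       if abs(s.seen_at - when) <= window})

log = ProximityLog()
log.record("TAG-42", datetime(2020, 3, 1, 9, 0))
log.record("TAG-99", datetime(2020, 3, 1, 9, 30))
log.record("TAG-42", datetime(2020, 3, 5, 17, 0))
print(log.contacts_near(datetime(2020, 3, 1, 10, 0)))
```

A real system would of course need tamper resistance and retention rules; the point here is just that the data structure a lawsuit would query is simple.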

Here, to help law deal with pandemics, I was tempted to propose specific rules re info that people must collect and preserve. Yet if courts can get any info they think relevant, why is there ever a problem with courts lacking info to deal with key harms, such as pandemic infection?

The answer is that current law allows a huge exception to its subpoena power. Courts can force you to reveal info that you have already collected, on paper, on a computer, in your head, or in your physical objects. But you usually have no obligation to collect and save info now that the court might want later. As a result, many people and orgs go out of their way to not save incriminating info. For example, firms do key discussions verbally, not recorded, rather than via email. Thus you have no obligation to save spit samples or detailed records of where your phone goes, to help with future pandemic infection lawsuits.

This seems a huge and inconsistent loophole. I could understand if the law wanted to respect a more general right to privacy. Then the court might weigh the value of some info in helping court cases against the social harm from forcing its publication via a subpoena. As a result, it might sometimes block a subpoena even when the info collected would be relevant to a court case.

But I can’t see a reason to eagerly insist on access to info that seems relevant to a court case, and yet put no effort into inducing people to collect and preserve such info beforehand. So I propose that we create a legal process for judging how likely some info, if collected and saved, would be to be subpoenaed, and how valuable it would be in that case.

When info would be valuable enough if collected and saved, then the court should require this. I don’t have a strong opinion on who exactly should bring a suit asking that such info be saved, or who should represent the many who would have to save that info. But one obvious system that occurs to me is to just have courts usually make ex post estimates of info value by the end of each court case, and then use “subpoena futures” prediction markets to make an estimate of that value ahead of time. (And make it legal and cheap to start such markets.)

So, if a subpoena futures market on a type of info estimates its expected court value to be above a standard threshold, then by law that info must be collected and saved. These prediction markets needn’t be huge in number, if they could estimate the average value of such info collected over a large group, which would then justify requiring that entire group to collect the info. Such as everyone in an area who might infect others with a pandemic. If some subgroup wanted to claim that such info was less valuable regarding them, and so they should be excused, why they’d have to create different prediction markets to justify their different estimates.
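The threshold rule above can be sketched in a few lines; the threshold, the info classes, and the market prices below are all made-up illustrative numbers, not features of any real market:

```python
# Hypothetical sketch of the proposed rule: a prediction market estimates the
# expected court value of a class of info, and if that estimate exceeds a
# standard threshold, collecting and saving that info becomes legally required.

THRESHOLD = 1.0  # expected court value, in arbitrary units

def collection_required(market_estimate, threshold=THRESHOLD):
    """True if the market's estimate of average court value,
    taken over the covered group, exceeds the threshold."""
    return market_estimate > threshold

# Hypothetical market prices for two classes of pandemic-related info:
estimates = {"spit_samples": 3.2, "phone_location_logs": 0.4}
for info, est in estimates.items():
    print(f"{info}: collection required = {collection_required(est)}")
```

A subgroup seeking an exemption would, in effect, be arguing for a different `market_estimate` covering just themselves.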

For example, when a pandemic appears, if those who might infect others are likely vouched, then those who might be infected would want to require that first group to collect and save info that could be used later to prove who infected who. So they’d create prediction markets on the likely court value of spit samples and phone location records, and use market estimates to get courts to require the collection of that info.

Compared to my prior suggestion of just having the law directly require that such info be collected, this subpoena futures approach seems more flexible and general. What other harms that we do each other could be better addressed by lawsuits if we could require that relevant info be collected and saved?

(Btw, courts need not estimate info value in money terms. They might instead express the value of each piece of info in terms of its multiple of a “min info unit”, i.e., the value of info where they’d be just on the border of allowing it to be subpoenaed for a particular case.)

Added 7a: As mentioned in this comment, we now have this related legal concept:

Spoliation of evidence is the intentional, reckless, or negligent withholding, hiding, altering, fabricating, or destroying of evidence relevant to a legal proceeding. … The spoliation inference is a negative evidentiary inference that a finder of fact can draw from a party’s destruction of a document or thing that is relevant to an ongoing or reasonably foreseeable civil or criminal proceeding.

My proposal can be seen as expanding this concept to allow a much weaker standard of “foreseeable”. And instead of allowing a presumption at trial, we just require the evidence to actually be collected.


Why Not RFID Tag Humans?

Today, across a wide range of contexts, we consistently have rules that say that if you have a thing out there in the world that can move around and do stuff, you need to give it a visible identifier so that folks near that thing can see that identifier, look it up in a registry, and find out who owns it. That identifier might be a visible tag or ID number, it might be an RFID that responds to radio signals with its ID, or it might be capable of more complex talk protocols. We have such rules for pets, cars, trucks, boats, planes, and most recently have added such rules for drones. Most phones and tablets and other devices that communicate electronically also have such identifiers. And few seem to object to more systematic collection of ID info, such as via tag readers.

The reasoning is simple and robust. When a thing gets lost, identifiers help us get it back to its owner. If a thing might bother or hurt someone around it, we want the owner to know that we can hold them responsible for such effects. Yes, there are costs to creating and maintaining IDs and registries (RFID tags today cost ~$0.15). Also, such IDs can empower those who are hostile to you and your things (including governments) to find them and you, and to hurt you both. But we have consistently seen these costs as worth the benefits, especially as device costs have fallen dramatically over the decades.

But when it comes to your personal body, public opinion seems quite strongly opposed:


My 14 law&econ undergrads all agreed when I assigned this topic on their final exam today. People oppose requiring identifiers, and as face readers are now on the verge of making a new ID system, many want to legally ensure a right to wear masks to thwart it.

Yet the tradeoffs seem quite similar to me; it is just the scale of the stakes that rises. When we are talking about your body, as opposed to your car, pet, or drone, you can both do more to hurt others, and folks hostile to you might try to do more to you via knowing where you are. But if the ratio of these costs and benefits favors IDs in the other cases, I find it hard to see why that ratio would switch when we get to bodies.

Added 5Mar2020: The number you get from an RFID tag need not directly tell you the public name or location of the person behind it. You might instead need a subpoena to get that from the number.
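That added note could work roughly like this sketch, where the registry reveals a tag’s owner only when presented with a court order; all names here are hypothetical:

```python
# Hypothetical sketch: the RFID tag broadcasts only an opaque number. A
# separate registry maps numbers to identities, and unmasks a number only
# for someone holding a valid subpoena.

class TagRegistry:
    def __init__(self):
        self._owners = {}  # tag number -> owner identity (kept private)

    def register(self, tag_number, owner):
        self._owners[tag_number] = owner

    def resolve(self, tag_number, subpoena=None):
        """Without a subpoena, a reader learns nothing but the number itself."""
        if subpoena is None:
            raise PermissionError("court order required to unmask a tag")
        return self._owners.get(tag_number)

registry = TagRegistry()
registry.register("TAG-42", "Alice")
try:
    registry.resolve("TAG-42")
except PermissionError:
    print("blocked without subpoena")
print(registry.resolve("TAG-42", subpoena="court-order-123"))
```

So casual readers of tags get pseudonyms at best, while courts retain the access they already have via subpoena.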


Why Not Hi-Tech Forms?

A half century ago, when people tried to imagine a future full of computers, I’m sure one of the most obvious predictions they made was that we today wouldn’t have to work so hard to fill out forms. Filling out forms seemed then to be a very mechanical task, based on explicit mechanical rules. So once computers had enough space to store the relevant data, and enough computing power to execute those rules, we should no longer need to fill out the most tedious parts of forms.

Oh sure, you might need to write an essay for a school application, or make a design when you ask your homeowner’s association for permission to build a shed. But all that other usual tedious detail, no.

Now this has in fact happened for businesses, at least for standard forms and for big business. In fact, this happened many decades ago. Most of them wrote or bought programs to fill out standard forms that they use to talk to customers, to suppliers, and to government. But for ordinary people, this mostly just hasn’t happened. Oh sure, maybe your web browser now fills in an address or a credit card number on a web form. (Though it mostly gets that wrong when I try it.) But not all the other detail. Why not?

Many poor people have to fill out a lot of forms to apply for many kinds of assistance. Roughly once a year, I’m told, at least. They see many of these forms as so hard to fill out that many of them just don’t bother unless they get help from someone like a social worker. So a lot of programs to help the poor don’t actually help many of those who are eligible, because they don’t fill out the forms.

So why doesn’t some tech company offer a form app, where you give all your personal info to the form and it fills out most parts of most forms for you? You just have to do the unusual parts. And they could have a separate app to give to orgs that create forms, so they can help make it easier for their forms to get filled out. Yes, much of the effort to make this work is more in standardization than in abstract computer algorithms. But still, why doesn’t some big firm do it?
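A toy sketch of such a form app, assuming hypothetical field names and a field map published by the form’s issuing org (the standardization work mentioned above is exactly agreeing on these keys):

```python
# Hypothetical sketch: a personal data store plus a per-form field map.
# The issuing org publishes which of its fields correspond to standard keys;
# the app fills those automatically and leaves the unusual parts to the user.

personal_data = {
    "full_name": "Jane Doe",
    "ssn": "***-**-1234",
    "address": "12 Main St",
}

# Published by the form's issuing org: form field -> standard key
benefits_form_map = {
    "Applicant Name": "full_name",
    "Social Security No.": "ssn",
    "Home Address": "address",
    "Monthly Expenses": None,  # unusual field: user must fill by hand
}

def autofill(form_map, data):
    filled, remaining = {}, []
    for field, key in form_map.items():
        if key is not None and key in data:
            filled[field] = data[key]
        else:
            remaining.append(field)
    return filled, remaining

filled, todo = autofill(benefits_form_map, personal_data)
print(len(filled), "fields auto-filled; left for the user:", todo)
```

The hard part is social, not algorithmic: getting many form-issuing orgs to publish maps against one shared vocabulary of keys.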

I suggested all this to a social worker I know, who was aghast; she didn’t want this tech firm knowing all these details, like her social security number. But if you fill out all these forms by hand today, you are telling it all to one new org per year. Adding one firm to the list to make it all much easier doesn’t seem like such a high cost to me.

But maybe this is all about the optics; tech firms fear looking like big brother if they know all this stuff about you. Or maybe legal liability would fall on these tech firms if a form had any mistakes. Or maybe privacy laws prevent them from even asking for the key info. And so we all suffer with forms, and poor folks don’t get the assistance offered to them. And we all lose, though those of us who are better at filling out forms lose less.


Signaling Gains From Transparency

I said back in February:

For millennia, we humans have shown off our intelligence via complicated arguments and large vocabularies, health via sport achievement, heavy drink, and long hours, and wealth via expensive clothes, houses, trips, etc. Today we appear to have the more efficient signaling substitutes, such as IQ tests, medical health tests, and bank statements. Yet we continue to show off in the old ways, and rarely substitute such new ways. Why?

One explanation is inertia. Signaling equilibria require complex coordination, and those who try to change it via deviations can seem non-conformist and socially clueless. Another explanation is hypocrisy. As we discuss in our new book, The Elephant in the Brain, ancient and continuing norms against bragging push us to find plausible deniability for our brags. We can pretend that big vocabularies help us convey info, that sports are just fun, and that expensive clothes, etc. are prettier or more comfortable. It is much harder to find excuses to wave around your IQ test or bank statement for others to see.

It recently occurred to me that a sufficient lack of privacy would be an obvious fix for this problem. Imagine that it were easy to use face recognition to find someone’s official records, and from there to find out their net worth, IQ scores, and health test scores. In that case, observers could more cheaply acquire the same info that we now try to show off in deniable ways.

Yes, we say we want to keep such info private, but the big efforts most of us go through to show off our smarts, health, and wealth suggest that we doth protest too much there. And as usual, it is less that we don’t know what policies would make us better off, and more that we don’t much care about that when we choose our political efforts.

Added 7a: Of course there may also be big disadvantages to losing privacy, and our evolved preferences may be tied more to particular surface behaviors and cues than to their general underlying signaling functions.


Security Has Costs

Technical systems are often insecure, in that they allow unauthorized access and control. While strong security is usually feasible if designed in carefully from the start, such systems are usually made fast on the cheap. So they usually ignore security at first, and then later address it as an afterthought, which then becomes a crude ongoing struggle to patch holes as fast as they are made or discovered.

The more complex a system is, the more different other systems it is adapted to, the more different organizations that share a system, and the more that such systems are pushed to the edge of technical or financial feasibility, the more likely that related security is full of holes.

A dramatic example of this is cell phone security. Most anyone in the world can use your cell phone to find out where your phone is, and hence where you are. And there’s not much anyone is going to do about this anytime soon. From today’s Post: Continue reading "Security Has Costs" »


Fair Date Reporting Act

A week ago I heard an NPR radio interview with an FTC representative on web and phone privacy. She said the FTC protects your privacy by making sure firms who collect info on our activities can only use it to sell us stuff, but not to decide on hiring, renting, lending, or insuring. I thought: why is this where we draw our line of “privacy”?

Looking up a recent FTC report (quotes below), I see it goes back to the 1970 Fair Credit Reporting Act (FCRA), which required firms that rate you or collect info on you for hiring, renting, lending, or insuring to show you everything your rating is based on, and let you challenge any part of it. And given how completely infeasible it would be to show you all internet info collected about you, or let you challenge any of it, this law basically says that hiring, renting, lending, and insuring decisions must not benefit from the vast increase in info that web/phone tech now creates.

Adverse selection, where the people you least want are most likely to apply, can plague hiring, renting, lending, and insuring. This is a big problem, and many regulations are said to be designed to deal with it. Yet the FCRA clearly makes this hidden info problem worse, by greatly limiting the info on which such decisions can be based.

To see how far this can go wrong, imagine a Fair Date Reporting Act, requiring all dating decisions to be made on the basis of documented information that potential dates can inspect and challenge. You couldn’t reject someone for a date unless you could clearly document that they are unsuitable. You’d end up relying heavily on some very crude indicators, like weight, education, income, and hair color, and enjoy your dates a lot less. And then they’d probably pass laws prohibiting some indicators as unfair, such as weight.

So why are we more willing to mess up decisions about hiring, renting, lending, or insuring, relative to dating? Because we see those deciders as dominating, because they choose to accept or reject us, and we see big firms as evil. Why don’t we similarly restrict the info firms can use to try to sell us stuff? Because we see ourselves as doing more of the choosing there, making us the dominant party.

Added 11p: Imagine that you were required by law to score all job offers on objective verifiable criteria, such as salary and location, and had to take the job that scored highest. How close would that be to slavery?

Those promised FTC report quotes: Continue reading "Fair Date Reporting Act" »
