“Your papers, please” (or “Papers, please”) is an expression or trope associated with police state functionaries, allegedly popularized in Hollywood movies featuring Nazi Party officials demanding identification from citizens during random stops or at checkpoints. It is a cultural metaphor for life in a police state.
When we share public spaces with mobile active things like cars, planes, boats, pets, drones, guns, and soon robots, we are vulnerable to being hurt by such things. So most everywhere on Earth, we require most such things to show visible registered identifiers. (Land also requires such registration, and most smart-phones contain such identifiers.) This system has some obvious advantages:
If such a thing is actually used to harm us, and if we can remember or record its identifier, then we can look it up to find a human we might hold responsible.
If such a thing does not show a visible identifier, we can immediately suspect it is up to no good, and pull away to limit harms.
If you ever lose a registered thing, its finder can find you via the registration system, and return it.
Registration discourages theft, as fully effective theft then requires also changing a registration entry.
The identifier offers a clear shared unique index or “name” to facilitate records and discussion about the item.
Oddly, humans are the mobile active things to which we are usually the most vulnerable, and yet we don’t require humans to show visible registered identities in shared public spaces. As a result, it is harder to tell if a human is allowed to be where we see them, and if they hurt us and run away, then we face larger risks of not finding someone we can usefully hold responsible. Humans can also as a result be more easily lost or stolen.
Because of this problem, a great many organizations require humans who try to enter their spaces to show a registered identity at their entrances. And many of these orgs require that visible ID tags be continually shown within their spaces. Even more orgs (such as stores) require such identifiers on their responsible representatives, even if not on visitors. In fact, most orgs would probably require everyone in their spaces to have IDs if this were cheap; they relent mostly out of fear of extra costs and of discouraging visitors.
Now in addition to direct costs to create, maintain, show, read, and record identities, identifiers do seem prone to other disadvantages:
A) If your car’s license plate is visible, then someone could copy it, put it on a similar-looking car, and do bad things while pretending to be you. Similarly, someone might destroy or change your car license plate to make it look like you were trying to hide your identity.
B) If info about your car were posted publicly in association with its identifier, others might learn things about it and therefore you that you’d rather they didn’t know.
C) Visible identifiers make it easier to discreetly follow the identified thing around, and to collect and share records of where it was when. For example, “tag reader” cameras now regularly record the license plates of cars that pass them, making it easier for police to collect and share records of which cars were where when.
D) Even if only government were allowed to see your registration info, and even if only government could collect and share records of where your identifier had been seen, a corrupt government might use this info against you, and an incompetent government might let others see it.
E) If identifiers could not be easily removed or hidden on special occasions, this might make it harder to mount protests or revolts against a government.
Now for cars, planes, etc., most of the world seems to think that the benefits of identifiers substantially outweigh their costs and disadvantages; this isn’t debated as if it were a close call. The close calls are when the value of the item, or the harm it might do, seems too small to justify identifier costs, such as with model airplanes or children’s bicycles. And regarding humans, many organizations clearly feel that the benefits of identifiers far outweigh their costs.
Yet in most public spaces in the world, we are reluctant to require visible registered identifiers for humans, even though the relative sizes of the costs and benefits of such identifiers seem similar for humans to those in the other cases where we do require them.
Now, yes, part of this is that visible ID badges can just look ugly. But surely a bigger reason is a huge negative cultural association with required visible identifiers on humans. We think of tattoos put on Auschwitz inmates, of movies where Nazi officers say “papers, please”, or of the Bible’s forecast of an anti-christ “beast” who requires everyone to display his number to buy or sell. And many action stories feature heroes who are fugitives on the run from authorities, heroes often thwarted by identity systems.
Lately we’ve seen a lot of progress on biometric techs that try to identify people via their faces, gaits, voices, etc. While still expensive and unreliable (error rates of 1-30%), such tech seems likely to get cheaper and more reliable. And laws, such as those against wearing masks, mostly try to support biometrics, instead of getting in their way. So if we don’t officially adopt some other human identity system soon, we seem likely to stumble into one built on such biometrics.
Such accidental systems seem likely to be substantially less fair and less reliable than a designed system. Even today, facial recognition systems seem less reliable for women, people of color, children, and the elderly. And complex accidental systems seem more likely to let people make them fail just when they are trying to get away with bad things, such as by doing those things in darker places where facial recognition gets harder. Designed systems have a better chance of remaining reliable in especially important unusual situations.
Given this looming threat, rational policy analysts should try to look past the negative symbolism of human identifiers to ask: would it be better to deliberately create an organized identity system, rather than waiting for the indirect effects of cheaper biometrics? Might a deliberate system reduce the pressures that are driving the development of an accidental biometric system?
Over the years I’ve noticed several policy options (private law, law vouchers) that are made easier via required registered human identifiers. I went looking for deeper analysis of the tradeoffs here, but couldn’t find much in a quick search, so I’ve tried to do a quick analysis myself, which I now present here. I invite those with relevant expertise to correct or refine my efforts.
My tentative conclusion is that there seem to be cheap ways to greatly limit most of the potential disadvantages of human identifiers, if only we can get past their negative cultural associations. So let me first quickly outline what I see as a reasonable proposal, and then go through how it can limit disadvantages.
Proposal
Imagine that a new law required each person to sign up with an identity org. This org issues them RFID tags, and they must have at least one such tag on their person whenever they are outside of their home. Such tags do not have to be put inside their body, though that is allowed. Each tag encodes at least one, and perhaps a great many, N bit strings that identify this person. There is a simple free “public option” identity org, which issues each client many cheap tags per year (via verified in-person meetings, perhaps passive UHF tags, now ~$0.15 ea.). Or you can pay more for fancier tags and orgs. (Tags and tag readers may be included within smart phones. There is an RFID design tradeoff between cheap tags and cheap readers; I’m not clear which is best.)
When a tag reader issues a standard WhoIsHere? signal, all tags that hear this query must respond promptly by broadcasting a HereIAm signal containing one of these N bit strings. This HereIAm signal is audible to all close enough tag readers. Tags need not respond with the same N bit string that they used in response to recent WhoIsHere? queries. (They also need not respond if they have already responded to such a request in the last X seconds, unless perhaps they have moved more than Y meters.) Some TBD mechanism limits response collisions, such as responding on different frequencies or at random response delays. There are standards on the signal strengths that tags produce and can hear. Currently unused tags are to be kept at home or in a signal-sealed container.
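To make this protocol concrete, here is a minimal sketch in Python of how a tag might answer WhoIsHere? queries. The parameter values (N, X, the collision-avoidance delay) are illustrative placeholders, since the proposal leaves them TBD:

```python
import random

# Illustrative placeholder parameters; the proposal leaves these TBD.
N_BITS = 128          # length of each one-time identity string
X_SECONDS = 30        # minimum gap between a tag's responses
MAX_DELAY_S = 0.05    # random delay to limit response collisions

class Tag:
    """A minimal sketch of a tag answering WhoIsHere? queries."""

    def __init__(self, strings):
        self.strings = list(strings)   # pool of one-time N bit strings
        self.last_response_time = None

    def on_who_is_here(self, now):
        # Skip if we already responded within the last X seconds.
        if (self.last_response_time is not None
                and now - self.last_response_time < X_SECONDS):
            return None
        self.last_response_time = now
        # Respond after a small random delay, with a fresh string.
        return {"delay_s": random.uniform(0, MAX_DELAY_S),
                "here_i_am": self.strings.pop()}

tag = Tag(strings=[random.getrandbits(N_BITS) for _ in range(10)])
resp = tag.on_who_is_here(now=1000.0)   # responds with a fresh string
again = tag.on_who_is_here(now=1010.0)  # within X seconds: stays silent
```

Note that real passive UHF tags use hardware anti-collision schemes rather than software timers; this sketch only shows the behavioral contract.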
If cameras or other detectors suggest that there is a human in the local space, and yet no tag at their location responded to a WhoIsHere? query, that is a legal violation and an immediate warning sign. After perhaps a quick check about this sign’s reliability, those who manage that local space may issue a warning to others nearby. And individuals with direct access to this indicator may take immediate precautions. The person without a responding tag may be subject to immediate restraint and serious legal consequences if they did this on purpose or negligently. To encourage detection of such violations, a bounty might be paid to those who discover and announce them.
If a tag reader does get a string from a tag, then it can submit a HowSafe? query to a standard identity name-server. This query includes this string, the time and place of reading, and an identifier of that tag reader. This HowSafe? query is then forwarded to the registration organization who issued that string, who then promptly responds (back through the identity server) with a few bits describing the legal status of that person at that place and time.
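The routing just described might be sketched as follows. The class names and the few-bit status codes are hypothetical, chosen only to show the flow from reader to name-server to issuing org:

```python
# Hypothetical sketch: a name server that knows only which identity org
# issued each string, and forwards HowSafe? queries to that org.

class IdentityOrg:
    def __init__(self, name):
        self.name = name
        self.status = {}   # string -> few-bit legal-status code

    def how_safe(self, string, time, place, reader_id):
        # Unknown strings get an all-flags "suspicious" code (arbitrary choice).
        return self.status.get(string, 0b1111)

class NameServer:
    def __init__(self):
        self.org_for_string = {}   # string -> issuing identity org

    def register(self, org, string, status_bits):
        self.org_for_string[string] = org
        org.status[string] = status_bits

    def how_safe(self, string, time, place, reader_id):
        org = self.org_for_string.get(string)
        if org is None:
            return 0b1111          # unknown string: warning sign
        return org.how_safe(string, time, place, reader_id)

org = IdentityOrg("public-option")
ns = NameServer()
ns.register(org, string=0xA1B2, status_bits=0b0000)  # 0 = no flags set
assert ns.how_safe(0xA1B2, time=0, place="plaza", reader_id="r1") == 0
```

A real name server would only map strings to orgs, not hold status itself; that split is preserved here.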
Another warning sign is if it seems that two tags on the same person respond with incompatible responses. That also suggests something isn’t right.
What exactly these few bits encode is a key design choice, discussed more below. They might encode if that person has outstanding warrants, or if they are an ex-con. They might encode what legal jurisdiction would cover disputes with that person. For example, is this person a foreigner, or a foreign diplomat immune to many local laws? (In a private law world, what law are they signed up with?) They might also encode how well insured that person is to pay damages should a court judge against them. (Do they have a law voucher?) It should probably encode if HowSafe? queries about that person have recently been received from apparently reliable readers at incompatible space-time locations, calling into doubt all of that person’s recent tags.
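One illustrative packing of those few bits, with flag names invented for this sketch rather than taken from any spec:

```python
# One possible packing of the "few bits" a HowSafe? response might carry.
# Flag names and positions are illustrative assumptions.
FLAG_WARRANT     = 1 << 0  # outstanding warrants
FLAG_EX_CON      = 1 << 1  # prior conviction
FLAG_FOREIGN_LAW = 1 << 2  # covered by a foreign or private legal jurisdiction
FLAG_UNINSURED   = 1 << 3  # no voucher/insurance to pay court-judged damages
FLAG_CONFLICTING = 1 << 4  # recent queries from incompatible space-time spots

def describe(status):
    """Expand a status code into human-readable flag names."""
    names = {FLAG_WARRANT: "warrant", FLAG_EX_CON: "ex-con",
             FLAG_FOREIGN_LAW: "foreign-law", FLAG_UNINSURED: "uninsured",
             FLAG_CONFLICTING: "conflicting-sightings"}
    return [n for flag, n in names.items() if status & flag] or ["clear"]

assert describe(0) == ["clear"]
assert describe(FLAG_WARRANT | FLAG_UNINSURED) == ["warrant", "uninsured"]
```

Five flags fit in five bits, which keeps each HereIAm-triggered response short, a property that matters later when considering tracking.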
Identity orgs are required to privately record all their interactions, and to promptly update the info that they use to respond to such queries. They face large penalties if they are ever caught giving false responses. Large bounties paid to those who prove such violations could encourage their detection.
Other queries besides HowSafe?, asking for more info or for agreement to some offer, might be sent with the N bit string to the identity org, who could then forward them to the person for approval. For example, a query may ask for a facial recognition code to check against a just-obtained facial image of the person there. Or a query may ask for payment to allow their entry into some space.
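A hypothetical sketch of this forwarding-for-approval step; the query kinds and the approval policy are invented for illustration:

```python
# Sketch of forwarding a richer query (e.g., a facial-recognition check or
# an entry-payment request) through the identity org for explicit approval.

class Person:
    def __init__(self, approves):
        self.approves = approves     # policy: which query kinds to allow

    def consider(self, query):
        return query["kind"] in self.approves

def forward_query(directory, string, query):
    """The identity org forwards the query; only approved kinds get answers."""
    person = directory.get(string)
    if person is None or not person.consider(query):
        return {"status": "denied"}
    return {"status": "approved", "kind": query["kind"]}

alice = Person(approves={"entry-payment"})       # allows payments, not face codes
directory = {0xBEEF: alice}                      # org's string -> person map
assert forward_query(directory, 0xBEEF, {"kind": "entry-payment"})["status"] == "approved"
assert forward_query(directory, 0xBEEF, {"kind": "face-code"})["status"] == "denied"
```

The key property is that the reader never learns who it is talking to; it only learns whether its request was approved.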
Finally, in response to a lawsuit or crime, a court might order the identity org to reveal more info, which may allow the questioning, investigation, or even arrest of that person. The fact of this future possibility can be key to making people feel safer around tagged strangers nearby.
Disadvantages
Now let’s go through the above listed possible disadvantages of identity systems, to see how well they can be limited within this proposal.
A) If someone tries to reuse one of the identifiers that your tag has previously given out, but uses it at a time or place incompatible with what you or your identity org have arranged, then it can be immediately flagged as probably false. Your identity org can also flag it as at substantial risk of being false if the time and place of a resulting HowSafe? query disagrees with other recent HowSafe? queries, or with where you have been reporting yourself to be. (Tag thefts must be reported promptly.) All of which makes it hard for someone to fake being you.
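One simple way an identity org might detect such incompatible sightings is to bound the travel speed implied by pairs of recent HowSafe? queries; the speed threshold here is an arbitrary placeholder:

```python
import math

MAX_SPEED_MPS = 90.0   # illustrative bound (~320 km/h); a real org would tune this

def implied_speed(sighting_a, sighting_b):
    """Speed needed to be at both sightings; each is (t_seconds, x_m, y_m)."""
    (ta, xa, ya), (tb, xb, yb) = sighting_a, sighting_b
    dist = math.hypot(xb - xa, yb - ya)
    dt = abs(tb - ta)
    return float("inf") if dt == 0 else dist / dt

def flag_incompatible(sightings, max_speed=MAX_SPEED_MPS):
    """Flag a person's recent tags if any pair of sightings is unreachable."""
    return any(implied_speed(a, b) > max_speed
               for i, a in enumerate(sightings)
               for b in sightings[i + 1:])

# Two readers 50 km apart, 1 minute apart: implies ~833 m/s, so flag.
assert flag_incompatible([(0, 0, 0), (60, 50_000, 0)])
assert not flag_incompatible([(0, 0, 0), (3600, 50_000, 0)])  # an hour apart: ok
```

A production check would also weigh the reliability of each reader, as the post suggests, before casting doubt on all of a person’s recent tags.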
Simple passive RFID tags can only be hacked with great difficulty and direct physical contact. But more complex tags might be easier to hack. Someone might also steal or break your RFID tag, or plant a bad RFID tag on your person, putting you at risk of being temporarily flagged as suspicious. You might perhaps put a spare RFID tag on your person that is usually off, but turns on if it has been too long since it heard from your main one.
B) If you only ever use each of your N bit strings once, and if observers can’t discern the code that your identity org uses to generate them, then observers can’t match them against each other or other data. And even if each string is used a few times, that still may say little about their owner; they don’t directly leak much info about you.
Each identity org will need to have a pool of N bit strings from which it can issue, and we’d like it to be hard to infer much about the org or person from just looking at a string. The identity name server will have to know at least an identity org to which each string maps, and anyone who compromised its security would gain that clue. But identity orgs might privately exchange strings, and forward query requests to each other. And getting further clues would require compromising the security of your identity org, and such orgs could compete on making that seem hard and unlikely.
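One hypothetical way an org could generate a pool of one-time strings that outside observers cannot link, while the org itself can still recognize them: derive each string from a secret key and a per-person counter. HMAC-SHA256 is just one possible choice of derivation:

```python
import hashlib
import hmac

def issue_strings(org_secret, person_id, start, count, n_bits=128):
    """Derive `count` one-time strings for one person from a secret key.

    Without the key, the outputs are unlinkable pseudorandom values;
    with it, the org can regenerate any string to answer a lookup.
    """
    out = []
    for counter in range(start, start + count):
        msg = f"{person_id}:{counter}".encode()
        digest = hmac.new(org_secret, msg, hashlib.sha256).digest()
        out.append(int.from_bytes(digest[: n_bits // 8], "big"))
    return out

key = b"org-secret-key"                # held only by the identity org
batch = issue_strings(key, person_id="alice", start=0, count=5)
assert len(set(batch)) == 5            # all distinct
# The org can recompute any string later, e.g. to route a HowSafe? query.
assert issue_strings(key, "alice", 2, 1)[0] == batch[2]
```

This also shows why compromising the name server alone reveals little: the string-to-person mapping lives only in the org’s key and records.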
C) Observers may try to coordinate to collect and share sequential HereIAm tag responses with camera footage and other data, in an attempt to identify your path in space-time. If HowSafe? queries contained very few bits on average, if the X second duration required between consecutive responses were long, and if people were closely spaced and moving around substantially, then tracking via HereIAm responses which each gave a new unique N bit string would by itself typically quickly fail, and lose the trail. However, if these conditions did not hold, or if this data could be combined with other surveillance data, longer trails might be more reliably identified.
Such trails could probably be identified anyway with sufficient shared other surveillance data, even in the absence of any HereIAm responses. So a key design consideration for this whole system is how much it is worth extending the X duration, and reducing the average bits per HowSafe? response, in order to increase the chances that identified trails are broken into more smaller chunks. Such efforts only make sense if they are sufficiently pivotal; they are less worth the bother if trails are either very likely, or very unlikely, to be identified without such efforts.
D) In this system design, unless you choose a government-managed public-option identity org, the government does not have access to your key identity info, and cannot get that info without a court order. Yes, the government might pressure identity orgs in private to give up your info. But that risk will long remain true for all organizations that hold info on you.
E) If people want to mount a protest, or attempt a coordinated revolt, they can just choose to stop using tags during crucial periods. Even implanted tags might be made inoperable by covering them, if they sit in coverable parts of the body. Then the government might see that a crowd of people is protesting or revolting, but could not use tags to identify them.
So in sum, if we don’t want the expensive unreliable unfair identity system that will result accidentally from falling costs of biometrics, we should consider creating a deliberate system of visible registered identifiers on humans. Such a system might still protect a lot of privacy, when that’s otherwise possible, while achieving most of the gains that identity systems have long provided for cars, planes, pets, etc.