Consider a fine-insured-bounty (FIB) crime law system such as I outlined here. All (but one) crime is punished officially by fines, everyone is fully insured to pay large fines, and bounty hunters detect and prosecute each crime. In a FIB system, we collectively decide the fine and bounty level for each crime, and manage a judicial system which decides individual cases.
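For concreteness, the pricing logic the proposal implies can be sketched as a toy model (the function name, probabilities, fine level, and loading factor are my own illustrative assumptions, not part of the original post):

```python
# Toy model of FIB insurance pricing. Assumption (not from the post): the
# insurer charges the expected fine payout times an overhead loading factor.

def fib_premium(p_crime: float, fine: float, loading: float = 1.1) -> float:
    """Annual premium: probability of conviction times fine, times overhead."""
    return p_crime * fine * loading

# A client the insurer judges to have a 0.1% annual chance of a $500,000 fine
# pays a modest premium; a client judged ten times riskier pays ten times more.
low_risk = fib_premium(0.001, 500_000)
high_risk = fib_premium(0.01, 500_000)
print(low_risk, high_risk)
```

The point of the sketch is just that premiums scale with the insurer's estimate of client risk, which is what makes the premium itself a signal in the discussion below.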
Yes, and I think it very much does impose substantial harms (e.g., I think the criminalization of many types of drug use was largely the result of this kind of effect). However, I think this effect would be substantially amplified under your proposal.
Under the current system, when I vote against imposing a huge punishment for, say, sexual harassment, I can say, "While I'd never do that, I'm not sure people who make that mistake should be punished more than such and such."
However, in a system where I'm only setting the consequences for myself, any reluctance to set those penalties high reflects directly on my belief that I might commit those offenses (or act in ways that make it plausible I did commit them), which makes the signal hugely less noisy.
It seems to me the superiority of this system (in a utility sense) rests on an implicit assumption that individuals are rational and report their likely future behavior accurately. Yet most crime, especially crime driven by drug and alcohol addiction, occurs as a result of severe failures of rationality, e.g., people doing stupid things that they have good reason to believe are net harmful. As such, it seems quite plausible that this system would end up imposing a substantial utility cost as people agree to quite severe penalties to gain relatively small savings in insurance costs.
I'm particularly worried about the signaling implications of this system. For instance, I can easily see a race to the extremes, as people agree to absurd penalties to signal that they aren't the kind of person who is likely to engage in that kind of behavior (especially for sexually related crimes).
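The worry about irrational penalty choices can be made concrete with a toy expected-cost comparison (all figures below are made-up illustrations, not claims about real contracts):

```python
# Toy comparison of two insurer-client contracts. Assumption: the client
# chooses by expected dollar-equivalent cost, using their *believed*
# probability of offending rather than the actual one.

def expected_cost(premium: float, p_offend: float,
                  penalty_disutility: float) -> float:
    """Premium plus expected disutility of the contracted penalty."""
    return premium + p_offend * penalty_disutility

# Harsh-penalty contract: $100/yr cheaper, but a far worse penalty if triggered.
# A client who believes their offense risk is 1-in-100,000 sees it as a bargain:
harsh_believed = expected_cost(900.0, 0.00001, 2_000_000)
mild_believed = expected_cost(1000.0, 0.00001, 200_000)

# But if addiction-driven lapses make the actual risk 1-in-100, the harsh
# contract costs far more in expectation than the $100 it saved:
harsh_actual = expected_cost(900.0, 0.01, 2_000_000)
mild_actual = expected_cost(1000.0, 0.01, 200_000)
print(harsh_believed, mild_believed, harsh_actual, mild_actual)
```

Under the client's own beliefs the harsh contract looks cheaper, yet at the higher actual offense rate it is many times more expensive, which is the utility cost described above.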
You're right; I misread this as though insurers would be limited to the sorts of enforcement actions that a normal contract would entail (i.e., upper-bounded by bankruptcy). I still have some reservations about whether this idea works, but the specific objection I gave was incorrect.
You may be thinking of what happens if someone is "fully insured". Here, the insurer-client contract can specify any of a wide range of negative consequences for the client should the insurer have to pay out on their behalf.
You can't have insurance against criminal conviction, because that would remove your incentive not to commit crimes. This whole system is predicated on such insurance, so it doesn't work.
> This somewhat reverses my prior stance on blackmail.
Wha...? Do you take back the checkmate?
Wouldn't your same postulated signaling motive induce them to lobby for severe punishment of criminals under our current system?