Suppose you tell a close friend a secret. You consider them trustworthy, and don’t fear for its release. Suppose they request to tell the secret to a friend of theirs who you don’t know. They claim this person is also highly trustworthy. I think most people would feel significantly less secure agreeing to that.
If you don't trust your friend's suggestion that you should trust his friend, then it seems you think your friend lacks this insight.
Yes, you're right. (I think I slighted your stipulation that your friend recommends the further disclosure because, in my experience, that doesn't much happen. But I trust your account of social practices more than mine, as you seem a more social being.)
I think the solution goes back to construal-level theory (or do I just bring everything back to that?) A modal mismatch ( http://tinyurl.com/6pt9eq5 ) occurs when your friend assesses her friend's concern for your welfare. Since it's her (presumably close) friend, she makes a near-mode assessment. But since your friend's friend isn't your friend, you make a far-mode assessment of the second-order friend.
The far-mode assessment yields the "insight," which involves general considerations about loyalties. Your friend, mired in near mode (it's her close friend), fails to discern any specific facts suggesting that the second-order friend would betray you.
It seems to me we tell secrets to our friends because the immediate benefit (a feeling of relief at having voiced the secret out loud) feels greater than the potential risk (that the friend will tell others). But there is no benefit to the original secret-teller if their friend shares the secret with another person. So why would I want my friend to tell my secret? It offers me no additional value, and only risk of harm.
Seems to me this conversation suffers from a lack of a definition of trust. A few things one could mean by trust include:
1) Expectation of honest dealing (they won't stiff you like strangers on ebay might).
2) Expectation of non-betrayal (they won't rat you out to the cops if you buy drugs from them, won't back out of buying a share in a condo you all agreed to go in on together).
3) Expectation of not acting to undermine your interests (spreading your gossip, trying to steal your girlfriend).
Seems to me that transitivity decreases as we go down the list: trust is fairly transitive for 1 and very non-transitive for 3. I hypothesize this has to do with the difficulty both of checking for compliance (and hence of maintaining incentives to be trustworthy) and of coordinating goals over long social networks.
I mean, in 1 your friend can easily arbitrate any defection and punish it accordingly, so the incentives to be trustworthy are high and no conflicting trust issues are likely to come up. In 2 and 3, on the other hand, it is much harder to incentivize trust, and the interests of a friend (let alone a friend's friend) might turn out to conflict with yours.
Realistically, the way we handle conflict of interest in cases 2 and 3 between various people (two people who want the same girl, a sudden need to lend someone else money you have agreed to put in on a condo) is by coordinating with the friends concerned and agreeing on a common solution. If we extended our trust in cases 2 and 3 indefinitely the cost of checking for conflict would be overwhelming.
Summing up, there seems to be a plethora of potential causes for subtransitivity of trust:
0) gossip value for originator
1) reduced friendship-signaling benefit for originator
2) Katja’s sampling-bias effect
3) exponential explosion of potential betrayers
4) reduced chance of attribution/punishment
5) reduced incentive (and increased cost) of permission-seeking
6) reduced concern for originator's welfare
7) trusting actions vs. trusting judgement
Why doesn’t vouching by a trusted friend eliminate most of these? First, there is point 7. But note that even if we rashly assume that originator trusts friend’s *judgement* as much as her own, 0 & 1 still give originator more incentive to share directly than indirectly. And a bit of reflection suggests that 2–6 all describe factors which friends (however *trustworthy*) will tend to *appreciate* less than originators, simply by virtue of their role.
Doesn't seem horrible for a rule of thumb.
Two explanations for quadratic scaling in professional contexts come to mind:
1. The probability that the nth most trustworthy person in the group spills the secret scales linearly with n.
2. The perceived cost/benefit of keeping the secret changes with n in such a way that, to first order, *everyone* in the group is n times as likely to spill the beans.
Note that if both are true simultaneously, you end up with cubic scaling, so if quadratic scaling is observed in practice, at least one of these factors, I'd guess #1, does not really come into play.
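The scaling claim can be checked with a quick sketch. This is my own illustration, not something from the comment: treating leaks as independent events with a small per-person rate `c` (all names and numbers here are assumptions for illustration), mechanism 1 alone and mechanism 2 alone each give roughly quadratic growth in the expected number of leaks, while combining them gives cubic growth.

```python
# Sketch: expected number of leaks under the hypothesized mechanisms,
# assuming independent leaks with a small per-person rate c. For small c,
# the expected number of leaks approximates the probability the secret escapes.
#
# Model "1":    the i-th most trustworthy of n people leaks with probability c*i.
# Model "2":    every one of the n people leaks with probability c*n.
# Model "both": the i-th person leaks with probability c*i*n.

def expected_leaks(n, c=1e-4, model="1"):
    if model == "1":
        return sum(c * i for i in range(1, n + 1))   # ~ c*n^2/2: quadratic
    if model == "2":
        return sum(c * n for _ in range(n))          # = c*n^2: quadratic
    return sum(c * i * n for i in range(1, n + 1))   # ~ c*n^3/2: cubic

for n in (10, 20, 40):
    print(n, expected_leaks(n, model="1"),
          expected_leaks(n, model="2"),
          expected_leaks(n, model="both"))
```

Doubling n roughly quadruples the expectation under either single mechanism, but multiplies it by roughly eight when both operate at once, which is the cubic case the comment describes.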
"Trust is a moralistic concept." Is it? Notionally, yes. Practically, I'm less sure. Perhaps a distinction is needed between trusting someone to act in our best interests, trusting someone's judgement, and trusting someone to behave in an expected manner. For example, one may trust a bank in a way one would be disinclined to trust a close friend, simply because the bank is bound by laws and the need to stay in business. But one may trust a close friend with a secret because they share the reciprocally reinforced mutuality Katja mentioned. Even though the ability to model expectations of the trusted party plays a major role in both, they are very different.
I'm no longer convinced that trust can be accurately reduced to a single concept. How we trust someone seems as important as whether we trust them. Near trust is perhaps dominated by emotion and immediate experience, whereas far trust is perhaps dominated by mechanics and social expectations: two different instantiations of the social contract.
As a caveat, I'm no economist and realize I may be using the terms near and far (the technical meanings of which I was unaware of until I began reading this blog) incorrectly. Nevertheless, hopefully I got my meaning across ungarbled.
I feel like if I tell a friend a secret, and let him tell a friend so long as his friend promises to keep it secret, it's highly likely that his friend will see nothing wrong with telling yet another friend so long as he promises to keep it a secret etc. By telling my friend that he can tell others so long as they promise to keep it secret, I'm implying that it's still secret if you do that, so his friends can do it too.
Also, if I tell n friends, then n people know. If I allow each of my friends to tell n friends, n^2 people know. This is substantially more people, and it's proportionally less likely to stay a secret.
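The arithmetic in that last step can be made concrete. A minimal sketch, with my own illustrative numbers, assuming each person who knows the secret leaks it independently with probability `p`:

```python
# If each of k people who know a secret leaks it independently with
# probability p, the secret survives only if nobody leaks.
def survival(k, p=0.05):
    return (1 - p) ** k

n = 5
print(survival(n))      # n friends know
print(survival(n * n))  # each friend tells n more: roughly n^2 know
```

With p = 0.05, survival drops from about 0.77 with 5 knowers to about 0.28 with 25, so the second generation of telling dominates the risk.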
I don't know if there's anything to it beyond a catechism, but I recall from Clancy novels that the rule of thumb was that the probability of a secret being exposed is proportional to the square of the number of people who know it.
Define 'friend' as someone to whom we voluntarily, and in general, offer more information than is strictly necessary to establish a positive reputation. A social reputation is a probabilistic generalization about good behavior in the future, and thus about novel scenarios (with respect to the friends' shared history).
Define 'others' as those to whom we, in general, expose only enough information to establish a positive reputation for the purpose of exchanging tradable entities.
So a friend represents an over-supply of information, and therefore an under-pricing of information. The under-pricing implies a consumer surplus, and a producer deficit. A friendship exists when a two-way over-supply of information coincides with a two-way net benefit from this over-supply. How this could be is interesting, but simply because this relationship does not exist between the first person and the friend of a friend, the trust cannot be transitive.
Another explanation comes from looking at the situation in terms of information. What does the friend of a friend receive, in pure information terms? A copy of a copy. That is the first person's point of view. From the friend's PoV it is perceived as only a single copy (that is, they have the original). Therefore the problem is information fidelity, not trust. You can't assume that the content of the secret is identical in the mind of the friend of a friend and in that of the first person.
The friend experiences the revealing of the secret not as an info transfer, but as reading from a shared memory space, as though it were public content. For the first person, it is perceived as a transfer from one private memory space to another. Between friends, this perception exists both ways. This two-way asymmetry of perception is the basis of trust, and hence of friendship.
Also, what establishes trust in the first place? If trust is learnt behavior, rather than rational calculation, then transitive trust is impossible. Transitive trust would come too close to the definition of reputation I suggested above: a generalized probability of pro-social behavior. Friends of friends cannot be considered quasi-friends, only information consumers (otherwise the outlook is utopian). Learnt behavior implies an energy-minimization state. Why is, or how can, energy be saved by trusting? In other words, what is the point of having friends?
Strictly off the top of my head, is it possible there are two types of trust here? Trusting a friend not to repeat a secret goes to their trustworthiness with respect to respecting your wishes. Trusting them to select a third party who respects your wishes or confidentiality goes to trusting their judgment, not their willingness to respect your wishes. Put a slightly different way, one's trust in a friend is validated by, at base, one's judgment. The friend's assessment of his or her friends is a step removed from one's judgment, and so less valued.
Additionally, your selection of a trustworthy friend and their selection of a third party willing to keep your secrets are very different things. In the first instance, there is a direct relationship. In the second, the relationship, and therefore the responsibility, is derived and diluted. I think most people, when they consider how they would act in a similar circumstance, realize they are more likely to keep a secret of a friend than of a friend's friend.
Finally, consider the cost-benefit analysis here. When you tell a friend something, the risk of their disclosing the confidence is mitigated by the satisfaction we receive in sharing the secret. There is no related benefit to us when our friend passes the story along.
"One possible explanation is that we generally expect the people we trust to have much worse judgement about who to trust than about the average thing."

I think this should be modified to: we trust friends more in actions than in judgements, which doesn't sound so implausible. Being a friend is mostly about trusting actions (you believe that they will do what they say, in this case not tell people the secret). It's not that judgements are less trustworthy, though: there are just two dimensions. There might be other people who you don't know personally but whose judgements you trust, at least in some domains (a favourite restaurant critic?).
It seems to me that you already answered the last question yourself: friends have the right incentives to cooperate (with you) while friends of friends don't. But friends will want to be trustworthy to all their friends, not just to you. So they ask for permission because it benefits them, and self-interestedly advise that it's a good idea.
This is an interesting question, but I worry that the model conflates two ideas. If I retell a secret, that is a bad signal about the importance of the secret itself, irrespective of whether trust is transitive. For example, if I tell two friends independently, I may evaluate the probability of failure as 1-(.99*.99). But if I also *tell them* that I have told my secret to two people, each of those .99s drops, and the chance of failure goes way up.
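The failure number in that example can be computed directly. A small sketch (the helper name and the second set of reliabilities are my own illustrations), assuming the confidants act independently:

```python
def p_leak(reliabilities):
    """Probability that at least one confidant leaks, assuming independence."""
    p_all_keep = 1.0
    for r in reliabilities:
        p_all_keep *= r
    return 1 - p_all_keep

print(p_leak([0.99, 0.99]))  # the commenter's 1 - (.99 * .99), about 0.02
print(p_leak([0.90, 0.90]))  # if knowing others were told erodes each .99
```

Eroding each confidant's reliability from .99 to .90 raises the failure probability from about 2% to 19%, which is the effect the comment is pointing at.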
To test transitivity, then, we need to eliminate the retelling aspect. How about this way: your friend recommends someone you don't know and says you can trust them (with a secret, to watch your house, to hold your money, whatever...). How much trust do you place in this third person? Personally, I say quite a lot.
Trust is near?
Is it? I was surprised Katja didn't explore the near-far implications.
I think trust is far; isn't it? 1) We trust people for the long term. 2) Trust is a moralistic concept. 3) Pro in general is far, whereas con is near. ( http://tinyurl.com/7yqe7zp )
On the other hand, we trust those near us and distrust those who are distant.
How to resolve this conflict of "intuition"? I would argue that who we trust, far or near, is beside the point. The criterion is the context in which we typically apply the concepts of trust and distrust. In trust we look to the future; distrust ends the transaction.
In all, I think the established findings for pro and con in general should carry the greatest weight.
Anyway, this is relevant for Katja's explanation. If, as I think, trust is far and distrust is near, then our bias should be to trust a friend's friend excessively (in comparison to the trust accorded to a friend). This would support my position that friends don't over-endorse their friends; on the contrary, they would underestimate the trustworthiness of their own friends with another friend's secret. (This may seem unintuitive, but remember that this involves abstracting from the incentive-based effects, which predominate.)
In many situations, an important part of trusting your friend's friend is asking whom your friend is trying to help: you, or their other friend?
For example, your friend says that his friend, the contractor, can do a job for you and that he is very good at what he does. Is your friend doing this for your benefit, or for the benefit of the contractor friend, who needs a job?
Trusting your friend's friend can ruin even your trust in your friend. If the contractor does a bad job, you lose on that and in addition you will lose at least some of your trust in your friend.