The requirement that all subordinates publicly support the plan once the decision is made is entirely logical: they will be responsible for motivating their own subordinates to actually execute it, and it would put those subordinates in a lose-lose position to know that the Boss and the Boss's Boss disagreed. Which is not to say that accurate agreement/disagreement records can't be kept within that higher echelon, tracked against outcomes, and used as an input to performance evaluations, bonuses, and promotions.
I suspect that you're up against an even bigger challenge than is apparent. For example, our elections are secret ballot and the outcomes clearly matter to everyone, so they logically SHOULD be strict outcome games. Yet even a cursory look at voter behavior (and politician behavior) suggests that both prioritize being "wrong strong" (being seen as part of the majority, even when that majority made a mistake) over staking out unpopular positions on the conviction that they'll produce better outcomes, which makes elections resemble consensus games.
The penalty for being responsible for poor outcomes seems smaller than the penalty for being "not a team player," perhaps because the majority is better able to diffuse responsibility and mutually defend its members from accountability. Even when preference cascades finally flip the majority to the better policy, the early dissidents against the worse policy are rarely rewarded for having been right ahead of the crowd. More often they are still damaged by the retaliation they suffered before the cascade; at best, they are belatedly restored to good standing, having gained nothing over those who supported the worse policy and flipped only once the shift was clear.
The diffusion of responsibility, alignment with power, and being in the majority all incentivize people to "go with the flow" rather than optimize for good outcomes. If the decision turns out well, being on the bandwagon accrues both individual and collective benefit. If it turns out badly, everyone shares the blame, which dilutes it to the point of irrelevance: no single individual pays a reputational cost (unless they become a scapegoat).
Even if you had a crystal ball, without sufficient support for your perspective, sometimes there's no incentive to act or speak up. In most organizational settings, there's no way for a person to internalize a reward for being contrarian or unpopular but correct. In fact, being unpopular and correct is a dangerous combination that can leave you much more vulnerable to punishment if your presence threatens those who are popular but incorrect.
It's rare to find the willingness and the resources to run parallel experiments upon discovering significant disagreements about goals and strategies, but it can still be beneficial to surface those disagreements to inform a decision. The problem is, why should anyone register disagreement if they know there's no individual upside, only reputational risk?
An anonymous survey or suggestion box could address pluralistic ignorance, but you'd still need an additional step to transform compelling survey data into actual coordination.
As it happens, I'm building a tool that can surface meaningful preference signals, identify critical masses of support for different ideas and proposals, and enable pluralistic knowledge dissemination inside an organization, to facilitate more rational decisions and counter the effects of political or bureaucratic dysfunction. It's called spartacus.app. I'd love to hear your impressions.
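The missing coordination step could plausibly be a conditional-commitment threshold: nobody's dissent becomes visible unless enough peers have registered the same dissent, so the first mover takes no reputational risk. A minimal sketch of that idea (the class, names, and threshold rule are my own illustrative assumptions, not spartacus.app's actual design):

```python
# Sketch of a conditional-commitment ("I'll say it if N others do") poll.
# All names and the threshold rule are illustrative assumptions, not the
# actual design of spartacus.app.
from collections import defaultdict

class ConditionalReveal:
    def __init__(self, threshold: int):
        self.threshold = threshold        # critical mass needed to reveal
        self.pledges = defaultdict(set)   # proposal -> set of supporter ids

    def pledge(self, member_id: str, proposal: str) -> bool:
        """Privately register support; True once critical mass is reached."""
        self.pledges[proposal].add(member_id)
        return len(self.pledges[proposal]) >= self.threshold

    def revealed(self) -> dict:
        """Only proposals at critical mass ever become visible."""
        return {p: sorted(ids) for p, ids in self.pledges.items()
                if len(ids) >= self.threshold}

poll = ConditionalReveal(threshold=3)
for member in ("ana", "bob", "chen"):
    poll.pledge(member, "pause project X")
print(poll.revealed())  # {'pause project X': ['ana', 'bob', 'chen']}
```

Until the threshold is hit, a lone dissenter's pledge stays private, which removes the "no individual upside, only reputational risk" problem described above.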
I ran a prediction market within a large investment management firm. Like the examples elsewhere, it failed to replace large consensus-building meetings (with their pre-meetings etc., just as brilliantly described by Robin). The real world is much more complex than what can be captured by a simple true/false prediction, even across multiple questions. Most projects are path-dependent: they will fail without 100% buy-in from the troops, and most will fail even with that support. An organization (or capitalism) succeeds when it generates projects with real support and then relies on selection to leverage and harvest the ones that succeed.
Many years ago when we had Prophit at Google, I became very impressed by the potential for an internal prediction market. But I knew it wouldn't last. There was too much awkward dissonance between company goals and what the market had to say. (Prophit was right, but that's cold comfort when you're being shut down.) I empathized with the senior execs trying to motivate action in the face of what could easily be interpreted as a public vote of no confidence.
I still wonder how the technology will find its path. Option 1 is to say: let's build a company culture that discourages "consensus game" players from shutting such things down. This might be hard to do given human nature. Option 2 is to say: let's find a way to make it ruffle fewer feathers while still being useful. Robin, you're one of the experts on this: are there any case studies of orgs doing this well?
Can you share more about Prophit and/or the dissonance with the rest of the company?
This is an accurate overview in my opinion: https://asteriskmag.com/issues/08/the-death-and-life-of-prediction-markets-at-google
The tension arises when an exec presents at a company all-hands "we will achieve X, Y, Z this quarter", and then the internal prediction market says "no we won't."
The reality is that big goals aren't predestined to succeed or fail. An internal prediction market serves as useful feedback to execs, but it is also feedback to the rest of the company. There is something to be said for maintaining a group fiction that X, Y, Z is in fact possible, because maintaining such "irrational" beliefs is how great things are achieved. Much has been written about the reality distortion fields of such leaders as Steve Jobs and Elon Musk.
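For concreteness, internal markets like these are typically run by an automated market maker, and Hanson's logarithmic market scoring rule (LMSR) is the standard choice (an assumption for illustration; I'm not claiming it was Prophit's exact mechanism). A minimal sketch of how such a market converts trades into the probability that so rattles the exec:

```python
import math

# Minimal LMSR market maker for a binary question such as
# "will we achieve X this quarter?". Standard mechanism, shown for
# illustration; not a claim about Prophit's actual implementation.
class LMSRMarket:
    def __init__(self, b: float = 100.0):
        self.b = b            # liquidity: higher b = prices move more slowly
        self.q = [0.0, 0.0]   # outstanding shares of [YES, NO]

    def _cost(self, q) -> float:
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome: int) -> float:
        """Current market-implied probability of an outcome."""
        z = sum(math.exp(qi / self.b) for qi in self.q)
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome: int, shares: float) -> float:
        """Buy shares of an outcome; returns the cost charged."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

m = LMSRMarket(b=100.0)
m.buy(outcome=1, shares=60)      # employees quietly bet against the goal
print(round(m.price(0), 2))      # YES probability falls to ~0.35
```

The market maker guarantees a quote at all times, so even thin internal participation produces the kind of unambiguous "no we won't" signal discussed above.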
I think this does an injustice to 'disagree and commit' rules.
The outcome-oriented reason for 'disagree and commit' rules is: There are two things the firm needs to get right. It needs to make the right decision, and it needs to successfully execute on that decision.
Once a decision has been made, there is substantial risk of half-assed execution, and 'I don't think this idea is good' is a strong precursor to 'I'm gonna half-ass my contribution to making this idea work'.
In most of these “decision meetings” that I have attended, for most of the participants it doesn’t matter which side they support, and so they don’t even take a side. The big boss is just not holding a vote.
Most people are playing something more like a "value-add game". You have ten people in the meeting and the big boss is going to make a decision. Can you make any sort of comment that the key decision-maker references in their decision-making? We should pay attention to metric x, we should look at country x as an indicator of aspect y, we should loop in someone from org z as part of the plan, etc.
If you make a comment that influences the plan, you can be seen as influencing the decision, which is a win in the consensus game.
I guess, but the point is that you don't want people playing either the "consensus game" or the "outcome game". They both seem like dysfunctional environments to me.
When the big boss decides on strategy X, and team member A thought it was a good idea, and team member B thought it was a bad idea, you still want A and B to have aligned incentives. You want them both to gain status when strategy X succeeds and lose status when strategy X fails. This can be a situation like everyone having the same stock options, or just a company culture that says you reward people for being on winning teams and penalize them for being on losing teams.
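A toy payoff model, with made-up numbers, just to make the aligned-incentives point concrete:

```python
# Toy model (made-up numbers): status payoffs for a supporter (A) and a
# dissenter (B) of strategy X under two reward schemes.
def individual_credit(succeeded: bool) -> dict:
    # Credit/blame tied to pre-decision stance: the dissenter
    # profits when the strategy fails.
    return {"A": 2 if succeeded else -2,
            "B": -1 if succeeded else 1}

def shared_fate(succeeded: bool) -> dict:
    # Aligned incentives (e.g. everyone holds the same stock options):
    # both rise and fall with the strategy, whatever they argued before.
    payoff = 2 if succeeded else -2
    return {"A": payoff, "B": payoff}

print(individual_credit(succeeded=False))  # {'A': -2, 'B': 1}: B wants X to fail
print(shared_fate(succeeded=False))        # {'A': -2, 'B': -2}: B wants X to win
```

Under the first scheme, B is quietly rooting for failure; under the second, the pre-decision disagreement stops mattering once execution begins.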
Whenever anyone on a team has an incentive to make the team fail, it's a bad situation.
"...if the project goes badly, they want to be seen as having opposed approval...."
This is the difficult situation to handle. If you have a team that needs to align on a decision and then execute it, you want maximum debate/dissent/discussion about the decision before it is made, but maximum cooperation/coordination/team-play after it is made. The only way to achieve this is for individual team members to care much more about the success of the team than about their own status within it.
Even though political beliefs ARE sincere (they must be, to perform their function of binding people into coalitional groups), a lot of the detail of policy positions and rhetoric is, I believe, essentially a status game.
That might (partially) explain why elites take such different positions from working-class folks, and why their patterns of belief and expression seem more conformist, more emotional, and more fickle. Believers THINK they believe things, but they (and everyone else) are unconsciously expressing the values that they think people want to hear. The penalty for doing the opposite can be immense: reputational, social, and professional.
These aren't individual beliefs. They're class-based status markers.
https://jmpolemic.substack.com/p/oppressors
> Good strategies for the outcome game are to study the fundamentals, estimate long term outcomes, and then “plant your flag” via a clear recommendation, preferably in writing. Even better if you can arrange to make (decision-conditional) bets on the outcome.
No? These are decent, but obviously inferior to supporting both sides just enough that you can claim to have favored whatever the outcome turns out to be.
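For what it's worth, the decision-conditional bet in the quoted passage is precisely the countermeasure to that hedge: a recorded bet that is called off unless the decision is actually taken leaves no room to retroactively claim both sides. A sketch of the settlement logic (my own illustrative framing, not a spec from the post):

```python
# Sketch of a decision-conditional bet: it settles only if the decision is
# actually taken; otherwise it is called off and stakes are returned.
# Illustrative framing of the quoted suggestion, not a formal spec.
def settle(decision_taken: bool, project_succeeded: bool,
           predicted_success: bool, stake: float) -> float:
    """Return the bettor's net payoff."""
    if not decision_taken:
        return 0.0                 # bet called off: stake returned
    correct = (project_succeeded == predicted_success)
    return stake if correct else -stake

# A bettor who staked against the project is on the record either way;
# "supporting both sides" would mean betting against themselves.
print(settle(decision_taken=True, project_succeeded=False,
             predicted_success=False, stake=100.0))   # 100.0
```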
Got nothing much to add except that this is one of your few positions I support unequivocally, and I think it's very well stated here. "Consensus game" and "outcome game" are nice, evocative labels to bring up in conversation. Any organization would be better off if it organized itself in ways that reward honest expression of true beliefs and preferences. We should train ourselves not to demand illusions of consensus, but to get along and do the work anyway.
Re. "Plausibly the key strength of capitalism is making outcome games matter more in society. People good at consensus games resent that, and want to cut capitalism to cut outcome game importance." :
This makes sense, but I think I can only "believe" this as modulated by your book /The Elephant in the Brain/, meaning that they want to cut outcome game importance /subconsciously/.
> Long term citations counts for little
Isn't that still another form of the consensus game?
It has a bit more of outcomes mixed in.
I might say that civilization itself is the process of reframing status games (which we are strongly evolved for, which we had long before civilization, and which we will happily engage in with no education or training) so that we agree on a consensus (a status game within a game!) that the criterion for winning should be "he who best demonstrates that they will spend their status reward on better outcomes".