
Are Financial Markets Too Short-Term?

Financial market prices embody info that helps others to make decisions. For example, firms decide activity levels based in part on their stock prices. Thus traders who add info to such markets do a public service, even if they do this for a private profit.

Such traders can choose to focus their info-collection efforts on “slow” info, which stays relevant for a long time, or on “fast” info, which is quickly forgotten. Many have said that such markets focus too much on fast info, relative to slow. In this post I will analyze this question. My tentative conclusion will be: yes, financial markets do indeed seem to focus too much on fast info. But first, let’s review the basics.

Each financial trade has an asset type, a buyer, a seller, a quantity, and a price. Each simple financial market trades one kind of asset, and its sequence of trade prices follows a random walk over time, a walk that reveals info about the value of that asset to observers. The expected price change variance during a time period is proportional to the amount of info revealed in that period.
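To see that proportionality in the simplest model (a minimal sketch of my own, not from the post): let the asset value be a sum of independent signals, and let the price be the expected value given the signals revealed so far. Each revelation then moves the price by exactly that signal, so price-change variance per period equals the variance of the info revealed in it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Value = sum of independent signals; price = E[value | signals revealed so far].
# Revealing a signal moves the price by that signal, so the variance of each
# period's price change equals the info (signal variance) revealed that period.
n_signals, n_paths = 200, 10_000
info = rng.uniform(0.5, 2.0, n_signals)                # info size per period
moves = rng.normal(0.0, np.sqrt(info), (n_paths, n_signals))

print(np.allclose(moves.var(axis=0), info, rtol=0.1))  # True: variance tracks info
```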

Each trade happens via one trader first putting an offer into an “order book”, after which another trader accepts that offer. While the act of posting a book order could reveal info to observers, it usually doesn’t. This is because a trader with substantial info prefers to instead profit from it by accepting a book order. If your info suggests that the price should rise, you buy, and if your info suggests that the price should fall, you sell.

However, the profits of traders who accept book orders come from the traders who posted those orders. So book order traders adjust their book prices to include the average info held by accepting traders. And competition typically moves book prices to where book traders make zero expected profits. There is a “bid-ask spread” between the “bid”, the highest book offer to buy, and the “ask”, the lowest book offer to sell. The size of this spread says how much info is expected to be embodied on average in each accepting trader.

However, some traders have little or no info. They instead want to trade for reasons other than profiting from info. If they could post competitive book orders, they should. But doing that well is hard. (For example, ~95% of book orders are cancelled before being filled.) So most low info traders instead accept book orders. Their trades lower the average info per trade, and thus allow traders with higher than average info to profit from their trades. These “fools” are the engine that drives the whole system.
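A minimal worked example of how competition sets that spread (a textbook Glosten-Milgrom-style calculation, offered as my own illustration): let the asset be worth 0 or 1 with equal odds, let a fraction mu of accepting traders know the value, and let the rest buy or sell at random. Zero expected profit then forces the ask to E[value | buy] and the bid to E[value | sell], and the spread comes out equal to mu, the average info per accepting trade.

```python
# Zero-profit bid/ask quotes against a mix of informed and uninformed traders
# (Glosten-Milgrom-style sketch). Asset value v is 0 or 1, equally likely;
# a fraction mu of accepting traders know v; the rest buy or sell at random.
def zero_profit_quotes(mu: float) -> tuple[float, float]:
    p_buy_v1 = mu + (1 - mu) / 2            # informed buy when v=1, plus random buys
    p_buy_v0 = (1 - mu) / 2                 # only random buys when v=0
    ask = p_buy_v1 / (p_buy_v1 + p_buy_v0)  # E[v | buy], by Bayes' rule
    bid = 1 - ask                           # E[v | sell], by symmetry
    return bid, ask

for mu in (0.0, 0.2, 0.5):
    bid, ask = zero_profit_quotes(mu)
    print(f"mu={mu:.1f}: bid={bid:.2f} ask={ask:.2f} spread={ask - bid:.2f}")
# The spread equals mu: it measures the average info per accepting trade.
```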

For any given piece of info that a trader holds, they could profit more by trading a higher quantity at the same price. But those who make book orders foresee this strategy, and so their spread increases with order quantity; larger trades are presumed to carry more info.

As a result, a trader with an unusually big chunk of info prefers to reveal it more slowly over time, via a slower sequence of smaller trades (Vayanos, Kyle). And to avoid other traders noticing a pattern in their trades and jumping ahead to grab their profits, a trader who can find no other trades to hide among may need to make an apparently random walk of trades. For example, N² trades split between the buy and sell sides can hide N trades all on the same side.
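That N² figure is just random-walk scaling, which is easy to check (my illustration): a random mix of k buys and sells has a typical net imbalance of about √k, so a net position of N one-sided trades only disappears into the noise of roughly N² two-sided ones.

```python
import numpy as np

rng = np.random.default_rng(1)

# Net imbalance of k random +1/-1 trades is 2*Binomial(k, 1/2) - k, with
# typical magnitude ~0.8*sqrt(k). So hiding a net one-sided position of N
# takes on the order of N**2 surrounding two-sided trades.
for k in (100, 10_000, 1_000_000):
    net = 2 * rng.binomial(k, 0.5, size=100_000) - k
    print(f"k={k:>9}: typical |imbalance| ~ {np.abs(net).mean():5.0f} (sqrt(k)={int(k**0.5)})")
```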

So why not spread informed trades out over longer time periods? Because each piece of valuable info comes with a deadline. You can only profit by telling a market something that it will eventually learn in other ways. However, once many traders all know that many of them have the same piece of info, that info should be incorporated into the book order prices. Thus one can only profit by trading on such info before its everyone-knows-it deadline.

This duration-til-deadline varies greatly with info type. For example, slow info on future product fashions, or the success of innovation projects, may take years or decades to be revealed. In contrast, ~20% of trades are by “high frequency traders” (HFT), who typically trade on very fast info re prices in other markets. The deadline for the fastest HFT to arrive at a market with such other-market info is roughly when the second-fastest HFT arrives. This is typically ~20-200 ns later for other markets at the same site, and ~50-500 μs for different sites (source: Kelvin Santos).

Thus five-year duration “slow” info is roughly a factor of a trillion to quadrillion times slower than HFT “fast” info. This huge dynamic range for info duration offers a big chance for duration effects to have big impacts. If there are problems with poor incentives re info duration, they could plausibly be really big problems. 
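The arithmetic behind that range (my check of the numbers above): five years is about 1.6×10^8 seconds, while the HFT deadlines above run from 2×10^-8 to 5×10^-4 seconds.

```python
# Ratio of ~5-year "slow" info duration to HFT "fast" info deadlines.
slow = 5 * 365 * 24 * 3600        # five years ~ 1.6e8 seconds
fast_cross_site = 500e-6          # ~500 us between different sites
fast_same_site = 20e-9            # ~20 ns within the same site

print(f"{slow / fast_cross_site:.0e}")  # ~3e+11: hundreds of billions
print(f"{slow / fast_same_site:.0e}")   # ~8e+15: nearly ten quadrillion
```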

To evaluate whether financial markets focus too much on fast info, we should consider how social value, and also private trader costs and benefits, vary with info duration.

Let’s start with social value. As the social value of info revealed to a market comes from its ability to influence decisions, decisions which are typically spread out across time, this value is roughly proportional to info size (i.e., price-change) times info duration. So, for example, if no relevant decisions are made using the market price in the few milliseconds duration of a high frequency trade, then the info in that trade induces zero social value.

Now let’s consider the private net revenue to be gained from a trade. As discussed above, that trade revenue is also proportional to info size times duration, at least for traders with access to enough capital to support the required trading strategy; that capital requirement also goes roughly as info size times duration.

How about trading costs? While there are fixed costs to design a trading strategy and arrange to implement it, and there can be mechanical marginal costs to execute a trade, the main other marginal cost is the opportunity cost of the assets used to make a trade. Any one asset can’t be simultaneously used to support an arbitrary number of arbitrary trades. The opportunity cost of these assets is also roughly proportional to info size times duration. (Yes, orgs that trade on margin and make many fast independent trades may seem to face no opportunity costs of assets, but this is an illusion; they just have especially low opportunity costs per trade.)

So far all the factors we’ve considered have depended in the same way on duration; social value, trade revenue, and marginal trading cost all go as info size times duration. But a few considerations remain that depend differently.

For example, traders often do not have sufficient capital to fully profit from info that has a very large size times duration. In addition, long duration info apparently comes in larger chunks, which makes size and duration positively correlated. For example, an insight about whether some product innovation will succeed over the next decade is usually just a much bigger chunk of price-change-times-duration than is the last market price tick typically used by an HFT trader. These effects suggest insufficient attention to long duration info.

Finally, ambitious traders, and the systems that train and select them, prefer that traders show their abilities over many small fast trades, instead of over a few big slow trades. It is just not very useful to prove your trading abilities via finding and trading on info that takes decades to be proven right. This effect also suggests insufficient attention to long duration info.

Bottom line: while social value, trading revenue, and marginal trading cost all scale as price-change times info duration, the existence of large info chunks and the desire to prove trader abilities over career-sized durations suggest that financial markets pay too much attention to fast, relative to slow, info.

In my next related post, I’ll discuss how alternative trading institutions might mitigate this problem.


Why Abstaining Helps

Misunderstandings that I heard in response to these tweets have encouraged me to try to explain more clearly the logic of why most eligible voters should abstain from voting.

Think of each vote cast between two candidates as being either +1 or -1, so that the positive candidate wins if the sum of all votes cast is positive; the negative candidate wins otherwise. Abstaining is then a vote of 0. (If the vote sum is zero, the election is a tie.)

Assume that there is one binary quality variable that expresses which of the two candidates is “better for the world”, that these two options are equally likely, that each voter gets one binary clue correlated with that quality, and that voters vote simultaneously. What we should want is to increase the chance that the better candidate wins.

While, all else equal, each voter may prefer a higher quality candidate, they need not be otherwise indifferent. So if, based on other considerations, they have a strong enough preference for one of the candidates, such “partisan” voters will pick that candidate regardless of their clue. Thus their votes will not embody any info about candidate quality. They are so focused on other considerations that they won’t help make for a more informed election, at least not via their votes. The other “informed” voters care enough about quality that their vote will depend on their quality clue.

Thus the total vote will be the sum of the partisan votes plus the informed votes. So the sum of the partisan votes will set a threshold that the informed votes must overcome to tip the election. For example, if the partisan sum is -10, then the informed votes must sum to at least 10 to tip the election toward the positive candidate. For our purposes here it won’t matter if there is uncertainty over this sum of partisan votes or not; all that matters is that the partisan sum sets the threshold that informed votes must overcome.

Now in general we expect competing candidates to position themselves in political and policy spaces so that on average the partisan threshold is not too far from zero. After all, it is quite unusual for everyone to be very confident that one side will win. So I will from here on assume a zero threshold, though my analysis will be robust to modest deviations from that.

Assume for now that the clues of the informed voters are statistically independent of each other, given candidate quality. Then with many informed voters the sum of informed votes will approach a normal distribution, and the chance that the positive candidate wins is near the integral of this normal distribution above the partisan threshold.

Thus all that matters from each individual voter is the mean and variance of their vote. Any small correlation between a voter’s clue and quality will create a small positive correlation between quality and their mean vote. Thus their vote will move the mean of the informed votes in the right direction. Because of this, many say that the more voters the better, no matter how poorly informed is each one.

However, each informed voter adds to both the mean and the variance of the total vote.

What matters is the “z-score” of the informed vote, i.e., the mean divided by its standard deviation. The chance that the better candidate wins is increasing in this z-score. So if a voter adds proportionally more to the standard deviation than they add to the mean, they make the final vote less likely to pick the better candidate, even if their individual contribution to the mean is positive.
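Here is a minimal numerical version of that z-score logic (my sketch, with made-up correlations): with a zero partisan threshold, the better candidate wins with probability Φ(M/√V), where M and V are the mean and variance of the informed vote sum. A new voter raises M by their clue correlation but raises V by nearly one, so a weak enough voter lowers the win probability.

```python
import math

def win_prob(correlations):
    """P(better candidate wins) ~ Phi(M / sqrt(V)), the normal approximation."""
    M = sum(correlations)                     # each voter's mean vote = clue correlation
    V = sum(1 - c * c for c in correlations)  # each +1/-1 vote has variance 1 - c**2
    return 0.5 * (1 + math.erf(M / math.sqrt(V) / math.sqrt(2)))

voters = [0.05] * 100                         # 100 modestly informed voters
print(f"{win_prob(voters):.4f}")              # ~0.6917
print(f"{win_prob(voters + [0.01]):.4f}")     # ~0.6912: a weak voter hurts
print(f"{win_prob(voters + [0.20]):.4f}")     # ~0.6978: a strong voter helps
```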

This is why poorly informed voters who vote can hurt elections, and it is why the relevant standard is your information compared to that of the other voters who don’t abstain. If you are an informed voter who wants to increase the chance that the better candidate wins, then you should abstain if you are not sufficiently well informed compared to the others who will vote.

In a previous post I considered the optimal choice of when to abstain in two extreme cases: when all other informed voters also abstain optimally, and when no one else abstains but this one voter. Realistic cases should be somewhere between these extremes.

To model inequality in how informed are various voters, I chose a power law dependence of clue correlation relative to voter rank. If the power is high, then info levels fall very quickly as you move down in voter rank from the most informed voter. If the power is low, then info levels fall more slowly, and voters far down in rank may still have a lot of info.

I found that for a power less than 1/2, and ten thousand informed voters, everyone should vote in both extreme cases. That is, when info is distributed equally enough, it really does help to average everyone’s clues via their votes. But for a power of 3/4, more than half should abstain even if no one else abstains, and only 6 of them should vote if all informed voters abstained optimally. For a power of 1, 80% should abstain even if no one else does, and only 2 should vote if all abstain optimally. For higher powers, it gets worse.
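A simplified version of that calculation (my sketch, which needn't match the post's exact model): give the voter of rank i a clue correlation proportional to i^(-p), let only the top k voters vote, and pick the k that maximizes the vote-sum z-score, roughly the sum of the top k correlations divided by √k. This simple variant reproduces the optimal-abstention numbers above: 6 voters at p = 3/4 and 2 at p = 1.

```python
# Optimal number of voters when clue correlations fall off as rank**(-p),
# assuming all informed voters abstain optimally (simplified sketch).
def best_k(p: float, n: int = 10_000) -> int:
    best, best_z, prefix = 1, 0.0, 0.0
    for k in range(1, n + 1):
        prefix += k ** (-p)       # add the k-th best voter's correlation
        z = prefix / k ** 0.5     # z-score if only the top k voters vote
        if z > best_z:
            best, best_z = k, z
    return best

for p in (0.25, 0.75, 1.0, 1.5):
    print(f"p={p}: best number of voters = {best_k(p)}")
# p=0.25 -> 10000 (everyone votes); p=0.75 -> 6; p=1.0 -> 2; p=1.5 -> 1
```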

My best guess is that a power of one is a reasonable guess, as this is a very common power and also near the middle of the distribution of observed powers. Thus even if everyone else votes, for the purpose of making the total vote have a better chance of picking the better candidate, you should abstain unless you are especially well informed, relative to the others who actually vote. And the more unequal you estimate the distribution of who is how informed, the more reluctant you should be to vote.

Many have claimed that it hurts to tell people about this analysis, as poorly informed voters will ignore it, and only better informed voters might follow it. But this analysis gives advice to each and every voter, advice that doesn’t depend on who else adopts it; every added person who follows this advice is a net win. Yes, people can be uncertain about how unequal is the info distribution, and about where they rank in this distribution. But that’s no excuse for not trying to make best estimates and act accordingly.

Note that the above analysis ignored the cost of getting informed and voting, and that people seem to in general be overconfident when they estimate their informedness rank. Both of these considerations should make you more willing to abstain.

In the above I assumed voter clues are independent, but what if they are correlated? For the same means, clue correlation increases the variance of the sum of individual votes. So all else equal voters with correlated clues should be more willing to abstain, compared to other voters.
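The variance effect is easy to quantify (my illustration): for n unit-variance votes with average pairwise correlation rho, Var(sum) = n + n(n-1)rho, so even a modest rho soon swamps the independent part and shrinks the z-score.

```python
# Variance of a sum of n unit-variance votes with pairwise correlation rho:
#   Var(sum) = n + n*(n-1)*rho
def vote_sum_var(n: int, rho: float) -> float:
    return n + n * (n - 1) * rho

print(vote_sum_var(1000, 0.0))   # 1000.0: independent clues
print(vote_sum_var(1000, 0.05))  # 50950.0: correlation swamps the independent
                                 # part, cutting the z-score ~7x for a fixed mean
```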

Yes, I’ve used binary clues throughout, and you might claim that all this analysis completely changes for non-binary clues. Possible, but that would surprise me.

Added 7a: Re the fact that it is possible and desirable to tell if you are poorly informed, I love this saying:

If you’re playing a poker game and you look around the table and can’t tell who the sucker is, it’s you.


What Info Is Verifiable?

For econ topics where info is relevant, including key areas of mechanism design, and law & econ, we often make use of a key distinction: verifiable versus unverifiable info. For example, we might say that whether it rains in your city tomorrow is verifiable, but whether you feel discouraged tomorrow is not verifiable. 

Verifiable info can much more easily be the basis of a contract or a legal decision. You can insure yourself against rain, but not discouragement, because insurance contracts can refer to the rain, and courts can enforce those contract terms. And as courts can also enforce bets about rain, prediction markets can incentivize accurate forecasts on rain. Without that, you have to resort to the sort of mechanisms I discussed in my last post. 

Often, traffic police can officially pull over a car only if they have a verifiable reason to think some wrong has been done, but not if they just have a hunch. In the blockchain world, things that are directly visible on the blockchain are seen as verifiable, and thus can be included in smart contracts. However, blockchain folks struggle to make “oracles” that might allow other info to be verifiable, including most info that ordinary courts now consider to be verifiable. 

Wikipedia is a powerful source of organized info, but only info that is pretty directly verifiable, via cites to other sources. The larger world of media and academia can say many more things, via its looser and more inclusive concepts of “verifiable”. Of course once something is said in those worlds, it can then be said on Wikipedia via citing those other sources.

I’m eager to reform many social institutions more in the direction of paying for results. But these efforts are limited by the kinds of results that can be verified, and thus become the basis of pay-for-results contracts. In mechanism design, it is well known that it is much easier to design mechanisms that get people to reveal and act on verifiable info. So the long term potential for dramatic institution gains may depend crucially on how much info can be made verifiable. The coming hypocralypse may result from the potential to make widely available info into verifiable info. More direct mind-reading tech might have a similar effect. 

Given all this reliance on the concept of verifiability, it is worth noting that verifiability seems to be a social construct. Info exists in the universe, and the universe may even be made out of info, but this concept of verifiability seems to be more about when you can get people to agree on a piece of info. When you can reliably ask many different sources and they will all confidently tell you the same answer, we tend to treat that as verifiable. (Verifiability is related to whether info is “common knowledge” or “common belief”, but the concepts don’t seem to be quite the same.)
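One crude way to operationalize that social-construct notion (a sketch of my own, with made-up names): call an outcome verifiable exactly when every designated independent source returns the same answer, and let contracts settle only on such outcomes.

```python
# A bet settles only on "verifiable" outcomes, here operationalized as:
# all designated independent sources report the same answer.
def settle_bet(sources, pot: float) -> str:
    reports = {source() for source in sources}
    if len(reports) != 1:
        raise ValueError("not verifiable: sources disagree")
    outcome = reports.pop()
    return f"'{'yes' if outcome else 'no'}' side wins {pot}"

it_rained = lambda: True    # e.g., independent weather records that agree
print(settle_bet([it_rained, it_rained, it_rained], pot=200.0))
# A bet on "feeling discouraged" has no such agreeing sources, so it can't settle.
```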

It is a deep and difficult question what actually makes info verifiable. Sometimes when we ask the same question of many people, they will coordinate to tell us the answer that we, or someone else, wants to hear, or would punish them for contradicting. But at other times when we ask many people the same question, it seems like their best strategy is just to look directly at the “truth” and report that, perhaps because they find it too hard to coordinate, or because implicit threats are weak or ambiguous.

The question of what is verifiable opens an important meta question: how can we verify claims of verifiability? For example, a totalitarian regime might well insist not only that everyone agree that the regime is fair and kind, a force for good, but that they agree that these facts are clear and verifiable. Most any community with a dogma may be tempted to claim not only that their dogma is true, but also that it is verifiable. This can allow such dogma to be the basis for settling contract disputes or other court rulings, such as re crimes of sedition or treason.

I don’t have a clear theory or hypothesis to offer here, but while this was in my head I wanted to highlight the importance of this topic, and its apparent openness to investigation. While I have no current plans to study this, it seems quite amenable to study now, at least by folks who understand enough of both game theory and a wide range of social phenomena.  

Added 3Dec: Here is a recent paper on how easy mechanisms get when info is verifiable.


Advice Wiki

People often give advice to others; less often, they request advice from others. And much of this advice is remarkably bad; consider, for example, the advice to “never settle” in pursuing your career dreams.

When A takes advice from B, that is often seen as raising the status of B and lowering that of A. As a result, people often resist listening to advice, they ask for advice as a way to flatter and submit, and they give advice as a way to assert their status and goodness. For example, advisors often tell others to do what they did, as a way to affirm that they have good morals, and achieved good outcomes via good choices.

These hidden motives understandably detract from the average quality of advice as a guide to action. And the larger is this quality reduction, the more potential there is for creating value via alternative advice institutions. I’ve previously suggested using decision markets for advice in many contexts. In this post, I want to explore a simpler/cheaper approach: a wiki full of advice polls. (This is like something I proposed in 2013.)

Imagine a website where you could browse a space of decision contexts, connected to each other by the subset relation. For example under “picking a career plan after high school”, there’s “picking a college attendance plan” and under that there’s “picking a college” and “picking a major”. For each decision context, people can submit proposed decision advice, such as “go to the highest ranked college you can get into” for “pick a college”. Anyone could then vote to say which advice they endorse in which contexts, and see the current distribution of votes over advice options.

Assume participants can be anonymous if they so choose, but can also be labelled with their credentials. Assume that they can change their votes at any time, and that the record of each vote notes which options were available at the time. From such voting records, we might see not just the overall distribution of opinion regarding some kind of decision, but also how that distribution varies with quality indicators, such as how much success a person has achieved in related life areas. One might also see how advice varies with level of abstraction in the decision space; is specific advice different from general advice?
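To make the proposal concrete, here is one possible minimal data model (a sketch with invented names, not a spec): decision contexts form a graph under the subset relation, advice options attach to contexts, and each revisable vote records the voter, optional credentials, and the options available when it was cast.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Context:                  # a decision context, e.g. "picking a college"
    name: str
    broader: list[str] = field(default_factory=list)  # parent contexts (subset relation)

@dataclass
class Advice:                   # proposed advice within a context
    context: str
    text: str                   # e.g. "go to the highest ranked college you can get into"

@dataclass
class Vote:                     # one revisable endorsement of an advice option
    voter_id: str               # may be an anonymous pseudonym
    context: str
    advice_text: str
    options_at_cast: list[str]  # advice options available when the vote was cast
    credentials: dict[str, str] = field(default_factory=dict)  # e.g. life-success markers
    cast_at: datetime = field(default_factory=datetime.now)
```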

Of course such poll results aren’t plausibly as accurate as those resulting from decision markets, at least given the same level of participation. But they should also be much easier to produce, and so might attract far more participation. The worse are our usual sources of advice, the higher the chance that these polls could offer better advice. Compared to asking your friends and family, these distributions of advice suffer less from particular people pushing particular agendas, and anonymous advice may suffer less from efforts to show off. At least it might be worth a try.

Added 1Aug: Note that decision context can include features of the decision maker, and that decision advice can include decision functions, which map features of the decision context to particular decisions.


Toward An Honest Consensus

The original Star Trek series featured a smart computer that mostly only answered questions; humans made key decisions. Near the start of Nick Chater’s book The Mind Is Flat, which I recently started, he says early AI visions were based on the idea of asking humans questions, and then coding their answers into a computer, which might then answer the same range of questions when asked. But to the surprise of most, typical human beliefs turned out to be much too unstable, unreliable, incoherent, and just plain absent to make this work. So AI research turned to other approaches.

Which makes sense. But I’m still inspired by that ancient vision of an explicit accessible shared repository of what we all know, even if that isn’t based on AI. This is the vision that to varying degrees inspired encyclopedias, libraries, internet search engines, prediction markets, and now, virtual assistants. How can we all coordinate to create and update an accessible shared consensus on important topics?

Yes, today our world contains many social institutions that, while serving other functions, also function to create and update a shared consensus. While we don’t all agree with such consensus, it is available as a decent first estimate for those who do not specialize in a topic, facilitating an intellectual division of labor.

For example: search engines, academia, news media, encyclopedias, courts/agencies, consultants, speculative markets, and polls/elections. In many of these institutions, one can ask questions, find closest existing answers, induce the creation of new answers, induce elaboration or updates of older answers, induce resolution of apparent inconsistencies between existing answers, and challenge existing answers with proposed replacements. Allowed questions often include meta questions such as origins of, translations of, confidence in, and expected future changes in, other questions.

These existing institutions, however, often seem weak and haphazard. They often offer poor and biased incentives, use different methods for rather similar topics, leave a lot of huge holes where no decent consensus is offered, and tolerate many inconsistencies in the answers provided by different parts. Which raises the obvious question: can we understand the advantages and disadvantages of existing methods in different contexts well enough to suggest which ones we should use more or less where, or to design better variations, ones that offer stronger incentives, lower costs, and wider scope and integration?

Of course computers could contribute to such new institutions, but they needn’t be the only or even main parts. And of course the idea here is to come up with design candidates to test first at small scales, scaling up only when results look promising. Design candidates will seem more promising if we can at least imagine using them more widely, and if they are based on theories that plausibly explain failings of existing institutions. And of course I’m not talking about pressuring people to follow a consensus, just to make a consensus available to those who want to use it.

As usual, a design proposal should roughly describe what acts each participant can do when, what they each know about what others have done, and what payoffs they each get for the main possible outcomes of typical actions. All in a way that is physically, computationally, and financially feasible. Of course we’d like a story about why equilibria of such a system are likely to produce accurate answers fast and at low cost, relative to other possible systems. And we may need to also satisfy hidden motives, the unacknowledged reasons for why people actually like existing institutions.
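As one way to keep such proposals comparable (my sketch, not any standard format): force each design into a common skeleton that names the participants, the acts available to each, what each observes of others’ acts, and how outcomes map to payoffs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MechanismSpec:
    """Skeleton for a consensus-institution design proposal (illustrative only)."""
    participants: list[str]               # who can act
    acts: dict[str, list[str]]            # participant -> acts available to them
    observes: dict[str, list[str]]        # participant -> what they see of others' acts
    payoff: Callable[[str, dict], float]  # (participant, outcome) -> payoff

# Toy example: a subsidized prediction market as a consensus institution.
spec = MechanismSpec(
    participants=["trader", "subsidizer"],
    acts={"trader": ["buy", "sell", "abstain"], "subsidizer": ["fund market maker"]},
    observes={"trader": ["current price"], "subsidizer": ["price", "volume"]},
    payoff=lambda who, outcome: outcome.get(who, 0.0),
)
```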

I have lots of ideas for proposals I’d like the world to consider here. But I realized that perhaps I’ve neglected calling attention to the problem itself. So I’ve written this post in the hope of inspiring some of you with a challenge: can you help design (or test) new robust ways to create and update a social consensus?


Info Cuts Charity

Our culture tends to celebrate the smart, creative, and well-informed. So we tend to be blind to common criticisms of such folks. A few days ago I pointed out that creative folk tend to cheat more. Today I’ll point out that the well-informed tend to donate less to charity:

The best approach for a charity raising money to feed hungry children in Mali, the team found, was to simply show potential donors a photograph of a starving child and tell them her name and age. Donors who were shown more contextual information about famine in Africa — the ones who were essentially given more to think about — were less likely to give. …

Daniel Oppenheimer … found that simply giving people information about a charity’s overhead costs makes them less likely to donate to it. This held true, remarkably, even if the information was positive and indicated that the charity was extremely efficient. …

According to [John] List, thinking about all the people you’re not helping when you donate … makes the act of giving a lot less satisfying. (more; HT Reihan Salam)


Fear Causes Trust, Blindness

Three years ago I reported on psych studies suggesting that we trust because we fear:

High levels of support often observed for governmental and religious systems can be explained, in part, as a means of coping with the threat posed by chronically or situationally fluctuating levels of perceived personal control. (more)

New studies lay out this process in more detail:

In the domain of energy, … when individuals [were made to] feel unknowledgeable about an issue, participants increasingly trusted in the government to manage various environmental technologies, and increasingly supported the status quo in how the government makes decisions regarding the application of those technologies. … When people felt unknowledgeable with social issues, they felt more dependent on the government, which led to increased trust.

When they feel unknowledgeable about a threatening social issue, … [people] also appear motivated to avoid learning new information about it. … In the context of an imminent oil shortage—as opposed to a distant one—participants who felt that the issue was “above their heads” reported an increased desire to adopt an “ignorance is bliss” mentality toward that issue, relative to those who saw oil management as a relatively simple issue.

This effect … is at least partly due to participants’ desire to protect their faith in the capable hands of the government. Among those who felt more affected by the recession, experimentally increasing domain complexity eliminated the tendency to seek out information. These individuals avoided not only negative information but also vague information, that is, the types of information that held the potential (according to pretesting) to challenge the idea that the government can manage the economy. Positive information was not avoided in the same way. (more)

I (again) suspect we act similarly toward medicine, law, and other authorities: we trust them more when we feel vulnerable to them, and we then avoid info that might undermine such trust. It is extremely important that we understand how this works, so that we can find ways around it. This is my guess for humanity’s biggest failing.
