Economic Singularity Review

The Economic Singularity: Artificial intelligence and the death of capitalism .. This new book from best-selling AI writer Calum Chace argues that within a few decades, most humans will not be able to work for money.

A strong claim! This book mentions me by name 15 times, especially regarding my review of Martin Ford’s Rise of the Robots, in which I complain that Ford’s main evidence for saying “this time is different” is all the impressive demos he has seen lately, even though this was also the main reason given in each previous automation boom for saying “this time is different.” This seems to be Chace’s main evidence as well:

Faster computers, the availability of large data sets, and the persistence of pioneering researchers have finally rendered [deep learning] effective this decade, leading to “all the impressive computing demos” referred to by Robin Hanson in chapter 3.3, along with some early applications. But the major applications are still waiting in the wings, poised to take the stage. ..

It’s time to answer the question: is it really different this time? Will machine intelligence automate most human jobs within the next few decades, and leave a large minority of people – perhaps a majority – unable to gain paid employment? It seems to me that you have to accept that this proposition is at least possible if you admit the following three premises:

  1. It is possible to automate the cognitive and manual tasks that we carry out to do our jobs.
  2. Machine intelligence is approaching or overtaking our ability to ingest, process and pass on data presented in visual form and in natural language.
  3. Machine intelligence is improving at an exponential rate. This rate may or may not slow a little in the coming years, but it will continue to be very fast.

No doubt it is still possible to reject one or more of these premises, but for me, the evidence assembled in this chapter makes that hard.

Well of course it is possible for this time to be different. But, um, why can’t these three statements have been true for centuries? It will eventually be possible to automate tasks, and we have been slowly but exponentially “approaching” that future point for centuries. And so we may still have centuries to go. As I recently explained, exponential tech growth is consistent with a relatively constant rate at which jobs are displaced by automation.
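This consistency claim can be illustrated with a toy model (my sketch, not from either book): if tasks are spread evenly on a log scale of difficulty, and machine capability grows exponentially, then the fraction of tasks automated rises by a constant amount each year, even though the underlying technology improves exponentially.

```python
import math

def fraction_automated(years, growth=1.5, log_difficulty_range=100.0):
    """Toy model: tasks are uniform in log-difficulty over a wide range, and
    machine capability grows exponentially (factor `growth` per year).  The
    automated fraction is the share of tasks below current log-capability."""
    return min(1.0, years * math.log(growth) / log_difficulty_range)

# Exponential tech growth, yet a constant fraction of tasks crossed per year.
shares = [fraction_automated(t) for t in range(5)]
yearly_gains = [round(b - a, 9) for a, b in zip(shares, shares[1:])]
print(yearly_gains)
```

The point is not the particular numbers, but that exponential capability growth is compatible with a steady, centuries-long pace of job displacement.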

Chace makes a specific claim that seems to me quite wrong.

Geoff Hinton – the man whose team won the landmark 2012 ImageNet competition – went further. In May 2015 he said that he expects machines to demonstrate common sense within a decade. .. Facebook has declared its ambition to make Hinton’s prediction come true. To this end, it established a basic research unit in 2013 called Facebook Artificial Intelligence Research (FAIR) with 50 employees, separate from the 100 people in its Applied Machine Learning team. So within a decade, machines are likely to be better than humans at recognising faces and other images, better at understanding and responding to human speech, and may even be possessed of common sense. And they will be getting faster and cheaper all the time. It is hard to believe that this will not have a profound impact on the job market.

I’ll give 50-1 odds against full human level common sense AI within a decade! Chace, I offer my $5,000 against your $100. Also happy to bet on “profound” job market impact, as I mentioned in my review of Ford. Chace, to his credit, sees value in such bets:

The economist Robin Hanson thinks that machines will eventually render most humans unemployed, but that it will not happen for many decades, probably centuries. Despite this scepticism, he proposes an interesting way to watch out for the eventuality: prediction markets. People make their best estimates when they have some skin in the forecasting game. Offering people the opportunity to bet real money on when they see their own jobs or other people’s jobs being automated may be an effective way to improve our forecasting.

Finally, Chace repeats Ford’s error in claiming economic collapse if median wages fall:

But as more and more people become unemployed, the consequent fall in demand will overtake the price reductions enabled by greater efficiency. Economic contraction is pretty much inevitable, and it will get so serious that something will have to be done. .. A modern developed society is not sustainable if a majority of its citizens are on the bread line.

Really, an economy can do fine if average demand is high and growing, even if median demand falls. It might be ethically lamentable, and the political system may have problems, but markets can do just fine.


My Caplan Turing Test

At lunch today Bryan Caplan and I dug a bit into our disagreement, and now I’ll try to summarize his point of view. He can of course correct me.

Bryan sees sympathy feelings as huge influences on social outcomes. Not just feelings between people who know each other well, but also distant feelings between people who have never met. For example, if not for feelings of sympathy:

  1. Law and courts would often favor different disputants.
  2. Free workers would more often face harsh evaluations, punishments, and firing.
  3. Firm owners and managers would know much better which workers were doing good jobs.
  4. The US would invade and enslave Canada tomorrow.
  5. At the end of most wars, the victors would enslave the losers.
  6. Modern slaves would earn their owners much more than they would have as free workers.
  7. In the past, domestic, artisan, and city slaves, who were treated better than field slaves, would have been treated much more harshly.
  8. The slave population would have fallen less via gifts or purchase of freedom.
  9. Thus most of the world population today would be slaves.

These views are, to me, surprisingly different from the impression I get from reading related economics literatures. Bryan says I may be reading the wrong ones, but he hasn’t yet pointed me to the correct ones. As I read them, these usual economics literatures give different impressions:

  • Law and economics literature suggests efficiency usually decides who wins, with sympathy distortions having a real but minor influence.
  • Organization theory literature suggests far more difficulties in motivating workers and measuring their performance.
  • Slavery literature suggests slaves doing complex jobs were treated less harshly for incentive reasons, and would not have earned much more if treated more harshly. Thus modern slaves would also not earn much more as slaves.

Of course even if Bryan were right about all these claims, he needn’t be right in his confident opinion that the vast majority of biological humans will have about as much sympathy for ems as they do for mammals, and thus treat ems as harshly as we treat most mammals.

This sympathy-driven view doesn’t by itself predict Caplan’s strong (and not much explained) view that ems would also be very robot-like. But perhaps we might add to it a passion for domination – people driven by feelings to treat nicely creatures they respect might also be driven by feelings to dominate creatures they do not respect. Such a passion for dominance might induce biological humans to force ems into ultra docility, even if that came at a productivity cost.


Oarsman Pay Parable

Imagine an ancient oarsman, rowing in a galley boat. Rowing takes effort, and risks personal injury, so all else equal an oarsman would rather not row, or row only weakly. How can his boss induce effort?

One simple approach is to offer a very direct and immediate incentive. Use slaves as rowers, and have a boss watch them, whipping any who aren’t rowing as hard as sustainably possible. This actually didn’t happen much in the ancient world; galley slaves weren’t common until the 1500s. But the idea is simple. And of course the same system could also work with cash; usually make positive payments for work, but sometimes fine those you discover aren’t working hard enough. Of course the boss can’t watch everyone all the time. But with a big enough penalty when caught, it might work.

Now imagine that the boss can’t watch each individual oarsman, but can only see the overall speed of the ship. Now the entire crew must be punished together, all or none of them. The boss might try to improve the situation by empowering oarsmen to punish each other for not rowing hard enough, and that might help, but rowers would also use that power for other ends, creating costs.

An even worse case is where the boss can only see how long it takes for the boat to reach its destination. Here the boss might reward the crew for a short trip, and punish them for a long one, but a great many other random factors will influence the length of the trip. Why bother to work hard, if it makes little difference to your chance of reward or punishment?

There is a general principle here. The more noise there is in the measurement of the relevant outcomes visible to the ultimate boss, the harder it is to use incentives tied to such outcomes to motivate rowers. This is true regardless of the type of incentives used. Yes, the lower the worst outcome, and the higher the best outcome, that the boss can impose, the stronger incentives can be. But even the strongest possible incentives can fail when noise is high.
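This principle can be made concrete with a small sketch (my illustration, not from the post): model the measured outcome as total crew effort plus Gaussian noise, with a bonus paid when the measure clears a threshold. One rower's marginal effect on the bonus probability shrinks as the noise grows.

```python
from math import erf, sqrt

def bonus_probability(total_effort, threshold, noise_sd):
    """P(measured output > threshold) when measured output equals total
    effort plus Gaussian noise with standard deviation noise_sd."""
    z = (total_effort - threshold) / noise_sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def marginal_incentive(total_effort, threshold, noise_sd, extra=1e-4):
    """How much one more unit of a single rower's effort raises the
    crew's chance of earning the bonus."""
    return (bonus_probability(total_effort + extra, threshold, noise_sd)
            - bonus_probability(total_effort, threshold, noise_sd)) / extra

# More measurement noise -> weaker individual incentive, all else equal.
for sd in (1.0, 5.0, 25.0):
    print(sd, marginal_incentive(100.0, 100.0, sd))
```

Whatever the reward scheme, a rower's effort matters less to his expected payoff as the noise drowning out his contribution grows.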

Yes, one can create layers of bosses, with the lowest bosses able to see specifics best. But it can be hard to give lower bosses good incentives, if higher bosses can’t see well.

Another problem arises if the boss doesn’t know just how hard each oarsman is capable of rowing. In this case most oarsmen get some slack, so that they aren’t punished for not doing more than they can. This is just one example of an “information rent”. In general, such rents come from any work-relevant info that the worker has but the boss can’t see: if rowers need to synchronize their actions with each other or with waves or wind or time of day; if a ship captain needs to choose the ship’s route based on info about weather and pirates; if a captain needs to treat different cargo differently in different conditions; if a captain needs to make judgements about whether to wait longer in port for more cargo.

In general, when you want a worker to see some local condition, and then take an action that depends on that condition, you must pay some extra rent. So the more relevant info that workers get, the more choices they make, and the more that rides on those choices, the more workers gain in info rents.

A related issue is the scope for sabotage. Angry resentful workers can seek hidden ways to hurt their bosses and ventures. So the more hard-to-detect ways workers have to hurt things, the more bosses want to treat them well enough to avoid anger and resentment. Pained, sullen, or depressed workers can also hurt the mood of co-workers, suppliers, customers, and investors whom they contact. And the threat of pain can stress workers, making it harder for them to think clearly and well. These issues tend to argue against often using beatings and pain for motivation, even if such things allow stronger incentives by expanding the range of possible outcomes.

Overall, these issues are bigger for more “complex” work, i.e., for more cognitive work, work that adapts more to diverse and new local conditions, and work in larger organizations. In the modern world, jobs have been getting more complex in these ways, and the organization and work literature I’ve read suggests that finding good work incentives is a central problem in modern organizations, and that more complex work is a big reason why modern workplaces substitute broad incentives and good treatment for the detailed and harsh rules and monitoring more common in past eras.

The literature I’ve read on the economics of slavery also uses job complexity to explain the severity of treatment of slaves. Slaves in artisan jobs, in cities, and in households were treated better than field slaves, arguably because of job complexity. They were beaten less, and paid more, and might eventually buy their own freedom.

Bryan Caplan has argued that ems would be treated harshly as slaves.


Caplan Debate Status

In this post I summarize my recent disagreement with Bryan Caplan. In the next post, I’ll dive into details of what I see as the key issue.

I recently said:

If you imagine religions, governments, and criminals not getting too far out of control, and a basically capitalist world, then your main future fears are probably going to be about for-profit firms, especially regarding how they treat workers. You’ll fear firms enslaving workers, or drugging them into submission, or just tricking them with ideology.

Because of this, I’m not so surprised by the deep terror many non-economists hold of future competition. For example, Scott Alexander (see also his review):

I agree with Robin Hanson. This is the dream time .. where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish. As technological advance increases, .. new opportunities to throw values under the bus for increased competitiveness will arise. .. Capitalism and democracy, previously our protectors, will figure out ways to route around their inconvenient dependence on human values. And our coordination power will not be nearly up to the task, assuming something much more powerful than all of us combined doesn’t show up and crush our combined efforts with a wave of its paw.

But I was honestly surprised to see my libertarian economist colleague Bryan Caplan also holding a similarly dark view of competition. As you may recall, Caplan had many complaints about my language and emphasis in my book, but in terms of the key evaluation criteria that I care about, namely how well I applied standard academic consensus to my scenario assumptions, he had three main points.

First, he called my estimate of an em economic growth doubling time of one month my “single craziest claim.” He seems to agree that standard economic growth models can predict far faster growth when substitutes for human labor can be made in factories, and that we have twice before seen economic growth rates jump by more than a factor of fifty in a less than previous doubling time. Even so, he can’t see economic growth rates even doubling, because of “bottlenecks”:

Politically, something as simple as zoning could do the trick. .. the most favorable political environments on earth still have plenty of regulatory hurdles .. we should expect bottlenecks for key natural resources, location, and so on. .. Personally, I’d be amazed if an em economy doubled the global economy’s annual growth rate.

His other two points are that competition would lead to ems being very docile slaves. I responded that slavery has been rare in history, and that docility and slavery aren’t especially productive today. But he called the example of Soviet nuclear scientists “powerful” even though “Soviet and Nazi slaves’ productivity was normally low.” He rejected the relevance of our large literatures on productivity correlates and how to motivate workers, as little of that explicitly includes slaves. He concluded:

If, as I’ve argued, we would copy the most robot-like people and treat them as slaves, at least 90% of Robin’s details are wrong.

As I didn’t think the docility of ems mattered that much for most of my book, I challenged him to audit five random pages. He reported “Robin’s only 80% wrong”, though I count only 63% from his particulars, and half of those come from his seeing ems as very literally “robot-like”. For example, he says ems are not disturbed by “life events”, only by disappointing their masters. They only group, identify, and organize as commanded, not as they prefer or choose. They have no personality “in a human sense.” They never disagree with each other, and never need to make excuses for anything.

Caplan offered no citations with specific support for these claims, instead pointing me to the literature on the economics of slavery. So I took the time to read up on that and posted a 1600-word summary, concluding:

I still can’t find a rationale for Bryan Caplan’s claim that all ems would be fully slaves. .. even less .. that they would be so docile and “robot-like” as to not even have human-like personalities.

Yesterday, he briefly “clarified” his reasoning. He says ems would start out as slaves since few humans see them as having moral value:

1. Most human beings wouldn’t see ems as “human,” so neither would their legal systems. .. 2. At the dawn of the Age of Em, humans will initially control (a) which brains they copy, and (b) the circumstances into which these copies emerge. In the absence of moral or legal barriers, pure self-interest will guide creators’ choices – and slavery will be an available option.

Now I’ve repeatedly pointed out that the first scans would be destructive, so either the first scanned humans see ems as “human” and expect to not be treated badly, or they are killed against their will. But I want to focus instead on the core issue: like Scott Alexander and many others, Caplan sees a robust tendency of future competition to devolve into hell, held at bay only by contingent circumstances such as strong moral feelings. Today the very limited supply of substitutes for human workers keeps wages high, but if that supply were to greatly increase then Caplan expects that, without strong moral resistance, capitalist competition eventually turns everyone into docile inhuman slaves, because that arrangement robustly wins productivity competitions.

In my next post I’ll address that productivity issue.


World Basic Income

Joseph said .. Let Pharaoh .. appoint officers over the land, and take up the fifth part of the land of Egypt in the seven plenteous years. .. And that food shall be for store to the land against the seven years of famine, which shall be in the land of Egypt; that the land perish not through the famine. And the thing was good in the eyes of Pharaoh. (Genesis 41)

[Medieval Europe] public authorities were doubly interested in the problem of food supplies; first, for humanitarian reasons and for good administration; second, for reasons of political stability because hunger was the most frequent cause of popular revolts and insurrections. In 1549 the Venetian officer Bernardo Navagero wrote to the Venetian senate: “I do not esteem that there is anything more important to the government of cities than this, namely the stocking of grains, because fortresses cannot be held if there are not victuals and because most revolts and seditions originate from hunger.” (p42, Cipolla, Before the Industrial Revolution)

63% of Americans don’t have enough saved to cover even a $500 financial setback. (more)

Even in traditional societies with small governments, protecting citizens from starvation was considered a proper role of the state, both to improve welfare and to prevent revolt. Today it could be more efficient if people used modern insurance institutions to protect themselves. But I can see many failing to do that, and so can see governments trying to insure their citizens against big disasters.

Of course rich nations today face little risk of famine. But as I discuss in my book, eventually when human level artificial intelligence (HLAI) can do almost all tasks cheaper, biological humans will lose pretty much all their jobs, and be forced to retire. While collectively humans will start out owning almost all the robot economy, and thus get rich fast, many individuals may own so little as to be at risk of starving, if not for individual or collective charity.

Yes, this sort of transition is a long way off; “this time isn’t different” yet. There may be centuries still to go. And if we first achieve HLAI via the relatively steady accumulation of better software, as we have been doing for seventy years, we may get plenty of warning about such a transition. However, if we instead first achieve HLAI via ems, as elaborated in my book, we may get much less warning; only five years might elapse between seeing visible effects and all jobs lost. Given how slowly our political systems typically change state redistribution and insurance arrangements, it might be wiser to just set up a system far in advance that could deal with such problems if and when they appear. (A system also flexible enough to last over this long time scale.)

The ideal solution is global insurance. Buy insurance for citizens that pays off only when most biological humans lose their jobs, and have this insurance pay enough so these people don’t starve. Pay premiums well in advance, and use a stable insurance supplier with sufficient reinsurance. Don’t trust local assets to be sufficient to support local self-insurance; the economic gains from an HLAI economy may be very concentrated in a few dense cities of unknown locations.

Alas, political systems are even worse at preparing for problems that seem unlikely anytime soon. Which raises the question: should those who want to push for state HLAI insurance ally with folks focused on other issues? And that brings us to “universal basic income” (UBI), a topic in the news lately, and about which many have asked me in relation to my book.

Yes, there are many difficult issues with UBI, such as how strongly the public would favor it relative to traditional poverty programs, whether it would replace or add onto those other programs, and if replacing how much that could cut administrative costs and reduce poverty targeting. But in this post, I want to focus on how UBI might help to insure against job loss from relatively sudden unexpected HLAI.

Imagine a small “demonstration level” UBI, just big enough for one side to say “okay we started a UBI, now it is your turn to lower other poverty programs, before we raise UBI more.” Even such a small UBI might be enough to deal with HLAI, if its basic income level were tied to the average income level. After all, an HLAI economy could grow very fast, allowing very fast growth in the incomes that biological humans gain from owning most of the capital in this new economy. Soon only a small fraction of that income could cover a low but starvation-averting UBI.

For example, a UBI set to x% of average income can be funded via a less than x% tax on all income over this UBI level. Since average US income per person is now $50K, a 10% version gives a UBI of $5K. While this might not let one live in an expensive city, a year ago I visited a 90-adult rural Virginia commune where this was actually their average income. Once freed from regulations, we might see more innovations like this in how to spend UBI.
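The arithmetic can be checked directly (a minimal sketch of the x% rule in the text; exact tax rates would depend on how income above the UBI floor is distributed):

```python
def ubi_level(avg_income, pct):
    """Basic income pegged to pct percent of average income."""
    return avg_income * pct / 100.0

def cost_as_share_of_total_income(pct):
    """Paying everyone pct% of average income costs pct% of total income,
    by construction, whatever the population size."""
    return pct / 100.0

print(ubi_level(50_000, 10))               # the post's $5K example
print(cost_as_share_of_total_income(10))   # one tenth of all income
```

Because the UBI is pegged to average income, its cost stays a fixed share of total income no matter how fast an HLAI economy grows.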

However, I do see one big problem. Most UBI proposals are funded out of local general tax revenue, while the income of a HLAI economy might be quite unevenly distributed around the globe. The smaller the political unit considering a UBI, the worse this problem gets. Better insurance would come from a UBI that is funded out of a diversified global investment portfolio. But that isn’t usually how governments fund things. What to do?

A solution that occurs to me is to push for a World Basic Income (WBI). That is, try to create and grow a coalition of nations that implement a common basic income level, supported by a shared set of assets and contributions. I’m not sure how to set up the details, but citizens in any of these nations should get the same untaxed basic income, even if they face differing taxes on incomes above this level. And this alliance of nations would commit somehow to sharing some pool of assets and revenue to pay for this common basic income, so that everyone could expect to continue to receive their WBI even after an uneven disruptive HLAI revolution.

Yes, richer member nations of this alliance could achieve less local poverty reduction, as the shared WBI level couldn’t be above what the poor member nations could afford. But a common basic income should make it easier to let citizens move within this set of nations. You’d have less reason to worry about poor folks moving to your nation to take advantage of your poverty programs. And the more that poverty reduction were implemented via WBI, the bigger this advantage would be.

Yes, this seems a tall order, probably too tall. Probably nations won’t prepare, and will then respond to a HLAI transition slowly, and only with whatever resources they have at their disposal, which in some places will be too little. Which is why I recommend that individuals and smaller groups try to arrange their own assets, insurance, and sharing. Yes, it won’t be needed for a while, but if you wait until the signs of something big soon are clear, it might then be too late.


Community Watchers

In my youth, I was skeptical of things I could not see. Like community social health. Not just physical health, but social health, and not just of individuals, but of communities. But now that I am older and can see more, I am convinced: communities exist, and matter. Not just very visible things like jobs, parks, houses, and stores. But harder to see coalitions, cultures, and norms that influence how people feel about and treat each other.

In some places people more often see when someone is hurting, and try to help. Or stop predators on the prowl. Or see other big changes for mutual gain, and coordinate to achieve them. In other places, these happen less. This sort of community health varies not just from city to city, or firm to firm, but from block to block, and from one cubicle row to another.

If you live in a place for a while, and you are mature enough to see the local social fabric, then you may see your local social health. And while you might want government to help with this, distant government officials managed by and via formal rules can’t do much. Sincere competent local community activists can do more. But while some can choose to become these, it can be hard for others to tell who they are, to support them. What else can we do?

Many people like to travel, and wish somehow to combine travel with doing good. Many also like the idea of secret societies, especially ones devoted to noble causes. I see an opening here for a secret society of travelers devoted to improving community social health.

The idea is simple: a secret society evaluates the local health of communities they visit, and combines these ratings into a public map. If this map came to be seen as reliable, it could shame poor communities into doing more to improve their health. With residents preferring to move to better communities, land owners would gain stronger incentives to promote improvements.

This would not be easy. Society members must be socially perceptive, stay long enough at each place to evaluate well, overcome temptations to push various other agendas and biases in their evaluations, and avoid detection. And they must find ways to collect new similarly virtuous members, even after their society becomes prestigious. This is a tall order.

But the payoff could be huge: healthier communities. If you try to create this, my only advice is: first collect a big enough map in secret and then test it in many ways for accuracy before going public. It isn’t enough that you hope you will be able to do this; wait until you have actually done it.

From a July 14 conversation with Pete Bertine and Andrew Lockhart.


Grace-Hanson Podcasts

Katja Grace and I recorded two more podcasts:

This adds to our nine previous podcasts:


AI As Software Grant

While I’ve been part of grants before, and had research support, I’ve never had support for my futurist work, including the years I spent writing Age of Em. That now changes:

The Open Philanthropy Project awarded a grant of $264,525 over three years to Robin Hanson (Associate Professor of Economics, George Mason University) to analyze potential scenarios in the future development of artificial intelligence (AI). Professor Hanson plans to focus on scenarios in which AI is developed through the steady accumulation of individual pieces of software and leads to a “multipolar” outcome. .. This grant falls within our work on potential risks from advanced artificial intelligence, one of our focus areas within global catastrophic risks. (more)

Who is Open Philanthropy? From their summary:

Good Ventures is a philanthropic foundation whose mission is to help humanity thrive. Good Ventures was created by Dustin Moskovitz (co-founder of Facebook and Asana) and Cari Tuna, who have pledged to give the majority of their wealth to charity. .. GiveWell is a nonprofit that finds outstanding giving opportunities and publishes the full details of its analysis to help donors decide where to give. .. The Open Philanthropy Project is a collaboration between Good Ventures and GiveWell in which we identify outstanding giving opportunities, make grants, follow the results, and publish our findings.

A key paragraph from my proposal:

Robin Hanson proposes to take three years to conduct a broad positive analysis of the multipolar scenario wherein AI results from relatively steady accumulation of software tools. That is, he proposes to assume that human level AI will result mainly from the continued accumulation of software tools and packages, with distributions of cost and value correlations similar to those seen so far in software practice, in an environment where no one actor dominates the process of creating or fielding such software. He will attempt a mostly positive analysis of the social consequences of these assumptions, both during and after a transition to a world dominated by AI. While this is hardly the universe of all desired analyses, it does seem to cover a non-trivial fraction of interesting cases.

I and they see value in such an analysis even if AI software ends up differing systematically from the software we’ve seen so far:

While we do not believe that the class of scenarios that Professor Hanson will be analyzing is necessarily the most likely way for future AI development to play out, we expect his research to contribute a significant amount of useful data collection and analysis that might be valuable to our thinking about AI more generally, as well as provide a model for other people to follow when performing similar analyses of other AI scenarios of interest.

My idea is to extract from our decades of experience with software a more detailed description of the basic economics of software production and use. To distinguish, as time allows, many different kinds of inputs to production, styles of production, parts of produced products, and types of uses. And then to sketch out different rough “production functions” appropriate to different cases. That is, to begin to translate basic software engineering insight into economics language.

The simple assumption that software doesn’t fundamentally change in the future is the baseline scenario, to be fed into standard economic models to see what happens when such a more richly described software sector slowly grows to take over the economy. But a richer more detailed description of software economics can also give people a vocabulary for describing their alternative hypotheses about how software will change. And then this analysis framework can be adjusted to explore such alternative hypotheses.
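As a minimal sketch of the kind of exercise described (my toy model, not the proposal's actual analysis): feed a growing software-tool stock into a simple two-sector production setup and watch the software sector's share of output slowly rise toward dominance.

```python
def output_shares(years, tool_growth=1.2, human_output=100.0, tools0=1.0):
    """Toy two-sector model: software output is proportional to an
    accumulating tool stock growing by factor `tool_growth` per year,
    while human-labor output stays flat.  Returns the software sector's
    share of total output for each year."""
    shares = []
    tools = tools0
    for _ in range(years):
        software_output = tools
        shares.append(software_output / (software_output + human_output))
        tools *= tool_growth
    return shares

shares = output_shares(60)
# The software share starts near zero and slowly approaches one.
print(round(shares[0], 3), round(shares[-1], 3))
```

Richer versions would distinguish kinds of software inputs, products, and uses, as the proposal describes; alternative hypotheses about how software changes could then be expressed as changes to such production functions.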

So right from the start I’d like to offer this challenge:

Do you believe that the software that will let machines eventually do pretty much all jobs better than humans (or ems) will differ in foreseeable systematic ways from the software we have seen in the last seventy years of software practice? If so, please express your difference hypothesis as clearly as possible in terminology that would be understandable and familiar to software engineers and/or economists.

I will try to stretch the economic descriptions of software that I develop in the direction of encompassing the most common such hypotheses I find.


Merkle’s Futarchy

My futarchy paper, Shall We Vote on Values But Bet on Beliefs?, made public in 2000 but officially “published” in 2013, has gotten more attention lately as some folks talk about using it to govern blockchain organizations. In particular, Ralph Merkle (co-inventor of public key cryptography) has a recent paper on using futarchy within “Decentralized Autonomous Organizations.”

I tried to design my proposal carefully to avoid many potential problems. But Merkle seems to have thrown many of my cautions to the wind. So let me explain my concerns with his variations.

First, I had conservatively left existing institutions intact for Vote on Values; we’d elect representatives to oversee the definition and measurement of a value metric. Merkle instead has each citizen report, each year, a number in [0,1] saying how well their life has gone that year:

Annually, all citizens are asked to rank the year just passed between 0 and 1 (inclusive). .. it is intended to provide information about one person’s state of satisfaction with the year that has just passed. .. Summed over all citizens and divided by the number of citizens, this gives us an annual numerical metric between 0 and 1 inclusive. .. An appropriately weighted sum of annual collective welfares, also extending indefinitely into the future, would then give us a “democratic collective welfare” metric. .. adopting a discount rate seems like at least a plausible heuristic. .. To treat their death: .. ask the person who died .. ask before they die. .. [this] eliminates the need to evaluate issues and candidates. The individual citizen is called upon only to determine whether the year has been good or bad for themselves. .. We’ve solved .. the need to wade through deceptive misinformation.
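The arithmetic Merkle describes is simple enough to write down directly. The sketch below is only an illustration of that quoted formula; the discount rate and the sample reports are made up here, not taken from his paper.

```python
# Illustration of the "democratic collective welfare" metric Merkle describes:
# each year's collective welfare is the mean of citizens' [0, 1] reports, and
# the overall metric is a discounted sum of those annual means. The discount
# rate and the sample reports below are invented purely for illustration.

def annual_welfare(reports):
    """Mean of one year's citizen reports, each in [0, 1]."""
    if not all(0.0 <= r <= 1.0 for r in reports):
        raise ValueError("reports must lie in [0, 1]")
    return sum(reports) / len(reports)

def collective_welfare(yearly_reports, discount=0.95):
    """Discounted sum of annual collective welfares, year 0 first."""
    return sum(discount**t * annual_welfare(reports)
               for t, reports in enumerate(yearly_reports))

years = [[0.8, 0.6, 1.0], [0.7, 0.5, 0.9]]  # two years, three citizens
w = collective_welfare(years)  # 0.8 + 0.95 * 0.7 = 1.465
```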

Yes, it could be easy to decide how your last year has gone, even if it is harder to put that on a scale from worst to best possible. But reporting that number is not your best move here! Your optimal strategy is almost surely “bang-bang”, i.e., reporting either 0 or 1. And you’ll probably want to give the same consistent answer year after year. So this is basically a vote, except on “was this last year a good or a bad year?”, which in practice becomes a vote on “has my life been good or bad over the last decades?” Each voter must pick a threshold where they switch their vote from good to bad, a big binary choice that seems ripe for strong emotional distortions. That might work, but it is pretty far from what voters have done before, so a lot of voter learning would be needed.
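The bang-bang claim follows from simple arithmetic: the annual mean is linear in each citizen’s report, with weight 1/N, so whichever direction you want to push the metric, an extreme report of 0 or 1 pushes it furthest. A toy check, with invented numbers:

```python
# A citizen who wants to push the annual mean up (or down) maximizes their
# influence by reporting 1 (or 0): the mean is linear in each report, so
# extreme reports move it furthest. Numbers are invented for illustration.

others = [0.4, 0.7, 0.5, 0.6]  # everyone else's reports

def mean_with(my_report):
    reports = others + [my_report]
    return sum(reports) / len(reports)

honest = mean_with(0.6)   # truthful report  -> mean 0.56
extreme = mean_with(1.0)  # strategic report -> mean 0.64

# No interior report beats the extreme for pushing the mean upward.
assert extreme > honest
assert all(mean_with(1.0) >= mean_with(r / 10) for r in range(11))
```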

I’m much more comfortable with futarchy that uses value metrics tied to the reason an organization exists. Such as using the market price of an investment to manage that investment, attendance to manage a conference, or people helped (& how much) to manage a charity.

If there are too many bills on the table at any one time for speculators to consider, many bad ones can slip through and have effects before bills to reverse them can be proposed and adopted. So I suggested starting with a high bar for bills, but allowing new bills to lower the bar. Merkle instead starts with a very low bar that could be raised, and I worry about all the crazy bills that might pass before the bar rises:

Initially, anyone can propose a bill. It can be submitted at any time. .. At any time, anyone can propose a new method of adopting a bill. It is evaluated and put into effect using the existing methods. .. Suppose we decided that it would improve the stability of the system if all bills had a mandatory minimum consideration period of three months before they could be adopted. Then we would pass a bill modifying the DAO to include this provision.

I worried that the betting process itself could bias the basic rules, so I set basic voting and process rules off limits to bet-driven changes, and set up an independent judiciary to judge whether rules are followed. Merkle instead allows this basic bet process to change all the rules, and all the judges, which seems to me to risk self-supporting rule changes:

How the survey is conducted, and what instructions are provided, and the surrounding publicity and environment, will all have a great impact on the answer. .. The integrity of the annual polls would be protected only if, as a consequence, it threatened the lives or the well-being of the citizens. .. The simplest approach would be to appoint, as President, that person the prediction market said had the highest positive impact on the collective welfare if appointed as President. .. Similar methods could be adopted to appoint the members of the Supreme Court.

Finally, I said explicitly that when the value formula changes, all the previous definitions must continue to be calculated to pay off past bets. It isn’t clear to me that Merkle adopts this, or if he allows the bet process to change value definitions, which also seems to me to risk self-supporting changes:

We leave the policy with respect to new members, and to births, to our prediction market. .. difficult to see how we could justify refusing to adopt a policy that accepts some person, or a new born child, as a member, if the prediction market says the collective welfare of existing members will be improved by adopting such a policy. .. Of greater concern are changes to the Democratic Collective Welfare metric. Yet even here, if the conclusion reached by the prediction market is that some modification of the metric will better maximize the original metric, then it is difficult to make a case that such a change should be banned.

I’m happy to see the new interest in futarchy, but I’m also worried that sloppy design may cause failures that are blamed on the overall concept instead of on implementation details. As recently happened to the DAO concept.


Me Soon In Bay Area, DC, NYC

Folks near New York City, Washington DC, or the California Bay Area, consider seeing an upcoming Age of Em talk. (I’ll add more specific links as I get them.)

CA Bay Area

July 9, 10a-7p, Oakland, BIL Oakland
Aug 1, 1p, Mountain View, Benghazi Tech Talk, Google
Aug 2, 5p, Mountain View, RethinkDB
Aug 3, 7p, Oakland, Oakland Futurists
Aug 5-7, Berkeley, Effective Altruism Global
Aug 8, 7p, Palo Alto, Stanford Effective Altruism

Washington DC

July 23, 8a, World Future Society
July 26, 6p, Prosperity Caucus

New York City

July 12, 4:35p, Brooklyn, TTI/Vanguard
July 13, 7p, Brooklyn, Loft67
