Tag Archives: Tech

Radical Markets

In 1997, I got my Ph.D. in social science from Caltech. The topic that drew me into grad school, and much of what I studied, was mechanism and institution design: how to redesign social practices and institutions. Economists and related scholars know a lot about this, much of which is useful for reforming many areas of life. Alas, the world shows little interest in these reforms, and I’ve offered our book The Elephant in the Brain: Hidden Motives in Everyday Life, as a partial explanation: most reforms are designed to give us more of what we say we want, and at some level we know we really want something else. While social design scholars would do better to work more on satisfying hidden motives, there’s still much useful in what they’ve already learned.

Oddly, most people who say they are interested in radical social change don’t study this literature much, and people in this area don’t much consider radical change. Which seems a shame; these tools are a good foundation for such efforts, and the topic of radical change has long attracted wide interest. I’ve tried to apply these tools to consider big change, such as with my futarchy proposal.

I’m pleased to report that two experts in social design have a new book, Radical Markets: Uprooting Capitalism and Democracy for a Just Society:

The book reveals bold new ways to organize markets for the good of everyone. It shows how the emancipatory force of genuinely open, free, and competitive markets can reawaken the dormant nineteenth-century spirit of liberal reform and lead to greater equality, prosperity, and cooperation. … Only by radically expanding the scope of markets can we reduce inequality, restore robust economic growth, and resolve political conflicts. But to do that, we must replace our most sacred institutions with truly free and open competition—Radical Markets shows how.

While I applaud the ambition of the book, and hope to see more like it, the five big proposals of the book vary widely in quality. They put their best feet forward, and it goes downhill from there. Continue reading "Radical Markets" »


Prediction Machines

One of my favorite books of the dotcom era was Information Rules, by Shapiro and Varian in 1998. At the time, tech boosters were saying that all the old business rules were obsolete, and anyone who disagreed “just doesn’t get it.” But Shapiro and Varian showed in detail how to understand the new internet economy in terms of standard economic concepts. They were mostly right, and Varian went on to become Google’s chief economist.

Today many tout a brave new AI-driven economic revolution, with some predicting radical change. For example, a widely cited 2013 paper said:

47% of total US employment is in the high risk category … potentially automatable over … perhaps a decade or two.

Five years later, we haven’t yet seen changes remotely this big. And a new book is now a worthy successor to Information Rules:

In Prediction Machines, three eminent economists recast the rise of AI as a drop in the cost of prediction. With this single, masterful stroke, they lift the curtain on the AI-is-magic hype and show how basic tools from economics provide clarity about the AI revolution and a basis for action by CEOs, managers, policy makers, investors, and entrepreneurs.

As with Information Rules, these authors mostly focus on guessing the qualitative implications of such prediction machines. That is, they don’t say much about likely rates or magnitudes of change, but instead use basic economic analysis to guess likely directions of change. (Many example quotes below.) And I can heartily endorse almost all of these good solid guesses about change directions. A change in the cost of prediction is a fine way to frame recent tech advances, and if you want to figure out what they imply for your line of business, this is the book for you.

However, the book does at times go beyond estimating impact directions. It says “this time is different”, suggests “extraordinary changes over the next few years”, says an AI-induced recession might result from a burst of new tech, and says the eventual impact of this tech will be similar to that of computers in general so far:

Everyone has had or will soon have an AI moment. We are accustomed to a media saturated with stories of new technologies that will change our lives. … Almost all of us are so used to the constant drumbeat of technology news that we numbly recite that the only thing immune to change is change itself. Until we have our AI moment. Then we realize that this technology is different. p.2

In various ways, prediction machines can “use language, form abstractions and concepts, solve the kinds of problem now [as of 1955] reserved for humans, and improve themselves.” We do not speculate on whether this process heralds the arrival of general artificial intelligence, “the Singularity”, or Skynet. However, as you will see, this narrower focus on prediction still suggests extraordinary changes over the next few years. Just as cheap arithmetic enabled by computers proved powerful in ushering in dramatic change in business and personal lives, similar transformations will occur due to cheap prediction. p.39

Once an AI is better than humans at a particular task, job losses will happen quickly. We can be confident that new jobs will arise within a few years and people will have something to do, but that will be little comfort for those looking for work and waiting for those new jobs to appear. An AI-induced recession is not out of the question. p.212

And they offer a motivating example that would require pretty advanced tech:

At some point, as it turns the knob, the AI’s prediction accuracy crosses a threshold, changing Amazon’s business model. The prediction becomes sufficiently accurate that it becomes more profitable for Amazon to ship you the goods that it predicts you will want rather than wait for you to order them. p.16
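To make the threshold in that quote concrete, here is a minimal back-of-the-envelope sketch; the margin, return-cost, and accuracy numbers are invented for illustration and do not come from the book.

```python
# A minimal, made-up sketch of the "ship it before they order it" threshold.
# None of these numbers come from the book; they only show the shape of the calculation.

def breakeven_accuracy(margin, return_cost):
    """Accuracy p at which shipping an unordered item has zero expected profit:
    p * margin - (1 - p) * return_cost = 0  =>  p = return_cost / (margin + return_cost)."""
    return return_cost / (margin + return_cost)

margin = 5.0        # hypothetical profit if the customer keeps the item
return_cost = 8.0   # hypothetical cost of shipping back and restocking a refused item

p_star = breakeven_accuracy(margin, return_cost)
print(f"ship-ahead breaks even at about {p_star:.0%} accuracy")  # ~62% with these made-up numbers

# Beating the wait-for-an-order model also requires that pre-shipping create extra
# value (more or earlier purchases, cheaper aggregated delivery), since waiting
# avoids all return costs. That is the business-model change the quote gestures at.
```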

I can’t endorse any of these suggestions about magnitudes and rates of change. I estimate much smaller and slower change. But the book doesn’t argue for any of these claims, it more assumes them, and so I won’t bother to argue the topic here either. The book only mentions radical scenarios a few more times:

But is this time different? Hawking’s concern, shared by many, is that this time might be unusual because AI may squeeze out the last remaining advantages humans have over machines. How might an economist approach this question? … If you favor free trade between countries, then you … support developing AI, even if it replaces some jobs. Decades of research into the effect of trade show that other jobs will appear, and overall employment will not plummet. p.211

For years, economists have faced criticism that the agents on which we base our theories are hyper-rational and unrealistic models of human behavior. True enough, but when it comes to superintelligence, that means we have been on the right track. … Thus economics provides a powerful way to understand how a society of superintelligent AIs will evolve. p.222

Yes, research is underway to make prediction machines work in broader settings, but the breakthrough that will give rise to general artificial intelligence remains undiscovered. Some believe that AGI is so far out that we should not spend cycles worrying about it. … As with many AI-related issues, the future is highly uncertain. Is this the end of the world as we know it? Not yet, but it is the end of this book. Companies are deploying AIs right now. In applying the simple economics that underpin lower-cost prediction and higher-value complements to prediction, your business can make ROI-optimizing choices and strategic decisions with regard to AI. When we move beyond prediction machines to general artificial intelligence or even superintelligence, whatever that may be, then we will be at a different AI moment. That is something everyone agrees upon. p.223

As you can see, they don’t see radical scenarios as coming soon, nor see much urgency regarding them. A stance I’m happy to endorse. And I also endorse all those insightful qualitative change estimates, as illustrated by these samples: Continue reading "Prediction Machines" »


Hazlett’s Political Spectrum

I just read The Political Spectrum by Tom Hazlett, which took me back to my roots. Well over three decades ago, I was inspired by Technologies of Freedom by Ithiel de Sola Pool. He made the case both that great things were possible with tech, and that the FCC has mismanaged the spectrum. In grad school twenty years ago, I worked on FCC auctions, and saw mismanagement behind the scenes.

When I don’t look much at the details of regulation, I can sort of think that some of it goes too far, and some not far enough; what else should you expect from a noisy process? But reading Hazlett I’m overwhelmed by just how consistently terrible spectrum regulation has been. Not only would everything have been much better without FCC regulation, it actually was much better before the FCC! Herbert Hoover, who was head of the US Commerce Department at the time, broke the spectrum in order to then “save” it, a move that probably helped him rise to the presidency:

“Before 1927,” wrote the U.S. Supreme Court, “the allocation of frequencies was left entirely to the private sector . . . and the result was chaos.” The physics of radio frequencies and the dire consequences of interference in early broadcasts made an ordinary marketplace impossible, and radio regulation under central administrative direction was the only feasible path. “Without government control, the medium would be of little use because of the cacaphony [sic] of competing voices.”

This narrative has enabled the state to pervasively manage wireless markets, directing not only technology choices and business decisions but licensees’ speech. Yet it is not just the spelling of cacophony that the Supreme Court got wrong. Each of its assertions about the origins of broadcast regulation is demonstrably false. ..

The chaos and confusion that supposedly made strict regulation necessary were limited to a specific interval—July 9, 1926, to February 23, 1927. They were triggered by Hoover’s own actions and formed a key part of his legislative quest. In effect, he created a problem in order to solve it. ..

Radio broadcasting began its meteoric rise in 1920–1926 under common-law property rules .. defined and enforced by the U.S. Department of Commerce, operating under the Radio Act of 1912. They supported the creation of hundreds of stations, encouraged millions of households to buy (or build) expensive radio receivers. .. The Commerce Department .. designated bands for radio broadcasting. .. In 1923, .. [it] expanded the number of frequencies to seventy, and in 1924, to eighty-nine channels .. [Its] second policy was a priority-in-use rule for license assignments. The Commerce Department gave preference to stations that had been broadcasting the longest. This reflected a well-established principle of common law. ..

Hoover sought to leverage the government’s traffic cop role to obtain political control. .. In July 1926, .. Hoover announced that he would .. abandon Commerce’s powers. .. Commerce issued a well-publicized statement that it could no longer police the airwaves. .. The roughly 550 stations on the air were soon joined by 200 more. Many jumped channels. Conflicts spread, annoying listeners. Meanwhile, Commerce did nothing. ..

Now Congress acted. An emergency measure .. mandated that all wireless operators immediately waive any vested rights in frequencies ..  the Radio Act … provided for allocation of wireless licenses according to “public interest”.  .. With the advent of the Federal Radio Commission in 1927, the growth of radio stations—otherwise accommodated by the rush of technology and the wild embrace of a receptive public—was halted. The official determination was that less broadcasting competition was demanded, not more.

That was just the beginning. The book documents so much more that has gone very wrong. Even today, vast valuable spectrum is wasted broadcasting TV signals that almost no one uses, as most everyone gets cable TV. In addition,

The White House estimates that nearly 60 percent of prime spectrum is set aside for federal government use .. [this] substantially understates the amount of spectrum it consumes.

Sometimes people argue that we need an FCC to say who can use which spectrum because some public uses are needed. After all, not all land can be private, as we need public parks. Hazlett says we don’t use a federal agency to tell everyone who gets which land. Instead the public buys general land to create parks. Similarly, if the government needs spectrum, it can buy it just like everyone else. Then we’d know a lot better how much any given government action that uses spectrum is actually costing us.

Is the terrible regulation of spectrum an unusual case, or is most regulation that bad? One plausible theory is that we are more willing to believe that a strange complex tech needs regulating, and so such things tend to be regulated worse. This fits with nuclear power and genetically modified food, as far as I understand them. Social media has so far escaped regulation because it doesn’t seem strange – it seems simple and easy to understand. It has complexities of course, but behind the scenes.


This AI Boom Will Also Bust

Imagine an innovation in pipes. If this innovation were general, something that made all kinds of pipes cheaper to build and maintain, the total benefits could be large, perhaps even comparable to the total amount we spend on pipes today. (Or even much larger.) And if most of the value of pipe use were in many small uses, then that is where most of these economic gains would be found.

In contrast, consider an innovation that only improved the very largest pipes. This innovation might, for example, cost a lot to use per meter of pipe, and so only make sense for the largest pipes. Such an innovation might make for very dramatic demonstrations, with huge vivid pipes, and so get media coverage. But the total economic gains here will probably be smaller; as most of pipe value is found in small pipes, gains to the few biggest pipes can only do so much.
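To put rough numbers on that contrast, here is a toy sketch; the lognormal distribution of pipe values and all the improvement percentages are my assumptions, chosen only to illustrate the argument.

```python
# Toy comparison: a modest gain on all pipes vs. a dramatic gain confined to
# the very largest pipes. The lognormal size distribution and the percentages
# are assumptions chosen only to illustrate the argument.
import numpy as np

rng = np.random.default_rng(0)
pipe_values = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)  # value from each pipe's use
total = pipe_values.sum()

general_gain = 0.30 * total                               # a 30% improvement on every pipe
top = pipe_values >= np.quantile(pipe_values, 0.99)       # the largest 1% of pipes
big_pipe_gain = 1.00 * pipe_values[top].sum()             # even a 100% improvement, big pipes only

print(f"top 1% of pipes hold {pipe_values[top].sum() / total:.0%} of value")
print(f"30% gain on all pipes:    {general_gain / total:.0%} of total value")
print(f"100% gain on largest 1%:  {big_pipe_gain / total:.0%} of total value")
```

With most value spread over many small pipes, even doubling the value of the biggest pipes is capped by their modest share of the total.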

Now consider my most viral tweet so far:

This got almost universal agreement from those who see such issues play out behind the scenes. And by analogy with the pipe innovation case, this fact tells us something about the potential near-term economic impact of recent innovations in Machine Learning. Let me explain.

Most firms have piles of data they aren’t doing much with, and far more data that they could collect at a modest cost. Sometimes they use some of this data to predict a few things of interest. Sometimes this creates substantial business value. Most of this value is achieved, as usual, in the simplest applications, where simple prediction methods are applied to simple small datasets. And the total value achieved is only a small fraction of the world economy, at least as measured by income received by workers and firms who specialize in predicting from data.

Many obstacles limit such applications. For example, the value of better predictions for related decisions may be low, data may be in a form poorly suited to informing predictions, making good use of predictions might require larger reorganizations, and organizations that hold parts of the data may not want to lose control of that data. Available personnel may lack sufficient skills to apply the most effective approaches for data cleaning, merging, analysis, and application.

No doubt many errors are made in choices of when to analyze what data how much and by whom. Sometimes they will do too much prediction, and sometimes too little. When tech changes, orgs will sometimes wait too long to try new tech, and sometimes will not wait long enough for tech to mature. But in ordinary times, when the relevant technologies improve at steady known rates, we have no strong reason to expect these choices to be greatly wrong on average.

In the last few years, new “deep machine learning” prediction methods are “hot.” In some widely publicized demonstrations, they seem to allow substantially more accurate predictions from data. Since they shine more when data is plentiful, and they need more skilled personnel, these methods are most promising for the largest prediction problems. Because of this new fashion, at many firms those who don’t understand these issues well are pushing subordinates to seek local applications of these new methods. Those subordinates comply, at least in appearance, in part to help themselves and their organization appear more skilled.

One result of this new fashion is that a few big new applications are being explored, in places with enough data and potential prediction value to make them decent candidates. But another result is the one described in my tweet above: fashion-induced overuse of more expensive new methods on smaller problems to which they are poorly matched. We should expect this second result to produce a net loss on average. The size of this loss could be enough to outweigh all the gains from the few big new applications; after all, most value is usually achieved in many small problems.

But I don’t want to draw a conclusion here about the net gain or loss. I instead want to consider the potential for this new prediction tech to have an overwhelming impact on the world economy. Some see this new fashion as just the first swell of a tsunami that will soon swallow the world. For example, in 2013 Frey and Osborne famously estimated:

About 47 percent of total US employment is at risk .. to computerisation .. perhaps over the next decade or two.

If new prediction techs induced a change that big, they would be creating a value that is a substantial fraction of the world economy, and so consume a similar fraction of world income. If so, the prediction industry would in a short time become vastly larger than it is today. If today’s fashion were the start of that vast growth, we should not only see an increase in prediction activity, we should also see an awe-inspiring rate of success within that activity. The application of these new methods should be enabling huge new revenue streams, across a very wide range of possible application areas. (Added: And the prospect of that should be increasing stock values in this area far more than we’ve seen.)

But I instead hear that within the areas where most prediction value lies, most attempts to apply this new tech actually produce less net value than would be achieved with old tech. I hear that prediction analysis tech is usually not the most important part of the process, and that a recent obsession with showing proficiency in this new analysis tech has led to neglect of the more important and basic issues of thinking carefully about what you might want to predict with what data, and then carefully cleaning and merging your data into a more useful form.

Yes, there must be exceptions, and some of those may be big. So a few big applications may enable big value. And self-driving cars seem a plausible candidate, a case where prediction is ready to give large value, high enough to justify using the most advanced prediction tech, and where lots of the right sort of data is available. But even if self-driving vehicles displace most drivers within a few decades, that rate of job automation wouldn’t be out of the range of our historical record of job automation. So it wouldn’t show that “this time is different.” To be clearly out of that range, we’d need another ten jobs that big also displaced in the same period. And even that isn’t enough to automate half of all jobs in two decades.
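The rough arithmetic behind that last claim looks like this; the figure for driving as a share of US employment is my ballpark assumption, not a number from the post or the Frey and Osborne paper.

```python
# Rough arithmetic behind "another ten jobs that big". The ~3% driver share of
# US employment is a ballpark assumption for illustration, not a cited figure.
driver_share = 0.03            # driving jobs as a rough fraction of US employment
categories_needed = 0.47 / driver_share
print(f"job categories the size of driving needed to reach 47%: {categories_needed:.0f}")
print(f"driving plus ten more categories that big: {11 * driver_share:.0%} of jobs")
# Even eleven driving-sized categories automated in two decades falls short of half of all jobs.
```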

The bottom line here is that while some see this new prediction tech as like a new pipe tech that could improve all pipes, no matter their size, it is actually more like a tech only useful on very large pipes. Just as it would be a waste to force a pipe tech only useful for big pipes onto all pipes, it can be a waste to push advanced prediction tech onto typical prediction tasks. And the fact that this new tech is mainly only useful on rare big problems suggests that its total impact will be limited. It just isn’t the sort of thing that can remake the world economy in two decades. To the extent that the current boom is based on such grand hopes, this boom must soon bust.


Merkle’s Futarchy

My futarchy paper, Shall We Vote on Values But Bet on Beliefs?, made public in 2000 but officially “published” in 2013, has gotten more attention lately as some folks talk about using it to govern blockchain organizations. In particular, Ralph Merkle (co-inventor of public key cryptography) has a recent paper on using futarchy within “Decentralized Autonomous Organizations.”

I tried to design my proposal carefully to avoid many potential problems. But Merkle seems to have thrown many of my cautions to the wind. So let me explain my concerns with his variations.

First, I had conservatively left existing institutions intact for Vote on Values; we’d elect representatives to oversee the definition and measurement of a value metric. Merkle instead has each citizen each year report a number in [0,1] saying how well their life has gone that year:

Annually, all citizens are asked to rank the year just passed between 0 and 1 (inclusive). .. it is intended to provide information about one person’s state of satisfaction with the year that has just passed. .. Summed over all citizens and divided by the number of citizens, this gives us an annual numerical metric between 0 and 1 inclusive. .. An appropriately weighted sum of annual collective welfares, also extending indefinitely into the future, would then give us a “democratic collective welfare” metric. .. adopting a discount rate seems like at least a plausible heuristic. .. To treat their death: .. ask the person who died .. ask before they die. .. [this] eliminates the need to evaluate issues and candidates. The individual citizen is called upon only to determine whether the year has been good or bad for themselves. .. We’ve solved .. the need to wade through deceptive misinformation.

Yes, it could be easy to decide how your last year has gone, even if it is harder to put that on a scale from worst to best possible. But reporting that number is not your best move here! Your optimal strategy here is almost surely “bang-bang”, i.e., reporting either 0 or 1. And you’ll probably want to usually give the same consistent answer year after year. So this is basically a vote, except on “was this last year a good or a bad year?”, which in practice becomes a vote on “has my life been good or bad over the last decades.” Each voter must pick a threshold where they switch their vote from good to bad, a big binary choice that seems ripe for strong emotional distortions. That might work, but it is pretty far from what voters have done before, so a lot of voter learning is needed.
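To see why the optimum is at the extremes: the collective welfare number is just the average of all reports, so it moves linearly in your own report, and any influence you want over it is maximized by reporting 0 or 1. Here is a tiny sketch of that point; the population size and the other citizens' reports are invented numbers, and Merkle's paper specifies only the averaging.

```python
# Why "bang-bang" reporting: the collective metric is an average, so it is linear
# in your own report, and any influence you want on it is maximized at 0 or 1.
# The numbers below are invented; Merkle's proposal specifies only the averaging.

N = 1_000_000                  # citizens (hypothetical)
others_sum = 550_000.0         # hypothetical sum of everyone else's reports

def metric(my_report):
    return (others_sum + my_report) / N   # the annual collective welfare number

# Your maximum possible influence on the metric is metric(1) - metric(0) = 1/N:
print("maximum influence of one report:", metric(1.0) - metric(0.0))

# Any intermediate report just gives up part of that influence:
for r in (0.0, 0.3, 0.7, 1.0):
    print(f"report {r}: metric = {metric(r):.8f}")
```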

I’m much more comfortable with futarchy that uses value metrics tied to the reason an organization exists. Such as using the market price of investment to manage an investment, attendance to manage a conference, or people helped (& how much) to manage a charity.

If there are too many bills on the table at any one time for speculators to consider, many bad ones can slip through and have effects before bills to reverse them can be proposed and adopted. So I suggested starting with a high bar for bills, but allowing new bills to lower the bar. Merkle instead starts with a very low bar that could be raised, and I worry about all the crazy bills that might pass before the bar rises:

Initially, anyone can propose a bill. It can be submitted at any time. .. At any time, anyone can propose a new method of adopting a bill. It is evaluated and put into effect using the existing methods. .. Suppose we decided that it would improve the stability of the system if all bills had a mandatory minimum consideration period of three months before they could be adopted. Then we would pass a bill modifying the DAO to include this provision.

I worried that the basic betting process could bias the basic rules, so I set basic voting and process rules off limits from bet changes, and set an independent judiciary to judge if rules are followed. Merkle instead allows this basic bet process to change all the rules, and all the judges, which seems to me to risk self-supporting rule changes:

How the survey is conducted, and what instructions are provided, and the surrounding publicity and environment, will all have a great impact on the answer. .. The integrity of the annual polls would be protected only if, as a consequence, it threatened the lives or the well-being of the citizens. .. The simplest approach would be to appoint, as President, that person the prediction market said had the highest positive impact on the collective welfare if appointed as President. .. Similar methods could be adopted to appoint the members of the Supreme Court.

Finally, I said explicitly that when the value formula changes then all the previous definitions must continue to be calculated to pay off past bets. It isn’t clear to me that Merkle adopts this, or if he allows the bet process to change value definitions, which also seems to me to risk self-supporting changes:

We leave the policy with respect to new members, and to births, to our prediction market. .. difficult to see how we could justify refusing to adopt a policy that accepts some person, or a new born child, as a member, if the prediction market says the collective welfare of existing members will be improved by adopting such a policy. .. Of greater concern are changes to the Democratic Collective Welfare metric. Yet even here, if the conclusion reached by the prediction market is that some modification of the metric will better maximize the original metric, then it is difficult to make a case that such a change should be banned.

I’m happy to see the new interest in futarchy, but I’m also worried that sloppy design may cause failures that are blamed on the overall concept instead of on implementation details. As recently happened to the DAO concept.


Lognormal Jobs

I often meet people who think that because computer tech is improving exponentially, its social impact must also be exponential. So as soon as we see any substantial social impact, watch out, because a tsunami is about to hit. But it is quite plausible to have exponential tech gains translate into only linear social impact. All we need is a lognormal distribution, as in this diagram:

[Figure LogNormalJobs: lognormal distribution of the computing power needed for computers to displace humans on each job]

Imagine that each kind of job that humans do requires a particular level of computing power in order for computers to replace humans on that job. And imagine that these job power levels are distributed lognormally.

In this case an exponential growth in computing power will translate into a linear rate at which computers displace humans on jobs. Of course jobs may clump along this log-computing-power axis, giving rise to bursts and lulls in the rate at which computers displace jobs. But over the long run we could see a relatively steady rate of job displacement even with exponential tech gains. Which I’d say is roughly what we do see.
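A small simulation shows the effect; the lognormal parameters and the hardware growth rate below are arbitrary choices, picked only to display the shape of the curve.

```python
# Exponential hardware growth plus lognormal job thresholds gives a roughly
# steady displacement rate. All parameters are arbitrary illustration choices.
import numpy as np

rng = np.random.default_rng(0)
# Computing power (arbitrary units) needed to displace humans on each job:
job_thresholds = rng.lognormal(mean=20.0, sigma=15.0, size=100_000)

for year in range(0, 61, 10):
    power = 2.0 ** year                      # hardware doubling every year (arbitrary rate)
    displaced = np.mean(job_thresholds <= power)
    print(f"year {year:2d}: fraction of jobs displaced = {displaced:.2f}")
# The printed fraction rises by a fairly similar amount each decade through the
# middle of the distribution, even though hardware power grows exponentially.
```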

Added 3am: Many things are distributed lognormally.


Investors Not Barking

Detective: “Is there any other point to which you would wish to draw my attention?”

Holmes: “To the curious incident of the dog in the night-time.”

Detective: “The dog did nothing in the night-time.”

Holmes: “That was the curious incident.”

We’ve seen several centuries of continuing economic growth enabled by improving tech (broadly conceived). Some of that tech can be seen as “automation” where machines displace humans on valued tasks.

The economy has consistently found new tasks for humans, to make up for displaced tasks. But while the rate of overall economic growth has been relatively steady, we have seen fluctuations in the degree of automation displacement in any given industry and region. This has often led to local anxiety about whether we are seeing the start of a big trend deviation – are machines about to suddenly take over most human jobs fast?

Of course so far such fears have not yet been realized. But around the year 2000, near the peak of the dotcom tech boom, we arguably did see substantial evidence of investors suspecting a big trend-deviating disruption. During a big burst of computer-assisted task displacement, the tech sector should soon see a big increase in revenue. So anticipating a substantial chance of such a burst justifies bigger stock values for related firms. And this graph of the sector breakdown of the S&P500 over the last few decades shows that investors then put their money where their mouths were regarding such a possible big burst:

[Figure S&P500breakdown: sector breakdown of the S&P 500 over the last few decades]

In the last few years, we’ve heard another burst of anxiety about an upcoming big burst of automation displacing humans on tasks. It is one of our anxieties du jour. But if you look at the right side of the graph above you’ll note that we are not now seeing a boom in the relative value of tech sector stocks.

We see the same signal if we look at majors chosen by college graduates. A big burst of automation not only justifies bigger tech stock values, it also justifies more students majoring in tech. And during the dotcom boom we did see a big increase in students choosing to major in computer science. But we have not seen such an increase during the last decade.

So the actions of both stock investors and college students suggest that they do not believe we are at substantial risk of a big burst of automation soon. These dogs are not barking. Even if robots taking jobs is what lots of talking heads are talking about. Because talking heads aren’t putting their money, or their time, where their mouths are.


Old Prof Vices, Virtues

Tyler on “How bad is age discrimination in academia?”:

I believe it is very bad, although I do not have data.

I started my Ph.D. at the age of 34, and Tyler hired me here at GMU at the age of 40. So by my lights Tyler deserves credit for overcoming the age bias. Tyler doesn’t discuss why this bias might exist, but a Stanford history prof explained his theory to me when I was in my early 30s talking to him about a possible PhD. He said that older students are known for working harder and better, but also for being less pliable: they have more of their own ideas about what is interesting and important.

I think that fits with what I’ve heard from others, and have seen for myself, including in myself. People complain that academia builds too little on “real world” experience, and that disciplines are too insular. And older students help with that. But in fact the incentive for each prof in picking students isn’t to solve the wider problems with academia. It is instead to expand an empire by creating intellectual clones of him or herself. And for that selfish goal, older students are worse. My mentors likely feel this way about me, that I worked hard and did interesting stuff, but I was not a good investment for expanding their legacy.

Interestingly this explanation is somewhat the opposite of the usual excuses for age bias in Silicon Valley. There the usual story is that older people won’t take as many risks, and that they aren’t as creative. But the complaint about older Ph.D.s is exactly that they take too many risks, and that they are too creative. If only they would just do what they are told, and copy their mentors, then their hard work and experience could be more valued.

I find it hard to believe that older workers change their nature this much between tech and academia. Something doesn’t add up here. And for what it’s worth, I’ve been personally far more impressed by the tech startups I’ve known that are staffed by older folks.


Blockchain Bingo

Two weeks ago I was on a three person half hour panel on “Bitcoin and the Future” at an O’Reilly Radar Summit on Bitcoin & the Blockchain. I was honored to be invited, but worried as I had not been tracking the field much. I read up a bit, and listened carefully to previous sessions. And I’ve been continuing to ponder and read for the last two weeks. There are many technical details here, and they matter. Even so, it seems I should try to say something; here goes.

A possible conversation between a blockchain enthusiast and newbie:

“Bitcoin is electronic money! It is made from blockchains, which are electronic ledgers that can also support many kinds of electronic contracts and trades.”

“But we already have money, and ledgers. And electronic versions. In fact, bank ledgers were one of the first computer applications.”

“Yes, but blockchain ledgers are decentralized. Sure, compared to ordinary computer ledgers, blockchain ledgers take millions or more times the computing power. But blockchains have no central org to trust. Instead, you trust the whole system.”

“Is this whole system in fact more trustworthy than the usual bank ledger system today?”

“Not in practice so far, at least not for most people. But it might be in the future, if we experiment with enough different approaches, and if enough people use the better approaches, to induce enough supporting infrastructure efforts.”

“If someone steals my credit card today, a central org of a credit card firm usually takes responsibility and fixes that. Here I’d be on my own, right?”

“Yes, but credit card firms charge you way too much for such services.”

“And without central orgs, doesn’t it get much harder to regulate financial services?”

“Yes, but you don’t want all those regulations. For example, blockchains make anonymous money holdings and contracts easier. So you could evade taxes, and laws that restrict bets and drug buys.”

“Couldn’t we just pass new laws to allow such evasions, if we didn’t want the social protections they provide? And couldn’t we just buy cheaper financial services, if we didn’t want the private protections that standard services now provide?”

“You’re talking as if government and financial service markets are efficient. They aren’t. Financial firms have a chokehold on finance, and they squeeze us for their gain, not ours. They have captured government regulators, who mostly work to tighten the noose, instead of helping the rest of us.”

“OK, imagine we do create cheaper decentralized systems of finance where evasion of regulation is easier. If this system is used in ways we don’t like, we won’t be able to do much to stop that besides informal social pressure, or trying to crudely shut down the whole system, right? There’d be no one driving the train.”

“Yes, exactly! That is the dream, and it might just be possible, if enough of us work for it.”

“But even if I want change, shouldn’t I be scared of change this lumpy? This is all or nothing. We don’t get to see the ‘all’ before we try, and once we get it then it’s mostly too late to reverse.”

“Yes, but the powers-that-be can and do block most incremental changes. It is disruptive revolution, or nothing. To the barricades!”

I see five main issues regarding blockchain enthusiasm:

  • Technical Obstacles. Many technical obstacles remain to designing systems that are general, cheap, secure, robust, and scalable. You are more enthusiastic if you think these obstacles can be more easily overcome.
  • Bad Finance & Regulation. The more corrupt and wasteful you think that finance and financial regulation are today, the more you’ll want to throw the dice to get something new.
  • Lumpy Change. The more you want change, but would rather go slow and gradual, so we can back off if we don’t like what we see, the less you’ll want to throw these lumpy dice.
  • Standards Coordination. Many equilibria are possible here, depending on exactly which technical features are in the main standards. The worse you think we are at such coordination, the less you want to roll these dice.
  • Risk Aversion. The more you think regulations protect us from terrible dark demons waiting in the shadows, the less you’ll want a big unknown hard-to-change-or-regulate world.

Me, I’d throw the dice. But then I’d really like more bets to be feasible, and I’ve known some people working in this area for decades. However, I can’t at all see blaming you if you feel different; this really is a tough call.


Em Software Results

After requesting your help, I should tell you what it added up to. The following is an excerpt from my book draft, illustrated by this diagram:

[Figure: SoftwareIntensity]

In our world, the cost of computing hardware has been falling rapidly for decades. This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete. The increasing quantity of software purchased has also led to larger software projects, which involve more engineers. This has shifted the emphasis toward more communication and negotiation, and also more modularity and standardization in software styles.

The cost of hiring human software engineers has not fallen much in decades. The increasing divergence between the cost of engineers and the cost of hardware has also led to a decreased emphasis on raw performance, and an increased emphasis on tools and habits that can quickly generate correct if inefficient performance. This has led to an increased emphasis on modularity, abstraction, and on high-level operating systems and languages. High level tools insulate engineers more from the details of hardware, and from distracting tasks like type checking and garbage collection. As a result, software is less efficient and well-adapted to context, but more valuable overall. An increasing focus on niche products has also increased the emphasis on modularity and abstraction.

Em software engineers would be selected for very high productivity, and use the tools and styles preferred by the highest productivity engineers. There would be little interest in tools and methods specialized to be useful “for dummies.” Since em computers would tend to be more reversible and error-prone, em software would be more focused on those cases as well. Because the em economy would be larger, its software industry would be larger as well, supporting more specialization.

The transition to an em economy would greatly lower wages, thus inducing a big one-time shift back toward an emphasis on raw context-dependent performance, relative to abstraction and easier modifiability. The move away from niche products would add to this tendency, as would the ability to save copies of the engineer who just wrote the software, to help later with modifying it. On the other hand, a move toward larger software projects could favor more abstraction and modularity.

After the em transition, the cost of em hardware would fall at about the same speed as the cost of other computer hardware. Because of this, the tradeoff between performance and other considerations would change much less as the cost of hardware fell. This should greatly extend the useful lifetime of programming languages, tools, and habits matched to particular performance tradeoff choices.

After an initial period of large rapid gains, the software and hardware designs for implementing brain emulations would probably reach diminishing returns, after which there would only be minor improvements. In contrast, non-em software will probably improve about as fast as computer hardware improves, since algorithm gains in many areas of computer science have for many decades typically remained close to hardware gains. Thus after ems appear, em software engineering and other computer-based work would slowly get more tool-intensive, with a larger fraction of value added by tools. However, for non-computer-based tools (e.g., bulldozers) their intensity of use and the fraction of value added by such tools would probably fall, since those tools probably improve less quickly than would em hardware.

For over a decade now, the speed of fast computer processors has increased at a much lower rate than the cost of computer hardware has fallen. We expect this trend to continue long into the future. In contrast, the em hardware cost will fall with the cost of computer hardware overall, because the emulation of brains is a very parallel task. Thus ems would see an increasing sluggishness of software that has a large serial component, i.e., which requires many steps to be taken one after the other, relative to more parallel software. This sluggishness would directly reduce the value of such software, and also make such software harder to write.
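One standard way to quantify this sluggishness is an Amdahl's law style calculation; this framing is mine, not the book draft's, and the parallel fractions and processor counts below are arbitrary.

```python
# Amdahl's-law style illustration (my framing, not the book draft's): with serial
# processor speed stalled, extra cheap parallel hardware only helps the parallel
# fraction of a program, so mostly-serial software looks ever more sluggish next
# to ems and parallel software that scale with the hardware.

def speedup(parallel_fraction, processors):
    """Classic Amdahl's law: the serial part runs at fixed speed, the parallel part splits."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

for frac in (0.5, 0.9, 0.99):
    for procs in (10, 1_000, 1_000_000):
        print(f"parallel fraction {frac:>4}: {procs:>9} processors -> {speedup(frac, procs):8.1f}x")
# Even with a million processors, a program that is half serial speeds up by less than 2x.
```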

Thus over time serial software will become less valuable, relative to ems and parallel software. Em software engineers would come to rely less on software tools with a big serial component, and would instead emphasize parallel software, and tools that support that emphasis. Tools like automated type checking and garbage collection would tend to be done in parallel, or not at all. And if it ends up being too hard to write parallel software, then the value of software more generally may be reduced relative to the value of having ems do tasks without software assistance.

For tasks where parallel software and tools suffice, and where the software doesn’t need to interact with slower physical systems, em software engineers could be productive even when sped up to the top cheap speed. This would often make it feasible to avoid the costs of coordinating across engineers, by having a single engineer spend an entire subjective career creating a large software system. For example, an engineer who spent a subjective century at one million times human speed would be done in less than one objective hour. When such a short delay is acceptable, parallel software could be written by a single engineer taking a subjective lifetime.
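The arithmetic behind that subjective-century example is simple; here it is spelled out.

```python
# The subjective-century arithmetic from the paragraph above.
HOURS_PER_YEAR = 365.25 * 24          # about 8766 hours
subjective_years = 100
speedup = 1_000_000                   # one million times human speed

objective_hours = subjective_years * HOURS_PER_YEAR / speedup
print(f"objective time: {objective_hours:.2f} hours ({objective_hours * 60:.0f} minutes)")
# ~0.88 hours, a bit under 53 minutes -- "less than one objective hour".
```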

When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent. While today investors may spend most of their time tracking current software development projects, those who invest in em software projects of this sort might spend most of their time deciding when is the right time to initiate such a project. A software development race, with more than one team trying to get to market first, would only happen if the same sharp event triggered more than one development effort.

A single software engineer working for a lifetime on a project could still have troubles remembering software that he or she wrote decades before. Because of this, shorter-term copies of this engineer might help him or her to be more productive. For example, short-term em copies might search for and repair bugs, and end or retire once they have explained their work to the main copy. Short-term copies could also search among many possible designs for a module, and end or retire after reporting on their best design choice, to be re-implemented by the main copy. In addition, longer-term copies could be created to specialize in whole subsystems, and younger copies could be revived to continue the project when older copies reached the end of their productive lifetime. These approaches should allow single em software engineers to create far larger and more coherent software systems within a subjective lifetime.

Fast software engineers who focus on taking a lifetime to build a large software project, perhaps with the help of copies of themselves, would likely develop more personal and elaborate software styles and tools, and rely less on tools and approaches that help them to coordinate with other engineers with differing styles and uncertain quality. Such lone fast engineers would require local caches of relevant software libraries. When in distantly separated locations, such caches could get out of synch. Local copies of library software authors, available to update their contributions, might help reduce this problem. Out of synch libraries would increase the tendency toward divergent personal software styles.

When different parts of a project require different skills, a lone software engineer might have different young copies trained with different skills. Similarly, young copies could be trained in the subject areas where some software is to be applied, so that they can better understand what variations will have value there.

However, when a project requires different skills and expertise that is best matched to different temperaments and minds, then it may be worth paying extra costs of communication to allow different ems to work together on a project. In this case, such engineers would likely promote communication via more abstraction, modularity, and higher level languages and module interfaces. Such approaches also become more attractive when outsiders must test and validate software, to certify its appropriateness to customers. Enormous software systems could be created with modest sized teams working at the top cheap speed, with the assistance of many spurs. There may not be much need for even larger software teams.

The competition for higher status among ems would tend to encourage faster speeds than would otherwise be efficient. This tendency of fast ems to be high status would tend to raise the status of software engineers.
