This is our monthly place to discuss relevant topics that have not appeared in recent posts.
Urban economics studies the spatial distribution of activity. In most urban econ models, the reason that cities aren’t taller is that, per square meter of useable space, taller buildings cost more to physically make. (Supporting quotes below.) According to this usual theory, buildings only get taller when something else compensates for these costs, like a scarce ocean view, or higher status or land prices.
Knowing this, and wondering how tall future cities might get, I went looking for data on just how fast building cost rises with height. And I was surprised to learn: within most of the usual range, taller buildings cost less per square meter to build. For example, for office buildings across 26 US cities, 11-20 stories tend to be cheaper than 5-10 stories, which are cheaper than 2-4 stories (quote below). I also found data on two sets of Chinese residential buildings. Here is cost to build per square meter (on Y axis) vs. height in meters (on X axis) for 24 buildings 3 to 39 stories tall, built in Hong Kong in the early 1990s:
Here are 36 buildings 2 to 37 stories tall, built in Shanghai between 2000 and 2007:
The Shanghai buildings don’t get more expensive till after about 20 stories, while Hong Kong buildings are still cheap at 40 stories.
Now I have no doubt that some elements of cost, like structural mass, rise with height, and that there is some height where such costs dominate. But since there are scale economies in making bigger buildings, it isn’t obvious theoretically where rising structure costs overwhelm scale economies.
Perhaps the above figures are misleading somehow. But we know that taking land prices, higher status, and better views into account would push for even taller buildings. And a big part of higher costs for heights that are rarely used could just be from less local experience with such heights. So why aren’t most buildings at least 20 stories tall?
Perhaps tall buildings have only been cheaper recently. But the Hong Kong data is from twenty years ago, and most buildings made in recent years are not at least 20 stories tall. In fact, in Manhattan new residential buildings have actually gotten shorter. Perhaps capital markets fail to concentrate enough capital in builders’ hands to enable big buildings. But this seems hard to believe.
Perhaps trying to build high makes you a magnet for litigation, envy, and corrupt regulators. Your ambition suggests that you have deeper pockets to tax, and other tall buildings nearby that would lose status and local market share have many ways to veto you. Maybe since most tall buildings are prevented, local builders have less experience with them, and thus face higher costs to make them. And few local builders are up to the task, so those who are have market power to demand higher prices.
Maybe local governments usually can’t coordinate well to build supporting infrastructure, like roads, schools, power, sewers, etc., to match taller buildings. So they veto them instead. Or maybe local non-property-owning voters believe that more tall buildings will hurt them personally. (The big city nearest me actually has a law against buildings over 40 meters tall.)
Note that most of these explanations are variations on the same theme: local governments fail to coordinate to enable tall buildings. Which is in fact my favored explanation. City density, and hence city size, is mainly limited by the abilities of the conflicting elements that influence local governments to coordinate to enable taller buildings.
Remember those futurist images of dense tall cities scraping the skies? The engineers have done their job to make it possible. It is politics that isn’t yet up to the task.
Those promised quotes: Continue reading "Why Aren’t Cities Taller?" »
Six in 10 Americans … say Snowden’s actions harmed U.S. security, increasing 11 percentage points from July. … Clear majorities of Democrats, Republicans and independents believe disclosures have harmed national security. … More than half of poll respondents — 52 percent — say he should be charged with a crime. … And 55 percent say he was wrong to expose the NSA’s intelligence-gathering efforts. … Most poll respondents think the NSA’s surveillance program intrudes on some Americans’ privacy rights — 68 percent say this — while 54 percent see intrusions on their own privacy, 49 percent count foreign governments as victims and 48 percent say this of foreign citizens. Among those who say surveillance programs intrude on their privacy rights or those of other Americans, a clear majority say such actions are unjustified. (more)
Though several legislative efforts are underway to curb the NSA’s surveillance powers, the wholesale move by private companies to expand the use of encryption technology may prove to be the most tangible outcome of months of revelations based on documents that Snowden provided to The Post and Britain’s Guardian newspaper. In another major shift, the companies also are explicitly building defenses against U.S. government surveillance programs in addition to combating hackers, criminals or foreign intelligence services. (more)
The most limited estimates say that only 1% of the files that Snowden downloaded have been released publicly so far. At the other end of the spectrum, we may only have seen 0.25% of the files get released. The worst secrets may yet come forward in time. (more)
Overall, we Americans have a stronger attachment to U.S. dominance than to fair play or anyone’s rights. Yeah the NSA lied, went beyond its authority, and hurt us and others. But, we say, the guy who exposed that should be punished for making us look bad. Even though he acted alone, seems personally beyond reproach, suffered substantially and gained little, carefully minimized incidental harm, and showed great competence and self-control in the process.
Geez. I gotta say that Edward Snowden seems one of the best candidates for a classic hero that I’ve seen in a long time. Six years ago I wrote:
In a park near my home is a plaque that reads:
We honor all those who fought for our community.
There is probably a similar plaque near you. I would be more proud to live in a community with a plaque that read:
We honor those who fought against our community when it was wrong.
The Snowden story isn’t over, and maybe it will all look very different later. But for now, he sure looks like someone who such a plaque would rightly honor. Edward, my hat is way way off to you sir.
I’ll speak twice soon at New York University in Abu Dhabi, UAE. Here are locations & abstract:
The Age Of Em: Imagining A Future Of Emulated Minds
The three most disruptive transitions in history were the introduction of humans, farming, and industry. If another transition lies ahead, a good guess for its source is artificial intelligence in the form of whole brain emulations, or “ems,” sometime in the next century. I attempt a broad synthesis of standard academic science, including in business and social science, in order to outline a baseline scenario set modestly far into a post-em-transition world. I consider computer architecture, energy conservation, cooling infrastructure, mind speeds, body sizes, security strategies, virtual reality conventions, labor market organization, management focus, job training, career paths, wage competition, identity, retirement, life cycles, reproduction, mating, conversation habits, wealth inequality, city sizes, growth rates, coalition politics, governance, law, and war.
From the Nov. ’13 Review of Income and Wealth:
The top two lines show total world inequality over time as estimated by this paper and by another previous paper. Both agree that worldwide income inequality has been falling consistently over four decades, especially in the last decade.
Of course this ignores non-financial inequality and inequality across time.
Orgs coordinate activity. And if coordination is hard, we should expect orgs to only barely accomplish this task. That is, we should expect org decisions to be dominated by coalition politics. Orgs that face competitive pressures, like firms, would slowly get more efficient, and thus larger, as we slowly found and spread org innovations to better channel coalition politics efforts in productive directions.
If coalition politics dominates org decisions, then the obvious career strategy advice is to make good alliances. Pick allies valued by strong coalitions who are likely to stay loyal to you, and offer such allies your loyalty as well as efforts and abilities valuable to them. That is, look for pair-wise win-win gains between you and potential allies. You don’t have to like them, and they don’t have to like you.
We often hear other advice, like: seek associates you are comfortable with, or who have things in common with you, or who can give you good advice. Or that you should focus on showing your value to your org as a whole. But these seem to me to be the usual fig leaf excuses. That is, these are things one can admit doing openly without violating the standard forager norms against overt coalition politics.
What smart folks probably really mean when they suggest that you get a mentor, is that you get a powerful ally. And while allies in high places can be especially valuable to you, to make it a win-win relation you are going to have to offer them a lot of value in return. You will even have to figure out how you can help them, and help them first; they don’t have the time, and don’t trust you yet. And when you succeed in finding such a powerful ally, you will submit and they will dominate. That doesn’t sound nearly as nice to say, however.
But sometimes people do say it, out loud and everything: Continue reading "Careers Need Allies" »
Relatively minor technological change can move the balance of power between values that already fight within each human. [For example,] Beeminder empowers a person’s explicit, considered values over their visceral urges. … In the spontaneous urges vs. explicit values conflict …, I think technology should generally tend to push in one direction. … I’d weakly guess that explicit values will win the war. (more)
The goals we humans tend to explicitly and consciously endorse tend to be more idealistic than the goals that our unconscious actions try to achieve. So one might expect or hope that tech that empowers conscious mind parts, relative to other parts, would result in more idealistic behavior.
A relevant test of this idea may be found in the behavior of human orgs, such as firms or nations. Like humans, orgs emphasize more idealistic goals in their more explicit communications. So if we can identify the parts of orgs that are most like the conscious parts of human minds, and if we can imagine ways to increase the resources or capacities of those org parts, then we can ask if increasing such capacities would move orgs to more idealistic behavior.
A standard story is that human consciousness functions primarily to manage the image we present to the world. Conscious minds are aware of the actions we may need to explain to others, and are good at spinning good-looking explanations for our own behavior, and bad-looking explanations for the behavior of rivals.
Marketing, public relations, legal, and diplomatic departments seem to be analogous parts of orgs. They attend more to how the org is seen by others, and to managing org actions that are especially influential to such appearances. If so, our test question becomes: if the relative resources and capacities of these org parts were increased, would such orgs act more idealistically? For example, would a nation live up to its self-proclaimed ideals more if the budget of its diplomatic corps were doubled?
I’d guess that such changes would tend to make org actions more consistent, but not more idealistic. That is, the mean level of idealism would stay about the same, but inconsistencies would be reduced and deviations of unusually idealistic or non-idealistic actions would move toward the mean. Similarly, I suspect humans with more empowered conscious minds do not on average act more idealistically.
But that is just my guess. Does anyone know better how the behavior of real orgs would change under this hypothetical?
Back in March I wrote:
Somewhere around 2035 or so … the (free) energy used per [computer] gate operation will fall to the level thermodynamics says is required to [logically] erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs, that only rarely [logically] erase bits. … Computer gates … today … in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slow enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slow, and it erases only half as many bits per operation. Once reversible computing is the norm, gains in making more smaller faster gates will have to be split, some going to let gates run more slowly, and the rest going to more operations. (more)
The future of computing, after about 2035, is adiabatic reversible hardware. When such hardware runs at a cost-minimizing speed, half of the total budget is spent on computer hardware, and the other half is spent on energy and cooling for that hardware. Thus after 2035 or so, about as much will be spent on computer hardware and a physical space to place it as will be spent on hardware and space for systems to generate and transport energy into the computers, and to absorb and transport heat away from those computers. So if you seek a career for a futuristic world dominated by computers, note that a career making or maintaining energy or cooling systems may be just as promising as a career making or maintaining computing hardware.
We can imagine lots of futuristic ways to cheaply and compactly make and transport energy. These include thorium reactors and superconducting power cables. It is harder to imagine futuristic ways to absorb and transport heat. So we are likely to stay stuck with existing approaches to cooling. And the best of these, at least on large scales, is to just push cool fluids past the hardware. And the main expense in this approach is for the pipes to transport those fluids, and the space to hold those pipes.
Thus in future cities crammed with computer hardware, roughly half of the volume is likely to be taken up by pipes that move cooling fluids in and out. And the tech for such pipes will probably be more stable than tech for energy or computers. So if you want a stable career managing something that will stay very valuable for a long time, consider plumbing.
Will this focus on cooling limit city sizes? After all, the surface area of a city, where cooling fluids can go in and out, goes as the square of city scale, while the volume to be cooled goes as the cube of city scale. The ratio of volume to surface area is thus linear in city scale. So does our ability to cool cities fall inversely with city scale?
Actually, no. We have good fractal pipe designs to efficiently import fluids like air or water from outside a city to near every point in that city, and to then export hot fluids from near every point to outside the city. These fractal designs require cost overheads that are only logarithmic in the total size of the city. That is, when you double the city size, such overheads increase by only a constant amount, instead of doubling.
For example, there is a fractal design for piping both smoothly flowing and turbulent cooling fluids where, holding constant the fluid temperature and pressure as well as the cooling required per unit volume, the fraction of city volume devoted to cooling pipes goes as the logarithm of the city’s volume. That is, every time the total city volume doubles, the same additional fraction of that volume must be devoted to a new kind of pipe to handle the larger scale. The pressure drop across such pipes also goes as the logarithm of city volume.
The economic value produced in a city is often modeled as a low power (greater than one) of the economic activity enclosed in that city. Since mathematically, for a large enough volume a power of volume will grow faster than the logarithm of volume, the greater value produced in larger cities can easily pay for their larger costs of cooling. Cooling does not seem to limit feasible city size. At least when there are big reservoirs of cool fluids like air or water around.
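The scaling comparison above can be sketched numerically: pipe overhead grows as volume times log-of-volume, while city value grows as a power of volume greater than one, so the cost-to-value ratio eventually falls as the city grows. The constants and the exponent 1.15 below are hypothetical illustrations; only the functional forms come from the argument:

```python
import math

# Hedged sketch: logarithmic cooling-pipe overhead vs. superlinear city
# value as city volume grows. Constants a, b and exponent alpha are
# hypothetical; only the functional forms follow the text.
def pipe_cost(volume, a=1.0):
    # Fractal pipe overhead: a constant extra fraction of volume is
    # needed per doubling of total volume, i.e. ~ volume * log2(volume).
    return a * volume * math.log2(volume)

def city_value(volume, b=1.0, alpha=1.15):
    # Agglomeration value modeled as volume^alpha with alpha > 1.
    return b * volume ** alpha

for volume in [2**10, 2**20, 2**40]:
    ratio = pipe_cost(volume) / city_value(volume)
    print(f"volume=2^{int(math.log2(volume))}: cost/value = {ratio:.3f}")
```

For any alpha > 1 the ratio eventually shrinks, which is why cooling cost alone does not cap feasible city size.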
I don’t know if the future is still plastics. But I do know that a big chunk of it will be pipes.
Added 10Nov 4p: Proof of “When such hardware runs …”: V = value, C = cost, N = # processors, s = speed to run them at, p,q = prices. V = N*s, C = p*N + q*N*s². So C/V = p/s + q*s. Picking s to minimize C/V gives p = q*s², so the two parts of cost C are equal. Also, C/V = 2*sqrt(p*q).
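The proof above checks out numerically. A quick verification with hypothetical prices p and q (the specific values 4.0 and 0.25 are illustrative, not from the post):

```python
import math

# Numeric check of the proof: C/V = p/s + q*s is minimized at
# s* = sqrt(p/q), where the hardware part p/s equals the
# energy-and-cooling part q*s, and the minimum C/V = 2*sqrt(p*q).
p, q = 4.0, 0.25  # hypothetical price coefficients

s_star = math.sqrt(p / q)     # cost-minimizing speed
hardware_part = p / s_star    # hardware cost per unit value
energy_part = q * s_star      # energy + cooling cost per unit value

assert math.isclose(hardware_part, energy_part)
assert math.isclose(hardware_part + energy_part, 2 * math.sqrt(p * q))
print(s_star, hardware_part, energy_part)  # 4.0 1.0 1.0
```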
It’s perhaps no great surprise that we haven’t embraced Hanson’s “futarchy.” Our current political system resists dramatic change, and has resisted it for 237 years. More traditional modes of prediction have proved astonishingly bad, yet they continue to run our economic and political worlds, often straight into the ground. Bubbles do occur, and we can all point to examples of markets getting blindsided. But if prediction markets are on balance more accurate and unbiased, they should still be an attractive policy tool, rather than a discarded idea tainted with the odor of unseemliness. As Hanson asks, “Who wouldn’t want a more accurate source?”
Maybe most people. What motivates us to vote, opine, and prognosticate is often not the desire for efficacy or accuracy in worldly affairs—the things that prediction markets deliver—but instead the desire to send signals to each other about who we are. Humans remain intensely tribal. We choose groups to associate with, and we try hard to show everybody which groups we belong to. We don’t join the Tea Party because we have exhaustively studied and rejected monetarism, and we don’t pay extra for organic food because we have made a careful cost-benefit analysis based on research about its relative safety. We do these things because doing so says something that we want to convey to others. Nor does the accuracy of our favorite talking heads matter that much to us. More than we like accuracy, we like listening to talkers on our side, and identifying them as being on our team—the right team.
“We continue to have consistent results and evidence that markets are accurate,” Hanson says. “If the question is, ‘Do these things predict well?,’ we have an answer: They do. But that story has to be put up against the idea that people never really wanted more accurate sources.”
On this theory, the techno-libertarian enthusiasts got the technology right, and the humanity wrong. Whenever John Delaney showed up on CNBC, hawking his Intrade numbers and describing them as the most accurate and impartial around, he was also selling a future that people fundamentally weren’t interested in buying. (more)
I don’t much disagree — I raised these issues with Wood when he interviewed me. As usual, our hopes for idealistic outcomes mostly depend on finding ways to shame people into actually supporting what they pretend to support, by making the difference too obvious to ignore.
More specifically, I hope prediction markets within firms may someday gain a status like cost accounting today. In a world where no one else did cost accounting, proposing that your firm do it would basically suggest that someone was stealing there. Which would look bad. But in a world where everyone else does cost accounting, suggesting that your firm not do it would suggest that you want to steal from it. Which also looks bad.
Similarly, in a world where few other firms use prediction markets, suggesting that your firm use them on your project suggests that your project has an unusual problem in getting people to tell the truth about it via the usual channels. Which looks bad. But in a world where most firms use prediction markets on most projects, suggesting that your project not use prediction markets would suggest you want to hide something. That is, you don’t want a market to predict if your project will make its deadline because you don’t want others to see that it won’t make the deadline. Which would look bad.
Once prediction markets were a standard accepted practice within firms, it would be much easier to convince people to use them in government as well.