
Old Prof Vices, Virtues

Tyler on “How bad is age discrimination in academia?”:

I believe it is very bad, although I do not have data.

I started my Ph.D. at the age of 34, and Tyler hired me here at GMU at the age of 40. So by my lights Tyler deserves credit for overcoming the age bias. Tyler doesn’t discuss why this bias might exist, but a Stanford history prof explained his theory to me when I was in my early 30s talking to him about a possible PhD. He said that older students are known for working harder and better, but also for being less pliable: they have more of their own ideas about what is interesting and important.

I think that fits with what I’ve heard from others, and have seen for myself, including in myself. People complain that academia builds too little on “real world” experience, and that disciplines are too insular. And older students help with that. But in fact the incentive for each prof in picking students isn’t to solve the wider problems with academia. It is instead to expand an empire by creating intellectual clones of him or herself. And for that selfish goal, older students are worse. My mentors likely feel this way about me, that I worked hard and did interesting stuff, but I was not a good investment for expanding their legacy.

Interestingly this explanation is somewhat the opposite of the usual excuses for age bias in Silicon Valley. There the usual story is that older people won’t take as many risks, and that they aren’t as creative. But the complaint about older Ph.D.s is exactly that they take too many risks, and that they are too creative. If only they would just do what they are told, and copy their mentors, then their hard work and experience could be more valued.

I find it hard to believe that older workers change their nature this much between tech and academia. Something doesn’t add up here. And for what it’s worth, I’ve been personally far more impressed by the tech startups I’ve known that are staffed by older folks.


Blockchain Bingo

Two weeks ago I was on a three-person, half-hour panel on “Bitcoin and the Future” at an O’Reilly Radar Summit on Bitcoin & the Blockchain. I was honored to be invited, but worried, as I had not been tracking the field much. I read up a bit, and listened carefully to previous sessions. And I’ve been continuing to ponder and read for the last two weeks. There are many technical details here, and they matter. Even so, it seems I should try to say something; here goes.

A possible conversation between a blockchain enthusiast and newbie:

“Bitcoin is electronic money! It is made from blockchains, which are electronic ledgers that can also support many kinds of electronic contracts and trades.”

“But we already have money, and ledgers. And electronic versions. In fact, bank ledgers were one of the first computer applications.”

“Yes, but blockchain ledgers are decentralized. Sure, compared to ordinary computer ledgers, blockchain ledgers take at least millions of times more computing power. But blockchains have no central org to trust. Instead, you trust the whole system.”

“Is this whole system in fact more trustworthy than the usual bank ledger system today?”

“Not in practice so far, at least not for most people. But it might be in the future, if we experiment with enough different approaches, and if enough people use the better approaches to induce enough supporting infrastructure efforts.”

“If someone steals my credit card today, a central org of a credit card firm usually takes responsibility and fixes that. Here I’d be on my own, right?”

“Yes, but credit card firms charge you way too much for such services.”

“And without central orgs, doesn’t it get much harder to regulate financial services?”

“Yes, but you don’t want all those regulations. For example, blockchains make anonymous money holdings and contracts easier. So you could evade taxes, and laws that restrict bets and drug buys.”

“Couldn’t we just pass new laws to allow such evasions, if we didn’t want the social protections they provide? And couldn’t we just buy cheaper financial services, if we didn’t want the private protections that standard services now provide?”

“You’re talking as if government and financial service markets are efficient. They aren’t. Financial firms have a chokehold on finance, and they squeeze us for their gain, not ours. They have captured government regulators, who mostly work to tighten the noose, instead of helping the rest of us.”

“OK, imagine we do create cheaper decentralized systems of finance where evasion of regulation is easier. If this system is used in ways we don’t like, we won’t be able to do much to stop that besides informal social pressure, or trying to crudely shut down the whole system, right? There’d be no one driving the train.”

“Yes, exactly! That is the dream, and it might just be possible, if enough of us work for it.”

“But even if I want change, shouldn’t I be scared of change this lumpy? This is all or nothing. We don’t get to see the ‘all’ before we try, and once we get it, it’s mostly too late to reverse.”

“Yes, but the powers-that-be can and do block most incremental changes. It is disruptive revolution, or nothing. To the barricades!”
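
Before turning to the main issues, it may help to make the “electronic ledger” idea in the exchange above concrete. Here is a minimal sketch of a hash-chained ledger in Python; it illustrates only the core data structure, not Bitcoin’s actual protocol, and the block fields and toy transactions are illustrative assumptions.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Create a block that commits to its transactions and to its predecessor."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,  # e.g. [{"from": "alice", "to": "bob", "amount": 5}]
        "prev_hash": prev_hash,        # link to the previous block's hash
    }

def valid_chain(chain):
    """A chain is valid if each block's stored prev_hash matches the hash of the block before it."""
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

# A tiny ledger: a genesis block plus two blocks of payments.
genesis = make_block([], prev_hash="0" * 64)
b1 = make_block([{"from": "alice", "to": "bob", "amount": 5}], block_hash(genesis))
b2 = make_block([{"from": "bob", "to": "carol", "amount": 2}], block_hash(b1))
chain = [genesis, b1, b2]

print(valid_chain(chain))               # True
b1["transactions"][0]["amount"] = 500   # tamper with recorded history
print(valid_chain(chain))               # False: b2's stored link no longer matches b1's hash
```

The decentralization claim is then that many independent nodes each keep a copy of such a chain and agree on which extension is canonical, so that no single ledger keeper has to be trusted.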

I see five main issues regarding blockchain enthusiasm:

  • Technical Obstacles. Many technical obstacles remain to designing systems that are general, cheap, secure, robust, and scalable. You are more enthusiastic if you think these obstacles can be more easily overcome.
  • Bad Finance & Regulation. The more corrupt and wasteful you think that finance and financial regulation are today, the more you’ll want to throw the dice to get something new.
  • Lumpy Change. The more you want change, but would rather go slow and gradual, so we can back off if we don’t like what we see, the less you’ll want to throw these lumpy dice.
  • Standards Coordination. Many equilibria are possible here, depending on exactly which technical features are in the main standards. The worse you think we are at such coordination, the less you want to roll these dice.
  • Risk Aversion. The more you think regulations protect us from terrible dark demons waiting in the shadows, the less you’ll want a big unknown hard-to-change-or-regulate world.

Me, I’d throw the dice. But then I’d really like more bets to be feasible, and I’ve known some people working in this area for decades. However, I can’t at all see blaming you if you feel different; this really is a tough call.


Em Software Results

After requesting your help, I should tell you what it added up to. The following is an excerpt from my book draft, illustrated by this diagram:


In our world, the cost of computing hardware has been falling rapidly for decades. This fall has forced most computer projects to be short term, so that products can be used before they are made obsolete. The increasing quantity of software purchased has also led to larger software projects, which involve more engineers. This has shifted the emphasis toward more communication and negotiation, and also more modularity and standardization in software styles.

The cost of hiring human software engineers has not fallen much in decades. The increasing divergence between the cost of engineers and the cost of hardware has also led to a decreased emphasis on raw performance, and an increased emphasis on tools and habits that can quickly generate correct if inefficient performance. This has led to an increased emphasis on modularity, abstraction, and on high-level operating systems and languages. High-level tools insulate engineers more from the details of hardware, and from distracting tasks like type checking and garbage collection. As a result, software is less efficient and well-adapted to context, but more valuable overall. An increasing focus on niche products has also increased the emphasis on modularity and abstraction.

Em software engineers would be selected for very high productivity, and use the tools and styles preferred by the highest productivity engineers. There would be little interest in tools and methods specialized to be useful “for dummies.” Since em computers would tend to be more reversible and error-prone, em software would be more focused on those cases as well. Because the em economy would be larger, its software industry would be larger as well, supporting more specialization.

The transition to an em economy would greatly lower wages, thus inducing a big one-time shift back toward an emphasis on raw context-dependent performance, relative to abstraction and easier modifiability. The move away from niche products would add to this tendency, as would the ability to save copies of the engineer who just wrote the software, to help later with modifying it. On the other hand, a move toward larger software projects could favor more abstraction and modularity.

After the em transition, the cost of em hardware would fall at about the same speed as the cost of other computer hardware. Because of this, the tradeoff between performance and other considerations would change much less as the cost of hardware fell. This should greatly extend the useful lifetime of programming languages, tools, and habits matched to particular performance tradeoff choices.

After an initial period of large rapid gains, the software and hardware designs for implementing brain emulations would probably reach diminishing returns, after which there would only be minor improvements. In contrast, non-em software will probably improve about as fast as computer hardware improves, since algorithm gains in many areas of computer science have for many decades typically remained close to hardware gains. Thus after ems appear, em software engineering and other computer-based work would slowly get more tool-intensive, with a larger fraction of value added by tools. However, for non-computer-based tools (e.g., bulldozers), the intensity of use and the fraction of value added by such tools would probably fall, since those tools probably improve less quickly than would em hardware.

For over a decade now, the speed of fast computer processors has increased at a much lower rate than the cost of computer hardware has fallen. We expect this trend to continue long into the future. In contrast, the em hardware cost will fall with the cost of computer hardware overall, because the emulation of brains is a very parallel task. Thus ems would see an increasing sluggishness of software that has a large serial component, i.e., which requires many steps to be taken one after the other, relative to more parallel software. This sluggishness would directly reduce the value of such software, and also make such software harder to write.
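
One rough way to quantify this sluggishness is Amdahl’s law, which isn’t named above but captures the same point: if a fraction s of a program’s work is inherently serial, then no amount of cheap parallel hardware can speed it up by more than 1/s. A minimal sketch, with made-up illustrative serial fractions:

```python
def max_speedup(serial_fraction, processors):
    """Amdahl's law: overall speedup when only the parallel part is split across cores."""
    s = serial_fraction
    return 1.0 / (s + (1.0 - s) / processors)

# Even with abundant cheap parallel hardware, the serial part sets a hard ceiling.
for s in (0.01, 0.10, 0.50):
    print(f"serial fraction {s:.0%}: "
          f"1,000 cores -> {max_speedup(s, 1_000):6.1f}x, "
          f"ceiling -> {1 / s:6.1f}x")
```

So as parallel em hardware keeps getting cheaper while serial speed stagnates, even a modest serial fraction caps how much such software can benefit.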

Thus over time serial software will become less valuable, relative to ems and parallel software. Em software engineers would come to rely less on software tools with a big serial component, and would instead emphasize parallel software, and tools that support that emphasis. Tools like automated type checking and garbage collection would tend to be done in parallel, or not at all. And if it ends up being too hard to write parallel software, then the value of software more generally may be reduced relative to the value of having ems do tasks without software assistance.

For tasks where parallel software and tools suffice, and where the software doesn’t need to interact with slower physical systems, em software engineers could be productive even when sped up to the top cheap speed. This would often make it feasible to avoid the costs of coordinating across engineers, by having a single engineer spend an entire subjective career creating a large software system. For example, an engineer who spent a subjective century at one million times human speed would be done in less than one objective hour. When such a short delay is acceptable, parallel software could be written by a single engineer taking a subjective lifetime.
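
The subjective-century-in-under-an-hour figure is simple unit arithmetic; a quick check, taking a subjective century to be 100 years:

```python
speedup = 1_000_000        # em running at one million times human speed
subjective_years = 100     # a subjective career of one century
hours_per_year = 365.25 * 24

objective_hours = subjective_years * hours_per_year / speedup
print(f"{objective_hours:.3f} objective hours")   # ~0.877, i.e. under one objective hour
```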

When software can be written quickly via very fast software engineers, product development could happen quickly, even when very large sums were spent. While today investors may spend most of their time tracking current software development projects, those who invest in em software projects of this sort might spend most of their time deciding when is the right time to initiate such a project. A software development race, with more than one team trying to get to market first, would only happen if the same sharp event triggered more than one development effort.

A single software engineer working for a lifetime on a project could still have troubles remembering software that he or she wrote decades before. Because of this, shorter-term copies of this engineer might help him or her to be more productive. For example, short-term em copies might search for and repair bugs, and end or retire once they have explained their work to the main copy. Short-term copies could also search among many possible designs for a module, and end or retire after reporting on their best design choice, to be re-implemented by the main copy. In addition, longer-term copies could be created to specialize in whole subsystems, and younger copies could be revived to continue the project when older copies reached the end of their productive lifetime. These approaches should allow single em software engineers to create far larger and more coherent software systems within a subjective lifetime.

Fast software engineers who focus on taking a lifetime to build a large software project, perhaps with the help of copies of themselves, would likely develop more personal and elaborate software styles and tools, and rely less on tools and approaches that help them to coordinate with other engineers with differing styles and uncertain quality. Such lone fast engineers would require local caches of relevant software libraries. When in distantly separated locations, such caches could get out of synch. Local copies of library software authors, available to update their contributions, might help reduce this problem. Out of synch libraries would increase the tendency toward divergent personal software styles.

When different parts of a project require different skills, a lone software engineer might have different young copies trained with different skills. Similarly, young copies could be trained in the subject areas where some software is to be applied, so that they can better understand what variations will have value there.

However, when a project requires different skills and expertise that is best matched to different temperaments and minds, then it may be worth paying extra costs of communication to allow different ems to work together on a project. In this case, such engineers would likely promote communication via more abstraction, modularity, and higher level languages and module interfaces. Such approaches also become more attractive when outsiders must test and validate software, to certify its appropriateness to customers. Enormous software systems could be created with modest sized teams working at the top cheap speed, with the assistance of many spurs. There may not be much need for even larger software teams.

The competition for higher status among ems would tend to encourage faster speeds than would otherwise be efficient. This tendency of fast ems to be high status would tend to raise the status of software engineers.


Auto-Auto Deadline Looms

It is well-known that while electricity led to big gains in factory productivity, few gains were realized until factories were reorganized to take full advantage of the new possibilities which electric motors allowed. Similarly, computers didn’t create big productivity gains in offices until work flow and tasks were reorganized to take full advantage.

Auto autos, i.e., self-driving cars, seem similar: while there could be modest immediate gains from reducing accident rates and lost productive time commuting, the biggest gains should come from reorganizing our cities to match them. Self-driving cars could drive fast close together to increase road throughput, and be shared to eliminate the need for parking. This should allow for larger higher-density cities. For example, four times bigger cities could plausibly be twenty-five percent more productive.
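
The four-times-bigger, twenty-five-percent-more-productive figure corresponds, under a standard power-law agglomeration assumption (productivity proportional to population raised to some elasticity, a functional form assumed here rather than argued for in the text), to an elasticity of roughly 0.16:

```python
import math

size_ratio = 4.0           # city four times bigger
productivity_ratio = 1.25  # twenty-five percent more productive

# If productivity ~ population ** elasticity, then elasticity = ln(1.25) / ln(4).
elasticity = math.log(productivity_ratio) / math.log(size_ratio)
print(f"implied agglomeration elasticity: {elasticity:.3f}")   # ~0.161
```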

But to achieve most of these gains, we must make new buildings with matching heights and locations. And this requires that self-driving cars make their appearance before we stop making so many new buildings. Let me explain.

Since buildings tend to last for many decades, one of the main reasons that cities have been adding many new buildings is that they have had more people who need buildings in which to live and work. But world population growth is slowing down, and may peak around 2055. It should peak earlier in rich nations, and later in poor nations.

Cities with stable or declining population build a lot fewer buildings; it would take them a lot longer to change city organization to take advantage of self-driving cars. So the main hope for rapidly achieving big gains would be in rapidly growing cities. What we need is for self-driving cars to become available and cheap enough in cities that are still growing fast enough, and which have legal and political support for driving such cars fast close together, so they can achieve high throughput. That is, people need to be sufficiently rewarded for using cars in ways that allow more road throughput. And then economic activity needs to move from old cities to the new more efficient cities.

This actually seems like a pretty challenging goal. China and India are making lots of buildings today, but those buildings are not well-matched to self-driving cars. Self-driving cars aren’t about to explode there, and by the time they are cheap the building boom may be over. Google announced its self-driving car program almost four years ago, and that hasn’t exactly sparked a tidal wave of change. Furthermore, even if self-driving cars arrive soon enough, city-region politics may well not be up to the task of coordinating to encourage such cars to drive fast close together. And national borders, regulation, etc. may not let larger economies be flexible enough to move much activity to the new cities that manage to support auto autos well.

Alas, overall it is hard to be very optimistic here. I have hopes, but only weak hopes.


Tech Regs Are Coming

Over world history, we have seen a lot of things regulated. We can see patterns in these regulations, and we understand many of them – it isn’t all a mystery.

As far as I can tell, these patterns suggest that recent technologies like operating systems, search engines, social networks, and IM systems are likely to be substantially regulated. For example, these systems have large network effects and economies of scale and scope. Yet they are now almost entirely unregulated. Why?

Some obvious explanations, fitting with previous patterns of regulation, are that these techs are high status, new, and changing fast. But these explanations suggest that low regulation is temporary. As they age, these systems will change less, eroding their high status derived from being fashionable. They will become stable utilities that we all use, like the many other stable utilities we use without much thought. And that we regulate, often heavily.

You’d think that if we all know regulation is coming, we’d be starting to argue about how and how much to regulate these things. Yet I hear little of this. Those who want little regulation might keep quiet, hoping the rest will just forget. But silence is more puzzling for those who want more regulation. Are they afraid to seem low status by proposing to regulate things that are still high status?

Similarly puzzling to me are all these internet businesses built on the idea that ordinary regulations don’t apply to stuff bought on the internet. They think that if you buy them on the internet, hired cars and drivers don’t have to follow cab regulations, rooms for a night don’t have to follow hotel regulations, ventures soliciting investors don’t have to follow securities regulations, and so on. Yes, regulators are slow and reluctant to regulate high status things, but can they really expect to evade regulation long enough to pay off their investors?


Slowing Computer Gains

Whenever I see an article in the popular sci/tech press on the long term future of computing hardware, it is almost always on quantum computing. I’m not talking about articles on smarter software, more robots, or putting chips on most objects around us; those are about new ways to use the same sort of hardware. I’m talking about articles on how the computer chips themselves will change.

This quantum focus probably isn’t because quantum computing is that important to the future of computing, nor because readers are especially interested in distant futures. No, it is probably because quantum computing is sexy in academia, appearing often in top academic journals and university press releases. After all, sci/tech readers mainly want to affiliate with impressive people, or show they are up on the latest, not actually learn about the universe or the future.

If you search for “future of computing hardware”, you will mostly find articles on 3D hardware, where chips are in effect layered directly on top of one another, because chip makers are running into limits to making chip features smaller. This makes sense, as that seems the next big challenge for hardware firms.

But in fact the rest of the computer world is still early in the process of adjusting to the last big hardware revolution: parallel computing. Because of dramatic slowdowns in the last decade of chip speed gains, the computing world must get used to writing a lot more parallel software. Since that is just harder, there’s a real economic sense in which computer hardware gains have slowed down lately.

The computer world may need to make additional adaptations to accommodate 3D chips, as just breaking a program into parallel processes may not be enough; one may also have to keep relevant memory closer to each processor to achieve the full potential of 3D chips. The extra effort to go into 3D and make these adaptations suggests that the rate of real economic gains from computer hardware will slow down yet again with 3D.

Somewhere around 2035 or so, an even bigger revolution will be required. That is about when the (free) energy used per gate operation will fall to the level thermodynamics says is required to erase a bit of information. After this point, the energy cost per computation can only fall by switching to “reversible” computing designs that only rarely erase bits. See (source):


Computer operations are irreversible, and use (free) energy to in effect erase bits, when they lack a one-to-one mapping between input and output states. But any irreversible mapping can be converted to a reversible one-to-one mapping by saving its input state along with its output state. Furthermore, a clever fractal trick allows one to create a reversible version of any irreversible computation that takes exactly the same time, costing only a logarithmic-in-time overhead of extra parallel processors and memory to reversibly erase intermediate computing steps in the background (Bennett 1989).
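
A tiny sketch of the save-the-input trick described above, using an AND gate: the bare gate maps two input bits to one output bit and so cannot be inverted, but carrying the inputs along with the result makes the mapping one-to-one (this is essentially how the Toffoli gate used in reversible logic works).

```python
from itertools import product

def irreversible_and(a, b):
    """Ordinary AND: four input states map onto two output states, so inputs can't be recovered."""
    return a & b

def reversible_and(a, b, c=0):
    """Toffoli-style gate: keep both inputs and XOR the AND result into a third bit.
    With c = 0 the third output bit equals a AND b, and the mapping is one-to-one."""
    return (a, b, c ^ (a & b))

outputs = [reversible_and(a, b) for a, b in product((0, 1), repeat=2)]
print(outputs)                            # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
print(len(set(outputs)) == len(outputs))  # True: distinct inputs give distinct outputs
```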

Computer gates are usually designed today to change as rapidly as possible, and as a result in effect irreversibly erase many bits per gate operation. To erase fewer bits instead, gates must be run “adiabatically,” i.e., slowly enough so key parameters can change smoothly. In this case, the rate of bit erasure per operation is proportional to speed; run a gate twice as slowly, and it erases only half as many bits per operation (Younis 1994).
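
A rough numerical sketch combining the two facts above: the Landauer bound of kT·ln 2 per erased bit (about 2.9 × 10⁻²¹ joules at room temperature) with the proportional-to-speed erasure rule; the bits-erased-at-full-speed figure is an arbitrary illustrative number.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
landauer = k_B * T * math.log(2)   # minimum energy to erase one bit, ~2.87e-21 J

bits_at_full_speed = 10.0   # illustrative: bits effectively erased per gate op at max speed
for speed_fraction in (1.0, 0.5, 0.1):
    bits_erased = bits_at_full_speed * speed_fraction  # erasure per op scales with speed
    floor = bits_erased * landauer                     # thermodynamic floor for that erasure
    print(f"speed {speed_fraction:>4.0%}: >= {floor:.2e} J per gate operation")
```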

Once reversible computing is the norm, gains in making gates smaller, faster, and more numerous will have to be split, some going to letting gates run more slowly, and the rest going to more operations. This will further slow the rate at which the world gains more economic value from computers. Sometime much further in the future, quantum computing may be feasible enough that it is sometimes worth using special quantum processors inside larger ordinary computing systems. Fully quantum computing is even further off.

My overall image of the future of computing is of continued steady gains at the lowest levels, but with slower rates of economic gains after each new computer hardware revolution. So the “effective Moore’s law” rate of computer capability gains will slow in discrete steps over the next century or so. We’ve already seen a slowdown from a need for parallelism, and within the next decade or so we’ll see more slowdown from a need to adapt to 3D chips. Then about 2030 or so we’ll see a big reversibility slowdown, due to a need to divide hardware gains between doing more operations and using less energy per operation.

Overall though, I doubt the rate of effective gains will slow down by more than a factor of four over the next half century. So whatever you might have thought could happen in 50 years if Moore’s law had continued steadily is pretty likely to happen within 200 years. And since brain emulation is already nicely parallel, including with matching memory usage, I doubt the relevant rate of gains there will slow by much more than a factor of two.
