Can Humans Be The FORTRAN Of Creatures?

It is one of the most fundamental questions in the social and human sciences: how culturally plastic are people? Many anthropologists have long championed the view that humans are very plastic; with a matching upbringing, people can be made to behave in a very wide range of ways, and to want a very wide range of things. Others say human nature is far more constrained, and collect descriptions of “human universals” (see Donald Brown’s 1991 book Human Universals).

This dispute has been politically potent. For example, in gender relations some have said that social institutions should reflect the fact that men and women have certain innate differences, while others say that we can pick most any way we want the genders to relate, and then teach our children to be like that.

But let’s set those issues aside, look to the distant future, and ask: do varying degrees of human cultural plasticity make different predictions about the future?

The easiest predictions are at the extremes. For example, if human nature is extremely rigid, and hard to change, then humans will most likely just go extinct. Eventually environments will change, and other creatures will evolve or be designed that are better adapted to those new environments. Humans won’t adapt very well, by assumption, so they lose.

At the other extreme, if human nature is very plastic, then it will adapt to most changes, and change to embody whatever innovations are required for such adaptation. But then there would be very little left of us by the end; our descendants would become whatever any initially very plastic species would have become in such an environment.

So if you want some distinctive human features to last, you’ll have to hope for an intermediate level of plasticity. Human nature has to be flexible enough not to be outcompeted by a more flexible design platform, but inflexible enough to retain some of its original features.

For example, consider the programming language FORTRAN:

Originally developed by IBM … in the 1950s for scientific and engineering applications, Fortran came to dominate this area of programming early on and has been in continual use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics and computational chemistry. It is one of the most popular languages in the area of high-performance computing and is the language used for programs that benchmark and rank the world’s fastest supercomputers. (more)

FORTRAN isn’t the best possible programming language, but because it was first, it collected a powerful installed base well adapted to it. It has been flexible enough to stick around, but it isn’t infinitely flexible — one can very much recognize early FORTRAN features in current versions.

Similarly, humans have the advantage of being the first species to master culture in a powerful way. We have slowly accumulated many powerful innovations we call civilization, and we’ve invested a lot in adapting those innovations to the particulars of humanity. This installed base of ways in which civilization is matched to humans gives us an advantage over creatures with a substantially different design.

If humans are flexible enough, but not too flexible, we may become the FORTRAN of future minds: clunky but still useful enough to keep around, noticeably retaining many of our original features.

I should note that some hope to preserve humanity by ending decentralized competition; they hope a central power will ensure that human features survive regardless of their local efficiency in future environments. I have a lot of concerns about that, but yes, it should be included on the list of possibilities.

  • Ben Abbott

    Re: “Similarly, humans have the advantage of being the first species to master culture in a powerful way.”

    Have you read Matt Ridley’s “Rational Optimist”?

  • Steve Reilly

    I don’t understand why a rigid human nature would lead to extinction.  “Rigid human nature” just means that individuals don’t change much in response to their environment.  It says nothing about how the species might evolve over millennia of changing environments.

    • http://overcomingbias.com RobinHanson

      We are talking about the flexibility of the species to deal with rapid large changes. For an individual, we’d talk more about the rigidness of their personality or work style.

  • Don Geddis

    Robin, I’m not sure whether you’re inspiring me, or making me sad.  It’s kind of pathetic and unfortunate that Fortran has lasted so long.  It’s like a virus that uses network effects and first-mover advantage to remain, long after far better alternatives are available.  Computer software would be “better”, if only Fortran were not (still) so popular.

    I understand that we all have an emotional attachment to “humanity”.  And perhaps there is no greater purpose in the universe (for us), than for humanity to dominate the future.  But your future vision sounds dystopian to me: so much “more” could be achieved, if only the future weren’t “stuck” with the poor, primitive design of such early humans.

    I think there’s a conflict between the dreams we have of what the future universe could be like, and our own limited humanity.  We probably can’t have both.  But I’m having a hard time cheering for your vision of a substandard human design fighting to remain relevant, in the face of obviously superior solutions.

    • VV

       Fortran is very well adapted to its niche: numerical computing.
      That’s one of the last areas of computing where you really want to squeeze every bit of performance out of your hardware, and it is very difficult to beat well-optimized Fortran code at its own game.

      Only recently has C++ code with heavy use of template meta-programming and modern compilers become able to compete with Fortran at numeric computation.

      Are humans particularly well adapted to some ecological niche? Currently it doesn’t seem so; in fact, humans are one of the most generalist species ever to appear on earth.
      Maybe in the far future humans as we know them will be replaced in every niche except a narrow one, but that doesn’t seem very likely.

      However, I don’t think that would be a problem. I don’t buy AI-from-first-principles or even brain uploads as likely replacements for humanity. It will probably be natural evolution plus some genetic engineering plus possibly some cybernetic augmentation: humans as we know them will be replaced by their descendants in the same way that sons and daughters replace their parents.

      Anyway, if you discount future utility, there is no real reason to care about very distant descendants. After all, interstellar travel may well be technologically impossible, so when the sun runs out of juice, it will be all over.

  • Doug

    I see the analogy you’re trying to make, but this description of Fortran’s place in the modern computing ecosystem is inaccurate.

    While some of its use may be due to “legacy” reasons, Fortran’s true strength is that it’s blazing fast. Even faster than C for numeric computations. For example much of the R statistical language base is written in Fortran.

    The better analogy for your point might be COBOL.

  • Johnicholas Hines

    You don’t mention “change human nature by genetic engineering” as a strategy in the very-rigid scenario. Why is that?

    • croncobaurul

       If we ever get to auto-engineer our genetic code, would that make us the LISP of species?

    • http://overcomingbias.com RobinHanson

      The ability to change human genes only moderately increases our plasticity and flexibility. The basic issue remains.

  • dmytryl

    As a programmer, I used to think that differences between programming languages actually matter. But over the years I lost that belief. I believe that the reason Fortran lasted so long is that the differences between programming languages themselves are very much cosmetic, and the importance of the programming language is greatly overrated. There is some ceiling past which trying to improve stuff simply addresses issues that are not bottlenecking the human worker, while very much getting in the way.

    The advantage of, say, C or C++ over assembly is easy to demonstrate. Run a programming contest, marathon style, with the sides representing top performance, ending the contest head to head. The assembly will almost never win. Contrast that with the functional programming languages, which represent an even bigger paradigm shift from the procedural languages than those do from assembly. They are no contest winners; their neatness is that of an OCD sufferer who is unable to get any actual work done while organizing everything on his workplace and keeping it 100% organized through the work. There’s no corresponding advantage. Maybe it is the case that at expert level, a lot of extremely high-level yet still very much mechanical functionality is internalized into the programmer’s head, and is executed at no extra cost, as it is not bottlenecking the work.

    Humans have an advantage in that they work as nodes of a distributed system – human society. Those ruthless utility-maximizer AI ideas and scares – if they are at all implementable – are not, and the architectures are very un-P2P, very impractical. Something that is human-level is not useful if it cannot be a node in existing society – if it does not work with existing cooperation protocols. Something that is calculating how to cooperate with you starting from its own ulterior motives simply has to do a lot of extra work – it is ineffective from our perspective – or is even conceptually impossible, due to the difficulty of defining utility other than from a god’s-eye view, and the difficulty of keeping the system sane (expected utility maximization over made-up priors very possibly ends up with the agent acting upon an incredibly improbable but incredibly high-payoff idiosyncratic theory it invents – maximizing the expected utility all right, but only affecting an incredibly small fraction of possible worlds).

    • Michael Vassar

      I have never seen many of these claims before and I’d really appreciate more experienced programmers chiming in.

      • http://www.facebook.com/people/Jorge-Emilio-Emrys-Landivar/37403083 Jorge Emilio Emrys Landivar

        There are oodles of LISP and Ruby fanatics who would say that dmytryl is full of crap.

      • Drewfus

        Is that because these fanatics like being fanatics, or because they have data that supports their fanaticism?

      • dmytryl

        The fanatics’ claim, if true, would have quite a lot of real-world consequences.

        There’s always support in favour of whatever language, when it is anecdotal, without anything resembling a case-control environment. The difference spans everything, including really small examples. When you set up programming contests, the difference somehow disappears, with a zillion excuses for the disappearance, of course. When you have economic competition, again, the supposedly much better languages somehow fail to assert their dominance.

        I’m not saying that they would be identical, just that the difference made by the fancy favourite features of some language is much smaller than claimed.

        It may be up to how much the language is annoying particular person. I find lack of reflection in C++ incredibly annoying, for one thing. But is it a big drag in development? Not really.

      • dmytryl

        http://alarmingdevelopment.org/?p=392

        I think it’s actually a very common belief among programmers that you get rapidly diminishing returns past C. There are some differences, of course; sometimes it is very annoying when you have no coroutines in the language, sometimes it is very annoying when you have no reflection, and so on, but it doesn’t make a lot of difference at that point.

      • John Lawrence Aspden

        Michael, I’m a very experienced programmer in business for myself (what’s called a contractor, so I see the insides of many companies in Cambridge, which is the European tech centre). I have used commercially all the languages referenced above, except C# and F#, which are Microsoft versions of things I have used.

        I don’t have any non-anecdotal data on ease of programming, but I think the hierarchy

        Assembler > C > Java > Python

        is beyond dispute amongst everyone who has used all four for significant work.

        It works both ways. Assembler will take the longest to write, and will run the fastest, and will be hardest to modify and make correct. Python allows you to knock up solid programs very quickly, but runs very slowly in comparison.

        There are many caveats, and often the best option is to write the program in python and then translate the bits where the computer spends the majority of its time into something faster (usually C), getting the best of both worlds.

        C, ML and Lisp all have a ‘clean’ feeling about them, which is very attractive to the mathematically minded. 

        They’re the three languages I’d use for pleasure, and there are many theoretical arguments in their favour. ML and Lisp promise python-like expressivity and C-like execution speed, and often live up to their promises.

        C is very mainstream, but the other two, despite being ancient and widely respected, have never been widely popular in commercial circles. This is inexplicable to their fans, including me. I suspect that there is some sort of network effect that makes them better for someone working alone than for teams of people, but I don’t know what it is.

        On the other hand, a couple of years ago I worked for a small startup called XenSource, which was sold to Citrix for something like £500 million. We used many languages, but the complex and difficult stuff was all done in ML, and that was a rewrite of a program that had originally been done in C, and been found to be undebuggable and unmaintainable in that form, and so rewritten by the team responsible for it in their preferred language. They reported that it was much easier to maintain, understand and modify than the original C version, which I never saw.

        There may be some sort of handicapping effect going on, where by using the academic, mathematical type languages you’re ensuring that you only employ very clever programmers. 

        I think everyone agrees that a couple of good programmers are worth a couple of hundred average ones, for almost any task. But it’s very hard to work out who’s good in advance.

      • dmytryl

        I would agree with the hierarchy (I even use python a fair bit myself when appropriate, and I have Lua scripting in a game engine I developed), but not with the common fallacy of assuming that the gaps are equal sized.

        C vs assembly: an enormous gap. Portability, readability, everything. Assembly is dead except for very special cases, and was dead long before the compilers got very good at optimizing. That’s how much of a gap it is.

        Java vs C: the memory management in C is annoying, and the string handling especially so; Java is enough of a superset of C to be better than C, if we don’t care about memory footprint and other quirks.

        Python vs Java: Java is verbose and relatively inflexible when extending the functionality of existing classes, whereas Python is difficult to re-factor because you can’t rely on compiler errors when you change the libraries. This sounds trivial but seems to begin to really suck for big projects.

        Lisp: try doing scientific computing in it. Easily as silly as writing web applications in C++. ML: never used it, and from what I’ve seen of it, hopefully never will.

        C++: What I primarily use, and what the industry I work in (computer graphics software, simulation software) uses. It is sufficiently close to scientific computing that things like LISP simply do not fly.

        My overall position is: a programming language is a form of human-machine interface, and as such really has no business sacrificing usability for the sake of purity. Any language where the designers decide, okay, operator overloading is unnecessary and evil and goes against our maxims, is not even worth looking at (from my point of view as a programmer who does a lot of applied mathematics), not because operator overloading is that important (it isn’t) but because it’s a good proxy for the attitudes of the language designers and the users of that language.

      • http://thomblake.mp thom blake

        It’s hard to make general sweeping claims about programming languages – there are a lot of different mostly-closed ecosystems, and from the inside it seems like everybody in the world is using the same programming language.  In reality, both technical needs and culture make a huge difference in what is the best programming language.

        For example, the statement “C++ is better than C pretty much always” is not something I thought I’d ever hear anyone say.  In my circles, “everybody knows” that C++ is an abomination that has virtually no advantages and many disadvantages as compared to C.

        As another example, I once talked to a fellow who programs targeting systems for long-range missiles.  He only programs in Pascal, as he believes C isn’t strict enough about data types.

        Assembly certainly wasn’t dead before we had good optimizing compilers.  For computer games in the early 90’s, for example, it was common to write graphics routines in assembly since it was much more efficient than C.  And it probably isn’t dead now.  While you can’t write Windows drivers reliably in anything other than C and sometimes C++, I believe folks still use assembly for other systems quite a bit, particularly on specialized hardware.

        Pure functional, declarative, and logical programming are used quite a bit, though again it depends largely on application.  There are people using Haskell in various fields.  And it’s weird to see a list of non-pure functional languages in widespread use without reference to Emacs Lisp.

      • dmytryl

        In my circles, “everybody knows” that C++ is an abomination that has virtually no advantages and many disadvantages as compared to C.

        A phenomenon that I always found weird – having strong opinions like this. Clearly C++ is used a lot more than C nowadays. Got to explain this. Obvious explanation: it is pretty much a superset, and is slightly better. Screwed-up explanations: some sort of massive C++ conspiracy or whatever. Ditto for Fortran: why is it still used? Because other languages don’t improve much upon its feature set, while getting in the way of optimizations – plus the tools not existing. Any more exciting explanation needs proportionally more evidence.

    • VV

       Well, I don’t think that differences don’t matter.

      Unless you need extreme low-level optimization Fortran is better than Assembly and C is better than Fortran. C++ is better than C pretty much always.

      If you are willing to sacrifice more efficiency, then it’s Java and C# over C++ (and C# over Java), and if you accept further overhead, then Python wins hands down.

      Note that these languages all use the imperative programming paradigm (extended with object orientation). What about different programming paradigms?

      The pure functional paradigm of languages like Haskell and Clean is just of theoretical interest. It is useful only in the same sense that math is, but you can’t use it to do any practical programming. Ditto for logic programming.

      “Impure” functional programming languages such as the ML family, and multi-paradigm languages such as the Lisp family, on the other hand, are quite convenient to use after you are familiar with them, and some of their implementations are quite optimized (performance similar to Java and C# is reported), but, with possibly two exceptions, they aren’t widely used. I think that the problem is that these languages are not very interoperable with existing software, partly because they were developed mostly in academic contexts and partly because there is some “impedance” between different paradigms.

      The two exceptions I was thinking of are Wolfram’s Mathematica, essentially a Lisp derivative but with significant changes, which is widely used for scientific computing, and possibly Microsoft’s F#, an ML derivative which might have finally got the interoperability issue right and appears to have some significant use in financial computing (we’ll see if it’s just a fad).


      • dmytryl

        Fortran as a language simply stands less in the way of optimization, IMO.

        When you try to go fancier, you either burden the human with second-guessing the compiler, to write code the compiler will be able to optimize (when you rely on the compiler to get from O(n^2) to O(n) or the like), or you prevent the compiler from doing optimizations (due to the pointer-aliasing mess in C, which makes it hard to optimize safely), or both.

    • http://www.facebook.com/CronoDAS Douglas Scheinberg

       Paul Graham said that there were things he could write easily in LISP that he probably couldn’t have written at all in C++. (http://paulgraham.com/avg.html)

      • dmytryl

        He’d be an awesome example of a language zealot. Somehow the most amazing languages are always among the least used; my explanation is that the less a decent language (LISP is a decent language) is used, the more people who like that language write about how great it is, trying to improve the situation. The zeal grows faster than the numbers fall off.

  • Drewfus

    “At the other extreme, if human nature is very plastic, then it will adapt to most changes, and change to embody whatever innovations are required for such adaptation.”

    More is always more? No diminishing returns? No optimal level of plasticity?

    Psych researchers found evidence decades ago that people organized into groups had optimal levels of freedom. Neither highly authoritarian nor completely laissez-faire environments were best for creativity or productivity. Something between these two extremes worked best.

    Same for plasticity, IMO.

    Consider competition. Is competition purely a function of the number of competitors in a given domain, or is it also an (inverse) function of the scope of the competition, whereby the narrower the scope for changes, the more innovative the actual changes will be? That is, more and greater constraints equal more innovation within them – assuming a reasonable level of freedom otherwise.

    For intelligent systems, constraints are not, or not only, limits and boundaries; they are catalysts for innovation and creation. This includes our genetic constraints, IMO. Therefore, the following is not necessarily true:

    “But then there would be very little left of us by the end; our descendants would become whatever any initially very plastic species would have become in such an environment.”

    • http://overcomingbias.com RobinHanson

      I didn’t claim that more plasticity always wins. But win or lose, a highly plastic human nature won’t leave much of a legacy. A highly rigid human nature at least leaves a legacy, if it wins.

  • Pingback: A proposed adjustment to the astronomical waste argument | The Effective Altruism Blog