Obviously cultures vary in many details other than this one (Tibetans keep yaks, Maasai keep cattle; Japanese grow rice, Egyptians grow wheat), but the relation between these details may not be complex in the relevant sense, so the dimensionality should either be reducible or nearly separable in most cases.
STE and the humanities are categorically different because they rely on different methods, not just on differing dimensionality. (I’m leaving pure math out because it is actually a third, different thing.) The STE fields obviously rely on the scientific method. We get so enamored with the success of the scientific method that we sometimes forget it is limited to the domain of repeatable patterns in nature, not to the understanding of human actions - things that happened once in a context that can never be re-created. The humanities study human actions in a time-dependent context that cannot be re-created, so the scientific method does not apply. Instead, historical/cultural understanding is gained through the “historical method” - scholarly detective work to piece together the past. Scholars 1) comb the archives (plus archaeology, literature, etc.) to collect data points along the timeline, 2) use modern critical tools and their own best attempt to get inside the minds of the people/cultures they study, and 3) fit a narrative through the data points in a way that tells a coherent story. Because different scholars have different worldviews and background assumptions about the world and the state of humanity within it, the result will always be the view of the individual scholar, and can never be the “view from nowhere.” That the originators of sociology thought they could study culture as a science just demonstrates their misunderstanding of both the scientific and the historical methods.
The question is WHY human actions can't be well studied with the same methods.
The answer is timeline-dependence and human agency. Humanities fields cannot avoid non-deterministic individual human actions in a unique context that cannot be re-created.
But now I think you are looking not at STEM in general, but at economics specifically as the example. Why can’t we use statistical aggregates to understand human culture more broadly the way we do with the special case of consumer behavior in a large open market, to get around the limitations I mention above?
I think there could be certain narrow cases where that is possible, beyond what we know as economics today. This approach could never displace the humanities as a whole, but could find application in specific situations where the variable you are trying to study is an aggregate over at least hundreds or thousands of human decisions, and where the context of all these decisions is somewhat controlled. Yes, high dimensionality limits the cases in which the latter criterion (controlled context) fits.
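To make the “aggregate over hundreds or thousands of decisions” case concrete, here is a toy Python sketch (all numbers invented purely for illustration) of why an aggregate can be forecastable even when each individual decision is not:

```python
# Toy illustration of the aggregation point above: individual choices are
# modeled as noisy and unpredictable, yet the aggregate over thousands of
# them is stable and forecastable (law of large numbers). All numbers are
# invented for illustration.
import random

def individual_choice(rng: random.Random) -> int:
    # One person's yes/no decision, influenced by unmodeled idiosyncrasies.
    return 1 if rng.random() < 0.37 else 0

rng = random.Random(42)
for n in (10, 100, 1_000, 10_000):
    share = sum(individual_choice(rng) for _ in range(n)) / n
    print(f"n={n:>6}: observed share = {share:.3f} (underlying rate 0.37)")
```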
Note: I am a big fan of the genre in which historians try to identify patterns from history to create models of culture or civilizational development, but these historians are usually (at least since 1970) laughed to scorn. Not least because the most they can find are 8 or 10 cases to identify a pattern, and the conclusions always bake in the author’s personal values and biases.
Meanwhile, complex systems science is all about formalizing theory for non-deterministic, irreducible, emergent, fundamentally uncertain behaviors. Claiming that human agency prevents a re-marriage of science and the humanities ignores decades of the most promising work in both fields. We have computers now. We can run simulations at ever-diminishing marginal cost. And it turns out that the ghetto science put itself in back when all we had was calculation by hand (the assumption that it can only study linear, deterministic dynamical systems) covers only a very small region of the actual universe, and only at certain levels of resolution. As just one example, it turns out that yes, you can do quite a bit with sociophysics, just as you can forecast weather very effectively (if not perfectly) with chaos theory. Ultimately, a deeper understanding of the constraints on nonlinear dynamical systems is not incommensurable with better prediction; better prediction depends on it.
But of course, to slightly agree with you, looking at things in the aggregate at only one level of abstraction is not enough and will never be enough. “More is different.” This is why we have and need multiple disciplines: one explanatory level is always inadequate.
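For a flavor of what “we can run simulations” looks like in practice, here is a minimal sociophysics sketch. I'm assuming a Deffuant-style bounded-confidence opinion model as the toy example (my choice, not any particular published result), and all parameter values are illustrative: simple local interaction rules, emergent global clustering.

```python
# Minimal bounded-confidence opinion-dynamics sketch (Deffuant-style).
# Agents with opinions in [0, 1] meet in random pairs; if their opinions
# are within a confidence threshold, they move toward each other.
# Opinion clusters emerge from purely local interactions.
import random

def simulate(n_agents=500, threshold=0.2, rate=0.5, steps=50_000, seed=0):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(opinions[i] - opinions[j]) < threshold:
            shift = rate * (opinions[j] - opinions[i])
            opinions[i] += shift   # move toward each other
            opinions[j] -= shift
    return opinions

if __name__ == "__main__":
    final = simulate()
    # Crude cluster count: bin opinions into 0.1-wide bins and count occupied bins.
    bins = {round(o, 1) for o in final}
    print(f"occupied opinion bins: {len(bins)}")
```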
Au contraire, the whole point of the scientific method is to ferret out missing variables by studying patterns in which reproducibility has not been achieved. It’s a process of creating exceptions to our models.
Humans are not exempt from the scientific method. They’re the same game in hard mode. In a world where science is practiced with total integrity, all validity claims would not just be about some revealed ontology but would fully address the methodology and epistemology of the researchers. By pretending there is a “view from nowhere,” we’ve dismissed enormous fields of potentially very fruitful study. But it’s a matter of rigor, not of type. The digital humanities are just one area where more careful accounting proves incredibly illuminating and restores science to its rightful position as an object-agnostic process of incrementally more precise specification and nth-person verification.
Models are great for identifying gaps in our knowledge and proposing hypotheses for what might reasonably fill those gaps. But models alone cannot experimentally test these hypotheses. You can create models of the origins of culture to generate hypotheses, but if you can’t test those hypotheses (and for most aspects of culture you can’t), you are not using the scientific method. You are using the historical method. That’s not a bad thing! The historical method has been refined into a powerful method of analysis. As digital humanities tools are added to it, its power will increase, but the best study of culture will still not be science. It will still necessarily involve a scholar using judgement, according to his or her own worldview and biases, to generate a narrative that fits a set of facts curated according to that same worldview and those same biases. And that’s OK! It’s better to acknowledge the limitations of our methods with intellectual humility than to lose sight of the assumptions and biases we are building into our models by holding a false perception of the methods we are using.
This speaks to Wigner's "unreasonable effectiveness of mathematics". I.e. the empirical fact that nature's building blocks have low dimensionality in the space of ideas, as you put it. We can be precise in talking about such things.
We lose that precision when we talk about systems with very high dimensionality. LLMs are an interesting intermediate case (dimensionality ~10^11) where there's a level of description that is very precise (what the floating-point units are doing to perform the linear algebra), but that same language does a poor job describing their emergent behaviors. For that we turn to loosey-goosey language (what the LLM "wants", "confabulation", etc.).
As far as what a given individual prefers, I'm guessing it's just a personality trait: Does one like precisely-defined ideas, or fuzzy ideas with a lot of irreducible complexity?
I think there's merit in your observation that, when humans talk about high-dimensional systems, they necessarily cluster things and then talk about those clusters, rather than using high-dimensional descriptors. But I don't think that explains the humanities' love of dialectic. STEM people cluster high-dimensional points using dimension reduction, so they can still talk quantitatively about their model of the system. Humanities people cluster points so they can make True/False claims using Boolean logic, which is always the wrong way to talk about complex cultural phenomena, but the only way that lets you claim your conclusions are 100% certain and apply universally.
Contemporary scientific problems that aren't taken from textbooks are often higher-dimensional than anything a humanities professor would dare tackle. A company I worked at designed a system for detecting and diagnosing helicopter engine problems before there were any symptoms observable to humans or to existing diagnostics; it reduced the data from the engine from about a dozen dimensions to a 3-dimensional model. I built a system for detecting zero-day exploits in TCP packets; it reduced a system that was maybe 80-dimensional to maybe 12-dimensional. For the Netflix competition, I (like many others) reduced a 16,000-dimensional system to a 50-dimensional model.
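For readers who haven't done this kind of work, here is a sketch of the generic dimension-reduction move, using PCA via SVD on synthetic data. The sizes and numbers are made up and bear no relation to the actual engine, packet, or Netflix systems:

```python
# Illustrative dimension reduction via PCA/SVD: project noisy 80-dimensional
# observations that really live near a 12-dimensional subspace down to 12
# coordinates. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 12))             # true low-dimensional structure
mixing = rng.normal(size=(12, 80))               # embed it in 80 observed dimensions
X = latent @ mixing + 0.05 * rng.normal(size=(1000, 80))  # plus sensor noise

Xc = X - X.mean(axis=0)                          # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (S**2) / (S**2).sum()
k = 12
Z = Xc @ Vt[:k].T                                # 80-d points -> 12-d coordinates

print(f"variance captured by top {k} components: {explained[:k].sum():.3f}")
```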
The humanities are vague because the humanities never went through the shift from a qualitative to a quantitative mindset, circa 1300 CE, when Europeans (A) started measuring things on a continuum instead of just counting them, and (B) got Hindu-Arabic numerals.
In the High Middle Ages, tailors didn't write down your measurements; they held a piece of string up to your body, pinched it off at 2 points, then moved that string over a piece of cloth to tell them where to cut it. The string was an analog measuring device. People didn't know how to measure time, velocity, or temperature. They didn't even have graduated rulers, which Rome had; they had rods, which measured rods. Stonemasons carried a separate ruler for each unit they wanted to measure, and their system of measurement was designed so that they didn't have to /count/ units. It was a binary system. The use of Roman numerals made multiplication difficult and division nearly impossible. Tax collectors couldn't compute taxes using multiplication; they'd place a stone for each thing of some type someone had in a square whose rows had as many stones as the divisor for the tax (which would be a unit fraction), and the tax would be the number of stones in each column. Or they'd use an abacus to do the counting; but they never directly computed, e.g., 15% of anything.
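Here's my reconstruction of that stone-grid procedure in code, purely to show the arithmetic it replaces. The historical details are as described above; the function and its names are my own illustration, not a documented algorithm:

```python
# Reconstruction (an assumption for illustration) of the stone-grid method:
# to collect a unit-fraction tax of 1/divisor on `goods` items, lay one stone
# per item in rows of `divisor` stones each; the number of stones in a full
# column (i.e., the number of complete rows) is the tax owed.
# No multiplication or division required, only laying out and counting.
def stone_tax(goods: int, divisor: int) -> int:
    full_rows = 0
    stones_left = goods
    while stones_left >= divisor:     # lay out another complete row
        stones_left -= divisor
        full_rows += 1
    return full_rows                  # stones in each full column

assert stone_tax(100, 10) == 100 // 10 == 10   # a 1/10 tithe on 100 sheep
print(stone_tax(237, 8))                        # a 1/8 tax on 237 items -> 29
```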
This shift from qualitative to quantitative was a necessary precondition for the later shift from the certainty of rationalism to the uncertainty of empiricism, as articulated in Bacon's Novum Organum (1620) and in the Royal Society of London's account of its founding principles (Sprat 1667, History of the Royal Society of London). Both identified claims to certainty and universality as a serious epistemological problem.
The humanities, unlike tax collectors, never even learned to count. And Derrida explicitly rejected modern logic in favor of Aristotelian logic, which is useless, because it assumes the world has the same ontology as Aristotle's metaphysics does. It has only unary predicates. But Derrida, like most people who aren't in STEM, didn't even use Aristotle's full logic, which at least had the quantifier "some". Modern dialecticians don't use that word. Marxists don't say /some/ property is theft. It would be a simple change, but most non-STEM people don't have that word in their internal model of the world, which is still the model that Christianity took from Plato, in which every instance of a type has the same essential nature. Most people not in STEM don't have anything but universal quantifiers in their mental models of the world (plus the Aristotelian accidental properties of specifically-enumerated instances). That's why their talk is necessarily vague. You can't make grand assertions using only universal quantifiers without running into counter-examples and contradictions, so you need to use ambiguous words that give you enough slack to slide around those counter-examples & contradictions. The evolution of the humanities, via survival of the slipperiest, has taken it deeper and deeper into obscurity over the past centuries: from Kant, to Hegel, to Heidegger, to Derrida. Possibly this began around 1800 because that was when it became impossible for philosophy to compete with science in explaining things, so philosophers had to specialize in mystifying things.
Try this sometime: Tell someone in the humanities some not-terribly-shocking "some" claim, like that some libertarians support mandatory background checks to buy a gun. If you bring it up a week later, they're likely to recall you as saying that libertarians support mandatory background checks to buy a gun. Or just read any reddit thread about politics and try to find the word "some" before any statement about Republicans or Democrats.
So the humanities continue to use dialectic, which is simply not capable of supporting intelligent conversation about complicated topics.
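The quantifier point can be stated very compactly in code; the "survey" below is invented purely to illustrate the logical difference between the claims, not as a claim about actual libertarians:

```python
# A single counterexample falsifies an "all" (universal) claim, while a
# single witness suffices for a "some" (existential) claim. Data invented.
supports_checks = {"libertarian_A": True, "libertarian_B": False, "libertarian_C": True}

all_claim = all(supports_checks.values())    # "libertarians support background checks"
some_claim = any(supports_checks.values())   # "some libertarians support background checks"

print(f'"all" claim:  {all_claim}')   # False -- one counterexample kills it
print(f'"some" claim: {some_claim}')  # True  -- one witness is enough
```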
"Property is theft" comes from Proudhon, not Marx.
Thank you for that correction!
I do hear it from Marxists, though, so I've changed it to "Marxists".
So I'm wondering about local minima here. Using neural networks as a proxy for a high-dimensional space, it seems that if there are enough dimensions, no local minimum is inescapable (a widely noted feature of LLM training). If a local minimum in cultural space is a stable, non-drifting culture, then based on the analogy to AI, such points are almost a myth. There could never be stability, and drift would be a constant for any culture.
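Here's a rough numerical illustration of that intuition, using the standard random-matrix heuristic that the curvature at a critical point looks like a random symmetric matrix (an analogy borrowed from optimization, not a claim about cultural dynamics):

```python
# For a critical point to be a true local minimum, every eigenvalue of its
# Hessian must be positive. If the Hessian behaves like a random symmetric
# matrix, that becomes vanishingly unlikely as dimension grows, so most
# critical points are escapable saddles rather than traps.
import numpy as np

rng = np.random.default_rng(0)

def frac_local_minima(dim: int, trials: int = 2000) -> float:
    count = 0
    for _ in range(trials):
        A = rng.normal(size=(dim, dim))
        H = (A + A.T) / 2                       # random symmetric "Hessian"
        if np.all(np.linalg.eigvalsh(H) > 0):   # all-positive curvature?
            count += 1
    return count / trials

for d in (1, 2, 4, 8):
    print(f"dim={d}: fraction of random critical points that are minima ~ {frac_local_minima(d):.3f}")
```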
I have no idea what I just read, but I definitely want more. Thanks Robin!
This is kind of what I liked about working for a few neuroimaging labs: I got to work with and study people whose culture was very non-STEM and see how similar or different we all were in different respects, just by analyzing data, engineering feedback-driven protocols collected from electromagnetic sensors, and tailoring the functionality to individual subjects. Of course, that was only a sliver of what we could record in the lab, and we still don't have hardware cheap enough to collect it outside the lab with the same fidelity. I suspect that when more of us can talk about a person's feelings not in abstract terms, but as electromagnetic distributions in particular regions of spacetime we call our bodies, with or without varying stimuli, these kinds of things will become more concrete.
Not sure if you saw this @Robin Hanson but you and I are thinking along very similar lines, calling for similar strategic pivots, and it would be good to discuss this stuff with you on my show. Mentioned you in the short talk I gave at The Diverse Intelligences Summer Institute this summer because I found your last article on high-dimensional foraging the day I was preparing to present on the exact same subject:
https://michaelgarfield.substack.com/p/foraging
Happy to talk to you on your show; DM or email me.
It can be hard to *prove* that a system is intrinsically high dimensional. Especially if we can't observe it for long.
Sometimes things that seem high-dimensional turn out not to be. For example, economics has low-dimensional representations of how goods and services flow through the economy: the laws of supply and demand, etc. They don't capture 100% of the truth, but enough to be useful.
Could similar low-d representations be found for other aspects of culture? This was the premise of Asimov's "psychohistory" in his Foundation books. It's hard to know how to exclude the possibility.
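To show just how low-dimensional the textbook representation is, here is the linear supply-and-demand model as a few lines of code; the coefficients are invented for illustration, not drawn from any real market:

```python
# The "laws of supply and demand" as a deliberately low-dimensional model:
# two lines and one equilibrium point, compressing very many individual
# decisions into two parameters each.
def equilibrium(demand_intercept, demand_slope, supply_intercept, supply_slope):
    # demand: q_d = demand_intercept - demand_slope * p
    # supply: q_s = supply_intercept + supply_slope * p
    price = (demand_intercept - supply_intercept) / (demand_slope + supply_slope)
    quantity = demand_intercept - demand_slope * price
    return price, quantity

p, q = equilibrium(demand_intercept=100, demand_slope=2, supply_intercept=10, supply_slope=1)
print(f"equilibrium price ~ {p:.2f}, quantity ~ {q:.2f}")  # 30.00, 40.00
```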
The whole point of the Foundation books was that Seldon’s preferred level of explanatory depth was inadequate. The question of how many dimensions are useful has everything to do with the spatiotemporal resolution of desired prediction, and to get it truly “right” you need to use several at once. Which is of course a receding horizon of ever-increasing difficulty once you realize that the models actually influence the behavior of the systems observed (at least in human systems). So no, Econ 101 has proven to be only useful within a very restrictive sense of “utility” because its mass adoption has undermined its own efficacy and it turns out it’s not that accurate of a model to begin with. (I recommend looking into the CoreEcon curriculum if you want an example of a better starting point for modern economics education.)
You are highlighting different ways to understand the world, based on various internally-organized focal points within an internally coherent model of a dual-polar perspective. This perspective could instead be more nuanced and fractured, more multipolar and providing subtle resonance to incoming signals along various pathways, though it requires a much wider scope of enlived (fully alive and integrated) experiences to develop.
Externally copied models of understanding, from any perspective, are usually shallow and non-resonant, lacking the ability to interact with deeper nuances of meaning. However, this is usually unnecessary for building a consensus model of interaction within an ecosystem, depending on the temporal dimensional scale. Common multi-stage vocabularies are useful to keep agents in sync on topics of base principles for all involved.
Your comment is an example of the kind of culture speak that STEM people find hard to understand.
in what way?
The irony is that the originating comment was written by a STEM-cultured person with multipolar views and enlived, diverse experiences.
This seems plausible and profound. Including examples would make it much easier to understand and discuss.
One question occurs to me: why are cultural spaces higher-dimensional? (Edited:) Is it just that science is our word for lower-dimensional concept spaces? For as sciences become higher-dimensional, they merge into the humanities, e.g. psychology -> sociology -> history.
Incidentally I reckon the search for unifying/shared concepts in the arts/humanities is a very interesting one (but not one those involved are particularly good at/interested in, except philosophers)
They’re not actually higher-dimensional; they just appear that way because of how we coarse-grain the microcosm and macrocosm. It’s complexity all the way down…check out Jessica Flack’s work on hourglass emergence.
Along with this, it's worth noting that higher-dimensional spaces are bigger - there's simply more Cartesian room for distance between points. This means they're much sparser than low-dimensional spaces that have similar numbers of members.
It MAY be that this also increases the measurement error of determining where a concept or action is in the high-dimensional space, which is related (maybe; the direction of causality is unclear) to the illegibility problem in non-STEM thinking.
We might also end up with the impression that the space has a large number of relevant dimensions if we had a taboo against acknowledging one predominant vector, e.g. https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/ or https://benjaminrosshoffman.com/civil-law-and-political-drama