Daniel Schmachtenberger – the only way we'll have a future
We hope you will thoroughly understand Daniel Schmachtenberger’s train of thought. It is vital that we do.
FUTURE THINKERS PODCAST
In this episode of Future Thinkers:
The generator functions of existential risks
The impact of win-lose games, multiplied by exponential technology
Win-lose games as the essence of capitalism, science, and technology
How to solve multi-polar traps
How to replace rivalry with anti-rivalry
The design criteria of an effective civilization
The characteristics of complex and complicated systems
Open-loop vs. closed-loop systems
Scalable collective intelligence, sense-making, and choice-making
The relationship between choice and causation
Natural and conditioned experiences
The difference between power and strength
The path to a post-existential-risk world
How to increase our self-sovereignty
Why incentives are intrinsically evil
Daniel Schmachtenberger
Today on the show we welcome back Daniel Schmachtenberger, the co-founder of Neurohacker Collective and founder of Emergence Project.
After addressing the existential risks that are threatening humanity in one of our earlier episodes, Daniel now dives deeper into the matter. In the following three episodes, he talks about the underlying generator functions of existential risks and how we can solve them.
Win-Lose Games Multiplied by Exponential Technology
As Daniel explains, all human-induced existential risks are symptoms of two underlying generator functions.
One of these functions is rivalrous (win-lose) games. This includes any activity where one party competes to win at the expense of another party. Daniel believes that win-lose games are at the root of almost all harm that humans have caused, both to each other and to the biosphere. As technology is increasing our capacity to cause harm, these competitive games start to exceed the capacity of the playing field. Scaled to a global level and multiplied by exponential technology, these win-lose games become an omni lose-lose generator. When the stakes are high enough, winning the game means destroying the entire playing field and all the players.
Daniel then looks into some of the issues that capitalism, science, and technology have created. Among the byproducts of these rivalrous games are what he calls “multipolar traps”: scenarios where what works well for individuals locally runs directly against global well-being. He proposes that our sense-making and choice-making processes need to be upgraded and improved if we want to solve these traps as a category.
Daniel believes that the current phases of capitalism, science, technology, and democracy are destabilizing and coming to an end. In order to avoid extinction, we have to come up with different systems altogether and replace rivalry with anti-rivalry. One of the ways to do that is moving from ownership of goods towards access to shared common resources. Daniel argues that we are at the place where the harmful win-lose dynamics both have to and can change.
He also proposes a new system of governance that would allow groups of people that have different goals and values to come to decisions together on various issues.
Humanity’s current predatory capacity enhanced with technology makes us catastrophically harmful to the environment that we depend on. Daniel challenges the notion of “the survival of the fittest”, and argues that it is not the most competitive ecosystem that makes it through, but the most self-stabilizing one.
Complicated Open-Loop Systems vs. Complex Closed-Loop Systems
The biosphere is a complex self-regulating system. It is also a closed-loop system, meaning that once a component stops serving its function, it gets recycled and reincorporated back into the system. In contrast, the systems humans have created are complicated, open-loop systems. They are neither self-organizing nor self-repairing. Complex systems, which come from evolution, are anti-fragile. Complicated systems, designed by humans, are fragile. Complicated open-loop systems are the second generator function of existential risks.
Open loops in a complicated system, such as modern industry, create depletion and accumulation: resources are depleted at one end of the chain and waste accumulates at the other. A natural complex system, on the contrary, reabsorbs and processes everything, which means there is no depletion or waste in the long run. This is what makes natural systems anti-fragile. By interfering with natural complex systems, we affect the biosphere so much that it begins to lose its anti-fragility.
At the same time, man-made complicated systems are outgrowing the planet’s natural resources to the point where collapse becomes unavoidable.
Daniel explains that the necessary design criteria for a viable civilization which is not self-terminating are:
Creating loop closure within complicated man-made systems
Having the right relationship between complex natural and complicated man-made systems
Creating anti-rivalrous environments within which exponential technology does not threaten our existence
The Relationship Between Choice and Causation
Daniel explains that adaptive capacity increases in groups, but only up to a point. After a certain point, adding more people yields diminishing returns per capita. This results in people defecting against the system, because that’s where their incentives lie. He proposes that we create new systems of collective intelligence and choice-making that can scale more effectively.
Science has given us a solid theory of causation. Through science, we have gained incredible technological power that magnifies the outcomes of our choices. But we don’t have a similarly well-grounded theory of choice, an ethical framework to guide our use of this increased power. When it comes to ethics, science rejects all non-scientific efforts, such as religious ideas or morals. Instead, win-lose game theory has served as science’s default theory of choice. This has led to a dangerous myopia towards the existential risks that win-lose games generate.
It is necessary to address these ethical questions, especially given the level of existential risk we now face. We have to improve individual and collective choice-making so that it takes everything into consideration and recognizes how interconnected we are with everything around us. “I” is not a separate entity, but an emergent property of the whole.
We need a theory of choice that relates choice and causation. The core of the solution, as Daniel explains, is coherence dynamics: internalizing the external and including it in the decision-making process.
The Path to a Post-Existential-Risk World
Daniel talks about the need for individuals and systems to have strength as opposed to power. Strength is not the ability to beat others, but the ability to maintain sovereignty in the presence of outside forces.
The path to the post-existential-risk world leads towards a civilization that is anti-rivalrous, anti-fragile, and self-propagating. Ultimately, we have to create a world that has not only overcome today’s existential risks but is also one where humanity can thrive.
FTP057, 058, 059: Daniel Schmachtenberger – Solving The Generator Functions of Existential Risks
Euvie: I’m reading Carl Jung right now and he’s talking about his experience [00:01:30] of going to live with the [inaudible [0:01:32] Indians and how it completely blew apart his conception of what was natural, and how the western world view is different from other world views. He noticed that they were so happy and serene, that they felt as one with their environment, and that they had this very special relationship with the sun. It was very beautiful but, at the same time, he realized how vulnerable they were to [00:02:00] the invasion of western civilization. If we create a new civilization operating system that is not oriented towards winning wars, then how do we ensure that it doesn’t get destroyed by those who are?
Daniel: Imagine there’s a group of people that get a stronger theory of causation. They learn Newton’s physics and now they can use calculus to plot a [inaudible [0:02:22] curve and make the [inaudible [0:02:23] hit the right spot every time, rather than pendulum dowsing, which is hit or miss. That belief is going to catch on. [00:02:30] The reason science really caught on and took us out of the dark ages was that it led to better weapons and better agriculture tech and better real shit. It proliferated because it was proliferative. If we increase our theory of causation, that ends up catching on.
If we could increase our theory of causation and our theory of choice, and the relationship between them, that would actually be the most adaptive, especially given that our particular game-theoretic model of choice, with the extension of causation we have, is definitely self-terminating, [00:03:00] definitely anti-adaptive. I know we’ve been on for a long time. There’s really only one more thing that I want to share that closes this set of concepts. Remember we said that any source of asymmetric advantage, competitive advantage in a win-lose game, will end up, once it’s deployed, being figured out and utilized by everybody. You just up the ante on the playing field.
We also said that in the [inaudible [0:03:25] and many of the tribes we’ve mentioned [00:03:30] lost win-lose games. We don’t want to build something that’s just going to lose at a win-lose game, but we know that if it tries to win at win-lose games it’s still part of the same existential curve that we’re on. It has to not lose at a win-lose game while also not seeking to win. It’s basically not playing the game, but it is oriented around how not to lose. That’s a very important thing. We can think about power, the way we have traditionally thought of it, as a power-over or power-[00:04:00]against type dynamic: a game-theoretic, win-lose dynamic. Any agent that deploys a particular kind of power leads to other agents figuring out how to deploy the same and other kinds of power. Power keeps anteing up until we get to problems.
We could think about another term we might call strength, which is not the power to beat someone else but the ability to not be beaten by someone else. It’s the ability to maintain our own sovereignty and our own coherence in the presence of outside forces. We could talk about my power: “Can I go beat somebody up?” [00:04:30] But my strength is: “Can my body fend off viruses? Can I fend off cancers? Can I actually protect myself if I need to protect myself?” Which is different from, “Can I go beat other people up?” The power game is the game we actually have to [inaudible [0:04:45]. Power-over dynamics mean rivalrous dynamics, mean win-lose dynamics; that is the source of evil.
It’s not that money is the source of evil; it’s that power-over, where I think my wellbeing is anti-coupled to yours, ends up being [00:05:00] the source of evil, and money’s just very deep in the stack of power dynamics. Status is, and certain ways of relating to sex, and a number of other things are. We have to get rid of the power-over dynamics. That doesn’t mean I can’t develop the strength that makes me anti-fragile in the presence of rivalry. Then I say, “What kind of capacity can I develop that doesn’t get weaponized by somebody else and used against me, given that any asymmetric capacity I get can be weaponized?” There’s really only one, and this is a really interesting thing.
[00:05:30] If I make the adaptive capacity of… Say we’re trying to make a new civilization as a model, a new [inaudible [0:05:38] civilization: new economics, new governance, new infrastructure, a new culture that has comprehensive loop closure, doesn’t create accumulation or depletion, doesn’t have rivalrous games within it, etcetera. If I try to have some unique adaptive capacity via a certain type of information tech, the rest of the world will see that information tech and [00:06:00] use it for all kinds of purposes, including against me where there’s an incentive to do so. The same is true if I use military tech or environmental extraction tech; I’m still in the same problem.
But if my advantage, the advantage of the way this civilization is structured, has to do with increased coherence in the sense-making and choice-making between all the agents in the system, all the people in the system, increased interpersonal coherence, then this cannot be weaponized. Anyone else employing it just propagates the [00:06:30] system; it’s self-propagating. For instance, when we start playing rivalrous games we start realizing that it’s not just us against somebody else; it’s teams against larger teams. The idea with a team is that we’re supposed to cooperate with each other to compete against somebody else.
That compete-against-someone-else idea ends up going fractal, and I end up even competing against my teammates sometimes. That’s part of why collective intelligence doesn’t scale: I’ll cooperate with my buddies [00:07:00] on the basketball team, unless there’s also a thing called most valuable player, I’m in the running for it, and I have a chance to make the three-point shot rather than pass, even though it decreases the chance of the team winning. Now I have an incentive misalignment. I might go for that. Then it gets bigger, where a couple of us both want the same promotion to the same position at the company and we’re actually going to try to sabotage each other, even though that harms the company, because my own incentive is not coupled with their [00:07:30] incentive or with the company’s.
Then I can look at a couple of different government agencies that are competing for the same chunk of the budget. They will actually seek to undermine each other to get more of the budget, when they’re supposed to be on the same team called that country. What we realize is we get this thing called fractal disinformation, fractal decoherence, and defection happening everywhere. That creates the most broken information ecology and the least effective coordination and cooperation possible. [00:08:00] That’s everywhere; that’s ubiquitous. It’s the result of that underlying rivalry. As we mentioned before, if I have some information, I want to make it so nobody else can use it. I’m going to trademark it, patent it, protect my intellectual property. Before I release it, I actually want to disinform everybody else about it, tell them the gold is somewhere else so that they go digging somewhere else and don’t pay attention to what I’m doing.
If I am hoarding information, disinforming others, and [00:08:30] keeping my information from being synthesized with others’, that means I’m not going to let my knowledge about cancer research or whatever be out there, because I gotta make the [inaudible [0:08:39] back. The best computer that the world could build doesn’t exist, because Apple has some of the IP, Google has some of the IP, and 10 other companies have some of the IP. The best computer that science knows how to build can legally not be built in this world. And the best phone [00:09:00] and the best car and the best medicine and the best every fucking thing there is, because we keep the actual adaptive knowledge from synthesizing, let alone that everybody’s having to reproduce the same fucking work because we don’t want to share our best practices.
Then almost all the budget goes into marketing against the other ones rather than actual development, and the marketing is just lying and manipulation, at least about why ours is comprehensively better, when they then have to say the same thing about what their IP does that’s good and our IP does [00:09:30] another thing. Imagine if we had a world where all the IP got to be synthesized. Nobody was disinforming anybody else. Nobody was sabotaging anyone else. Everyone was incented to share all of the info, to synthesize all of the information and intellectual property, etcetera, to work towards the best things possible. Imagine how much more innovation would actually be possible, how much more collective intelligence and capacity would actually be possible.
If our source of adaptive advantage [00:10:00] is that, then we make a world where, and now we come back to what we were talking about: if you possess a good and I no longer have access to it, we’re in a rivalrous relationship. If you possess a piece of information that I don’t get to access, we’re in a rivalrous relationship over information, knowledge, etcetera. But suppose we’ve structured the nature of access, engineered the scarcity out of the system, such that your having access doesn’t make me not have access, and your having access leads to you [00:10:30] being a human who has a full life, and some of your full life is creativity and generativity.
Now, not only do you have full access to those transportation resources, but also maker studios and art studios and education and healthcare and all the kinds of things that would make you a healthy, well-adapted, creative person; and every well-adapted person is creative. Nobody wants to just chill out watching TV all the time unless they were already broken, broken by a system that tried to incent them to do shit that nobody wants to do, and if they can get a way out, they will, but they’re a broken person. If someone was supported in an educational system [00:11:00] to pay attention to what they were innately fascinated by, and to facilitate that, they will become masterful at some things, with innate, intrinsic motivation to do those things.
Now imagine a world where we support everybody to have access to the things that they are intrinsically incented to want to create. Right now, I get status by having stuff, but if we are engineering [inaudible [0:11:24] system where everyone has access, nobody possesses any of it, and everybody has access to all of it, then there’s no status in having things; it’s totally boring. [00:11:30] There’s no differential advantage. The only way you get status, the only way you get to express the uniqueness of what you are, is by what you create. Now the whole system is running towards that. But you don’t create something to get money, because money for what? To have access to shit you already have access to? You create because you get to be someone who created that thing: both your own intrinsic expression of it and, extrinsically, getting to offer it to the world, which would recognize it.
Now we have a situation where we all have access to commonwealth resources, which creates an anti-rivalrous relationship to [00:12:00] each other. Obviously, I’m just speaking about this at the 100,000-foot level. We could drill down on what the actual architecture looks like, but there is actual architecture here. It is viable; it meets the design criteria. We have sense-making processes where we look at what a good design would be before making a proposition for a design, which doesn’t lead to polarization and radicalization, which leads to progressively better synergistic satisfiers, and which gets us out of a theory of trade-offs and into [inaudible [0:12:27], also as a way of having people be [00:12:30] more unifiable and on the same team.
If I’ve got this world where the source of competitive advantage, if you want to call it that, is that it has obsoleted competition within itself, then it has real coherence. Not only is the quality of life radically higher, because people don’t feel lonely, they actually have creative shit to do, and they aren’t being used as instrumental pawns for some other purpose; the quality of life is also better because they’re actually making better medicine and better technology and better [00:13:00] etcetera, because of the ability for the IP to synthesize and everything else. This world can also out-innovate the other places in the world in really key ways. Then, rather than the rest of the world wanting to attack it, it can actually say, “Here, we’ll export the solutions you want.” The rest of the world starts to create a positive dependence relationship.
The rest of the world says, “Shit, we want to be able to innovate. Why were they able to solve that problem we weren’t able to solve?” Because our guys were sabotaging each other and their guys weren’t. We say, “Great, [00:13:30] here’s the social technology to use.” As soon as they implement that, it’s not being weaponized; that’s just the world actually shifting. That’s where this model becomes a new base [inaudible [0:13:37] the world starts flowing to. You have to do that. You create a prototype of a full-stack civilization that is anti-rivalrous, that is anti-fragile against rivalry (strength, not power), and that is auto-propagating, so that by the nature of the solutions it is exporting and by its own adaptive capacity, its own design [00:14:00] starts to be implemented in other places. That’s ultimately the desire. That is a path to a post-existential-risk world: building it in prototype in a way where it auto-propagates.
Mike: That’s so exciting.
Euvie: Are there places where these prototypes are being built?
Daniel: Kind of, but not really. There are intentional communities where people are trying to practice some things they feel will be relevant: a closed-loop agriculture [00:14:30] system where they at least have regenerative agriculture, and maybe some kinds of social coherence technologies where they have a better system of conflict resolution than our current judicial system. Better parenting, better education. We have those things, and they’re cool and valuable, but those communities still have to buy their computers from Apple and fly on a Boeing to get somewhere, which depends upon environmental destruction and war. They can’t actually provide a high-tech civilization. They’re not yet [00:15:00] civilization models, and the civilization models we have are all part of this one dominant civilization model. This is the next endeavor.
Before a full-stack civilization occurs, obviously partial ones that are directed towards a full-stack civilization have to occur. In the world we’re talking about, there is no place for the things currently called judges or lawyers or politicians or bankers; those systems don’t exist. That doesn’t mean there isn’t an equivalent of a judicial system that is totally fucking different, from the level of the theory [00:15:30] of ethics to the [inaudible [0:15:31]. Somebody has to be getting trained in the civics of that system. There’s nothing like banking, but there are things like paying attention to how the accounting of this new economy works, and people have to be trained in that. I’ll give you one, for instance: we think about the physical economy.
We’ll take attention out and just look at physics. We see that there are at least three different kinds of physics involved in the materials economy that are fundamentally different in their math. There is a physics of atoms, [00:16:00] physical atoms. There is a physics of energy, and there is a physics of bits. Right now, those are fungible: I can use the same dollar to buy software, or to buy energy, or to buy metals or physical stuff or food. But there’s a fixed number of atoms of a certain type on the planet that are reasonably accessible.
Right now, we’re just taking them from the environment in a way that causes depletion and then putting them back into the environment as waste in a way that causes accumulation toxicity on both sides. [00:16:30] You can’t keep doing that. We have to close-loop it: we have, give or take, a finite amount of metals, and not just metals but hydrocarbons, everything. A finite amount of atoms that are in a closed-loop relationship, but they can be upcycled because we have the energy to upcycle them, which means putting the same atoms into a higher pattern, where the pattern is evolving and the pattern is stored in bits.
Say I take the atoms out of one battery and put them into a new battery, as battery technology evolves. That new battery design exists in bits, as a blueprint. [00:17:00] I’m going to use energy to take the atoms in their current form, disassemble them, and reassemble them into this new battery. So there’s a fixed amount of atoms; we have to close-loop those. There’s not a fixed amount of energy: we get new energy from the sun every day, but we have a finite bandwidth of how much we get, which we have to operate within. That’s not closed-loop; we can use it up, and it has entropy. Within that bandwidth we [00:17:30] have to work. Bits are fundamentally unlimited, limited only by the energy and matter of computing. That can keep expanding basically indefinitely.
Once I’ve made a bit, I can reproduce it exponentially without any unit cost once I’ve developed it once. I get an exponential return on software in a way that I could never get on atomic stuff, which is why Elon has a hard time raising money for the physical stuff and [00:18:00] WhatsApp sold for 19 billion dollars. It’s why all the unicorns are software, mostly social tech or fintech or something that is actually not doing good things for the world, because those can create exponential returns. It’s why Silicon Valley has basically mostly invested in software stuff. If you make those three fungible, you’ll actually be moving the energy away from the atoms and away from the energy into the virtual, away from the physical into the virtual, even though the virtual depends [00:18:30] on the physical. So you’re actually debasing the substrate upon which it depends.
Notice: bits we can keep having more of forever, and they don’t go through entropic degradation when we use them. The energy we use entropically degrades, but we get more of it every day. The atoms don’t entropically degrade, but we have to keep cycling them and there’s a fixed number. The physics, the accounting, of those are totally different. That’s not one economy that’s totally fungible through a single accounting system; that’s three completely separate [00:19:00] but interacting physical economies. Again, we already said we’re not owning goods; we’re having access to shared commonwealth services. To really go into it, it’s a lot of things. These are examples of some of the considerations that have to happen to actually be able to think about things like economics at a level of depth that is appropriate to the nature of the issues.
Suppose we don’t answer the question of what makes a good civilization; we simply ask [00:19:30] what allows a civilization to endure. We start with: let’s just say we don’t want the existential and catastrophic risks. There’s a whole bunch of different types of existential and catastrophic risks that all have the same generator functions, so we have to create categorical solutions to the generator functions. It turns out that those are the generator functions that have made all the things that we intuitively have experienced as sucking, like violence and environmental devastation.
Solving those generator functions doesn’t just allow us to survive [00:20:00] in maybe some dystopian dynamic. Anti-rivalrous dynamics with each other, closed-loop dynamics, the proper relationship between the complicated and the complex, and scalable collective intelligence systems with a right understanding of the theory of choice and its relationship to the theory of causation end up being a way of mapping to a world that is definitely [inaudible [0:20:20] on any meaningful definition of [inaudible [0:20:23] and any meaningful consideration of what good could mean. We come back to this mythopoetic idea: [00:20:30] we can’t keep going the way that we’re going. There’s a purgatory coming and it’s going to go one way or the other. One way is really shitty and one way is really lovely.
That’s a true story. Bucky Fuller said utopia or oblivion, and it’s going to be hit or miss until we’re actually there. We’re not gonna know which way it goes. The thing we’re just kind of in on is this: if we try to solve the various risks in isolation, it’s impossible; we fail. What it takes to solve them categorically ends up [00:21:00] also mapping to how we engage everyone in creating the true, the good, and the beautiful that is theirs to create, progressively better: both upregulating their sense-making of what that is, with themselves and with each other, and being able to make that, scaling the collective intelligence that is progressively answering those questions better.
Mike: Can you leave us with some book recommendations for anyone who wants to read up on this a little more and expand their understanding?
Euvie: Books or other resources.
Daniel: Yes. [00:21:30] I wish I could share more things than I can, but a lot of what we’re thinking in terms of a new civilization design like this is new. That doesn’t mean it’s not drawing on lots of elements. A couple of things. We mentioned Geoffrey West’s work Scale on collective intelligence; that’s very valuable. We talk about some of the dynamics of game theory that have to shift, and so Finite and Infinite Games is just my favorite starting point. It’s one of those books that is very simple but has multiple levels of depth [00:22:00] of meaning. If you read it multiple times, you’ll gain new insights.
Euvie: That one blew my mind.
Mike: Yup, me too.
Daniel: James Carse, very beautiful. I have a blog, civilizationemerging.com that has some articles on these types of topics. There’s also a booklist there with a heap of books.
Euvie: Great.
Mike: Awesome. This, as always, is enlightening and so fun.
Daniel: There are books that are valuable, and there’s obviously all of your podcasts available. Suppose you had the [00:22:30] experience of anything that I said making sense and actually seeming obvious, but then you also realized you never thought it in that particular way. Then there’s a question: “Why did I never think it in that way, even though it seems obvious after the fact?” That’s one of the properties that clarity has: it can add novel insight that seems obvious and [inaudible [0:22:50] relates to everything that we know.
Then we say, “Okay, I wasn’t thinking about rivalrous dynamics and upping the ante clearly enough. [00:23:00] I wasn’t thinking about the exponential economy, and software and atoms all being fungible; there’s a problem there. I wasn’t thinking about open loops and closed loops in this particular way.” You can, if you just start asking yourself: “What do I think is actually wrong? Of all the things that seem wrong, what do they have in common? Why are those things that are wrong, wrong?” Then go deeper. Keep going deeper with that. Don’t look for one answer. Are there a number of different things that come together [00:23:30] that are partial answers to this? What would solving that look like? There are resources of other people’s thinking on these things, but they won’t replace; they will inspire.
They won’t replace your own deep thinking on these things for your own sense-making. The resource that I would offer the most is this: when you are bothered by something, or you wish some beautiful thing existed more than it does, really think hard about why things are the way they are. Know that the first thoughts you come up with [00:24:00] are not that good. If you stop there, you won’t get beyond there. If you really keep working on it and thinking about it, and then go and research in light of that question, and then think about it more, you actually start to get novel, meaningful, and deeper sense-making that is aligned with what is yours to pay attention to and work on.
Euvie: To relate to what you said earlier, I think it really helps to use different modes of inquiry. People can get stuck in just intellectual inquiry or just [00:24:30] spiritual inquiry, but all are valuable. When we can see the same thing from several different perspectives, it becomes a 3D object rather than just being flat.
Daniel: You just mentioned one of my favorite practices: really endeavoring to see and experience the world through the perspective of someone else, and to actually see and experience it. If I'm still thinking, "No, if I was in their position, I wouldn't do that," I haven't got it yet. If I was in their position, I would do that. If I'm really putting myself [00:25:00] in their position, I would get enraged by the things they're enraged by. I would get excited by the things they're excited by. This serves as a practice of empathy and connection, as a practice of understanding, and as a practice of intelligence and learning, because I see different things. If I look at the world through the lens of a mechanical engineer, I see shit everywhere that mechanical engineers see that you never saw.
This is different than if you look through the lens of a fashion designer, or a game theory person, or an evolutionary biologist. [00:25:30] They're looking at different things. There's a whole universe I wasn't paying attention to. Like when you buy a car and then you see it everywhere: you put on a lens and you start seeing all kinds of stuff. That's effective sense-making. It's also a kind of spiritual technology for getting out of the default mode of what you think you are. When I'm trying to be someone else, it's not my personality that can do that. If I'm trying to take on their personality, it's not my perspective that can do it. It's the same consciousness witnessing my perspective that can then witness somebody else's. As soon as I do that, I actually dissociate from just being [00:26:00] my personality, and then I get some more spaciousness around it and am less reactive to it.
Euvie: This also relates to not just looking through the lens of different personalities or different modern frameworks; it's also looking through the lens of premodern frameworks, or even animal frameworks, and that can all be very useful as well. If we look at the world through the lens of a quote-unquote primitive tribe, then different things come into focus, and different things become very meaningful, very powerfully meaningful. It [00:26:30] resonates through your whole being… wow. That's not to be dismissed, because there's something there.
Mike: At the very minimum, it's super useful for self-discovery, because we spent more time as those primitive versions of ourselves than we have as the modern versions. You can untie a lot of behaviors that you don't even realize you have just by looking at the world from a primitive standpoint.
Daniel: If your [inaudible 0:26:55] decide to go visit the Amazon and live with a tribe, [00:27:00] experience the world through those eyes and be affected by it, and then look at how they can incorporate elements of that experience of the world and their previous experience of the world into being able to live more fully, that would be beautiful. This was wonderful; I really appreciated being here with you both. I really do want to say that I love your podcasts and I love what you're creating, both of you together. It's very easy to have [00:27:30] well-informed dystopian views, it's easy to not think about things, and it's easy to have poorly informed positive views.
To have well-informed positive views is actually tricky. If we keep being anything like the kinds of people we have always been, who do really wonderful and really atrocious shit with our power, but now with exponentially more power, they're all dystopian scenarios. We have to be something really different than we've ever been, which requires some type of deep [00:28:00] shift that could make that happen. That requires some deep thinking, some deep imagination, and I know that's what you are really dedicated to doing here on the show. I'm reminded of this quote from the book of Romans. It says the pathway to heaven is narrow and steep, and the pathway to hell is wide and many. It's just a way of thinking about thermodynamics, which is that there are just more ways to break shit than to build it.
There are not that many ways all the cells in your body [00:28:30] can come together to make you, the emergent property of you. There are a lot of ways you just get 150 pounds of goo. We say, "Okay, we've got a lot of power, and most of those scenarios with a lot of power suck. How could we have this much power in a way that doesn't suck? How could we have this much power and not use it against each other?" We start seeing Orwell and control systems and we say, "That sucks, too." To keep thinking through, "How could we have it in a way that doesn't suck, that doesn't depend on aliens or Jesus [00:29:00] coming back? How do we get us to be that kind of consciousness?" It's a really good way of thinking about how to actually address these problems.
If we can't [inaudible 0:29:09], without vision, man perishes. If we can't even see a well-grounded positive future and a positive use of the technological capacity we have, we are not going to make it. I love that you all have this space dedicated to exploring that topic. Incentive is always evil; it's a bitch. I don't want to [00:29:30] move from perverse incentive to positive incentive. Positive incentive means my sense-making has determined what I think is good, and I'm going to try to extrinsically override your sense-making to make a choice aligned with my sense-making.
I'm going to use an extrinsic reward strategy to co-opt your sovereignty and have your choice-making be based on my sense-making incentive scheme rather than your own sense-making. That is always the basis of evil. If I want to have a collective [00:30:00] intelligence that's actually intelligent, I need everyone to have intrinsic sense-making and choice-making that is incorruptible, which means it's not being co-opted by extrinsic reward and punishment schemas. I got this wrong at first. I used to say, "We have to create a world where the incentive of every agent is rigorously aligned with the wellbeing of every other agent and of the commons." That is wrong.
What is right is to say that we must [00:30:30] rigorously remove any place where the incentive of an agent is misaligned with the wellbeing of other agents and the commons, but an adequate future is one that has no system of structural incentive.
Euvie: That’s a mind-blower.
Daniel: The cells in your body are actually not trying to get the other ones to do what they want them to do. They have their own internal sense-making processes, and they do what makes sense to them. What makes sense to them also happens to be [00:31:00] what's good for the ones around them, because they depend on the ones around them, and vice versa, and they're in a communication process. The brain is not overriding the cells, and it could in no way handle the complexity necessary if the cells were not doing their own sense-making. Better incentive schemas as a transition, which is happening in blockchain, are nice. It's better than more perverse incentives, but it is transitional, not post-transitional.
It actually does not address existential risk; it doesn't give us the right collective intelligence. The right collective intelligence has to be [00:31:30] fractal sovereignty. Meaning, at the level of the individual and at every group size, it has its own intact sense-making and choice-making that ends up vectoring towards omni-consideration. The level of shift that we're talking about is hard to imagine.
Mike: Yeah. What has to be invented to even begin a transition, and then be put to rest so that the next version can come along, is such a long road.
Daniel: The reason we [00:32:00] incent people is because we have a civilization that needs a lot of shit done that is not fun. It's dreadful stuff, and we want to get people to do the dreadful stuff. If we created a commonwealth where everyone had access to resources, then nobody would do the dreadful stuff, and then the state would have to force them. That's why we don't like communism: you get state imperialism. So we say, "Okay, cool, let the free market force them instead." It's economic servitude, but at least it doesn't look like somebody did it, because the [00:32:30] market is just this anonymous thing.
If you don't do the shitty job, you're homeless and your kids can't eat. Cool. But we'll tell you the story that you can work your way up and become wealthy, even though statistically we know that's silly; it happened to those two guys that one time. Statistically, the rest of the time, having more resources makes it easier to make more resources, and having fewer resources makes it harder to make more resources. The system has a gradient that makes it actually continue in the direction of inequality, not the other way. [00:33:00] That's where incentive came from. That's the good side.
The negative side is that the way a few control the many is to use incentive, reward, and punishment to get people to do the shitty things. This is using choice to create a system of causation (incentive is a causal system, game theory is a causal system) to control the choice of others. Control, or co-opt. I want to have my choice affect [00:33:30] causal dynamics that are only causal. I.e., if I make an automated robot, I haven't actually made a sentient being into a utility. I'm going to say something even deeper, which is that instrumental relationships are evil.
Mike: Can you expand on that?
Daniel: Yeah. If I'm interacting with you to meet some people that you know, to get my network ahead, or to get some knowledge from you, or to gain access to something, or whatever it is, I have something [00:34:00] that I want to do that you are an instrument towards, a path towards. It's a utilitarian ethic. You are a means to an end for me. However I relate with you, however it affects your own sovereign, sentient experience, is a place I might externalize harm, because it's not why I'm relating with you.
Mike: Yeah, yeah.
Daniel: Again, in a healthy world, a world of the future, other people need to have intrinsic [00:34:30] value independent of their utilitarian value to everyone. That's part of the culture. And not just other people and other beings, all kinds of sentient beings; relationships have intrinsic value, too. I'm going to invest in the integrity of our relationship independent of me getting anything out of it, because it is actually the basis of meaningfulness itself. Which is why, in a utilitarian and instrumental dynamic, we're getting ahead while feeling utterly fucking meaningless and destroying everything [00:35:00] that is meaningful in the process. That is us being hooked on an addiction to a stupid game, where what we think we want is not what we actually want, and what we think of as a win is actually an omni-stupid thing.
This is why the Hindu concept of Dharma was a virtue ethic, not a utilitarian ethic, and there was a very meaningful set of concepts: "Do what is inherently right in your relationships with life, [00:35:30] independent of what the outcome might be, because you really don't know what the fucking outcome is going to be." If you just try to figure out what the outcome is going to be, you're going to be wrong a lot of the time, and you're also going to justify a lot of unethical stuff. Utilitarianism is the rampant ethic right now among anyone who's paying attention to ethics. It's not without any merit, but it is also problematic. It is up there with democracy and capitalism and the philosophy of science in terms of being a [00:36:00] problematic thing to be the dominant system.
We cannot actually predict complex systems well enough to do the utilitarian thing, and the intrinsic dynamics of a relationship, and of another being, end up being moved to a means to an end other than themselves. As soon as I move from factoring everything meaningful along the chain towards whatever I think my outcome is, to where my outcome is actually being in a way that is in integrity with and honoring of all life, [00:36:30] now it's a virtue ethic.
Euvie: Yeah. I was having this conversation recently about people who are obsessed with life hacking and optimizing everything. When they get into that mindset, eventually they get to what they call optimizing relationships, and then they start putting people on a value hierarchy, where they want to interact with high-value people and use these tactics to find and attract the most high-value woman. It's funny, because those people, [00:37:00] in my experience, are some of the most existentially unhappy people I've met. They will never show it outwardly, but that's what I've noticed: people who try to optimize everything in this utilitarian way end up really profoundly unhappy.
Daniel: It's the same thing as continuously pursuing a better high. It's, "I'm getting a hit from winning at a particular thing, so I've got to try to win at it all the time." But, "I need the hit because my baseline [00:37:30] is that life feels fucking meaningless, because I don't actually have any real relationships and I don't even know what meaning means. I don't even know what intimacy means." That hypernormal environment needs hypernormal stimuli to feel anything. The fact that I use people instrumentally means people end up not liking me, which makes me hurt even more, which makes me want another hit even more.
Mike: People like you make it super easy. We listen to audiobooks all day, and then we get to actually talk [00:38:00] to the person who's coming up with the cutting-edge ideas themselves. It's quite interesting. Thank you.
Daniel: Bye, y'all.
Euvie: It’s always wonderful getting our brains blown by you, thank you.
Daniel: Thank you both, this was really fun.
More from Future Thinkers:
Daniel Schmachtenberger on The Global Phase Shift (FTP036)
Daniel Schmachtenberger on Neurohacking (FTP042)
Daniel Schmachtenberger on Neurogenesis (FTP043)
Daniel Schmachtenberger on Winning Humanity’s Existential Game (FTP046)