Will AI Replace Us?
Firstly, does it matter if AI replaces us?
Please allow me to prime the pump with some good old-fashioned culture before moving on to the new-fangled genies and black boxes that control our lives.
On the Seashore
On the seashore of endless worlds children meet.
The infinite sky is motionless overhead and the restless water is boisterous. On the seashore of endless worlds the children meet with shouts and dances.
They build their houses with sand, and they play with empty shells. With withered leaves they weave their boats and smilingly float them on the vast deep. Children have their play on the seashore of worlds.
They know not how to swim, they know not how to cast nets. Pearl-fishers dive for pearls, merchants sail in their ships, while children gather pebbles and scatter them again. They seek not for hidden treasures, they know not how to cast nets.
The sea surges up with laughter, and pale gleams the smile of the sea-beach. Death-dealing waves sing meaningless ballads to the children, even like a mother while rocking her baby's cradle. The sea plays with children, and pale gleams the smile of the sea-beach.
On the seashore of endless worlds children meet. Tempest roams in the pathless sky, ships are wrecked in the trackless water, death is abroad and children play. On the seashore of endless worlds is the great meeting of children. — "On the Seashore" by Rabindranath Tagore
Think of the mad hubris involved in thinking our species knows the mind of God. It is undeniable that all religious scriptures are human stories serving human purposes. There is a vast amount of literature on the subject.
Our experience with science, technology, mathematics, philosophy, etc., is a struggle to understand and exploit nature and reality to pursue benefits. People pray to God for help. People also learn from nature and use their intellect to help themselves overcome challenges and improve the quality of their lives. All stories across cultures express this adventure.
All of our endeavors and experiences are human, all too human.
Esteeming humble truths. It is the sign of a higher culture to esteem more highly the little, humble truths, those discovered by a strict method, rather than the gladdening and dazzling errors that originate in metaphysical and artistic ages and men. At first, one has scorn on his lips for humble truths, as if they could offer no match for the others: they stand so modest, simple, sober, even apparently discouraging, while the other truths are so beautiful, splendid, enchanting, or even enrapturing. But truths that are hard won, certain, enduring, and therefore still of consequence for all further knowledge are the higher; to keep to them is manly, and shows bravery, simplicity, restraint. Eventually, not only the individual, but all mankind will be elevated to this manliness, when men finally grow accustomed to the greater esteem for durable, lasting knowledge and have lost all belief in inspiration and a seemingly miraculous communication of truths. — Human, All Too Human: A Book for Free Spirits by Friedrich Nietzsche
AI is a Greek tragedy unfolding in front of us that we are mostly incapable of understanding because we don't understand the technology, how it works, or its significance. No, most of us don't understand the ubiquitous science, engineering, and technology we all take for granted; we struggle even to understand ourselves.
Extinction, for us, is the end of time. What consciousness in the Universe will remember us? Is Gaia conscious in the way humans are? How would we know? When will we know?
Gaia because Gaia
Remember, extinction is the rule for life on Earth, not the exception.
Greek Tragedy
Tragedies don't always go from good to bad. They can go from bad to good to bad, or from good to bad to good again. For Aristotle, it was mutability, the change itself, that was tragic, not necessarily the direction of the change (so much for progress). We think of tragedy as something terrible happening; we mostly misuse the term.
Greek tragedy was a ritual performance of the downfall of a great man — usually a king or a nobleman — brought low because of some random mishap or inexplicable fate.
tragedy
/ˈtradʒɪdi/
noun
an event causing great suffering, destruction, and distress, such as a serious accident, crime, or natural catastrophe.
"a tragedy that killed 95 people"
a play dealing with tragic events and having an unhappy ending, especially one concerning the downfall of the main character. "Shakespeare's tragedies"
Greek tragedies were stories about how people interact and how inevitabilities unfold. Accidents and chance play a significant role in Greek tragedy. We always encounter what we don't expect: outliers, black swans, long tails in statistics, collateral damage, externalities, unforeseen disasters, etc.
Friar Lawrence helps Romeo and Juliet throughout Shakespeare's play simply because he cares about the star-crossed lovers. But what foils all of the characters is an accident: a letter arrives late, and a reversal of fate ensues.
We assume that we know the mind of God because we are God's creatures, and that we are geniuses at controlling nature because we are Homo sapiens with unique intellectual superpowers. This allows us to play God and create God. And yet, we still have that pesky problem of evil. We invent a loving, merciful God and still go to Hell. We develop machines and exploit cheap, abundant, reliable, and powerful energy sources only to run out of them while changing the atmosphere's chemistry to the point where we risk losing so much habitat that our species could go extinct.
Lousy timing, fate, and things spinning out of control are familiar to most of us—so much so that we subscribe to a dozen information channels to focus our attention, day after day, on the misfortune surrounding us and the tragedy we invent in our stories.
Romeo and Juliet is a dramedy until, through many mishaps, Romeo finds Juliet dead. Romeo takes poison and, while dying, kisses Juliet. Friar Lawrence enters the tomb, and Juliet wakes up and finds Romeo dead. Frightened by a noise, the Friar flees the tomb. Juliet kills herself with Romeo's dagger.
Romeo and Juliet was an innovative, hybrid play. Shakespeare was familiar with Greek tragedy and comedy. His works fell out of favor until Coleridge and German Romanticism revived an appreciation for their organic form.
Greek tragedy presents moral dilemmas.
A hero or great man has to do something but makes a mistake, which creates a reversal that is an emotionally wrenching, cathartic experience for the audience. A tragic flaw is always present. One shoots for the bullseye and misses. The character mistook something; he got it wrong. He did not know. He was caught by surprise. Oedipus could not have known who his parents were and ended up in his mother's household. He does not know the woman is his mother and ends up marrying her, a horror they cannot survive. These events have nothing to do with a failure on Oedipus' part.
Agamemnon is confronted with the need to sacrifice his daughter, which is horrific, but it's framed in a situation where there is no way to avoid doing something wrong.
Agamemnon was forced to put on the yoke, the harness of necessity. Necessity implies something he HAS to do; it's unavoidable. He was both responsible and not responsible. Tragedy plays with the relationship between necessity and freedom, fate and choice. Achilles chooses to fight even though he knows he will die if he does. His reward is fame.
These stories knew nothing of the modern debate concerning free will.
Why is there so much suffering, and why do we enjoy stories of so much suffering? We watch hours and hours of horror, pain, and suffering on our screens every day.
Pathei Mathos—"We learn by suffering." Suffering is a great teacher. It is part and parcel of any intelligent grappling with the human experience: physical, personal, and political. Salon, Polis, Cosmos—from small to large. Suffering is programmed into the human system, internally and externally, in our minds, bodies, and souls, and in the manifesting of our ambitions and actions.
Suffering is not someone's fault; it happens; it is a feature, not a bug. It may seem unjust, but it's programmed into the system, into human experience. We must not be afraid to look at the injustice written into and structuring the system itself.
Examining suffering and the randomness of fate is unavoidable if we want to understand who and what Homo Sapiens are.
Will AGI replace us? It seems as likely as any other accident. We must not ignore the question, however: our ambitions have led to weapons of mass destruction, the Anthropocene, the sixth extinction, global heating, destruction of habitat, pollution, and ill health despite science-based/evidence-based medicine.
AI is a cascade of tragic accidents happening right now. It is only the hubris of a conscious species with a clever mind that thinks it can understand the mind of God or play God.
Do we want to be replaced? Accelerationism, Tech Optimism, The Singularity, etc. Who wants this? Do ordinary people dream of becoming a machine? Is there a pill one can take to understand reality beyond how we can access and understand it with our minds and bodies? What must we build to make a human better? Must we always optimize and improve to the point where we are no longer human?
It won't replace us, it won't serve us, it isn't us, but it may help us destroy what it is to be human. Before that happens, billions of us may die due to our hubristic nature and failure of stewardship. If anything is left after the next dark age, only a great science fiction writer with a fantastic imagination can attempt to answer the question of what's left and what it is or looks like.
If AGI replaces us, we can only hope there will remain, in the Universe, the faint echo of a clever conscious species of life on Earth that remained only briefly in the mind of God.
To me, God is creation's ineffable, unknowable mystery. Peace be with us.
Well, that was dark. Let’s say that AI, LLMs, and, in a decade or so, AGI helps governments and corporations come up with ways to return the atmosphere to equilibrium, saving us from the worst consequences of global heating. Let’s also imagine that we find new energy sources that are clean, safe, cheap, abundant, etc. Without radical changes in our social, political, and economic structures and systems, will we be able to develop a sustainable, peaceful global civilization where people are healthy, wealthy, and wise? One must not lose hope.
Mick and I spoke about governments and governance. Perhaps people in First World nations, in particular, "The Anglosphere," are feeling that they can't trust governments to regulate and control the development of AI such that it may benefit people without threatening our hard-won freedoms and civil rights. If so, who can be trusted with these world-changing technologies? Private corporations like Amazon, Google, and Facebook?
The imperative for regulatory oversight of large language models (or generative AI) in healthcare
Historically, public/private partnerships have driven technological development, particularly in wealthy countries that can invest heavily in their military-industrial complexes.
Here are some historical examples:
DARPA and AI Research:
The Defense Advanced Research Projects Agency (DARPA) in the United States has played a significant role in funding and supporting AI research since the 1960s. DARPA's investments have contributed to the development of various AI technologies, including natural language processing and machine learning.
Academic and Corporate Collaboration:
Many breakthroughs in AI research have emerged from collaborations between academia and industry. Researchers in universities often work closely with private companies, sharing expertise and resources. For example, companies like Google, Facebook, and Microsoft have sponsored AI research at universities and have hired many AI researchers from academia.
OpenAI and Partnership Models:
OpenAI, a research organization focused on artificial general intelligence (AGI), was founded with the goal of advancing AI in a safe and beneficial manner. It was initially backed by prominent figures in the tech industry, including Elon Musk and Sam Altman. OpenAI has also sought partnerships with private companies to fund its research.
Industry Research Labs:
Many major technology companies, including Google, Microsoft, IBM, and Facebook, have established their own AI research labs. These labs conduct extensive research on AI and LLMs, contributing to the advancement of the field. These companies often collaborate with each other and with academic institutions.
Government Funding and Initiatives:
Various governments around the world have provided funding and support for AI research. For instance, the European Union has invested in AI research through programs like Horizon 2020. In China, the government has outlined ambitious plans for AI development and has invested heavily in AI research and infrastructure.
Language Model Development:
The development of Large Language Models (LLMs) like GPT (Generative Pre-trained Transformer) involved collaborations between researchers and engineers in both industry and academia. OpenAI, the organization behind GPT, has received funding from private investors and has collaborated with various partners.
Commercialization of AI:
Private corporations played a crucial role in commercializing AI applications as AI technologies matured. Companies like Amazon, Google, and Microsoft offer cloud-based AI services, making AI capabilities accessible to businesses and developers.
Overall, the development of AI and LLMs has been a collaborative effort involving a mix of government support, academic research, and contributions from private corporations. The landscape is dynamic, with ongoing partnerships shaping the future of AI technologies.
The Essential Skills for Large Language Model Development: What You Need to Know
Complex enough for you?
The development of Artificial General Intelligence (AGI) is a complex and resource-intensive endeavor, and currently, only a few technologically advanced countries are actively working on AGI research. These countries have the necessary infrastructure, talent pool, and financial resources to collaborate with private companies to pursue AGI. Some of these countries include:
United States:
The United States is a global leader in AI research and development. It is home to major technology companies, research institutions, and government agencies actively contributing to AGI research. Silicon Valley, in particular, is a hub for AI innovation.
China:
China has made significant strides in AI research and has ambitious plans to become a global AI leader by 2030. The Chinese government has invested heavily in AI initiatives, and Chinese tech companies are actively involved in AGI research.
European Union (EU) Countries:
Several European countries, such as Germany and France within the EU, along with the United Kingdom, have been actively investing in AI research. The European Commission has outlined strategies to promote AI development, and there are collaborative efforts at the national and EU levels.
Canada:
Canada, and in particular the city of Toronto, has emerged as a significant hub for AI research. The country has a robust academic community that has produced influential contributions to the field. Canadian companies, along with international collaborations, are engaged in AGI research.
These countries share a combination of factors that enable them to afford and pursue AGI development, including extensive research and educational infrastructure, access to top talent, robust funding mechanisms, and supportive government policies. AGI development involves private companies as well as collaborations with academic institutions and government agencies. The landscape may evolve as other countries with emerging AI capabilities actively participate in AGI research and development.
Major AI concerns.
Speed Kills—we are not incentivized to go slow.
Bias and Fairness: Many experts have raised concerns about the potential bias in large language models, reflecting and perpetuating existing societal biases. If the training data is biased, the model can learn and reproduce those biases in its outputs.
Ethical Use: There are worries about the potential misuse of AI and large language models for malicious purposes, such as generating fake news, deepfakes, or engaging in harmful activities like cyber attacks.
Lack of Explainability: Large language models often operate as "black boxes," making it challenging to understand how they arrive at specific decisions or generate certain outputs. This lack of transparency raises concerns about accountability and trust.
Security Risks: As AI systems become more integrated into various domains, there are concerns about their susceptibility to attack. Adversarial attacks, where input data is subtly manipulated to deceive the model, are of particular concern (see the sketch after this list).
Unintended Consequences: Deploying large language models may lead to unforeseen consequences. For instance, misleading or harmful content, unintentional biases, or other adverse outcomes can arise from these models' complexity and scale.
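To make the adversarial-attack concern concrete: because these models are differentiable, an attacker can compute the gradient of the loss with respect to the input and nudge the input in exactly the direction that fools the model. Here is a minimal, illustrative Python sketch of the classic fast gradient sign method (FGSM); the tiny model, sizes, and data are made-up stand-ins for demonstration, not any real deployed system:

```python
import torch
import torch.nn as nn

# A tiny stand-in classifier (hypothetical; real attacks target real models).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # the original input
y = torch.tensor([0])                      # its true label

# Compute the gradient of the loss with respect to the *input*, not the weights.
loss = loss_fn(model(x), y)
loss.backward()

# FGSM: take a small step in the direction that increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

# With this toy setup the prediction may or may not flip; on real models,
# such perturbations can be invisible to humans yet change the output.
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The unsettling part is that the perturbed input can look virtually identical to the original, which is why deployed systems need adversarial testing, not just accuracy benchmarks.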
Many AI experts are concerned with aligning the goals of AGI with human values and the potential for unintended consequences. AGI systems need to be designed with sufficient safety measures.
There will be a "control problem," emphasizing the difficulty of ensuring that a superintelligent AI system behaves in ways aligned with human values and doesn't pose risks to humanity.
WHAT PARTICULAR SET OF HUMAN VALUES CAN THE WORLD AGREE ON?
There will probably be all kinds of contrasting and competing AI systems, leading to an arms race and the weaponization of AI.
How can we (who are we) align the goals and values of AGI with a globally agreed-upon set of human values to prevent unintended consequences?
Quis custodiet ipsos custodes? (Who will guard the guards themselves?)
What are the potential dangers of artificial intelligence?
Automation-spurred job loss.
Deepfakes.
Privacy violations.
Algorithmic bias caused by bad data.
Socioeconomic inequality.
Danger to humans.
Unclear legal regulation.
Social manipulation.
In a world where billions of people are starving and migrants wander the Earth in search of food, who will shepherd and police AI?
Who has the money and resources to run AI, and what are their goals? Profit? Power? Control? State Control, Private Control, or a perfect combination of both? How much energy and resources can we channel into these data centers or decentralized systems?
Do we need quantum computing to ensure safety and security? Security for people or what?
WE HAVE A HUMAN NATURE PROBLEM.
“Guns don't kill, people do.”
Can “we” develop robust and verifiable control methods to ensure AGI's safe deployment? Who regulates it?
Jobs? What kinds of jobs will AI, LLMs, and AGI generate? What kind of people will we become if we are over-dependent on machines?
Oh, and there is still the global neoliberal, financialized, unfettered, omnicidal heat engine and our religious belief in growth as measured by GDP, a.k.a. capitalism. What do we do with those values? Will AI be wise enough to nudge us toward change if we are not?
Stuart Russell, a professor of computer science and co-author of the widely used textbook "Artificial Intelligence: A Modern Approach," has emphasized the importance of aligning AI systems with human values to ensure their safe and beneficial deployment.
What values? Values randomly scraped off of social media? Who will curate the inputs, or develop the algorithms for machines to curate the inputs? What libraries will we use, Western, Eastern, Eurasian—perhaps in Esperanto?
Demis Hassabis is the co-founder and CEO of DeepMind, a leading AI research lab that has been instrumental in advancing reinforcement learning and neural networks, with a focus on AGI.
Who among us knows how neural networks work? We live in a black box with genies stochastically flying around doing opaque miraculous deeds. What will "They" (insert conspiracy theory here) want to use them for, and why? To what purpose?
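For what it's worth, the box is less mystical than it sounds: at its core, a neural network is repeated matrix multiplication with a simple nonlinearity squeezed in between, and training just tunes the numbers in those matrices. A minimal, illustrative sketch follows; the layer sizes and random weights are arbitrary assumptions for demonstration:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)  # the simple nonlinearity between layers

# Random weights stand in for what training would actually learn.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # layer 1: 4 inputs -> 8 hidden units
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)  # layer 2: 8 hidden -> 2 outputs

def forward(x):
    h = relu(W1 @ x + b1)  # weighted sum, then nonlinearity
    return W2 @ h + b2     # raw output scores ("logits")

x = rng.normal(size=4)     # one made-up input vector
print(forward(x))          # two scores; training tunes W and b to make them useful
```

The "black box" complaint is not that any single step is obscure. It is that billions of these tuned numbers interact in ways nobody can narrate, which is exactly the explainability problem named above.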
Yoshua Bengio, Geoffrey Hinton, and Yann LeCun are three researchers often called the "Godfathers of AI" or the pioneers of deep learning. Their work has profoundly impacted the development of machine learning techniques that are crucial for AGI research.
Who uses machine learning, and to whose benefit? The Market? Are we, forevermore, the beneficiaries of these technologies? We must be careful about how these technologies are used and for what purpose. Can they be deployed and used in a "democratic" way?
Andrew Ng is a computer scientist and co-founder of Google Brain. Ng has significantly contributed to developing deep learning algorithms and advocated for AI education. Is Andrew our friend?
Shane Legg co-founded DeepMind along with Demis Hassabis and has contributed to the research and development of artificial intelligence, particularly in reinforcement learning and AGI.
What the heck is reinforcement learning? Do you think you should know about any of this, or will we sit around watching Black Mirror episodes on Netflix until, come what may, the future arrives? We must get involved if we are going to use these technologies to benefit life on Earth.
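Since the question deserves an answer: reinforcement learning is learning by trial and error. An agent acts, receives a reward, and updates its estimate of how good each action is in each situation. Below is a minimal, illustrative Q-learning sketch on a made-up five-cell corridor; every name and number here is a toy assumption, not code from DeepMind or anyone else:

```python
import random

# Toy world: five cells in a row; reaching the rightmost cell pays reward 1.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Explore occasionally; otherwise take the action with the best estimate.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core update: nudge Q toward reward + discounted best future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy: which way to step from each non-terminal cell.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

After a couple of hundred episodes, the printed policy should say "step right" in every cell. Nobody told the agent the rule; the reward signal did, which is the whole trick, scaled up enormously in systems like DeepMind's.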
According to experts, what are the top potential benefits of AI, LLMs, and AGI?
Do ordinary folks have any ideas about how they'd like to use the technology or benefits they'd like to experience? Who decides for us?
Automation and Efficiency:
AI has the potential to automate routine and repetitive tasks, improving efficiency and freeing up human resources for more creative and complex endeavors.
I’d love to see people having more time for creative work. Can ordinary people embrace complexity? Efficiency to what end? What of the simple life? We might want to keep bees and to fish in a pristine lake with healthy fish stocks.
Medical Advancements:
AI can assist in medical diagnosis and treatment by analyzing vast amounts of medical data, identifying patterns, and providing insights to healthcare professionals. This is potentially great, but if profit is the primary driver of these endeavors, then we are in big trouble.
Improved Decision-Making:
AI systems can quickly process and analyze large datasets, helping humans make more informed and data-driven decisions across various industries. Decisions to what ends?
Enhanced Productivity:
Businesses can use AI to streamline processes, optimize supply chains, and improve productivity. To make more stuff?
Natural Language Processing:
LLMs and advancements in natural language processing enable more realistic and sophisticated interactions between humans and machines, improving communication and user experience. All of my best friends are machines.
Scientific Discovery:
AI can accelerate scientific research by analyzing complex datasets, simulating experiments, and identifying patterns that might be challenging for humans to discern. How much control do we want to give our machines?
Environmental Monitoring:
AI can contribute to environmental monitoring and conservation efforts by analyzing data from various sources, such as satellite imagery and sensor networks, to track ecosystem changes and address environmental challenges. This would be nice.
Personalized Services:
AI can tailor services and recommendations based on individual preferences and behavior, providing a more personalized experience in entertainment, marketing, and e-commerce. To heck with that, I need to interact with people. I need good relationships.
Education and Training:
AI technologies can support personalized learning experiences, providing adaptive educational content and assessments tailored to individual student needs. Education is a fine use case, but we still need teachers. We can’t thrive without the human “interface.”
Assistive Technologies:
AI can be applied to develop assistive technologies that enhance the quality of life for individuals with disabilities, offering mobility, communication, and independent living solutions. This would be great. I hope this happens.
Exploration and Space Research:
AI can aid in space exploration by assisting in autonomous navigation, analyzing data from space probes, and supporting the planning of complex missions. I love the idea of intelligent machines exploring the universe while people stay on Earth doing Earthling stuff to maintain a healthy biosphere for diverse life forms.
Innovation and Creativity:
AI systems can generate novel ideas, designs, and solutions, contributing to innovation and creativity across various industries. It can help creative people.
Additional book suggestion
ARTICLES
The 15 Biggest Risks Of Artificial Intelligence
12 Risks and Dangers of Artificial Intelligence (AI)
SQ10. What are the most pressing dangers of AI?
Here's Why AI May Be Extremely Dangerous—Whether It's Conscious or Not
The risks of AI are real but manageable
Against Safetyism
AI system outperforms humans in global weather forecasting
Among the most extreme sci-fi speculations of AI doomsday are “Roko’s basilisk” and the “paperclip maximizer” thought experiments, which are designed to illustrate the risks of a superintelligent, self-replicating, and constantly self-improving future AGI — one that might become uncontrollable and incomprehensible, even to its creators. But these hypothetical scenarios are built on questionable and often highly anthropomorphizing assumptions, such as that safety measures can’t be built into these systems, that AI can’t be contained, that a future AGI would be subject to the selection pressures of natural evolution, or that a superintelligent AI will invariably turn evil.
And a deeper problem with these extreme scenarios is that it’s essentially impossible to predict the medium-term, let alone the long-term impact of emerging technologies. Over the past half-century, even leading AI researchers completely failed in their predictions of AGI timelines. So, instead of worrying about sci-fi paperclips or Terminator scenarios, we should be more concerned, for example, with all the diseases for which we weren’t able to discover a cure or the scientific breakthroughs that won’t materialize because we’ve prematurely banned AI research and development based on the improbable scenario of a sentient AI superintelligence annihilating humanity.
Last year, a Google engineer and AI ethicist claimed that Google’s chatbot had achieved sentience. And leading AI researchers and others, including Elon Musk, just recently signed a letter calling for a moratorium on AI research — all it will take, in other words, is six months to “flatten the curve” of AI progress (obviously, China’s CCP would be very supportive). Signatories seem to fear not only the shimmering cyborg exoskeletons crushing human skulls — which Terminator 2’s opening scene immortalized — but also the automation of jobs and, predictably, fake news, propaganda, misinformation, and other “threats to democracy.” But the call for a moratorium on AI — which confuses existential risks with concerns about unemployment — doesn’t define what the risks and their probabilities are, and lacks any criteria for when and how to lift the ban. So, given that regulations for AI applications, such as autonomous driving and medical diagnostics, already exist, it’s unclear why a ban on basic AI research and development is needed in the first place.
But, as if a temporary ban on AI research isn’t enough, Eliezer Yudkowsky, a leading proponent of AI doomerism — and who expects that humanity will very soon go extinct due to a superhuman intelligence-induced Armageddon — called for a complete shutdown of all large GPU clusters, restricting the computing power anyone is allowed to use in training AI systems, and, if necessary, even destroying data centers by airstrike. The only way to prevent the apocalypse, according to this extreme form of AI safety doomerism, is for America to reserve the right to launch a preemptive strike on a nuclear power to defeat Ernie, the chatbot of Chinese search engine Baidu. (This would be doubly ironic, because China's Great Firewall turns out to be a pioneering effort to censor text at scale, exactly what generative AI companies are being called on to do today.) It’s one thing to generate an infinite number of improbable apocalypse scenarios, but it’s another thing to advocate for nuclear war based on the release of a chatbot and purely speculative sci-fi scenarios involving Skynet.
“Deeply entrenched risk aversion” as “paralysis”:
Now, while concerns about the safety of emerging technologies might be reasonable in some cases, they are symptoms of a deeply entrenched societal risk aversion. Over the past decades, we’ve become extremely risk intolerant. It’s not just AI or genetic engineering where this risk aversion manifests. From the abandonment of nuclear energy and the bureaucratization of science to the eternal recurrence of formulaic and generic reboots, sequels, and prequels, this collective risk intolerance has infected and paralyzed society and culture at large (think Marvel Cinematic Universe, or startups pitched as “X for Y” where X is something unique and impossible to replicate).
Take nuclear energy. Over the last decades, irrational fear-mongering resulted in the abandonment and demonization of the cleanest, safest, and most reliable energy source available to humanity. Despite an abundance of evidence scientifically demonstrating its safety, we abandoned an energy source that could have powered civilization indefinitely for unreliable and dirty substitutes, even as we worry about catastrophic climate change. It’s hard to conceive now, but nuclear energy once encapsulated the utopian promise of infinite progress, and nuclear engineering was, up until the 1960s, one of the most prestigious scientific fields. Today, though, mainly because of Hiroshima, Fukushima, and the pop-culture imagery of a nuclear holocaust, the narrative has shifted from “alchemy,” “transmutation,” and “renewal” to dystopian imagery of “contamination,” “mutation,” and “destruction.” Although most deaths during the Fukushima incident resulted from evacuation measures — and more people died because of Japan’s shutting down of nuclear reactors than from the accident itself — many Western nations, in response to the meltdown, began to obstruct the construction of new reactors or phase out nuclear energy altogether. This resulted in the perverse situation where Germany, which has obsessively focused on green and sustainable energy, now needs to rely on highly polluting coal for up to 40% of its electricity demand. The rise of irrational nuclear fear illustrates a fundamental problem with safetyism: obsessively attempting to eliminate all visible risks often creates invisible risks that are far more consequential for human flourishing. Just imagine what would have happened if we hadn’t phased out nuclear reactors — would we now have to obsess over “net-zero” or “2°C targets”?
Mitigating risk creates risk:
Now, whether or not we think an AI apocalypse is imminent or the lab-leak hypothesis is correct, by mitigating or suppressing visible risks, safetyism often creates invisible or hidden risks that are far more consequential than the ones it attempts to mitigate. In a way, this makes sense: creating a new technology and deploying it widely entails a definite vision for the future. But a focus on risks implies a definite vision of the past and a more stochastic model of what the future might hold. Given time’s annoying habit of only moving in one direction, we have no choice but to live in somebody’s future — the question is whether it’s somebody with a plan or somebody with a neurosis.