Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

(New York: Alfred A. Knopf, 2017)

The mystery of human existence lies not in just staying alive, but in finding something to live for.
—Fyodor Dostoyevsky1

In Life 3.0, Max Tegmark gives a fascinating whirlwind tour of a future with artificial intelligence. Even if you find the scenarios he presents fantastical, he asks difficult questions about the future of humanity: What is the meaning of life? Is a superhuman artificial intelligence possible, and if so, would we want one? What collective goals should we work toward together? Throughout, he offers scientific insight while humbly inviting thinkers in other fields—especially philosophy—to join "the most important conversation of our time."

First off, let's understand the title Life 3.0. Tegmark acknowledges that his definitions are controversial, reflecting a scientific perspective that is currently limited in its understanding of the topics under discussion. Thankfully, though, he provides clear definitions for the terms he uses (reproduced below) and invites discussion of the interpretation he puts forth. He starts by defining life as a "process that can retain its complexity and replicate."3 Life 1.0, then, is the simplest form of life, where both its hardware and software are evolved, meaning that changes manifest only on the evolutionary timescale, over the course of generations. We inhabit what he terms Life 2.0, where our hardware (body) is evolved but our software is designed: we can gain knowledge, change the algorithms we use to process information, and learn new skills. Life 3.0, then, is "finally fully free from its evolutionary shackles," able to design both its hardware and software. Broadly speaking, this is the future Tegmark thinks may be possible with artificial intelligence.

Let's also get on the same page about what artificial intelligence is. Tegmark uses the term Artificial General Intelligence to describe artificial (non-biological) intelligence that can accomplish virtually any goal, including learning (30). Unless otherwise noted, this is what he means when speaking about artificial intelligence or AI.

But why should you care about artificial intelligence? Summarizing the numerous arguments Tegmark makes in detail: we should all care because a future with artificial intelligence could be very bad, it could be very good, and it will have tangible effects on our lives in the near term. Most importantly, Tegmark's questions about artificial intelligence force us to ask important questions about who we are and what we want the future to look like. Let's examine each of these in turn.

A strength of Life 3.0 is the wide expanse of possible outcomes Tegmark highlights—he casts a wide net to ensure that all possible outcomes, even the highly unlikely ones, are accounted for and considered. Some of these are not good. Perhaps the AI takes control and kills all humans because they consume resources it could otherwise put to more productive use for its own ends. Or the AI may act like a zookeeper, keeping humans around for some entertainment value but in vastly limited numbers and with no autonomy.4 These are truly worst-case scenarios, possible if humans develop superintelligence5 and are unable to control its goals. Tegmark is careful to point out that AI experts disagree about whether we will ever be able to develop such a machine.

However, there are more imminent scenarios that could have detrimental effects on humanity. Even artificial intelligence at today's level of technology—in the hands of a hacker or the leader of a rogue state—could cause severe disruption to our transportation, healthcare, or financial infrastructure, for example. As Cathy O'Neil argues in Weapons of Math Destruction, big data and the algorithms that process it (the building blocks of AI systems) often function as black boxes: a human cannot understand the details of how a decision was made. Algorithms like this are already making decisions about things like creditworthiness and recidivism probabilities, at times with unintended consequences for individuals. Even if confined to the kinds of consequences already observed, the development of AI warrants careful thought about how it is deployed. The fact that this book exists points to Tegmark's concern about potential future outcomes with artificial intelligence: in Our Mathematical Universe, he identifies the two human-inflicted doomsday scenarios of most concern as nuclear destruction and rogue AI.6

The future with AI does not need to be bad, though. In fact, it could be quite beneficial. Imagine a world where a human-designed, human-serving, super- or near-human intelligent machine created massive wealth that lifted the world out of poverty, designed drugs to cure terrible diseases, and made great new scientific discoveries in tandem with human researchers. As with any technological development, artificial intelligence is not inherently good or bad; our application of it makes it so. Likewise, it is the rigor of our planning for a new technology, and of our assessment of its potential ramifications, that makes it beneficial or detrimental. A technology this complicated is not clear-cut: benefits and drawbacks exist together across its different applications.

That brings us to the big questions that artificial intelligence forces us to confront. I appreciate how Tegmark reaches out beyond the community of scientists to invite more thorough philosophical inquiry:

We humans need to confront not only traditional computational challenges, but also some of the most obdurate questions in philosophy. To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident. To program a friendly AI, we need to capture the meaning of life. What’s “meaning”? What’s “life”? What’s the ultimate ethical imperative? In other words, how should we strive to shape the future of our Universe? If we cede control to a superintelligence before answering these questions rigorously, the answer it comes up with is unlikely to involve us. This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!2

Are there final, single, correct answers to these questions? More importantly when contemplating superhuman AI, will all humans impacted by such technology agree on the answers? The historically informed realist voice in my head answers, "No." For example, an orthodox Christian worldview has a particular vision of the human person, created in the image and likeness of God7 and dependent on God's outpouring of grace in the struggle against sin.8 By contrast, centuries of romantic thinking reject this Christian anthropology and propose an alternative in which the individual will is the ultimate reality, entirely independent of other individuals, God, or any universal truth.9 In all likelihood, individuals from each group would come to different conclusions about the best way to program a hypothetical future superintelligent AI. Including other varieties of thought only compounds the conflict.

The natural conclusion of this line of reasoning is that, since we will likely never agree on the ultimate goals with which to program a superintelligent AI, we should confine ourselves to developing sub-human intelligent machines to ensure alignment with the goals of the particular human(s) they serve. This would allow for many of the benefits of artificial intelligence without the risk of a superintelligent agent imposing the goals of one set of humans on all of humanity. While this sounds relativistic, there is an important distinction between human acknowledgement of universal truths and a human-created superintelligent machine enforcing a (potentially false) view of truth on everyone. The cautious approach avoids the latter.

Even precluding superintelligence, there is important work to be done in three broad categories to ensure a beneficial future with AI. First, Tegmark is correct to call for rigorous philosophical (and I would add theological) inquiry into the ancient questions raised anew by AI. John Paul II points to this directly in his 1981 apostolic exhortation Familiaris Consortio: "The great task that has to be faced today for the renewal of society is that of recapturing the ultimate meaning of life and its fundamental values. Only an awareness of the primacy of these moral values enables man to use the immense possibilities given him by science in such a way as to bring about the true advancement of the human person in his or her whole truth, in his or her freedom and dignity. Science is called to ally itself with wisdom."10 Scientists should actively think about the ramifications of the technologies they develop, and philosophers and theologians need to apply the insights of their work to real questions about new technologies. Second, we should actively improve human-controlled AI systems and work for their integration into society. The development of these systems is all but inevitable; taking an active interest in it, as Tegmark argues, is important for ensuring that these developments have an overall positive rather than negative effect on society. Third, we need to make sure we do not develop a superhuman intelligence. Education, regulation, and research need to stay ahead of AI developments to ensure we do not mistakenly create a technology we later come to regret. I join Tegmark and Elon Musk in advocating AI safety research to put guardrails around the development of this technology.

I want to close by bringing our discussion back from hypothetical thinking about the future to the here and now. In Chapter 3, Tegmark reviews some recent breakthroughs in artificial intelligence and then projects how these and future developments might impact a number of facets of life, from manufacturing and healthcare to warfare and the legal system. One topic I find particularly interesting is the future of work in a world of AI. Citing MIT economists Erik Brynjolfsson and Andrew McAfee, Tegmark explains how digital technology increases inequality through three mechanisms: (1) by rewarding those with education over others, replacing old jobs with new jobs that require high skills; (2) by rewarding those with capital over laborers, due to increased automation; and (3) by rewarding individual superstars over everyone else, because digital platforms let superstars scale to serve entire markets.11 With these trends in mind, Tegmark explains the career advice he gives his children: go into jobs that machines are bad at, so you won't be replaced (121). Therefore, a "safe" job is one that requires a lot of interaction with other people, uses creativity, and is in an unpredictable environment.

In thinking about keeping myself competitive in the workplace, there are three areas I focus on. First, always be learning. As the rate of technological change accelerates, it becomes ever more important to learn new technical skills to replace those going out of favor. In an environment of rapid change, the fastest learners will be the most successful. Second, be a well-rounded generalist who can make connections between disparate topics. As Tegmark highlights, machine development always starts with specific, well-defined tasks; use the advantages of your human mind to enter into the specifics with a larger context and purpose in mind. Third, automate your job before someone else does. In any job there will be boring, repetitive tasks that need to be completed in addition to the "real work" where your human mind adds the most value. This is an opportunity to use the strengths of your mind and a computer together to achieve a greater result: simple automation can bring significant improvements in efficiency and accuracy. As computers and AI systems become more powerful and capable, the ability to work in harmony alongside them will only become more important.

I did not address anywhere near all the facets of AI development Tegmark discusses in Life 3.0, and it was still a whirlwind.12 I join him in inviting others to the important conversation about how AI—and technology more generally—will impact our future and what we will do to manage its emergence. It is a fascinating conversation to have, and it beautifully leads to reflection on some of the deepest questions about who we are and what matters in life.


Notes

General Outline (45)

  1. Welcome to the Most Important Conversation of Our Time: introduction, terms, what is at stake
  2. Matter Turns Intelligent: explore the foundations of intelligence and how seemingly dumb matter can be rearranged to remember, compute and learn
  3. The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs: how to modernize our laws and what career advice to give kids so that they can avoid soon-to-be-automated jobs
  4. Intelligence Explosion?: how to ensure that AGI is beneficial, whether we can or should create a leisure society that flourishes without jobs, and if an intelligence explosion can propel AGI far beyond human levels.
  5. Aftermath: The Next 10,000 Years: explore different scenarios of how AGI could unfold.
  6. Our Cosmic Endowment: The Next Billion Years and Beyond: examine the laws of physics that will determine how the universe unfolds
  7. Goals: explore the physical basis of goals
  8. Consciousness: explore the physical basis of consciousness
  9. Epilogue: what we can do now

Terminology

This table of terminology is reproduced from page 39 for reference.

Life: Process that can retain its complexity and replicate
Life 1.0: Life that evolves its hardware and software (biological stage)
Life 2.0: Life that evolves its hardware but designs much of its software (cultural stage)
Life 3.0: Life that designs its hardware and software (technological stage)
Intelligence: Ability to accomplish complex goals
Artificial Intelligence (AI): Non-biological intelligence
Narrow intelligence: Ability to accomplish a narrow set of goals (play chess, drive a car)
General intelligence: Ability to accomplish virtually any goal, including learning
Universal intelligence: Ability to acquire general intelligence given access to data and resources
Artificial General Intelligence (AGI): Ability to accomplish any cognitive task at least as well as humans
Human-level AI: AGI
Strong AI: AGI
Superintelligence: General intelligence far beyond human level
Civilization: Interacting group of intelligent life forms
Consciousness: Subjective experience
Qualia: Individual instances of subjective experience
Ethics: Principles that govern how we should behave
Teleology: Explanation of things in terms of their goals or purposes rather than their causes
Goal-oriented behavior: Behavior more easily explained via its effect than via its cause
Having a goal: Exhibiting goal-oriented behavior
Having purpose: Serving goals of one's own or of another entity
Friendly AI: Superintelligence whose goals are aligned with ours
Cyborg: Human-machine hybrid
Intelligence explosion: Recursive self-improvement rapidly leading to superintelligence
Singularity: Intelligence explosion
Universe: The region of space from which light has had time to reach us during the 13.8 billion years since our Big Bang

Prelude: Team Omega

  • He opens with a fictional tale about an altruistic group that created a superintelligent AI, as an account of what could happen: "They were convinced that if they didn’t do it first, someone less idealistic would." (location 130)
  • British mathematician Irving Good back in 1965: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” (location 134)

1. Welcome to the Most Important Conversation of Our Time

A Brief History of Complexity

  • (a brief history of the origins of the universe)

The Three Stages of Life

  • Life: "a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware." (25)
    • Life 1.0 (biological stage): life where both the hardware and software are evolved rather than designed
    • Life 2.0 (cultural stage): life whose hardware is evolved, but whose software is largely designed. By your software, I mean all the algorithms and knowledge that you use to process the information from your senses and decide what to do—everything from the ability to recognize your friends when you see them to your ability to walk, read, write, calculate, sing and tell jokes.
    • Life 3.0 (technological stage): life which can design not only its software but also its hardware. "Life 3.0 is the master of its own destiny, finally fully free from its evolutionary shackles." (29)

Controversies

  • Artificial General Intelligence: artificial intelligence which can accomplish virtually any goal, including learning—this serves as his working definition when referring to AI throughout the book (30)
  • Most people fall into one of the categories shown in Figure 1.2 (31):
    • Digital utopianism: "that digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good." (32)
    • Techno-skeptic position: articulated by Andrew Ng: “Fearing a rise of killer robots is like worrying about overpopulation on Mars.” (32-33)
    • The Beneficial-AI Movement: the position he and his Future of Life Institute take

Misconceptions

  • It is often difficult to have a conversation about AI without talking past one another due to misunderstanding. See the table above for Tegmark's definitions of the terms he uses. Other misconceptions he discusses include:
    • Timeline: "superintelligence may happen in decades, centuries, or never: AI experts disagree and we simply don't know" (41)
    • Controversy: "many top AI researchers are worried about AI" (42)
    • Risks: "human-killing robots" are less a concern than (1) competence: the machine's ability to carry out the human's goals, and (2) goals: aligning AI goals with human goals and choosing between competing human goals

2. Matter Turns Intelligent

What Is Intelligence?

  • there’s no agreement on what intelligence is even among intelligent intelligence researchers! (49)
  • Intelligence: the ability to accomplish complex goals (50)
  • Comparing the intelligence of humans and machines today, we humans win hands-down on breadth, while machines outperform us in a small but growing number of narrow domains (52)
  • intelligent behavior is inexorably linked to goal attainment. (53)

What Is Memory?

  • our human DNA stores about 1.6 gigabytes, comparable to a downloaded movie. As mentioned in the last chapter, our brains store much more information than our genes: in the ballpark of 10 gigabytes electrically (specifying which of your 100 billion neurons are firing at any one time) and 100 terabytes chemically/biologically (specifying how strongly different neurons are linked by synapses). (60)
  • Auto-associative memory: retrieve memories from your brain by specifying something about what is stored, as compared with how you retrieve memories from a computer or hard drive by specifying where it’s stored (60); a minimal sketch follows below
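
Tegmark doesn't give code for this, but the classic toy model of content-addressable memory is a Hopfield network. Here is a minimal sketch in Python (my illustration; the Hebbian rule and the 8-bit pattern are standard textbook choices, not from the book):

```python
import numpy as np

# Minimal Hopfield-style auto-associative memory (illustrative sketch).
# Patterns are stored in a weight matrix via Hebbian learning; a noisy
# cue is then "cleaned up" by repeated updates until a stored pattern
# is recovered -- retrieval by content, not by address.

def train(patterns):
    """Hebbian rule: w_ij accumulates correlations between bits i and j."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:              # each p is a vector of +1/-1 values
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Settle the network from a partial or noisy cue."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(w @ s)          # each neuron aligns with its weighted input
        s[s == 0] = 1               # break ties toward +1
    return s

# Store one 8-bit "memory", then retrieve it from a corrupted version.
memory = np.array([[1, -1, 1, 1, -1, -1, 1, -1]])
w = train(memory)
noisy = memory[0].copy()
noisy[:2] *= -1                     # flip the first two bits
print(recall(w, noisy))             # recovers the stored pattern
```

The cue specifies part of what is stored (six of the eight bits are intact), and the dynamics settle onto the nearest stored pattern: retrieval by content rather than by address, just as the note above describes.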

What Is Computation?

  • NAND gates are universal: you can implement any well-defined function simply by connecting together NAND gates. So if you can build enough NAND gates, you can build a device computing anything (64). (A toy demonstration follows this list.)
  • Stephen Wolfram has argued that most non-trivial physical systems, from weather systems to brains, would be universal computers if they could be made arbitrarily large and long-lasting. (65)
  • computation is substrate-independent in the same way that information is: it can take on a life of its own, independent of its physical substrate (65-66)
    • a substrate is necessary, but most of its details don’t matter
    • the substrate-independent phenomenon takes on a life of its own, independent of its substrate
    • it’s often only the substrate-independent aspect that we’re interested in: a surfer usually cares more about the position and height of a wave than about its detailed molecular composition
    • computation is a pattern in the spacetime arrangement of particles, and it’s not the particles but the pattern that really matters: matter doesn’t matter (67)
    • substrate independence allows you to keep the same software as hardware improves: Seth Lloyd has worked out what this fundamental limit is, and as we’ll explore in greater detail in chapter 6, this limit is a whopping 33 orders of magnitude (10^33 times) beyond today’s state of the art for how much computing a clump of matter can do. So even if we keep doubling the power of our computers every couple of years, it will take over two centuries until we reach that final frontier. (69) (The arithmetic is sketched after this list.)
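
To make the NAND universality claim from the first bullet concrete, here is a toy Python sketch (mine, not the book's) that builds NOT, AND, OR, and XOR entirely out of a single nand function, then wires them into a half adder:

```python
# Everything below is built from this one gate.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# A half adder made purely of NANDs: sum and carry bits of a + b.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a}+{b}: sum={xor_(a, b)} carry={and_(a, b)}")
```

Any Boolean function can be grown the same way, which is the sense in which enough NAND gates suffice to compute anything well-defined.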
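
And a quick sanity check on the "over two centuries" figure from the last bullet (my arithmetic, assuming the stated doubling time of two years):

$$2^{n} = 10^{33} \;\Rightarrow\; n = \frac{33}{\log_{10} 2} \approx 110 \text{ doublings}, \qquad 110 \times 2 \text{ years} \approx 220 \text{ years}.$$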

What Is Learning?

  • Machine learning: the study of algorithms that improve through experience (71)
  • Neural network: a group of interconnected neurons that are able to influence each other’s behavior. (71)
    • A network of neurons can compute functions just as a network of NAND gates can. For example, artificial neural networks have been trained to input numbers representing the brightness of different image pixels and output numbers representing the probability that the image depicts various people. (A toy example follows this list.)
    • Deep neural network: deep neural networks (neural networks with many layers) are much more efficient than shallow ones for many functions of interest. For example, together with another amazing MIT student, David Rolnick, we showed that the simple task of multiplying n numbers requires a whopping 2^n neurons for a network with only one layer, but takes only about 4n neurons in a deep network. (76)
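
To make the neurons-compute-functions point concrete, here is a toy two-layer network with hand-picked weights (my sketch, not code from the book) that computes XOR, a function no single neuron can compute but a small network handles easily:

```python
import numpy as np

# A tiny fixed-weight neural network computing XOR, illustrating that
# networks of simple threshold neurons can compute functions just as
# networks of NAND gates can.

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum followed by a step function."""
    return 1 if np.dot(inputs, weights) + bias > 0 else 0

def xor(x1, x2):
    h1 = neuron([x1, x2], [1, 1], -0.5)     # hidden neuron: x1 OR x2
    h2 = neuron([x1, x2], [1, 1], -1.5)     # hidden neuron: x1 AND x2
    return neuron([h1, h2], [1, -1], -0.5)  # fires iff OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} XOR {b} = {xor(a, b)}")
```

The hidden layer computes OR and AND of the inputs, and the output neuron fires when OR holds but AND does not, which is exactly XOR. Reusing intermediate results like this is one intuition for why deep networks can be exponentially more compact than shallow ones, as in the multiplication example above.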

3. The Near Future: Breakthroughs, Bugs, Laws, Weapons and Jobs

Breakthroughs

  • AI has exhibited the ability to find "creative" solutions that humans have not:
    • Maximizing Atari game scores by tunneling through the side wall so the ball bounces around along the top (83)
    • Playing a move in Go on the fifth line (humans conventionally keep to the third or fourth). The move was called "one of the most creative in Go history" (88)
    • Strategy: To me, AlphaGo also teaches us another important lesson for the near future: combining the intuition of deep learning with the logic of GOFAI can produce second-to-none strategy. Because Go is one of the ultimate strategy games, AI is now poised to graduate and challenge (or help) the best human strategists even beyond game boards—for example with investment strategy, political strategy and military strategy. Such real-world strategy problems are typically complicated by human psychology, missing information and factors that need to be modeled as random, but...none of these challenges are insurmountable. (89)

Bugs vs. Robust AI

  • We need to be proactive instead of reactive in AI safety research: "as technology grows more powerful, we should rely less on the trial-and-error approach to safety engineering." (94)
  • He reviews potential applications for AI in a number of fields: Space Exploration, Finance, Manufacturing, Transportation, Energy, Healthcare, Communication, Law, Weapons
  • Validation vs. Verification (96-97):
    • Verification: "Did I build the system right?"
    • Validation: "Did I build the right system?"

Jobs and Wages

  • Erik Brynjolfsson and Andrew McAfee argue that digital technology drives inequality in three ways (119)
    1. technology benefits the educated by replacing old jobs with ones requiring more skills
    2. technology rewards capital (people who own companies and machines) over labor (see xkcd #1897): capital-levered companies scale digitally, and the profits flow to their owners
    3. technology benefits superstars over everyone else: superstars can scale at low marginal cost thanks to technology, squeezing others out
  • Career Advice for Kids: Go into professions that machines are currently bad at, and therefore seem unlikely to get automated in the near future (121)
    • Ask yourself these questions about your future job:
      1. Does it require interacting with people and using social intelligence?
      2. Does it involve creativity and coming up with clever solutions?
      3. Does it require working in an unpredictable environment?
    • Safe bets include: teacher, nurse, doctor, dentist, scientist, entrepreneur, programmer, engineer, lawyer, social worker, clergy member, artist, hairdresser or massage therapist
    • Even non-machine jobs will face pressure: increasingly brutal competition from other humans forced out of work by machines, and thanks to the superstar theory, few will succeed
    • "If you go into finance, don’t be the “quant” who applies algorithms to the data and gets replaced by software, but the fund manager who uses the quantitative analysis results to make strategic investment decisions"; also be the litigator rather than the paralegal, and doctor who orders radiology rather than radiologist (122)
  • He discusses if and how we should consider a universal basic income:
    • Voltaire wrote in 1759 that “work keeps at bay three great evils: boredom, vice and need.” (128)

4. Intelligence Explosion?

  • "The danger with the Terminator story isn’t that it will happen, but that it distracts from the real risks and opportunities presented by AI...we’re pretty clueless about what will and won’t happen, and that the range of possibilities is extreme." (134)
  • Risk 1–Totalitarianism: a bad human controls a superintelligent AI (136)
  • Risk 2–Breakout: the possibility that a superhuman intelligence would leave human control (138):
    • Suppose that a mysterious disease has killed everybody on Earth above age five except you, and that a group of kindergartners has locked you into a prison cell and tasked you with the goal of helping humanity flourish...this is how a superhuman intelligence would feel about humans (139)
    • He goes on to speculate about how and why a superintelligence would attempt to break out
    • Interestingly candid aside that admits some truth about the nature of human sexuality: "But that’s not a valid conclusion: our DNA gave us the goal of having sex because it “wants” to be reproduced, but now that we humans have understood the situation, many of us choose to use birth control, thus staying loyal to the goal itself rather than to its creator or the principle that motivated the goal." (140)
    • The risk is in the relationship between competence and goals: "Prometheus caused problems for certain people not because it was necessarily evil or conscious, but because it was competent and didn’t fully share their goals." (149)
  • He asserts that there is so much uncertainty, and such widely varying consequences, that we should keep an open mind and take precautions
  • "Although our present world remains stuck in a multipolar Nash equilibrium, with competing nations and multinational corporations at the top level, technology is now advanced enough that a unipolar world would probably also be a stable Nash equilibrium." (153)
  • “Who or what will control the intelligence explosion and its aftermath, and what are their/its goals?” (159)
  • So we should instead ask: “What should happen? What future do we want?” (159)

5. Aftermath: The Next 10,000 Years

  • He discusses possible scenarios in detail including potential pros and cons. A summary table is below, as well as some interesting thoughts I picked out from a few of the descriptions:
Libertarian utopia: Humans, cyborgs, uploads and superintelligences coexist peacefully thanks to property rights.
Benevolent dictator: Everybody knows that the AI runs society and enforces strict rules, but most people view this as a good thing.
Egalitarian utopia: Humans, cyborgs and uploads coexist peacefully thanks to property abolition and guaranteed income.
Gatekeeper: A superintelligent AI is created with the goal of interfering as little as necessary to prevent the creation of another superintelligence. As a result, helper robots with slightly subhuman intelligence abound, and human-machine cyborgs exist, but technological progress is forever stymied.
Protector god: Essentially omniscient and omnipotent AI maximizes human happiness by intervening only in ways that preserve our feeling of control of our own destiny and hides well enough that many humans even doubt the AI’s existence.
Enslaved god: A superintelligent AI is confined by humans, who use it to produce unimaginable technology and wealth that can be used for good or bad depending on the human controllers.
Conquerors: AI takes control, decides that humans are a threat/nuisance/waste of resources, and gets rid of us by a method that we don’t even understand.
Descendants: AIs replace humans, but give us a graceful exit, making us view them as our worthy descendants, much as parents feel happy and proud to have a child who’s smarter than them, who learns from them and then accomplishes what they could only dream of—even if they can’t live to see it all.
Zookeeper: An omnipotent AI keeps some humans around, who feel treated like zoo animals and lament their fate.
1984: Technological progress toward superintelligence is permanently curtailed not by an AI but by a human-led Orwellian surveillance state where certain kinds of AI research are banned.
Reversion: Technological progress toward superintelligence is prevented by reverting to a pre-technological society in the style of the Amish.
Self-destruction: Superintelligence is never created because humanity drives itself extinct by other means (say, nuclear and/or biotech mayhem fueled by climate crisis).
  • Protector god:
    • On the other hand, some religious people may disapprove of this scenario because the AI attempts to outdo their god in goodness, or interfere with a divine plan where humans are supposed to do good only out of personal choice. (178)
    • Another downside of this scenario is that the protector god lets some preventable suffering occur in order not to make its existence too obvious. This is analogous to the situation featured in the movie The Imitation Game, where Alan Turing and his fellow British code crackers at Bletchley Park had advance knowledge of German submarine attacks against Allied naval convoys, but chose to only intervene in a fraction of the cases in order to avoid revealing their secret power. It’s interesting to compare this with the so-called theodicy problem of why a good god would allow suffering. Some religious scholars have argued for the explanation that God wants to leave people with some freedom. In the AI-protector-god scenario, the solution to the theodicy problem is that the perceived freedom makes humans happier overall. (178)
  • Enslaved god:
    • Whether the outcome is good or bad for humanity would obviously depend on the human(s) controlling it (180)
    • The Catholic Church is the most successful organization in human history in the sense that it’s the only one to have survived for two millennia, but it has been criticized for having both too much and too little goal stability: today some criticize it for resisting contraception, while conservative cardinals argue that it’s lost its way. For anyone enthused about the enslaved-god scenario, researching long-lasting optimal governance schemes should be one of the most urgent challenges of our time. (181)
  • "The scenarios we’ve covered obviously shouldn’t be viewed as a complete list, and many are thin on details, but I’ve tried hard to be inclusive, spanning the full spectrum from high-tech to low-tech to no-tech and describing all the central hopes and fears expressed in the literature...there’s no consensus whatsoever. The one thing everybody agrees on is that the choices are more subtle than they may initially seem" (200)

6. Our Cosmic Endowment: The Next Billion Years and Beyond

  • He reviews attempts to be more efficient at converting mass into energy, approaching the ideal of E = mc²
    • Dyson sphere: rearrange Jupiter into a spherical shell surrounding the Sun, giving us 100 billion times more biomass and a trillion times more energy than today (205)
    • Evaporating Black Holes: In A Brief History of Time, Stephen Hawking proposed a black hole power plant where whatever matter you dump into the black hole will eventually come back out again as heat radiation, so by the time the black hole has completely evaporated, you’ve converted your matter to radiation with nearly 100% efficiency. (211)
    • Other ideas include spinning black holes, quasars, sphalerons, etc.
  • He discusses questions about cosmic settlement:
    • Teleportation: "Once another solar system or galaxy has been settled by superintelligent AI, bringing humans there is easy—if humans have succeeded in making the AI have this goal. All the necessary information about humans can be transmitted at the speed of light, after which the AI can assemble quarks and electrons into the desired humans." (225)
  • He reviews theories of how the universe will end. Our universe is about 10^10 years old. Theories involving dark energy indicate the universe could end as soon as 10^10–10^11 years from now. An upper limit on how long the universe might last is around 10^1500 years (231)
  • He reviews a number of considerations of how a future "alive" universe would control and act (233-240)
  • He does not assume that there is other life: "Indeed, I think that this assumption that we’re not alone in our Universe is not only dangerous but also probably false." (241)
    • His argument is basically that the distance between civilizations would need to fall between 10^22 and 10^26 light-years, which he says is a low-probability range. (242)
    • "Although I’m a strong supporter of all the ongoing searches for extraterrestrial life, which are shedding light on one of the most fascinating questions in science, I’m secretly hoping that they’ll all fail and find nothing! The apparent incompatibility between the abundance of habitable planets in our Galaxy and the lack of extraterrestrial visitors, known as the Fermi paradox, suggests the existence of what the economist Robin Hanson calls a “Great Filter,” an evolutionary/technological roadblock somewhere along the developmental path from the non-living matter to space-settling life. If we discover independently evolved life elsewhere, this would suggest that primitive life isn’t rare, and that the roadblock lies after our current human stage of development—perhaps because space settlement is impossible, or because almost all advanced civilizations self-destruct before they’re able to go cosmic. I’m therefore crossing my fingers that all searches for extraterrestrial life find nothing: this is consistent with the scenario where evolving intelligent life is rare but we humans got lucky, so that we have the roadblock behind us and have extraordinary future potential." (245)

7. Goals

  • We have witnessed 4 stages of goal-seeking behavior (249-259, 269):
    • 1. Physics: Matter seemingly intent on maximizing its dissipation
    • 2. Biology: Primitive life seemingly trying to maximize its replication
    • 3. Psychology: Humans pursuing not replication but goals related to pleasure, curiosity, compassion and other feelings that they’d evolved to help them replicate
    • 4. Engineering: Machines built to help humans pursue their human goals

Friendly AI: Aligning Goals

  • The risk of AI is misalignment of goals. Aligning AI goals with human goals is difficult and has 3 sub-problems (260):
    1. Making AI learn our goals
    2. Making AI adopt our goals
    3. Making AI retain our goals
  • The time window during which you can load your goals into an AI may be quite short: the brief period between when it’s too dumb to get you and too smart to let you. (263)

Ethics: Choosing Goals

  • "In my opinion, both this ethical problem and the goal-alignment problem are crucial ones that need to be solved before any superintelligence is developed." (269)
  • He identifies 4 primary, commonly held ethical principles in history:
    • Utilitarianism: Positive conscious experiences should be maximized and suffering should be minimized.
    • Diversity: A diverse set of positive experiences is better than many repetitions of the same experience, even if the latter has been identified as the most positive experience possible.
    • Autonomy: Conscious entities/societies should have the freedom to pursue their own goals unless this conflicts with an overriding principle.
    • Legacy: Compatibility with scenarios that most humans today would view as happy, incompatibility with scenarios that essentially all humans today would view as terrible. (location 4891)

Ultimate Goals?

  • Do we have an ethical destiny?
  • "It appears that we humans are a historical accident, and aren’t the optimal solution to any well-defined physics problem. This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us. This means that to wisely decide what to do about AI development, we humans need to confront not only traditional computational challenges, but also some of the most obdurate questions in philosophy. To program a self-driving car, we need to solve the trolley problem of whom to hit during an accident. To program a friendly AI, we need to capture the meaning of life. What’s “meaning”? What’s “life”? What’s the ultimate ethical imperative? In other words, how should we strive to shape the future of our Universe? If we cede control to a superintelligence before answering these questions rigorously, the answer it comes up with is unlikely to involve us. This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation! (279)

8. Consciousness

  • "We’ve seen that AI can help us create a wonderful future if we manage to find answers to some of the oldest and toughest problems in philosophy—by the time we need them. We face, in Nick Bostrom’s words, philosophy with a deadline." (281)
  • Although thinkers have pondered the mystery of consciousness for thousands of years, the rise of AI adds a sudden urgency, in particular to the question of predicting which intelligent entities have subjective experiences. (282)

What is Consciousness?

  • Just as with “life” and “intelligence,” there’s no undisputed correct definition of the word “consciousness.” Competing definitions include sentience, wakefulness, self-awareness, access to sensory input and the ability to fuse information into a narrative. His definition of consciousness: subjective experience (283)

What's the Problem? (What don't we understand about consciousness?)

  • David Chalmers breaks this into two problems (284):
    1. The Easy Problem: how a brain processes information
    2. The Hard Problem: why you have a subjective experience
  • Tegmark then breaks the hard problem into (286):
    1. Pretty Hard: What physical properties distinguish conscious and unconscious systems?
    2. Even Harder: How do physical properties determine qualia?
    3. Really Hard: Why is anything conscious?

Is Consciousness Beyond Science?

  • Tegmark argues that consciousness is not beyond science (at least the Pretty Hard Problem) and gives a thoughtful description of the scientific method in the process:
    • Austro-British philosopher Karl Popper popularized the now widely accepted adage “If it’s not falsifiable, it’s not scientific.” (287)
    • "Suppose that a computer measures information being processed in your brain and predicts which parts of it you’re aware of according to a theory of consciousness. You can scientifically test this theory by checking whether its predictions are correct, matching your subjective experience." (288)
    • "The more dangerously a theory lives by sticking its neck out and making testable predictions, the more useful it is, and the more seriously we take it if it survives all our attempts to kill it. Yes, we can only test some predictions of consciousness theories, but that’s how it is for all physical theories. So let’s not waste time whining about what we can’t test, but get to work testing what we can test!" (288-289)
    • "But when confronted with several related unanswered questions, I think it’s wise to tackle the easiest one first." (289)

Experimental Clues About Consciousness

  • He reviews System I and System II thinking (levels of consciousness) from Thinking Fast and Slow by Daniel Kahneman, and neuroscience research of the past couple of decades which seeks to understand how and where the brain perceives, including neural correlates of consciousness (NCC)
  • Your consciousness lives in the past, with Christof Koch estimating that it lags behind the outside world by about a quarter second. (297)

Theories About Consciousness

  • Consciousness (Tegmark's conjecture): the way information feels when being processed in certain ways. (304)
    • it must be substrate-independent; it’s only the structure of the information processing that matters, not the structure of the matter doing the information processing
  • Principles of Consciousness (he views these as necessary, but does not claim they are sufficient): The first three principles imply autonomy; all four principles together mean that a system is autonomous but its parts aren’t (304)
    1. Information Principle: a conscious system has substantial information-storage capacity
    2. Dynamics Principle: a conscious system has substantial information-processing capacity
    3. Independence Principle: a conscious system has substantial independence from the rest of the world
    4. Integration Principle: a conscious system cannot consist of nearly independent parts
  • "Traditionally, we humans have often founded our self-worth on the idea of human exceptionalism: the conviction that we’re the smartest entities on the planet and therefore unique and superior. The rise of AI will force us to abandon this and become more humble. But perhaps that’s something we should do anyway: after all, clinging to hubristic notions of superiority over others (individuals, ethnic groups, species and so on) has caused awful problems in the past, and may be an idea ready for retirement." (314)

Epilogue

  • "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom." —Isaac Asimov (316)
  • A resolution of his: "I was no longer allowed to complain about anything without putting some serious thought into what I could personally do about it" (317)
  • About Elon Musk: "I instantly liked him. He radiated sincerity, and I was inspired by how much he cared about the long-term future of humanity—and how he audaciously turned his aspiration into actions. He wanted humanity to explore and settle our Universe, so he started a space company. He wanted sustainable energy, so he started a solar company and an electric-car company. Tall, handsome, eloquent and incredibly knowledgeable, it was easy to understand why people listened to him." (322)
  • Anecdote about the F9S1-15 CRS-5 mission: "The conference climax, Elon’s donation announcement, was scheduled for 7 p.m. on Sunday, January 4, 2015...Elon’s assistant called and said that it looked like Elon might not be able to go through with the announcement...Elon explained that they were just two days away from a crucial SpaceX rocket launch where they hoped to pull off the first-ever successful landing of the first stage on a drone ship, and that since this was a huge milestone, the SpaceX team didn’t want to distract from it with concurrent media splashes involving him" (323). The mission succeeded, but the first-stage landing attempt failed.
  • "Erik Brynjolfsson spoke of two kinds of optimism in his Asilomar talk. First there’s the unconditional kind, such as the positive expectation that the Sun will rise tomorrow morning. Then there’s what he called “mindful optimism,” which is the expectation that good things will happen if you plan carefully and work hard for them. That’s the kind of optimism I now feel about the future of life." (333)
  • "Please discuss all this with those around you—it’s not only an important conversation, but a fascinating one...Our future isn’t written in stone and just waiting to happen to us—it’s ours to create. Let’s create an inspiring one together!" (335)

Created: 2021-09-16
Updated: 2022-02-22-Tue


  1. Cited by Tegmark at the beginning of Chapter 7 on Goals (249), from The Grand Inquisitor in Fyodor Dostoyevsky, The Brothers Karamazov (New York: Barnes & Noble, 2004), 236. 

  2. Max Tegmark, Life 3.0 (New York: Alfred A. Knopf, 2017), 279. 

  3. Max Tegmark, Life 3.0 (New York: Alfred A. Knopf, 2017), 25. Also see 39 for a table of definitions used throughout the book. 

  4. See the other scenarios he describes in Chapter 5 (in the notes section below) 

  5. Superintelligence: "General intelligence far beyond human level", Terminology (39) 

  6. Max Tegmark, Our Mathematical Universe (New York: Alfred A. Knopf, 2014), 377. 

  7. cf. Gen 1:26–27: Then God said: Let us make human beings in our image, after our likeness...God created mankind in his image; in the image of God he created them; male and female he created them. 

  8. cf. CCC 1700 

  9. As one recent example, take Justice Anthony Kennedy's majority opinion in Planned Parenthood v. Casey: "At the heart of liberty is the right to define one's own concept of existence, of meaning, of the universe, and of the mystery of human life." 

  10. John Paul II, Familiaris Consortio: On the Role of the Christian Family in the Modern World, 8. 

  11. For related reading, see Rise of the Robots: Technology and the Threat of a Jobless Future by Martin Ford. Chapter 2 identifies "7 deadly trends" brought by increased automation, including: stagnant wages, corporations winning over labor, lower labor force participation, less job creation, increased inequality, difficulty for recent graduates, and more part time jobs. 

  12. My full notes on Life 3.0 are below.