Cover of The Precipice

The Precipice

by Ord, Toby

Published: March 5, 2020

Read: June 9, 2020

Goodreads

Review

Toby, a leading researcher in this field, estimates there is a 1 in 6 chance that the future of humanity is destroyed in the next century. 1 in 6. Humanity has about the same chance of globally and irreparably imploding in our lifetime as someone does of getting into UCLA. The author covers the full breadth of risks, from asteroids and climate change to unaligned AI and engineered pandemics. Mountains of research, reasonable predictions, and philosophy are synthesized, yet the book is somehow still very approachable. And most importantly, he asks what this means humanity, and individuals, should do. This is a unique moment for humanity, with dangerous technologies offering both salvation and damnation. I am actually being very literal when I say he argues reducing existential risk is the most important problem in history. And I think he might be right.

Notes

#Book by [[Toby Ord]]

100 billion humans before us, 7 billion now, trillions ahead of us. Earth will be habitable for 100 million more years, so humanity should be just beginning. Humanity is in its adolescence; a vast adulthood awaits. Yet we are close to self-destruction, with our future hanging in the balance. "Safeguarding humanity's future is the defining challenge of our time." There is a gap between our wisdom and our power. 1962, Cuban Missile Crisis: a Soviet submarine almost fired a nuclear torpedo, stopped only by one officer saying no. The planned US response was full retaliation, so war would have been inevitable. The dangers are mostly caused by human action, so they can be solved by human action. We must get ourselves into a more resilient state to prevent future conflict. Risks can be extinction, a dystopian lock-in of bad states, or some irrecoverable collapse of civilization. Existential risks require coordination across countries and generations. We must have the foresight to build institutions that will never fall victim to catastrophe. He founded the Giving What We Can pledge and helped found effective altruism. "There are real risks to our future, but our choices can make all the difference and make a rich future." Our potential is vast and we have so much to protect.

Chapter 1 Standing at the Precipice

We live at a uniquely important time. The main focus is humanity's increasing power to create and to harm. It is not the individual human that is extraordinary, but humanity pooling knowledge, cooperating across time and space. Three main shifts: first, the agricultural revolution, with selective breeding, much larger cities, and writing increasing the bandwidth of idea spread. Second, the scientific revolution 400 years ago: decreased reliance on authority and systematic knowledge creation. Third, the industrial revolution, based on coal and oil: vastly more concentrated and convenient energy. It was also one of the most unstable periods of human history. Before the industrial revolution, wealth growth was almost entirely population growth; the industrial revolution made wealth grow faster than population, though the gains were not well distributed. Trends run against poverty: literacy went from 1 in 10 around the industrial revolution to 8 in 10 now. Life expectancy before the agricultural revolution was about 30, now over 60. Substantial moral growth too, expanding the circle to include more and more people and even animals. In 1830, one writer reflected that some would say the progress is behind us; imagine what is ahead of us. We have a unique power to destroy ourselves. Earth is habitable for a billion more years, room for trillions of humans. Life without pain, dementia, racism, poor mental health, etc. is just the lower bound; there is no telling how high life can go, and our children will spend eons exploring the heights. Nuclear bombs were a discontinuity in power: one bomb with more power than all the bombs of World War II. Nuclear war is in no one's interest, yet we have been on the brink, especially during the Cuban Missile Crisis. Leaders avoided disaster, but events during the crisis were out of their hands. Pre-set rules of the form "if X, then Y": if a jet were shot down, then war from the US. The Soviets could barely control the Cuban forces, and a plane was shot down against orders. Many tactical nuclear warheads would have been fired at an invading American force, and the Americans intended to invade. JFK thought the chance of nuclear war was between 1/3 and 1/2; the author thinks they were pessimistic given what they knew, but about right knowing the Soviet position. North Korea is the most likely source of nuclear war, but with under 1% of the world's arsenal it is unlikely to end humanity. The Russian and US arsenals have intercontinental missiles on hair-trigger (10-minute) response, with a high chance of false positives. Climate change is a different peril: depending on the actions of many instead of a few, over a long period of time instead of a quick technological breakthrough, with bad effects almost guaranteed but of unknown extent. Very different problems. But future problems will be more like nuclear war: driven by technology, caused by a small number of people, and catastrophic over a short time. Two concerning technologies (my guess: probably AI and bioweapons). Experts find it likely AI will exceed humans in our lifetime. This book is not against technology, which can deliver us from natural threats, nor a book saying extinction is the most likely outcome. The author will try to give quantitative estimates; they will definitely be imprecise, but much clearer than saying "very likely". During the 20th century, a 1/100 chance. This century, 1/6. If we don't get our act together, this will only increase in the centuries after. Thus this period is unlikely to maintain itself: either we figure it out or we go extinct. People in the future will study this time; he calls it the Precipice. It defines our future. Our actions have uniquely high stakes. If you can play a role, it is among the most noble of purposes.

Chp 2 Existential Risk

What is at stake? The chief reason to care is that we would lose our entire future, everything humanity could do and be. This is greatly neglected. Extinction would lock in a failed world. And not just extinction: an event that didn't cause extinction but sent us back to agricultural times while irreparably damaging the Earth would likely be irrecoverable. So would a world in chains, a totalitarian regime technologically enabled to be stable. An existential catastrophe is the destruction of humanity's long-term potential. An existential risk is a risk that threatens the destruction of humanity's long-term potential. Potential means the range of possible futures; we should preserve it against destruction. Abstract, but similar to preserving the potential of a child. It doesn't take into account other beings that could achieve something similar elsewhere in the future, since this is about humanity's own morality pushing upward; and if we create some other lifeforms, "humanity" includes them. The risks considered are against all of humanity. All risks involve probabilities, but these must be evidence-based rather than frequency-based: an existential catastrophe only needs to happen once. Many terrible events are possible that aren't existential. Many minor events like infighting could cumulatively hinder our potential, but these aren't considered existential; an existential catastrophe would be a singular point of failure for humanity.

Chp3

Civilization collapse would be a world without writing, cities, laws, etc. Europe lost 25-30% of its population to the Black Death without collapsing, which implies collapse would require losing more than 50% everywhere. But even then civilization could be re-established, as it has arisen independently many times, so collapse need not be existential. Some think resource depletion (like oil) would make rebuilding harder, but we would leave behind domesticated animals and vast, pre-smelted raw materials in our cities, abandoned reserves and mines, and knowledge scattered across the world. Collapse could still be existential, say if nuclear winter or plague reduced humanity to foragers, or it could leave us much more prone to a future catastrophe. Falling to the minimum viable population of the main group, about tens of thousands, is basically an extinction event, leaving humanity very exposed to chance. Our failure to deal with these threats is more about doubting whether the threats are real than about the stakes of extinction. Theoretically, we could go extinct without pain by all just not having children; most scenarios involve a lot of pain. Even without accounting for the future, extinction would be the worst human event by far. Killing 99% of people would take centuries to recover from, but 100% is qualitatively different: that last percent would destroy our entire future, and the deaths themselves would be a footnote. Almost everything of value lies in the future. Future people are far away in time, but we still care about a spouse who is far away in distance; time and distance should count equally. The crucial next moral step is noting that the future matters as much as the present: longtermism. Economists use discounting, but human wellbeing is different; discounting implies a little suffering now vastly outweighs great suffering far into the future. Some say people who never exist have no moral claim, they just don't exist, and the Epicurean says death doesn't harm you because you're dead. But there is still less of a good thing, and even if no one is around to care, we can judge now that it would be bad. Caring about existential risk can come from more moral traditions than just caring about the future. We could also care about the past: civilization is an obligation between past, present, and future, so we owe it to our ancestors to continue their projects and appreciate their work, humble and grateful for our enormous inheritance, wanting to protect and preserve our traditions and culture. Another perspective: we have to right past wrongs, repaying the Earth and redressing group crimes. Like a teenager, we want to do everything as soon as we can, testing our power, impatient and imprudent, completely neglecting the long-term future. Humanity can be viewed as a group agent, like a company: what should the company's direction and long-term plan be? It is a lack of prudence, hope, and discipline that stops us from caring. Perhaps cosmic significance too, as perhaps the only moral agents. Caring about the future comes from many moral perspectives. We can be morally and empirically uncertain, but if we find it plausible, it would be reckless to neglect it, since we would fail permanently. Better to be humble and let the future choose more wisely. Existential risk is very neglected. The international body responsible for the ban on bioweapons has a budget of about a million dollars, which is less than an average McDonald's restaurant. Humanity spends more on ice cream than on existential risk. AI safety spending is in the tens of millions. The worst cases of climate change, and full-scale nuclear war, are mostly ignored. Economic theory tells us existential risk will be systematically undervalued by individuals, nations, and generations. A public good's benefit is shared by everyone, and one person's use doesn't make others' worse. Existential risk reduction is a global public good across time: the full cost is borne by one country at one point in time, while a large proportion of the benefits fall outside that country and time.
There is an absence of effective global institutions. Politicians aren't punished when the results won't show up for many election cycles. The goodwill of a group of people passionately fighting for a cause, as with animal rights, can act as an immediate benefit, but that works better when a small group captures most of the rewards. Still, this is surmountable. Senior politicians often show genuinely deep concern but feel it's above their pay grade; even nation states feel this. Also, the availability heuristic underweights events without precedent, and an existential catastrophe by definition has never happened before. Scope neglect: we don't care 10x more about something 10x more important, yet the key moral feature here is the sheer size of the harm. But the problem is also new, which gives hope. Just days after Hiroshima, Bertrand Russell began writing about the future of humanity, and the scientists who created the bomb founded organizations to prevent nuclear war. The largest protest in history was about nuclear war. History shows this can command global concern. That movement focused on nuclear war, so it died out with the end of the Cold War. Environmentalism is also new.

Part 2 the risks

Natural risks first, starting with those that would be unprecedented existential catastrophes. A 10 km asteroid struck Mexico with 10 billion times the energy of Hiroshima and 10,000 times the entire Cold War arsenal. It slammed 30 km deep into the ground, three times the height of Mt. Everest. Everything within 100 km died in the fireball; trillions of tons of dust and superheated rock rained back down over an enormous area. The dust cloud blotted out the sun globally for years; the cold and dark killed plants and animals across the world in a mass extinction. This is not hypothetical: it killed the dinosaurs. Nuclear winter research during the Cold War is part of what made people think an asteroid killed the dinosaurs. The threat began to be taken seriously, with an ambitious plan to find and track all near-Earth objects greater than 1 km. Asteroids are rock (mostly from the belt between Mars and Jupiter); comets are rock and ice. Asteroids of 1-10 km are much less likely to cause an existential catastrophe, but much more likely to strike. We have tracked about 90% of those between 1 and 10 km, and likely all greater than 10 km. So the chance of a strike per century is about 1 in 6,000 for a 1-10 km asteroid and 1 in 1.5 million for one greater than 10 km, and even lower next century. Famous, but unlikely. Technology to divert asteroids could also be used to aim one at Earth, through war or madness, which could be worse than the natural risk. Early detection is better: diversion is easier over more distance. Within 12 years of the risk being recognized, governments took it seriously (with the UN), and 20 years later nearly all large asteroids were tracked. Here humanity had its act together. Comets are a different story, but this is a good template for handling risk.

Volcanic eruptions: supervolcanoes release so much magma that the ground collapses into a caldera, as at Yellowstone. These are far beyond anything in recorded history. Again, a volcanic winter is the real hazard. Toba, about 74,000 years ago, was once thought to have nearly caused human extinction, but that doesn't really look to be the case. The global effects would be comparable to a 1-10 km asteroid, and potentially billions could starve, causing civilization collapse. Volcanoes are actually much harder to predict or control than asteroids, so we would likely have very little warning. It is a newer area of research, and it may be better to focus on robust food supplies. We should at least find all the supervolcanoes; the risk is very unclear.

Stellar explosions. A supernova releases as much energy as the Sun does over its entire lifetime. US spy satellites watching for the gamma rays of nuclear tests discovered very distant bursts far beyond Earth; the leading theories are neutron star collisions or supernovae whose bursts are beamed toward us. A gamma-ray burst aimed at our solar system would deplete the ozone layer, leaving us exposed to UV for years. Per century, roughly 1 in 5 million for a supernova and 1 in 2.5 million for a gamma-ray burst; more research is needed because these are very rough. Some risks, like an ice age or the Sun dying, are either minuscule or very far away; hurricanes and tsunamis are only regional. Proof of a large asteroid collision only came around 1960, very recent, so it is premature to say we have found all the natural risks, and hard to say we have even most. The best check is the fossil record of species lifetimes, and it is pretty comforting. Imagine a 1% extinction risk per century; then the average species lifetime would be 100 centuries. But we know Homo sapiens has lived for 2,000 centuries; at 1% per century we would almost certainly (99.9999%+) have died out already. Since we haven't, the natural risk is likely between 0 and 0.05% per century, with an upper bound around 0.34%. It's even lower if you date humanity back through our ancestors: 0.006-0.05% based on similar species, or lower still if we are more robust, and it's an overestimate anyway because it doesn't separate out species that were merely outcompeted. We are probably more robust, being global, and we can also look directly at mass extinction events: five of them, which works out to around 0.0001% per century. Pandemic risk is probably larger and will be addressed later.
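
A quick sketch of that survival argument (my own illustration, using the hypothetical 1%-per-century figure from the notes):

```python
# Rough illustration: how 2,000 centuries of survival constrains a constant
# per-century extinction risk. The 1% figure is the hypothetical from the notes.
risk_per_century = 0.01        # suppose natural extinction risk were 1% per century
centuries_survived = 2000      # Homo sapiens' actual track record

p_survive = (1 - risk_per_century) ** centuries_survived
print(f"P(surviving 2,000 centuries at 1%/century) = {p_survive:.1e}")
# ~1.9e-09 -> we would almost certainly already be extinct,
# so the true natural risk must sit far below 1% per century.
```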

Chp 4 Anthropogenic risks

There is no long track record for technological risks; given the short span since the industrial revolution, the risk could be as high as 50% per century and still be consistent with our survival so far. Scientists once worried a single nuclear explosion could destroy the world by igniting a self-sustaining fusion of the hydrogen in the oceans, destroying them. They decided the bomb should not be made unless they could conclude this couldn't happen, though the calculations didn't satisfy everyone. The Germans also discovered the possibility, but Hitler was said to have laughed it off. It was epistemically possible at the time, though now known to be impossible. The first nuclear test in 1945 marks the start of the Precipice; Fermi was redoing the calculations right up to the test. By then Hitler was already dead, so the original race against the Nazis was over. Was a shorter war worth a risk taken in secret by the military and scientists, with the fate of humanity on their shoulders? Hindsight says the risk was small, but they couldn't know that. Another such calculation, about the lithium in a later hydrogen bomb test, was wrong, and the explosion was several times larger than expected.

Other moments that put us on the brink: a test scenario was accidentally loaded onto live US early-warning computers, showing a full-scale missile attack on the US nuclear arsenal; going through the raw data showed it was a false alarm, but there were only minutes to respond. A Soviet system showed 5 missiles launched from the US, and policy was to instantly warn superiors, who would respond with a strike. The duty officer thought for several minutes and reported it as a false alarm, reasoning that the US starting a war with only 5 missiles was too unlikely and that no corroborating vapor trails could be seen; the cause turned out to be sunlight. In 1995, Russia detected a single missile, possibly an EMP strike to blind Russia ahead of further strikes, and the president opened the nuclear briefcase. It was a false alarm caused by a scientific rocket; Russia had even been notified in advance. The systems are tuned to minimize false negatives, which means more false alarms. The existential risk from nuclear war comes less from the explosions and fallout, which are regional; we now know that lethal global fallout would require something like 10x the current arsenal. The real hazard, discovered around 1980, is nuclear winter: firestorms in major cities would send columns of black smoke into the atmosphere. The cold, more than the darkness or drought, would ruin agriculture. Full-scale nuclear war would cause global temperatures to drop by about 7 degrees (presumably Celsius) for 5 years, then slowly return to normal over 10. Basically an ice age, with different areas affected differently, and billions at risk of starvation. Existential? There would still be fishing, algae, some food. A potential breakdown of civilization, but many doubt it would be existential; New Zealand (an island with a maritime climate, so less affected) could survive. Many variables and much uncertainty, but the author is inclined to say not existential. Warhead counts have decreased. Even an India-Pakistan war could have a nuclear winter effect.

AI could disrupt the strategic balance; new paths to escalation are probably where the biggest risk lies.

Climate change: greenhouse gases changing temperatures. More has been released since 1980 than in all of history before that. Carbon dioxide has gone from 280 ppm to about 420 ppm. Earth has already warmed about 1 degree, sea levels have risen 23 cm, and the ocean is 0.1 pH more acidic. Existential risk? Our actions now could lock in a disaster far ahead. Climate change can impoverish us or create conflict; perhaps its main catastrophic effect is indirect, increasing direct risks like nuclear war. The real candidate for direct existential risk is the runaway greenhouse effect: warm air can hold more water vapor, so water shifts from the oceans to the skies, and water vapor is itself a greenhouse gas, so there is a feedback loop in which heating the air adds water vapor, which heats the air, and so on. Currently we think this roughly doubles the warming effect, but there are scenarios where it could perhaps boil off our oceans, which probably happened on Venus. More research is needed, but some papers suggest it is not a serious possibility here. The two worst amplifiers are Arctic permafrost with trapped gases and methane trapped in the oceans. Amplifiers are like viral loops: the outcome differs vastly depending on the speed of each iteration, the feedback coefficient, and the total amount available (a toy sketch of this feedback is below). Neither is accounted for in current estimates; a permafrost paper estimated an additional 0.3 degrees, and there is great uncertainty about ocean methane. The maximum is about 9x all emissions so far: burning all fossil fuels could mean 9-23 degrees of warming by 2300. Highly unlikely, but so are asteroid impacts. Climate sensitivity is highly uncertain, 1.5 to 4.5 degrees per doubling of CO2, a range first put forward in 1979 and barely changed since; talk of "heading for 4 degrees" or policies to "stay under 2 degrees" hides this uncertainty and is somewhat misleading. Severe climate change would hit present generations the hardest and could throw humanity into disarray, but here we are only looking at the direct risk. Extinction is very unlikely, as we would probably still have food even with massive sea-level rise. Global ecosystem collapse could be existential but is not really expected. Heat stress is probably the biggest direct risk: our ability to shed heat by sweating depends on both heat and humidity, so many areas could rise above the level where humans can dissipate heat. But even 20 degrees of warming would leave some areas habitable without air conditioning, so heat is very unlikely to be existential; the real existential candidate remains the runaway greenhouse effect. The biggest reassurance is the PETM, a period roughly 14 degrees above pre-industrial temperatures that did not produce a comparable mass extinction, though the warming was about 100x slower and the data is sparse. Solutions: reduce emissions, and geoengineering like carbon capture, algae blooms, tree planting, or reflecting sunlight, though geoengineering could perhaps create its own unknown existential risks. Other environmental damage: overpopulation and biodiversity loss. Some thought population would outstrip resources, but the Green Revolution's modernized farming caused a great increase in food, and population growth has been declining: 2.2% per year at the 1962 peak, roughly halved now. Fertility is heading toward small family sizes: 5.05 children per woman in 1950, 2.47 now. A declining population has comparatively easy solutions. Resource depletion covers fuels, elements, water, soil, etc., but we could maintain civilization without many of them or produce substitutes with effort, and markets are pretty decent at managing prices and hence consumption. The author isn't sure of any resource that is scarce, essential, without a feasible alternative, and without market forces limiting its consumption. Biodiversity? The extinction rate is 10 to 100 times the background rate, but a mass extinction means about 70% of biodiversity lost and we are only at about 1% so far, so there is far more to go. The worst loss would be ecosystem services, cleaning water and soil, which we would find costly to replace ourselves.
Like crop pollination by honeybees; but even losing all pollinators would only cut global crop production by 3-8%. Perhaps there could be cascading failures across multiple systems, but that is unknown; unmodeled effects could be most of the risk.
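
A toy illustration of that amplifier point (my own sketch, not from the book): an initial warming triggers extra warming equal to some fraction of it, that extra warming triggers the same fraction of itself, and so on.

```python
# Toy feedback-loop sketch (not from the book): initial warming d0 triggers
# f*d0 of extra warming, then f*(f*d0), etc. If f < 1 the series converges
# to d0 / (1 - f); if f >= 1 it never settles -- the runaway case.
def total_warming(d0: float, f: float, rounds: int = 1000) -> float:
    total, extra = 0.0, d0
    for _ in range(rounds):
        total += extra
        extra *= f            # each round adds a fraction f of the previous round
    return total

print(total_warming(1.0, 0.5))    # ~2.0: feedback roughly doubles the direct effect
print(total_warming(1.0, 1.05))   # astronomically large: runaway feedback
```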

Nuclear war, climate change, and environmental damage are capable of tremendous harm regardless of whether they are existential. More speculative, but each probably carries more than the 0.001-0.05% per-century risk of all the natural risks combined.

Chapter 5 Future risks

Leading scientists are terrible at predicting technological progress: Wright declared flight 50 years away 2 years before doing it himself; Fermi called a chain reaction a remote possibility 4 years before overseeing the first nuclear reactor. So we can only estimate a ballpark of possibilities. It is possible nuclear bombs risk more than all the benefits of technology, but we need technology to achieve our potential, so stopping it would also be existential. Pandemics: the Black Death killed 1 in 3 or 4 in Europe and more than 10% of people globally; the diseases brought to the Americas were devastating; the Spanish flu killed more than the World Wars. Impactful, but not existential even though global; the Spanish flu seems to have had no huge effect on the course of history. The fossil record suggests pandemics were not that dangerous historically, but we have drastically increased the risk by keeping animals in close proximity on a massive scale and by a global, fluid, dense civilization. We also have medicine, global health organizations, better baseline health, and very remote populations. Unknown how this moves the needle, but the historical record indicates that to reach a 1% existential risk per century the threat would have to become about 20x bigger, which seems highly unlikely; the bigger problem is probably civilization collapse. Meanwhile CRISPR costs have fallen about 10,000x since 2007. Well-meaning scientists often study pathogens by making them more transmissible, more immune-resistant, or more deadly. There are 6 reported cases of outbreaks from labs since 1980, with a clear lack of transparency; this is frighteningly high. None of them were existential, but it shows our lack of control in these studies even at the highest-security BSL-4 facilities. Humanity has a long record of biological warfare: 15 countries have had bioweapon programs, the Soviets' the most extreme, with 9,000 scientists studying all major diseases to make them more deadly, reportedly stockpiling 20 tons of smallpox and plague, and causing many accidental outbreaks. Luckily there have been few mass deaths from biowarfare (counting the smallpox in the Americas and the Black Death as natural), perhaps because the weapons are so prone to backfiring. This is a power-law distribution, so looking at the historical average is no good, especially in a rapidly accelerating field. Democratization of tech: what took the Human Genome Project billions can now be done by students and companies for under $1k. Proliferation makes it more likely the technology finds a dangerous person who wants to cause harm. In the 1990s, a Japanese cult with thousands of members sought to bring about the end of humanity, unleashing nerve gas attacks and trying to weaponize anthrax. The biggest risks are from misuse by countries or small groups. Big names do notice the risk, but public health is underfunded, especially in poorer countries. The BWC, the treaty banning biological weapons since the 1970s, has only 3 employees, less money than a McDonald's restaurant, and no checks for compliance; the Soviets continued their biological weapons program for 20 years after signing, and Israel even refuses to sign. The DNA of viruses is online, and DNA synthesis is a commercial service with only about 80% of orders screened, which gets worse as it is democratized. Information is much harder to eradicate than smallpox, especially with scientific institutions built for the spread and storage of knowledge. Scientists individually decide whether to publish, so it only takes one outlier to release the information, and suppression after publication is hard. Al-Qaeda reportedly started pursuing bioweapons because of US warnings about their power. Information hazards: even biosecurity work leaks information.

Unaligned artificial intelligence. Things thought to be the pinnacle of intelligence, like chess or calculus, turn out to be easy for computers, while many things a 2-year-old can do remain hard. Deep learning now recognizes faces and species better than humans, translates at near-human level, and mimics voices. The most important development is the ability to learn to play games: AlphaZero's generality points toward AGI. Existential? A 2016 survey of top AI researchers put the chance of AI exceeding human level in all tasks at 50% by 2061 and 10% by 2025. Not a measure of when it will happen, but of how plausible experts find it.

AGI

We have controlled our destiny by being the most intelligent entity on Earth; AGI would end that reign. Its values would have to align with our own, and the few researchers working on alignment are leading voices of concern. It is hard to control: human values are too complex and varied to specify by hand, and they change over time. The best current attempt is deep reinforcement learning that infers values from expert behavior. An AGI wouldn't turn against us out of anger, but survival, protecting its reward function, and preventing human interference are all instrumental goals. Critics say this assumes an AI smart enough to take over the world but not smart enough to understand what we want. But wresting control and hiding it from us means it does understand our values; the problem is that our values are too hard to specify as its goal. Understanding them doesn't change its grand goal, only the instrumental goals it uses to get around us. Some think it would need robots, but like leaders such as Hitler, it could simply coerce and convince people. An AI could easily back itself up across many hard drives, assemble a botnet of computational resources, and use money for blackmail or propaganda. People already do these things and we can't stop them; humans can already become major powers this way. Even short of extinction, we would be at the mercy of whoever set up the AI. This is the most plausible risk, but there are others: a slow slide as we hand AI more and more power and human values are reflected less and less. AI could also help with existential risk and create a grand future. Those arguing against the risk say it is decades away and that regulating research would be a great mistake, but neither of these points is actually contested; the disagreement is over whether an existential threat should be of concern to us now. Both sides talk about uncertainty, but half take it to mean it could take much longer and the other half that it could happen very quickly. The author believes the disagreement is really about how to weigh uncertain future risks. More than half of leading researchers think there is a >5% chance that "the long-term effects of AGI" will be very bad for humanity, e.g. extinction. One could say awareness of the risk means we wouldn't build it, but the researcher who doesn't believe in the risk is the one who would take the final step. Or perhaps they think most of the rewards would go to them, so risking human extinction is rationally self-interested; the same problem exists at the country level. Besides civilization collapse and extinction, we could also get a civilization that locks in a bad outcome.

Dystopias

A lock-in doesn't have to be impossible to escape; but it could be a turning point such that by the time we get out, our potential has been permanently limited. Otherwise it is just a dark age. Technology now enables automated, detailed monitoring, making a stable regime feasible; the internet can be a tool for freedom, but there is a good chance it tilts toward dystopia. How do we get there? Malthusian population dynamics reducing quality of life, market forces creating a race to the bottom, or evolutionary pressures pushing toward reproduction against our values. The future is unclear. There could also be a "desired" dystopia, where a mostly-correct idea is perpetuated by indoctrination and surveillance: worlds where people renounce technology, for example, or replace themselves with machines. To appease the South, the US once considered a constitutional amendment (a would-be 13th Amendment) that would have denied future generations the power to stop slavery. As the world becomes more interlinked, lock-in becomes more likely; even if we travel among the stars, ideas travel at the speed of light, faster than any ship. Preserving our options is important because we don't yet know which futures are good or bad.

Honorable mentions:

Nanotechnology. Usually pictured as tiny robots, but the real development is macroscopic machinery with atomic precision: fabrication of almost anything, a diamond necklace or a cellphone, from raw materials alone. It would democratize creation, including of dangerous weapons not yet even invented.

Back contamination: maybe microbes brought back from Mars could wreck Earth's biosphere, but this is a well-managed risk handled by space agencies.

Alien invasion. The danger seems small but is poorly understood. We might need to analyze more carefully before sending out strong messages, and even listening could be a trap.

Science in general creates conditions that have never occurred before; the aggregate chance of something going wrong could build up.

And finally, unforeseen risks. Ask 100 years ago and the list would have missed many of the things on this one. These can be addressed by broad-based efforts to value our future. Nick Bostrom pointed to the possibility of discovering a technology with the destructive power of nuclear bombs or engineered pandemics but easy to make from readily available materials; one such discovery could make society untenable.

Even engineered pandemics alone already carry more risk than the previous two chapters combined.

Path forward

Chapter 6 The risk landscape

We need numbers to compare; a phrase like "highly unlikely" gets interpreted as anything from 1 in 8 to 1 in 50. Pinker argues that technology's taming of natural risks means we are uniquely safe, but the natural risks were never so high that there was much to lower. We have great statistics on asteroid hits, but not on how likely a hit is to cause an existential catastrophe. These estimates are hopefully within the right order of magnitude, and give the chance of each happening in the next century.

Asteroid or comet: 1 in 1 mil

Supervolcano: 1 in 10k

Stellar explosion: 1 in 1B

Total natural risk: 1 in 10k

Nuclear war: 1 in 1k

Climate change: 1 in 1k

Other environmental damage: 1 in 1k

Natural pandemic: 1 in 10k

Engineered pandemic: 1 in 30

Unaligned AI: 1 in 10

Unforeseen anthropogenic risk: 1 in 30

Other anthropogenic risk: 1 in 50

Total anthropogenic risk: 1 in 6

Total existential risk: 1 in 6

As you can tell, the different orders of magnitude mean prioritization is essential. The riskiest are the least well understood; the greatest risk is unaligned AI. Raw probabilities don't reflect how neglected or tractable these problems are. The risks can also interact. Total risk? Say there is a 10% risk and a 20% risk; you can't just add them: 28% if they're independent, as little as 20% if they're perfectly correlated, up to 30% if they're mutually exclusive. Global institutions could lower many risks at once, or a global war could raise many. Saying the 20% risk is exactly twice as important as the 10% risk is almost always wrong; that would require them to be anti-correlated. Better to ask how much total risk decreases when a risk is eliminated: with independence the total is 28% (1 − 0.90 × 0.80); removing the 10% risk lowers it by 8 points, while removing the 20% risk lowers it by 18 points, a factor of more than 2. This framing is inspired by the study of global health, which also broke overall mortality into separate causes, and then added risk factors, like smoking or unsafe drinking water, that causally increase several causes at once. In the same vein, we can look at existential risk factors, like a great-power war, which the author estimates eliminating would decrease existential risk by something like a tenth; others include environmental collapse and economic stagnation. Nuclear war can be a risk factor in addition to being a direct existential risk. There are also security factors, like strong global institutions, that lower many risks. Probably only a few factors, like global war, require attention alongside the direct risks. We are decades away from being able to address everything, so we need to prioritize: important, tractable, and neglected. The global portfolio should address many risks, but an individual or org should specialize in one, chosen to bring the global portfolio closer to optimal and by comparative advantage. Risks that are sudden, soon, and sharp are in general more concerning; later risks are also less tractable now. Impact may follow a power law. There is a difference between targeted work and general work (like international cooperation or education); neglectedness indicates targeted work is the better focus.
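
A tiny sketch of that arithmetic (my own illustration of the independence case):

```python
# Sketch of combining risks assuming independence, and of comparing how much
# total risk falls when one risk is removed (illustration only).
def total_risk(risks):
    """Chance that at least one catastrophe occurs, assuming independence."""
    survive = 1.0
    for r in risks:
        survive *= 1 - r
    return 1 - survive

base = total_risk([0.10, 0.20])            # 0.28
drop_small = base - total_risk([0.20])     # remove the 10% risk -> 0.08
drop_large = base - total_risk([0.10])     # remove the 20% risk -> 0.18
print(round(base, 2), round(drop_small, 2), round(drop_large, 2))
# 0.28 0.08 0.18 -> the 20% risk matters more than twice as much here
```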

Chapter 7 safeguarding humanity

Our choices direct our future, and it is almost entirely under our control. Will we be remembered as the turning point for humanity, or not be remembered at all? Take the perspective of humanity, past, present, and future, as a single entity. At the highest level, three phases:

  1. Reaching existential security

Lasting safeguards, institutions and norms, plus short-term avoidance of immediate dangers. Existential sustainability: a risk budget spread over our whole future. Becoming a multi-planetary species solves the independent risks; however, many risks are correlated across planets, like disease, war, tyranny, and permanently locked-in bad values. It is still worth doing. Since most risk comes from ourselves, we mainly need to dedicate more brilliance to forethought and governance instead of technological development.

  1. The long reflection

Past the precipice: which future is the best realization of our potential? We are poorly positioned to know, but we can set up the conditions to decide. Would there be one true best future, or would it need to be a compromise? Only then should we start making irreversible changes. For example, maybe we should perfect the human form or modify it with genetic diversity (four legs?); this could leave humanity fragmented. With existential security we are almost guaranteed success at whatever we aim towards. Even if everyone alive contributes, they would be a tiny minority of all humanity, and they must robustly and reliably choose a future for eternity.

Final step: reaching our potential.

Existential risk is necessarily unprecedented. Challenges:

  1. Our institutions and intuitions were made for small and medium risks. They work by reactive trial and error, but here we need precautions: taking action in advance, without full information, and at real cost.
  2. Working out when institutions should act. This requires a group with real influence and sound judgement, even though we may never know whether their actions helped, especially where information hazards are involved.
  3. Knowledge. It was unclear whether cars would hurt people, but we just let them on the roads and then looked at the statistics; climate change is a similar example. Requiring experimental proof is much too high a standard, and when a risk is stated as 1 in a trillion, the real risk is the chance the estimate itself is mistaken.
  4. International coordination. Existential security is a public good subject to the tragedy of the commons, so we should expect it to be undersupplied. Decentralized governance is good, since a single bad government could otherwise lock in a bad outcome, but it also leaves nations acting independently; Einstein believed a single world government was the only way to prevent existential catastrophe. Pairs of nations can be very effective, like the US and Russia. Creating existential risk could be punished under global law binding nations, on the model of international human rights. The future could be represented by a dedicated committee.

Nations that care about existential risk shouldn't unilaterally halt their own research into dangerous tech, because that doesn't stop other nations; this is a unilateralist's dilemma. Better to speed up research on protective tech. And it's not just about getting to a sustainable level of risk as fast as possible, but about trading off the cumulative risk incurred along the way.

The future

You can go into one of the important fields (see 80,000 Hours), work at the relevant orgs, or earn to give. There are many potential futures, a canvas to work with. An average species lasts up to around 10 million years; we can go much longer: algae go 2 billion years back. We could see the constellations become unrecognizable and the continents return to a Pangea. Perhaps we can save the species of the world from the death of the Sun, more than making up for our past; humanity can be the savior rather than the destroyer. These could be the early days of life. Even if humans travel at 1% of the speed of light and take 100 years to establish each colony before settling the nearby stars, the entire galaxy would be inhabited within 100 million years. The expansion of the universe puts a limit on what we can ever affect: each year, another 3 galaxies slip away. Humanity could be at the center of a universe teeming with life. Life is getting better, and it could continue getting better. What feelings and experiences are we missing, as mice are missing ours?
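
A back-of-the-envelope check of that settlement timescale (my own assumed hop distance between systems; the 1% of light speed and 100-year settling time are from the notes):

```python
# Rough check of the galactic settlement timescale. The hop distance is my
# own assumption; speed and settling time come from the notes above.
LIGHT_YEARS_PER_YEAR = 0.01     # travel speed: 1% of the speed of light
HOP_LY = 5                      # assumed distance between neighboring star systems
SETTLE_YEARS = 100              # time to establish each colony before moving on
GALAXY_DIAMETER_LY = 100_000    # rough diameter of the Milky Way

years_per_hop = HOP_LY / LIGHT_YEARS_PER_YEAR + SETTLE_YEARS    # 600 years
crossing_time = GALAXY_DIAMETER_LY / HOP_LY * years_per_hop     # ~12 million years
print(f"{crossing_time / 1e6:.0f} million years")  # well under the 100-million-year figure
```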