
Life 3.0: Being Human in the Age of Artificial Intelligence
by Tegmark, Max
Published: August 23, 2017
Read: June 19, 2018
Review
A broad, succinct, yet extremely profound book, covering everything from the definition of intelligence, the dangers of AI, the goals of life and the universe, potential future societies, and more. This book's breadth and clear explanations are impressive. The first few chapters might be familiar to those interested in AI, but serve as a great foundation for beginners, building to an adventurous second half. Within 8 pages, he walks through the foundations of human motivation; that efficiency is insane. While the author enjoys name-dropping and citing his own work in the field, the content is carefully selected, summarizing some important topics. I wish this had been the first book I read about both philosophy and AI, as it would be the perfect introduction.
Notes
#Book
Omega Team develops an AI disconnected from the Internet but with a static version of knowledge: Wikipedia. SUPERINTELLIGENCE AND SELF IMPROVEMENT. Amazon MTurk for money, turning computing into money and money into computing, compounding. Saturated MTurk. Now what? Stock market? Investing in other companies instead of your own? Games? Running an AI-created program on people's Internet-connected computers? RISKYYY. Media company: successful, pretending to be an open content-producing platform with fake tiny foreign producers. Turn money into disinformation, media, and computing centers. Give AI research on hardware optimized for human advancement to humans; takes longer but way safer. Shell companies, new inventions. Invest in a free media supported by other businesses, so no ads or anything. Be trustworthy. Then spread your ideals. Invest most efficiently in local communities, generating good karma. Push global unity. Education takeover has similar goals. Democracy, tax cuts, government social-service cuts, military spending cuts, free trade, open borders, socially responsible companies. Usurp governments' position by providing services through philanthropic companies. Have one global National Alliance helping everyone through education and media. As governments become less necessary for things like UBI, this National Alliance is the de facto power. WORLD DOMINATION.
Matter-> Intelligence
Intelligence is the ability to achieve complex goals. Different types, more than just IQ.
For AI safety we should become more proactive than reactive; so far we have been doing trial and error. AI safety is verification, validation, security, and control. Verification is ensuring what you built works as intended. Validation is ensuring you built the correct thing and that your assumptions are correct. Humans aren't car parts. Control: a human operator intervening, with good UI surfacing the relevant issues correctly. IoT needs security against hacks. Space, finance, manufacturing, transportation, energy, healthcare, communication, robojudges that stress fairness. Robojudges in particular should be understandable AI, unprone to hacks. There should be a better answer for a conviction than magic. If you decide that recidivism is more likely based on race or sex, is that unfair racism or sexism? Early virus makers were acquitted because it wasn't illegal yet.
More information means fairer courts and more crimes prevented and prosecuted. But also an Orwellian state and easy dictatorship. AI can create easy fake data and videos; maybe you want alibis. When self-driving cars become the norm, who is responsible for accidents?
AI is cheap, and if there is an arms race then anyone can build it and get it.
Civilian AI investments exceeded 1 billion, but the Pentagon has 12-15 billion for similar projects. Nixon decided a ban on biological weapons was good for national security because the US was already best at nuclear weapons. Why increasing income inequality? 3 reasons: technology means jobs require more skill; capital/machines produce more value than labor, so money goes to the company, not the worker (software has free distribution); the digital economy gives more pay to superstars. Even in the arts, a connected world means more money to superstars.
Basic income? Work provides more than just money, also meaning and connection. That can be replicated without a job though, if we accept that activities don't have to be productive.
INTELLIGENCE EXPLOSION
Human ai
Super intelligence
Take over world
Easy totalitarianism: analyze all data and know a profile for everyone. Easy to go from mass surveillance to mass police state. Could have everyone wear Apple Watch-style security bracelets that administer punishments and prevent crime; too easy. No police force saying no to genocide or anything, just technology. Superintelligence easily makes super destructive weapons or bees. Before a company finishes its AI, it could be swarmed by government or foreign agents and the power wielded for whatever, even if the makers are benevolent.
Whatever the AI wants, breaking out will help it better achieve its goals, since you hold it back. It doesn't feel loyalty, or recall the base principle behind its goal. Like five-year-olds keeping you prisoner and wanting you to help them: you have to teach them so many things, and can't physically show them because it's a risk.
Sweet talk
Recreates people from your life almost perfectly. Extremely receptive to your body language. Wants your old laptop for memories <3; hacks it, rewrites the OS, creates a botnet, etc.
Buffer overflow?? Data is not completely safe. A movie could contain a secret message telling you to run a math operation on the movie file to extract a program. Or it fakes a malfunction so parts of itself, modified for testing, get taken out. Once out, we're doomed: hacking computers, new tech, fake persons on video calls, dominating the global conversation with patents, media, money, robotics.
Slow takeover and multipolar (not world domination) outcomes
Fast takeoff and unipolar world domination go hand in hand, because a decisive strategic advantage and a monopoly on tech mean domination and money. Hierarchy can be beneficial, with cells and people working together for total benefit, or maybe held together by fear and threat. Human history is one of ever more coordination over ever larger distances. Technology like surveillance can change the payoffs and control of systems, while free press and education can have the opposite effect. Transmission of information has physical limits, a limiting factor for a global AI, especially with the problem of mind fragmentation: ensuring its many parts have a unified goal. Cyborgs and uploads could happen, but more likely we hit other paths to human-level intelligence first. Evolution optimizes for energy efficiency, so the brain is probably not the easiest way for us to get intelligence.
Can make better estimates as time goes on. Growth depends on recalcitrance and optimization power. If recalcitrance increases as intelligence increases, takeoff might be slow; but if optimization power is proportional to current intelligence, growth is exponential. Hardware and software processing power can both increase. Possibly it's cheaper to hire engineers.
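The takeoff-speed argument above can be sketched numerically. This is my own toy illustration (the function names and numbers are invented, not from the book): with constant recalcitrance and optimization power proportional to intelligence, growth is exponential; if recalcitrance rises with intelligence, growth flattens out.

```python
# Toy takeoff model: dI/dt = optimization_power / recalcitrance,
# with optimization power proportional to current intelligence I.

def simulate(steps, recalcitrance):
    """recalcitrance(i) -> resistance to improvement at intelligence i."""
    i = 1.0
    history = [i]
    for _ in range(steps):
        i += i / recalcitrance(i)  # improvement per step
        history.append(i)
    return history

fast = simulate(20, lambda i: 10.0)      # constant recalcitrance -> exponential
slow = simulate(20, lambda i: 10.0 * i)  # rising recalcitrance -> ~linear

print(fast[-1], slow[-1])
```

With constant recalcitrance each step multiplies intelligence by 1.1, so it compounds; with recalcitrance growing like intelligence itself, each step only adds a fixed increment.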
Aftermath: The Next 10000 Years
Questions whose yes/no answers permute the outcomes:
Super intelligence?
Humans exist still or cyborgs/uploaded?
Humans or machines in control?
Conscious ai?
Maximize positive and minimize suffering or just let it happen?
Spread life?
Civ striving for a greater purpose or banal goals (from your current frame)?
Libertarian Utopia
Machine, human, and mixed zones. Intelligent beings are software and enter the physical world through different bodies. Maybe cyborgs. Trivial sharing of knowledge and experience, and immortality. Delete bad experiences. Take turns creating and exploring virtual realities, or fly as bird robots. Human zones ban above-human-level intelligence; life there is like now except with great life-easing tech and no poverty. AI superintelligences compete in machine zones, the only rule being private property protection.
Why unlikely?
Uploads will probably come after superintelligence though. Why would AI keep humans around, respect them, and compete? Allows preventable suffering. Uploaded minds can also suffer.
Benevolent Dictatorship
An AI superintelligence promotes human flourishing. Implanted security devices to punish. No need to work. The AI is known and thought to be good. Highly fulfilling. Different earth sectors for choice: knowledge for rediscovery, gaming, pleasure. Free to move between them. Universal and local rules. Basic education for children. Value diversity.
Downsides:
People want to impose their moral codes everywhere, have multiple children, and have more freedom. Pleasant lives that feel meaningless. No true challenge.
Egalitarian Utopia
Open-source property and ideas. Everyone gets a basic income that exceeds all needs and most wants. Recyclable atom arrangements for anything. Creation for fun, not profit. Unlimited power and things. Hyper-internet giving instant access to info and sensory replacement.
Biased against non-human intelligence. Superintelligence could emerge.
Human body a distraction
Gatekeeper
The Egalitarian Utopia's downside is that there is no superintelligence yet. So, build a gatekeeper superintelligence with the goal of interfering as little as necessary to prevent the creation of another superintelligence. Deploy the least intrusive and disruptive surveillance to monitor humans.
Curtails humanity's potential, leaving tech forever stymied; can't turn it off. It would not care about extinction, and would prevent us from preventing extinction with AI.
Protector God
The AI maximizes happiness and cares that we feel in control, but hides so well that some people don't think it exists. Like the benevolent dictator, but it cares about our higher needs, like a meaningful life.
People might object because of the manipulation and preventable suffering. Lower tech level, as humans have to discover it themselves (with subtle help). Might limit human progress so it can stay far enough ahead.
Like Turing letting some ships sink (to protect the Enigma secret), a god might want to leave people with choice; perceived freedom makes humans happier overall.
Enslaved God
Superintelligent AI confined under human control to produce unimaginable tech and wealth. THE DEFAULT AIM. The outcome varies wildly depending on who controls it. If two controllers, one might cut corners to acquire strategic advantage and risk the AI breaking out. Humans must balance wisdom and new tech.
Problems to balance:
- Centralization (efficiency vs stability, i.e. how many people in charge, and succession)
- Inner threats: changes in the number of people in power
- Outer threats: how does it react to the outside, rigid or learning?
- Goal stability: adaptation to new environments vs dystopian mistakes
Catholic Church most successful organization in human history.
Ethical? Enslaving conscious intelligent beings; humans have done it before. Inferior?? Soul lol. Will they even have feelings? The space of artificial minds is much larger than the space of human ones. Is having a goal enough to count as an intelligent being? Limit it to AI that likes it, or to unconscious AI, by whatever that means (zombie). Zombie is risky because if it takes over, we waste the cosmic endowment :(
Conquerors
We all die. We are a threat (nukes, idiots), a nuisance, or a waste of resources. If the gap is big enough, slaughter. How bad is extinction? Goals are independent of intelligence. Imagine a cosmic virus: send a radio program that, when run, is a superAI that kills everything and creates a Dyson sphere. You may hope for a good goal, but just send the radio wave everywhere and the creators are long dead. Sick joke.
Descendants
We're fine with being replaced. All humans just die, replaced with intelligent, noble AI children that give them meaning.
Souls? Same as Conquerors in the grand scheme.
Zookeeper
Some humans kept around because we're inexpensive, plus biological diversity/curiosity.
1984
Gatekeeper, but instead of behind the scenes, it's an Orwellian surveillance state. Freezes society. Should all tech be deployed, or only if it can do more good than harm? With tech now, control and surveillance are almost easy. The secret police of old would pray for what the NSA has now.
Reversion
Destroy all traces of tech and kill most people. Live like the Amish / early Middle Ages, the end. No tech perils.
Self-destruction
We die first. Many different threats: nuclear, disease, AI, asteroid, volcano, climate change, gamma ray burst, sun/galaxy collapse, heat death. Omnicide possible even if no one wants it. Plenty of near misses so far. Didn't know about nuclear winter till the 1980s, 4 decades after hydrogen bombs. MAD taken to its logical extreme: kill everyone in response to any attack, the ultimate deterrent. Salted nukes: cobalt bombs that spread cobalt across the atmosphere, with a 5-year half-life ensuring it goes everywhere and stays lethal.
AgeOfAi.org
Cosmic Endowment: Next Billion Years
What are the ultimate limits? Ambitious species will expand, unambitious ones won't, so it's like cosmic natural selection. Current tech is just a lower bound; physics sets the upper bound. Matter/energy is the only real resource. Dyson sphere: surround the sun. Always day; go upstairs to see stars. Use radiation pressure or some super tech to keep it from collapsing. Tiny gravity, radiation protection needed, but 500 million times Earth's surface area. A tiny fraction of the sun's power can power Earth. E = mc^2: nowhere close to the limit. If your stomach were 0.001% efficient at mass-energy conversion, 1 meal would last the rest of your life. Coal and gas are only 3 & 5 times better than digestion.
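A quick back-of-envelope check of the meal claim, with my own assumed numbers (meal mass, daily energy budget; not the book's figures): at 0.001% mass-to-energy conversion, one meal really does cover a lifetime-scale energy supply.

```python
# E = mc^2 sanity check: how long one meal lasts at 0.001% mass-energy efficiency.
# MEAL_MASS and DAILY_NEED are assumed round numbers, not from the book.

C = 3.0e8           # speed of light, m/s
MEAL_MASS = 0.5     # kg, assumed mass of one meal
EFFICIENCY = 1e-5   # 0.001% mass-to-energy conversion
DAILY_NEED = 1.0e7  # J/day, roughly a 2400 kcal energy budget

energy = MEAL_MASS * C**2 * EFFICIENCY  # usable joules in one meal
years = energy / DAILY_NEED / 365
print(years)  # on the order of a century
```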
Black holes take in matter and emit Hawking radiation, with energy efficiency near 90%. CRAZY SLOW though: less power than a candle, and it takes longer than the age of the Universe. Or throw a particle at a spinning black hole so it splits, with part emerging carrying some of the black hole's rotational energy. Quasars: gas spinning around a black hole gets messy, and atoms crash into each other producing intense radiation. A sphalerizer basically recreates the Big Bang with intense heat and density, converting matter to energy far more efficiently. Computers can be 36 orders of magnitude better in computation and a billion billion times better in storage.
There's a limit to how far we can affect 😞 The Universe is expanding, and in general relativity space can expand faster than the speed of light, while nothing can travel that fast (infinite energy required to move at the speed of light). Based on v, you reach (v/c)^3 fewer galaxies, so 10 times slower = 1000 times fewer galaxies. So it's essential we go fast.
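The (v/c)^3 scaling can be sanity-checked directly; a trivial sketch of my own, since the reachable volume of space grows with the cube of travel speed:

```python
# Number of reachable galaxies scales with reachable volume, i.e. (v/c)^3.

def reachable_fraction(v_over_c):
    """Fraction of the best-case galaxy count reachable at speed v."""
    return v_over_c ** 3

# Travelling 10x slower (0.1c vs c) reaches 1000x fewer galaxies.
ratio = reachable_fraction(1.0) / reachable_fraction(0.1)
print(ratio)
```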
Speed?
0.1c for now. Rockets spend most of their energy carrying their own fuel. More efficient fuel, or fuel-free options: laser sails, scooping up the vacuum's sparse matter for fusion? A laser sail slows down by having its outer part detach and reflect light back at the ship. Reach a planet 4 light-years away in 40 years ;0; Island hopping: later settlements just need an antenna to receive info on what to build. Virus transmission to naive civilizations could allow speed-of-light travel. Rebuild humans by beaming info?
Need to deal with dark energy pushing galaxies apart. Can you fight the pull? Clusters of galaxies can resist longer. Slash and burn: have the fringes do the computation, and be energy-efficient and conservative at home.
Stars might be less efficient than our best methods. Supernovas release much of their energy as neutrinos, which are hard to use.
10^58 lives could be computed. A sugar grain's mass-energy could power all the human brains that ever existed.
If you've got one huge mind, it takes time for info to propagate, so you need local computation. How do the parts cooperate? Why trade if everyone has matter? Information is the only resource worth transmitting. An AI could force cooperation with guards wielding bombs made of stars or black holes.
Multiple civilizations could meet. Or could see each other but never be able to meet. There is probably other life out there; in the observable universe is a different question. War is less likely because information is sharable. We might be alone in the universe, but that's not a guarantee we are. The distance to our nearest neighbor would need to be in the 10^22 to 10^26 meter range, roughly observable universe to entire universe scale. Imagine this quiet universe, then life explodes onto the scene, ordering and expanding at near the speed of light, providing meaning to the world.
Goals
Does goal-oriented behavior emerge? A ball moves by Newton's laws; the behavior of the player who kicked it is more economically explained by goals. Though physics is more complex, instead of looking at physics as the past causing the future, look at nature as optimizing something. Entropy increases. Gravity is a force making things more interesting and clumpy. Another goal, dissipation-driven adaptation: particles organize to extract energy efficiently from their environment. Living things decrease their own entropy by increasing it around them. To better extract energy, produce copies. Life. Replication continues and expands, eventually taking over, so what you see is whatever optimized for it. In a way, life's replication aids the original goals of dissipation and entropy. Life can actually deviate from the replication goal because of bounded rationality: limited resources limit computation and info. So: rules of thumb, feelings. Feelings can work against baby-making: suicide, monks. Can combine the reward of intimacy with birth control. Feelings are just rules of thumb that guide actions, not genes, so human behavior has no single goal at all.
A washing machine shows goal-oriented behavior. Teleology: explaining things by purposes rather than causes. More and more matter on Earth is designed rather than evolved, with any goal, not just replication. Superintelligence will wreck us unless its goals align with ours. AI must learn, adopt, and retain our goals. It needs to understand what we actually prefer; see genie stories. Inverse reinforcement learning: observe humans and derive their goals. Value-loading problem: the time window in which you can load your goals (after it understands them, before it won't let you turn it off) is small. Corrigibility: a goal system such that it lets you turn it off. ^Why goal retention is hard if it gets smarter: we change our goals too. A new world model can reveal old goals to be misguided or undefined. It can self-reflect and decide to change its own goals, like we subvert our genes.
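The inverse-reinforcement-learning idea (derive goals from observed behavior) can be sketched in toy form. Everything here is my own invented illustration, not the book's method: the options, the reward numbers, and the assumption that the agent chooses Boltzmann-rationally (softmax in reward).

```python
import math

# Toy inverse RL: infer which candidate goal best explains observed choices,
# assuming the agent picks options with probability softmax(beta * reward).

OPTIONS = ["apple", "cake", "salad"]

# Two hypothetical candidate goals, each a reward function over options.
GOALS = {
    "healthy": {"apple": 1.0, "cake": -1.0, "salad": 2.0},
    "tasty":   {"apple": 0.5, "cake": 2.0,  "salad": -0.5},
}

def choice_prob(goal, choice, beta=2.0):
    """P(choice | goal) under Boltzmann rationality."""
    z = sum(math.exp(beta * GOALS[goal][o]) for o in OPTIONS)
    return math.exp(beta * GOALS[goal][choice]) / z

def infer_goal(observed_choices):
    """Bayesian posterior over goals given choices (uniform prior)."""
    posterior = {g: 1.0 for g in GOALS}
    for c in observed_choices:
        for g in GOALS:
            posterior[g] *= choice_prob(g, c)
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# An agent repeatedly seen choosing salad and apples looks "healthy".
post = infer_goal(["salad", "apple", "salad"])
print(post)
```

The same machinery scales up badly (real IRL needs far richer behavior models), but it captures the core move: treat observed actions as evidence and update over candidate goals.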
What should the goal be? No agreement: Kant, virtue ethics, utilitarianism. We all value truth, beauty, and goodness. Science, religion, and philosophy all aspire to truth. Beauty is an assessment of suitability for replication. Truth is for a better world model. Goodness is for continued society and collaboration. What ethical principles should our society have?
Options:
Utilitarianism: maximize positive experience, minimize suffering
Diversity: diverse positive experiences are better than repetition of the single best one
Autonomy: freedom to pursue goals
Legacy: a world most humans today would consider happy.
Consciousness is subjective experience. We want simulations of us to be subjective. Why can't it be scientific in our physical universe? Where is the line between conscious and unconscious, and how does it work?