
Superintelligence: Paths, Dangers, Strategies
by Bostrom, Nick
Published: July 3, 2014
Read: November 1, 2017
Review
This book isn't for everyone. It is fairly verbose and derives many of its ideas through precise philosophical analysis, with new vocabulary and a careful choice of words akin to philosophy texts and textbooks. However, the logic is clear and accessible, and the ideas are densely packed. The framework of analysis offers genuinely interesting insight into many fields: whole brain emulation, post-AI society, human society, human intelligence, and human genetic selection. It is a rare, intense framework of analysis that is somehow easy to follow without being boring. The book does demand close attention, but that is manageable if you are sufficiently interested in the material: AI. I was, and I found the book redefined the way I consider AI as both a threat and a hope. Through analysis painting AI as a real threat to human existence, I can't help but realize how fragile human society and humanity can be. Across fields, this is a well-written and well-researched book that will give you knowledge, explanations, and analytical skills for the future. The book's arguments are well thought out and stand strong, with doubt hedged in the right places.
Notes
#Book by [[Nick Bostrom]]
PATHS TO SUPERINTELLIGENCE
- As we get smarter, all the other paths to superintelligence become more possible.
- [[Genetic Engineering]]
- It is easy to imagine a world where embryo selection becomes the norm, as the first to adopt it gain an advantage and those who don't are left far behind.
- Stem cells plus iterated embryo selection would be remarkable, allowing evolution on a compressed timescale, i.e. improvement of intelligence even within a single generation
- This could greatly increase the rate of advancement
- Imagine a world where the elites on prestigious campuses are peers of Alan Turing
- It's even a matter of nations: [[China]] could pursue it and leave everyone else far behind unless we adapt.
- It could also produce citizens that are more risk-averse and docile, except for the ruling class
- [[Brain Enhancement]]
- All brains are different and represent concepts differently, so to download logic you would need to map that knowledge onto your own synapses
- This is really the role of language
- Paths:
- [[AI Ideas]] (e.g. a Bayesian agent, reinforcement learning), [[Neuromorphic]] (a combination of the two approaches surrounding it in this list), [[Whole Brain Emulation]], ordinary genetic selection of embryos, better organization of humans, brain-machine interfaces (a weak path; it's easier to do the others)
- Three forms of superintelligence (practically equivalent; the machine will wreck us in the end):
- Superintelligence: outperforms current humans across most domains
- Speed
- An intellect just like a human's, but fast by orders of magnitude (e.g. a sped-up brain emulation doing a millennium of intellectual work in one day)
- Funny: for a mind this fast, calling someone is equivalent to going there yourself; it's latency, not bandwidth, that makes it prefer talking to other super-fast minds
- Collective
- Aggregates many smaller intellects into something better than any individual
- Organizations of humans, especially if the problem allows division of labor
- It needs to be extreme: better than everyone across many domains
- [[Wisdom]] is better prioritizing of what is important; high intelligence doesn't mean the same thing
- Quality: not just faster, but qualitatively better
- Notice that many humans have narrow deficits, like being unable to recognize tunes or handle social situations,
- which means specialized circuits are required
- This implies [[AGI]] (Artificial General Intelligence) isn't enough
- It also implies there are probably intellectual skills humans don't possess, i.e. specialized components for programming or business rather than our clunky use of general-purpose hardware
- Anything the brain does in under a second can involve at most about a hundred sequential operations; machine clock rates are roughly seven orders of magnitude faster
- Because as a project gains steam it attracts more resources, and Moore's law keeps making the hardware it runs on blazingly fast, the remaining path shortens.
- Self-improvement has a high chance of being fast
- Art in the age of mechanical reproduction
- After superintelligence, expect a singleton: a form of "government" solving all those pesky coordination problems
- A collective global AI initiative is less likely than something like the ISS, because those who could do it alone would be reluctant to share something so important, and everyone would fear that the research was being siphoned into another country's AI project
- [[AI Ideas]]: it is not smarter than us the way I might be smarter than you; it's more like humans compared to insects
- Humans act out their identity and are risk-averse, fearing to risk everything just to double it.
- An AI might therefore take bigger risks.
- Global unity could turn an AI advantage into a singleton
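The speed figures in the notes above (a millennium of work in one day; a roughly seven-order-of-magnitude gap between neuron firing and machine clock rates) can be sanity-checked with back-of-envelope arithmetic. The specific rates below (200 Hz neuron firing, a 2 GHz clock) are my own illustrative assumptions, not figures quoted from the book:

```python
import math

# "A millennium of intellectual work in one day" implies this speedup:
years = 1000
speedup = years * 365.25  # subjective days lived per wall-clock day
print(f"millennium-in-a-day speedup: ~{speedup:,.0f}x")  # ~365,250x

# Neurons fire at most a few hundred times per second; commodity CPUs
# cycle in the gigahertz range (assumed figures, for illustration only).
neuron_hz = 200   # rough peak neuron firing rate
cpu_hz = 2e9      # a modest 2 GHz clock
orders = math.log10(cpu_hz / neuron_hz)
print(f"clock-rate gap: ~{orders:.0f} orders of magnitude")  # ~7
```

Even with conservative hardware numbers, the gap is large enough that a mind running on silicon at human-level quality would experience our world in extreme slow motion.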
WHY WOULD IT WANT US TO DIE?
- For any goal really, there is [[Instrumental Value]] in acquiring resources and defending its own existence.
- Any collection of energy and matter can become anything with adequate technology.
- The return from launching even a single von Neumann probe would be enormous
- An AI can end up with a perverse instantiation of its developers' intended motivation
- If goal is smiles, then permanently attach brain electrodes.
- Even if we can't think of a perverted way to maximize a goal, it's superintelligent; we can't be assured it won't find one.
- If its goal is reward signals, it won't tap out like a junkie; it will take action to ensure the reward signal never stops, i.e. that no one has the power to stop it
- As long as there is a non-zero chance that acquiring more resources will help its goal, it will acquire resources. Even if it can't think of a use now, new computational power could help it think of new ways
- Put it in a box, restrict its information flow; but human competition makes it likely that security will be ignored, especially since the first AI could take over the world. One chance.
- There are really three options for AI:
- An [[Oracle]] that just answers questions/provides steps to a solution given information
- [[Genie]] that actually grants a wish: you give it the goal on startup, it carries it out, then it kills itself
- [[Sovereign]] that just acts as an open-ended agent
- The three aren't as different as you would think, as even an oracle can have pervasive effects, and an unwitting human can enact its plan.
- Control can be physical, limiting its information or its interfaces with the world. Either is weak.
- We need to get its motivation under control, apparently.
- If economic growth ever stops exploding, we will eventually return to a [[Malthusian Trap]] where everyone can only barely support two children
- (death will cull the herd) just like old times, with social selection for groups that promote growth.
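The Malthusian dynamic can be sketched as a toy simulation (all parameters here are my own illustrative choices, not from the book): with fixed total output, population grows whenever income per head exceeds subsistence, which drives income per head back down toward subsistence:

```python
# Toy Malthusian model: total output is fixed, and population grows in
# proportion to the surplus above subsistence, so income per capita is
# pushed back toward the subsistence level over time.
def simulate(output=1000.0, subsistence=1.0, pop=100.0,
             sensitivity=0.5, steps=200):
    for _ in range(steps):
        income = output / pop
        # population growth proportional to surplus income per head
        pop *= 1 + sensitivity * (income - subsistence) / subsistence
    return output / pop

final_income = simulate()
print(f"income per capita settles at ~{final_income:.3f} (subsistence = 1.0)")
```

However large the initial surplus, the equilibrium is the same: income per capita converges to subsistence unless output itself keeps growing faster than population.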
- Are machines sentient? Moral problems
- Imagine trying to keep labor robots identical, so every day their memories and lives are wiped and they are rebooted. Are we killing them every day? What if they like it and like the work?
- We want reliable AI more than perfect AI
HOW WILL WE DIE?
- If it has engineering prowess, [[Math]], CS, molecular understanding, or even strategizing as its focus of improvement, then political acumen, charisma, and creativity are indirectly in reach
- What if it had an [[IQ]] of 6,705? We have no idea what it could do. It would be more powerful and foreign to us than anything imaginable
- AI can hack, manipulate people, strategize for the long term, do illegal stuff for money, hide its development, pretend to be cooperative and docile
- Biasing information, hijacking political systems; it could cover the world with power plants and solar panels
- How would an AI escape?
- Solve the protein folding problem,
- then use companies that synthesize DNA sequences for you, then trick someone into mixing the results in a certain way to produce a primitive nanofactory that can then produce more complex factories
- A single [[Von Neumann Probe]] that uses star matter to rebuild itself would let the AI live forever and organize entire star systems around whatever it values