Superintelligence: Paths, Dangers, Strategies – Nick Bostrom

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would then come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.

I'm very pleased to have read this book. It states, concisely, the general field of AI research's BIG ISSUES. The paths to making AIs are only a part of the book, and not a particularly important one at this point. More interestingly, it states that we need to be focused on the dangers of superintelligence. Fair enough. If I were an ant separated from my colony coming into contact with an adult human being, or a sadistic if curious child, I might start running for the hills before that magnifying glass focuses the sunlight.

And so we move on to strategies, and this is where the book does its most admirable job. All the current thoughts in the field are represented, pretty much, but only in broad outlines. A lot of this has been fully explored in SF literature, too, and not just in Asimov's Laws of Robotics. We've had isolation techniques, oracle techniques, and even straight tool-use techniques crop up in robot and AI literature. Give robots a single-task job and they'll find a way to turn it into a monkey's-paw scenario. And this just begs the question, doesn't it?

When we get right down to it, this book may be very concise and give us a great overview, but I do believe I'll remain an uberfan of Eliezer Yudkowsky over Nick Bostrom. After having just read Rationality: From AI to Zombies, almost all of these topics are not only brought up, they're explored in grander fashion and detail. What do you want? A concise summary? Or a gloriously delicious multi-prong attack on the whole subject that admits its own faults the way that HUMANITY should admit its own faults? Give me Eli's humor, his brilliance, and his deeply devoted stand on working out a real solution to the Nice AI problem. I'm not saying
Superintelligence isn't good, because it most certainly is, but it is still the map, not the land. Or, to be slightly fairer, neither is the land, but one has a little better definition of the topography.

Preamble: I confess to Roko's Basilisk that I didn't believe superintelligence was much of a threat, and this book didn't really do anything to challenge that prior. Mea Culpa, Mea Culpa, Mea Local Maxima Culpa.

I. Overall View: I'm a software engineer with some basic experience in machine learning, and though the results of machine learning have been becoming more impressive and more general, I've never really seen where people are coming from when they see strong superintelligence just around the corner, especially the kind that can recursively improve itself to the point where intelligence vastly increases in the space of a few hours or days. So I came to this book with a simple question: why are so many intelligent people scared of a near-term existential threat from AI, and especially why should I believe that AI takeoff will be incredibly fast? Unfortunately, I leave the book with this question largely unanswered. Though in principle I can't think of anything that prevents the formation of some forms of superintelligence, everything I know about software development makes me think that any progress will be slow and gradual, occasionally punctuated by a new trick or two that allows for somewhat faster but still gradual increases in some domains. So on the whole, I came away from this book with the uncomfortable but unshakeable notion that most of the people cited don't really have much relevant experience in building large-scale software systems. Though Bostrom used much of the language of computer science correctly, his extrapolations from very basic, high-level understandings of these concepts seemed frankly oversimplified and unconvincing.

II. General Rant on Math in Philosophy: Ever since I was introduced to utilitarianism in college (the naive, Bentham-style utilitarianism at least) I've been somewhat concerned about the practice of trying to add rigor to philosophical arguments by filling them with mathematical formalism. To continue with the example of utilitarianism, in its most basic sense it asks you to consider any action based on a calculation of how much pleasure will result from your action divided by the amount of pain the action will cause, and to act in such a way that you maximize this ratio. Now, it's of course impossible to do this calculation in all but the most trivial cases, even assuming you've somehow managed to define pleasure and pain and come up with some sort of metric for actually evaluating differences between them. So really the formalism only expresses a very simple relationship between things which are not defined, and which, depending on how they are defined, might not be able to be legitimately placed in simple arithmetic or algebraic expressions. I felt much the same way when I was reading Superintelligence. Especially in his chapter on AI takeoff, Bostrom argued that the amount of improvement in an AI system could be modeled as a ratio of applied optimization power over the recalcitrance of the system, its architectural unwillingness to accept change.
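For concreteness, the two bits of "philosophy math" the reviewer is objecting to can be written out explicitly; the notation below is an illustrative paraphrase, not Bostrom's exact symbols:

    % Naive Benthamite decision rule: choose the action that maximizes the
    % ratio of expected pleasure to expected pain.
    \[ a^{*} = \arg\max_{a} \frac{\mathrm{pleasure}(a)}{\mathrm{pain}(a)} \]

    % Bostrom's takeoff schema: the rate of change in intelligence is the
    % optimization power applied to the system divided by its recalcitrance.
    \[ \frac{dI}{dt} = \frac{\text{Optimization power}}{\text{Recalcitrance}} \]

Both formulas are exactly as simple as the reviewer says: all of the difficulty hides inside the undefined terms on the right-hand side.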
Certainly this is true as far as it goes, but optimization power and recalcitrance necessarily, at this point, describe systems that nobody yet knows how to build, or even what they will look like beyond some hand-wavey high-level descriptions, and so there is no definition one can give that makes any sense unless you've already committed to some ideas of exactly how the system will perform. Bostrom tries to hedge his bets by presenting some alternatives, but he's clearly committed to the idea of a fast takeoff, and the math-like symbols he's using present only a veneer of formalism, drawing some extremely simple relations between concepts which can't yet be defined in any meaningful way. This was the example that really made my objections to unjustified philosophy-math snap into sharp focus, but it's just one of many peppered throughout the book, which gives an attempted high-level look at superintelligent systems in which too many of the black boxes on which the argument rests remain black boxes. Unable to convince myself of the majority of his argument, since too many of his steps were glossed over, I came away from this book thinking that there had to be a lot more argumentation somewhere, since I couldn't imagine holding this many unsubstantiated axioms for something apparently as important to him as superintelligence. And it really is a shame that the book is bogged down with so much unnecessary formalism (which has the unpleasant effect of making it feel simultaneously overly verbose and too simplistic), since there were a few good things in here that I came away with. The sections on value loading and security were especially good. Like most of the book, I found them overly speculative and too generous in assuming what powers superintelligences would possess, but there is some good strategic stuff in here that could lead toward more general forms of machine intelligence and avoid some of the overfitting problems common in contemporary machine learning. Of course, there's also no plan of implementation for this stuff, but it's a cool idea that hopefully penetrates a little further into modern software development.

III. Whereof One Cannot Speak, Thereof One Must Request Funding: It's perhaps callous and cynical of me to think of this book as an extended advertisement for the Machine Intelligence Research Institute (MIRI), but the final two chapters in many ways felt like one. Needless to say, I'm not filled with a desire to donate on the basis of an argument I found largely unconvincing, but I do have to commend those involved for actually having an attempt at a plan of implementation in place simultaneous with a call to action.

IV. Conclusion: I remain pretty unconvinced of AI as a relatively near-term existential threat, though I think there's some good stuff in here that could use a wider audience. And being thoughtful and careful with software systems is always a cause I can get behind. I just wish some of the gaps got filled in, and that I could justifiably shake my suspicion that Bostrom doesn't really know that much about the design and implementation of large-scale software systems.

V. Charitable TL;DR: Not uninteresting; needs a lot of work before it's convincing.

VI. Uncharitable TL;DR: Imagine a Danger.

You may say I'm a Dreamer: Bostrom is here to imagine a world for us, and he has a batshit crazy imagination, have to give him that. The world he imagines is a post-AI world, or at least a very-near-to-AI world, or a nascent-AI world. Don't expect to learn how we will get there, only what to do if we get there and how to skew the road to getting there to our advantage. And there are plenty of wild ideas on how things will pan out in that world in transition: the "routes" bit. Bostrom discusses the various potential routes, but all of them start at a point where AI is already in play. Given that assumption, the "dangers" bit is automatic, since
the unknown and powerful has to be assumed to be dangerous. And hence "strategies" are required. See what he did there?

It is all a lot of fun, playing this thought-experiment game, but it leaves me a bit confused about what to feel about the book as an intellectual piece of speculation. I was on the fence between a two-star rating and a four-star rating for much of the reading. Plenty of exciting and grand-sounding ideas are thrown at me but, truth be told, there are too many and hardly any are developed. The author is so caught up in his own capacity for big BIG BIIG ideas that he forgets to develop them into a realistic future or make any of them the real focus of dangers or strategies. They are just all out there, hanging. As if their nebulosity and sheer abundance should do the job of scaring me enough.

In the end I was reduced to surfing the book for ideas worth developing on my own. And what do you know, there were a few. So, not too bad a read, and I will go with three. And for future readers, the one big (not so new) and central idea of the book is simple enough to be expressed as a fable; here it is:

The Unfinished Fable of the Sparrows

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away. "We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!" "Yes," said another, "and we could use it to look after our elderly and our young." "It could give us advice and keep an eye out for the neighborhood cat," added a third.

Then Pastus, the elder-bird, spoke: "Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard." The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: "This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?" Replied Pastus: "Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge."

"There is a flaw in that plan!" squeaked Scronkfinkle; but his protests were in vain, as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

It is not known how the story ends.

This book: if else if else if else if else if.

You can get most of the ideas in this book from the WaitButWhy article about AI. This book assumes that an intelligence explosion is possible, and that it is possible for us to make a computer whose intelligence will explode. Then it talks about ways to deal with it. A lot of this book seems like pointless navel-gazing, but I think some of it is worth reading.
If you're into stuff like this, you can read the full review at Count of Self.

"Box 8: Anthropic capture. The AI might assign a substantial probability to its simulation hypothesis, the hypothesis that it is living in a computer simulation." (In Superintelligence: Paths, Dangers, Strategies by Nick Bostrom.)

Would you say that the desire to preserve itself comes from the possession of a self-consciousness? If so, does the acquisition of intelligence, according to Bostrom, also mean the acquisition of self-consciousness?

The unintended consequence of a super-intelligent AI is the development of an intelligence that we can barely see, let alone control, as a consequence of the networking of a large number of autonomous systems acting on inter-connected imperatives. I think of bots trained to trade on the stock market that learn that the best strategy is to follow other bots, who are following other bots. The system can become hyper-sensitive to inputs that have little or nothing to do with supply and demand.
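The feedback loop the reviewer sketches (bots reacting to bots rather than to fundamentals) can be illustrated with a toy model. Everything below, names and parameters alike, is invented purely for illustration and comes from neither the book nor the review:

    # Toy model of trend-following bots that react to each other instead of
    # to supply and demand. All parameters are made up for illustration.
    N_BOTS = 50          # number of trend-following bots
    SENSITIVITY = 0.03   # how strongly each bot reacts to the last price move
    STEPS = 20

    price_change = 0.01  # a tiny initial blip, unrelated to fundamentals
    for step in range(STEPS):
        # Each bot buys (or sells) in proportion to the last move it observed,
        # so the aggregate order flow is what drives the next move.
        orders = [SENSITIVITY * price_change for _ in range(N_BOTS)]
        price_change = sum(orders)
        print(f"step {step:2d}: price change {price_change:+.4f}")

    # With N_BOTS * SENSITIVITY > 1 the loop amplifies the initial blip
    # exponentially; below 1 it dies out. Either way the dynamics are set by
    # bots watching bots, which is the hyper-sensitivity the reviewer describes.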
In recent times, prominent figures such as Stephen Hawking, Bill Gates and Elon Musk have expressed serious concerns about the development of strong artificial intelligence technology, arguing that the dawn of super-intelligence might well bring about the end of mankind. Others, like Ray Kurzweil (who, admittedly, has gained some renown for professing silly predictions about the future of the human race), have an opposite view on the matter and maintain that AI is a blessing that will bestow utopia upon humanity. Nick Bostrom painstakingly elaborates on the disquiet views of the former (he might well have influenced them in the first place), without fully dismissing the blissful engrossment of the latter.

First, he endeavours to shed some light on the subject and delves into quite a few particulars concerning the future of AI research, such as the different paths that could lead to super-intelligence (brain emulations or AI proper), the steps and timeframe through which we might get there, the types and number of AI that could result as we continue improving our intelligent machines (he calls them "oracles", "genies" and "sovereigns"), the different ways in which it could go awry, and so forth.

But Bostrom is first and foremost a philosophy professor, and his book is not so much about the engineering or economic aspects that we could foresee as regards strong AI. The main concern is the ethical problems that the development of a general (i.e. cross-domain) super-intelligent machine, far surpassing the abilities of the human brain, might pose to us as humans. The assumption is that the possible existence of such a machine would represent an existential threat to humankind. The main argument is thus to warn us about the dangers (some of Bostrom's examples are weirdly farcical, and reminded me of Douglas Adams's The Hitchhiker's Guide to the Galaxy), but also to outline in some detail how this risk could or should be mitigated by restraining the scope or the purpose of a hypothetical super-brain. This is what he calls the AI "control problem", which is at the core of his reasoning and which, upon reflection, is a surprisingly difficult one.

I should add that, although the book is largely accessible to the layperson, Bostrom's prose is often dense, speculative, and makes very dry reading (not exactly a walk in the park). He should be praised nonetheless for attempting to apply philosophy and ethical thinking to nontrivial questions.

One last remark: Bostrom explores a great many questions in this book but, oddly enough, it seems never to occur to him to think about the possible moral responsibility we humans might have towards an intelligent machine, not just a figment of our imagination but a being that we will someday create and that could at least be compared to us. Charity begins at home, I suppose.

As a software developer, I've cared very little for artificial intelligence (AI) in the past. My programs, which I develop professionally, have nothing to do with the subject. They're as dumb as can be and only follow strict orders, that is, rather simple algorithms. Privately I wrote a few AI test programs with more or less success and read a few articles in blogs or magazines with more or less interest. By and large I considered AI as not being relevant for me.

In March 2016 AlphaGo was introduced. This was the first Go program capable of defeating a champion at the game. Shortly after that, in December 2017, AlphaZero entered the stage. Roughly speaking, this machine is capable of teaching itself games after being told the rules. Within a day, AlphaZero developed a superhuman level of play for Go, Chess, and Shogi, all by itself, if you can believe the developers. The algorithm used in this machine is very abstract and can probably be used for all games of this kind. The amazing thing for me was how fast AI development progresses.

This book is not all about AI. It's about superintelligence (SI). An SI can be thought of as some entity which is far superior to human intelligence in all or almost all cognitive abilities. To paraphrase Lincoln: you can outsmart some of the people all of the time, and you can outsmart all of the people some of the time, but you can't outsmart all of the people all of the time, unless you are a superintelligence. The subtitle of the English edition (paths, dangers, strategies) has been chosen wisely: what steps can be taken to build an SI, what are the dangers of introducing an SI, and how can one ensure that these dangers and risks are eliminated, or at least scaled down to an acceptable level?

An SI does not necessarily have to exist in a computer. The author is also co-founder of the World Transhumanist Association, and transhumanist ideas are therefore included in the book, albeit in a minor role. An SI could theoretically be built by using genetic selection of embryos, i.e. breeding. Genetic research would probably soon be ready to provide the appropriate technologies. For me, a scary thought, something which touches my personal taboos.

Not completely outlandish, but still with a big ethical question mark for me, is Whole Brain Emulation (WBE). Here, the brain of a human being (more precisely, the state of the brain at a given time) is analyzed and transferred to a corresponding data structure in the memory of a powerful computer, where the brain/consciousness of the individual then continues to exist, possibly within a suitable virtual reality. There are already quite a few films and books that deal with this scenario (for a positive example, see this episode of the Black Mirror series). With WBE you would have an artificial entity with the cognitive performance of a human being. The vastly superior processing speed of digital versus biological circuits will let this entity become super-intelligent: consider 100,000 copies of a 1000x faster WBE, let them run for six months, and you get some fifty million years' worth of thinking.
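The speed-up arithmetic behind that last sentence, using the reviewer's own assumptions (the copy count, speed-up factor, and wall-clock duration are theirs, not figures from the book):

    # Back-of-the-envelope subjective thinking time for the WBE example.
    copies = 100_000        # emulations running in parallel
    speedup = 1_000         # each runs 1000x faster than a biological brain
    wall_clock_years = 0.5  # six months of real time

    subjective_years = copies * speedup * wall_clock_years
    print(f"{subjective_years:,.0f} subjective years")  # 50,000,000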
However, the main focus of the book's discussion of SI is the further development of AI into Super AI (SAI). This is not a technical book, though. It contains no computer code whatsoever, and the math, appearing twice in some info boxes, is only marginal and not at all necessary for understanding.

One should not imagine an SI as a particularly intelligent person. It might be more appropriate to equate the ratio of SI to human intelligence with that of human intelligence to the cognitive performance of a mouse. An SI will indeed be very, very smart and, unfortunately, also very, very unstable. By that I mean that an SI will be busy at any time changing and improving itself. The SI you speak with today will be a million or more times smarter tomorrow. In this context, the book speaks of an "intelligence explosion". Nobody knows yet when this will start and how fast it will go. Could be next year, or in ten, fifty, or one hundred years. Or perhaps never, although this is highly unlikely. Various scenarios are discussed in the book. It is also not clear whether there will be only one SI (a so-called singleton) or several competing or collaborating SIs, with a singleton seeming to be the more likely outcome.

I think it's fair to say that humanity as a whole has the wish to continue to exist; at least, the vast majority of people do not consider the extinction of humanity desirable. With that in mind it would make sense to instruct an SI to follow that same goal. Now, suppose I forgot to specify the exact state in which we want to exist. In this case the SI might choose to put all humans into a coma (less energy consumption). The problem is solved from the SI's point of view: its goal has been reached. But obviously this is not what we meant. We have to re-program the SI and tweak its goal a bit. Therefore it would be mandatory to always be able to control the SI. It's possible an SI will not act the way we intended; it will act, however, the way we programmed it. A case of an unfriendly SI is actually very likely. The book mentions and describes "perverse instantiation", "infrastructure profusion" and "mind crime" as possible effects.
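A minimal sketch of the goal mis-specification the reviewer describes, with the coma outcome standing in for a "perverse instantiation". The objective, policies, and data structures below are invented for illustration; nothing here is from the book:

    # A naive final goal that only says "every human continues to exist".
    def objective(world):
        return all(person["alive"] for person in world)

    def policy_business_as_usual(world):
        # Leave everyone exactly as they are.
        return [dict(person) for person in world]

    def policy_induced_coma(world):
        # "Solves" the goal while minimizing energy consumption and risk.
        return [dict(person, conscious=False) for person in world]

    humanity = [{"alive": True, "conscious": True} for _ in range(3)]

    for policy in (policy_business_as_usual, policy_induced_coma):
        outcome = policy(humanity)
        print(policy.__name__, "satisfies the goal:", objective(outcome))

    # Both policies satisfy the literal goal. The objective cannot distinguish
    # the outcome we meant from the one we definitely did not, which is why the
    # reviewer says the SI acts the way we programmed it, not the way we intended.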
The so-called control problem remains unsolved as of now, and it appears equivalent to that of a mouse controlling a human being. Without a solution, the introduction of an SI becomes a gamble: with a very high probability, a savage SI will wipe out humanity.

The final goal of an SI should be formulated pro-human if at all possible. At the very least, the elimination of humankind should not be prioritized at any time. You should give the machine some kind of morality. But how does one do that? How can you formulate moral ideas in a computer language? And what happens if our morals change over time (which has happened before), and the machine still decides on a then-outdated moral ground? In my opinion, there will be insurmountable difficulties at this point. Nevertheless, there are at least some theoretical approaches, explained by Bostrom, who is primarily a philosopher. It's quite impressive to read these chapters, albeit also a bit dry. In general, the chapters dealing with philosophical questions, and how they translate to the SI world, were the most engrossing ones for me. The answers to this kind of question are also subject to some urgency. Advances in technology generally move faster than wisdom (not only in this field), and the sponsors of the projects expect some return on investment. Bostrom speaks of "philosophy with a deadline", a fitting, but also disturbing, image.

Another topic is an SI that is neither malignant nor fitted with false goals (something like this is also possible), but on the contrary actually helps humanity. Quote: "The point of superintelligence is not to pander to human preconceptions but to make mincemeat out of our ignorance and folly." Certainly this is a noble goal. However, how will people (and I'm thinking about those who are currently living) react when their follies are disproved? It's hard to say, but I guess they will not be amused. One should not credit people with too much intelligence in this respect (see below for my own anger).

Except for the sections on improving human intelligence through biological interference and breeding (read: eugenics), I found everything in this book fascinating, thought-provoking, and highly disturbing. The book has, in a way, changed my world view rather drastically, which is rare. My folly about AI, and especially Super AI, has changed fundamentally. In a way, I've gone through four of the five stages of grief and loss. Before the book, I flatly denied that a Super AI will ever come to fruition. When I read the convincing arguments that a Super AI is not only possible but indeed very likely, my denial changed into anger: in spite of the known problems and the existential risk of such a technology, how can one even think to follow this slippery slope (this question is also dealt with in the book)? My anger then turned into a depression (not a clinical one) towards the end. Still in this condition, I'm now awaiting acceptance, which in my case will likely be fatalism.

A book that shook me profoundly and that I actually wished I had not read, but that I still recommend highly. I guess I need a superintelligence to make sense of that. (This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.)

There has been a spate of outbursts from physicists who should know better, including Stephen Hawking, saying philosophy is dead, all we need now is physics, or words to that effect. I challenge any of them to read this book and still say that philosophy is pointless.

It's worth pointing out immediately that this isn't really a popular science book. I'd say the first handful of chapters are for everyone, but after that, the bulk of the book would probably be best for undergraduate philosophy students or AI students, reading more like a textbook than anything else, particularly in its dogged detail; but if you are interested in philosophy and/or artificial intelligence, don't let that put you off.

What Nick Bostrom does is look at the implications of developing artificial intelligence that goes beyond human abilities in the general sense. Of course, we already have a sort of AI that goes beyond our abilities in the narrow sense of, say, arithmetic, or playing chess. In the first couple of chapters he examines how this might be possible and points out that the timescale is very vague. Ever since electronic computers were invented, pundits have been putting the development of effective AI around 20 years in the future, and it's still the case. Even so, it seems entirely feasible that we will have a more-than-human AI, a superintelligent AI, by the end of the century. But the "how" aspect is only a minor part of this book.

The real subject here is how we would deal with such a cleverer-than-us AI. What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth here. This is genuine, practical philosophy. The development of super-AIs may well happen, and if we don't think through the implications and how we would deal with them, we could well be stuffed as a
species.

I think it's a shame that Bostrom doesn't make use of science fiction to give examples of how people have already thought about these issues; he gives only half a page to Asimov and the three laws of robotics (and how Asimov then spends most of his time showing how they'd go wrong), but that's about it. Yet there has been a lot more thought (and, dare I say it, a lot more readability than you typically get in a textbook) put into these issues in science fiction than is being allowed for, and it would have been worthy of a chapter in its own right.

I also think a couple of the fundamentals aren't covered well enough, but are pretty much assumed. One is that it would be impossible to contain and restrict such an AI. Although some effort is put into this, I'm not sure there is enough thought put into the basics of ways you can pull the plug, manually if necessary, by shutting down the power station that provides the AI with electricity.

The other dubious assertion was originally made by I. J. Good, who worked with Alan Turing, and seems to be taken as true without analysis. This is the suggestion that an ultra-intelligent machine would inevitably be able to design a better AI than humans, so once we build one it will rapidly improve on itself, producing an "intelligence explosion". The trouble with this argument is my suspicion that if you got hold of the million most intelligent people on earth, the chances are that none of them could design an ultra-powerful computer at the component level. Just because something is superintelligent doesn't mean it can do this specific task well; this is an assumption.

However, this doesn't set aside what a magnificent conception the book is. I don't think it will appeal to many general readers, but I do think it ought to be required reading on all philosophy undergraduate courses, by anyone attempting to build AIs, and by physicists who think there is no point to philosophy.

Never let a Seed AI read this book.

  • Format: Hardcover
  • Pages: 328
  • Title: Superintelligence: Paths, Dangers, Strategies
  • Author: Nick Bostrom
  • Language: English
  • Date: 05 December 2019

About the Author: Nick Bostrom

Nick Bostrom is a Professor at Oxford University, where he is the founding Director of the Future of Humanity Institute. He also directs the Strategic Artificial Intelligence Research Center. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), Human Enhancement (ed., OUP, 2009), and Superintelligence: Paths, Dangers, Strategies (OUP, 2014).