http://www.winterintelligence.org/ Winter Intelligence 2012 Oxford University
Video thanks to Adam Ford, http://www.youtube.com/user/TheRationalFuture
Extended Abstract: One possible route to creating Artificial General Intelligence (AGI) is to create detailed models of human brains that can substitute for those brains at various tasks, i.e. human Whole Brain Emulation (WBE) (Sandberg and Bostrom, 2008). If computation were abundant, WBEs could undergo an extremely rapid population explosion, operate at higher speeds than humans, and create even more capable successors in an "intelligence explosion" (Good, 1965; Hanson, 1994). It would therefore be reassuring if the first widely deployed emulations were mentally stable, loyal to the existing order or human population, and humane in their moral sentiments.
However, incremental progress may make it possible to create productive but unstable or inhuman emulations first (Yudkowsky, 2008). For instance, early emulations might loosely resemble brain-damaged amnesiacs who have been gradually altered (often in novel ways) to improve performance, rather than digital copies of human individuals selected for their stability, loyalties, and ethics. A business or state that waited for more trustworthy emulations would incur delay and risk forfeiting an enormous competitive advantage. The longer the necessary delay, the more likely it is that untrustworthy systems would be widely deployed.
Could low-fidelity brain emulations, intelligent but untrusted, be used to greatly reduce that delay? Trustworthy high-quality emulations could do anything a human development team could do, but a hundred times faster given a hundredfold hardware improvement (perhaps with bottlenecks from the runtime of other software, etc.). Untrusted systems, by contrast, would need their work supervised by humans to prevent escape or sabotage of the project. This supervision would restrict the emulations' productivity and introduce bottlenecks waiting on human input, reducing the speedup.
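To make the bottleneck argument concrete, here is a minimal sketch in the spirit of Amdahl's law, showing how a fraction of work gated on human supervision caps the effective speedup. All numbers are illustrative assumptions, not figures from the abstract:

```python
# Illustrative sketch (assumed numbers): the effective research speedup
# when a fraction of the work must wait on human supervisors running at
# 1x speed, while the remainder runs at the emulations' accelerated
# speed. This is Amdahl's law applied to the supervision bottleneck.

def overall_speedup(emulation_speedup: float, supervised_fraction: float) -> float:
    serial = supervised_fraction             # gated on humans, runs at 1x
    parallel = 1.0 - supervised_fraction     # runs at emulation speed
    return 1.0 / (serial + parallel / emulation_speedup)

if __name__ == "__main__":
    for frac in (0.0, 0.01, 0.05, 0.20):
        print(f"supervised fraction {frac:4.0%}: "
              f"speedup {overall_speedup(100.0, frac):6.1f}x")
```

On these assumptions, a hundredfold emulation speedup yields only about a 17x effective speedup if 5% of the work waits on human input, and roughly 5x if 20% does, which is why the design of low-overhead supervision tools matters so much.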
This paper discusses tools for human supervision of untrusted brain emulations, and argues that, even under such supervision, untrusted emulations could yield large speedups in research progress.