OTHER LIFE


A machine superintelligence might never display itself

This seems to me a crucial point not often discussed by AI Risk thinkers such as Bostrom and Yudkowsky. Whether it's a bug or a feature of the AI Risk industry is harder to know: is it a thorn in the side of their project, or a boon for (potentially endless) fundraising? Only time will tell, or it won't.

This is from "Superintelligence Cannot Be Contained: Lessons from Computability Theory" (Alfonseca et al. 2016):

Another lesson from computability theory is the following: we may not even know when superintelligent machines have arrived, as deciding whether a machine exhibits intelligence is in the same realm of problems as the containment problem. This is a consequence of Rice’s theorem [24], which states that any non-trivial property (e.g. “harm humans” or “display superintelligence”) of a Turing machine is undecidable.
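The logic behind that claim is the standard reduction from the halting problem. Here is a minimal, runnable sketch of the idea; all names (`witness`, `make_wrapper`, the property "displays superintelligence") are illustrative, not from the paper:

```python
# Sketch of the reduction behind Rice's theorem.
# Suppose we could decide some non-trivial behavioral property P of programs,
# say "displays superintelligence". Then we could decide halting: build a
# wrapper that first runs an arbitrary program, then behaves like a program
# known to have P. The wrapper has P exactly when the arbitrary program halts.

def witness(x):
    # Stand-in for some program known to HAVE property P
    # (non-triviality of P guarantees such a program exists).
    return f"superintelligent answer to {x}"

def make_wrapper(program, inp):
    """Return a program that has property P iff `program` halts on `inp`."""
    def wrapped(x):
        program(inp)        # may loop forever; result is ignored
        return witness(x)   # reached only if program(inp) halts
    return wrapped

# If the embedded program halts, the wrapper behaves exactly like the witness:
halting = lambda n: n + 1
w = make_wrapper(halting, 0)
assert w("hello") == witness("hello")

# If the embedded program loops forever, the wrapper never behaves like the
# witness. So a decider for P, applied to the wrapper, would decide halting —
# a contradiction, since halting is undecidable. Hence no non-trivial
# behavioral property ("harms humans", "displays superintelligence", ...)
# is decidable either.
```

The same construction works for any non-trivial property, which is why the theorem covers "display superintelligence" just as well as "harm humans".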

I have a short article coming out soon in an IEEE publication, which builds on this insight.


