
From Artificial Intelligence to Superintelligence - The Future of Humanity

 


Artificial superintelligence (ASI), sometimes referred to as digital superintelligence, is a hypothetical agent that possesses intelligence far surpassing that of the smartest and most gifted human minds. AI is a rapidly growing field of technology with the potential to bring huge improvements in human wellbeing. However, the development of machines with intelligence vastly superior to humans would pose special, perhaps even unique, risks.

One only needs to accept three basic assumptions to recognize the inevitability of superintelligent AI:


Intelligence is a product of information processing in physical systems.
We will continue to improve our intelligent machines.
We do not stand on the peak of intelligence, or anywhere near it.




Philosopher Nick Bostrom has expressed concern about what values a superintelligence should be designed to have. Any superintelligent AI could proceed rapidly toward its programmed goals, with little or no distribution of power to others. It might not take its designers into account at all, and the logic of its goals might not be reconcilable with human ideals. Such an AI’s power might lie in making humans its servants rather than vice versa. If it were to succeed in this, it would “rule without competition under a dictatorship of one”.

Elon Musk has also warned that the global race toward AI could result in a third world war. To avoid the ‘worst mistake in history’, it is necessary to understand the nature of an AI race and to steer development away from paths that could lead to an unfriendly artificial superintelligence.

To that end, world leaders should work to ensure that any artificial superintelligence we build is beneficial to the entire human race.





A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The chess program Fritz falls short of superintelligence, even though it is much better than humans at chess, because it cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).




Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to become much more powerful than humans, whether as a single being or as a new species, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.





Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.





In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), respondents were asked by what year they expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs). The median answers were:

With 10% confidence: 2024 (mean 2034, st. dev. 33 years).
With 50% confidence: 2050 (mean 2072, st. dev. 110 years).
With 90% confidence: 2070 (mean 2168, st. dev. 342 years).

These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents also assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.
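Notice that each mean sits well above its median (by 10, 22, and 98 years respectively), which suggests the forecasts are heavily right-skewed, with a long tail of very late estimates. For readers who want to tinker with the figures, here is a minimal Python sketch (illustrative only; the numbers are simply the survey values quoted above) that tabulates them:

# Survey estimates quoted above: confidence level -> (median year, mean year, st. dev. in years).
estimates = {
    "10%": (2024, 2034, 33),
    "50%": (2050, 2072, 110),
    "90%": (2070, 2168, 342),
}

for confidence, (median, mean, stdev) in estimates.items():
    gap = mean - median  # a mean well above the median hints at a long right tail of late forecasts
    print(f"{confidence} confidence: median {median}, mean {mean} "
          f"(st. dev. {stdev} years; mean exceeds median by {gap} years)")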


Thanks to Wikipedia - Superintelligence
