The annual GMIC conference was held today at the Beijing National Convention Center. At the morning Leaders Summit, the world-renowned physicist Stephen William Hawking gave a speech and answered questions about artificial intelligence.
Although Hawking's speech was titled "Letting Artificial Intelligence Benefit Humanity and the Home It Depends on for Survival," he repeatedly expressed his concerns about the development of artificial intelligence.
Hawking said that the most profound social change he has witnessed is the rise of artificial intelligence, which he believes will prove either the best or the worst thing in human history. He therefore called on researchers to avoid the risks of artificial intelligence, and warned that artificial intelligence that breaks free of human will may destroy humanity.
To support this argument, Hawking explained that everything civilization has produced is a product of human intelligence. Citing the human-machine matches in chess and Go, Hawking argued that there is no essential difference between what a biological brain can achieve and what a computer can achieve. It therefore follows that computers can, in theory, emulate human intelligence and then surpass it. Once artificial intelligence breaks free of its constraints, it will redesign itself at an ever-accelerating rate. Humans, limited by slow biological evolution, will be unable to compete and will be superseded.
Hawking called for researchers to create controllable artificial intelligence. On specific issues, Hawking said his short-term concerns are about autonomous machines, from civilian drones to self-driving cars. For example, in an emergency, a driverless car may have to choose between a low-probability major accident and a high-probability minor accident. Another concern is lethal autonomous weapons: should they be banned? If so, how exactly should "autonomy" be defined? If not, how should responsibility for misuse and malfunction be assigned? Other concerns include the privacy issues raised as artificial intelligence becomes increasingly able to interpret large amounts of surveillance data, and how to manage the economic impact of artificial intelligence replacing jobs. His long-term concern is mainly the potential risk of artificial intelligence systems getting out of control: with the rise of a superintelligence that does not act according to human will, such powerful systems could threaten humanity.
"The success of artificial intelligence may be the biggest event in the history of human civilization, but artificial intelligence may also be the end of the history of human civilization." Hawking concluded.
The following is Hawking's speech at GMIC 2017:
In my life, I have witnessed profound changes in society. The most profound of these, and the one with an ever-increasing impact on humanity, is the rise of artificial intelligence. In short, I believe the rise of powerful artificial intelligence will be either the best or the worst thing in human history. I have to say that we are still uncertain which it will be. But we should do everything we can to ensure that its future development benefits us and our environment. We have no choice. I believe the development of artificial intelligence is a trend fraught with problems that must be resolved now and in the future.
Research and development in artificial intelligence is advancing rapidly. Perhaps all of us should pause for a moment and shift our research from improving the capabilities of artificial intelligence to maximizing its social benefit. With this in mind, the Association for the Advancement of Artificial Intelligence (AAAI) convened a panel on the long-term future of artificial intelligence in 2008-2009. Until recently, attention focused largely on techniques that are neutral with respect to purpose, but our artificial intelligence systems must do what we want them to do. Interdisciplinary research is a possible way forward: from economics, law, and philosophy to computer security, formal methods, and of course the various branches of artificial intelligence itself.
Everything civilization has produced is a product of human intelligence. I believe there is no essential difference between what a biological brain can achieve and what a computer can achieve. It therefore follows that computers can, in theory, emulate human intelligence and then surpass it. But we cannot be sure, so we cannot know whether we will be infinitely helped by artificial intelligence, ignored and sidelined by it, or conceivably destroyed by it. Indeed, we worry that intelligent machines will be able to take over the work humans now do and rapidly eliminate millions of jobs.
While artificial intelligence has evolved from its original form and proved very useful, I also fear the consequences of creating something that can equal or surpass humans: once artificial intelligence breaks free of its constraints, it will redesign itself at an ever-accelerating rate. Humans, limited by slow biological evolution, will be unable to compete and will be superseded. This will do great damage to our economy. In the future, artificial intelligence could develop a will of its own, one in conflict with ours. Although I have always been optimistic about humanity, others believe that humans can control the development of technology for a long time to come, so that we will see artificial intelligence's potential to solve most of the world's problems realized. But I am not so sure.
In January 2015, I signed an open letter on artificial intelligence with the technology entrepreneur Elon Musk and many other artificial intelligence experts to promote serious research into the impact of artificial intelligence on society. Before this, Elon Musk had warned that superhuman artificial intelligence may bring immeasurable benefits, but if deployed carelessly, it may have the opposite effect on humanity. He and I both sit on the scientific advisory board of the Future of Life Institute, an organization devoted to mitigating the risks facing humanity, and the aforementioned open letter was drafted by this organization. The letter called for concrete research into how to prevent potential problems while also reaping the potential benefits of artificial intelligence, and it aims to make artificial intelligence developers pay more attention to artificial intelligence safety. In addition, for decision makers and the general public, the letter is meant to be informative rather than alarmist. We think it is very important that everyone knows artificial intelligence researchers are seriously thinking about these concerns and ethical issues. For example, artificial intelligence has the potential to eradicate disease and poverty, but researchers must create controllable artificial intelligence. The open letter, entitled "Research Priorities for Robust and Beneficial Artificial Intelligence," laid out these priorities in detail in an accompanying twelve-page document.
Over the past 20 years, artificial intelligence has focused on the problems surrounding the construction of intelligent agents, that is, systems that perceive and act in some environment. In this context, intelligence is a rational notion related to statistics and economics. In a nutshell, it is the ability to make good decisions, plans, and inferences. Building on these efforts, a great deal of integration and cross-fertilization has taken place among artificial intelligence, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various subfields, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.
As these fields develop, a virtuous circle forms, from laboratory research to technologies of economic value. Even small performance improvements can bring huge economic benefits, which in turn encourage longer-term, larger investment and research. It is widely recognized that research on artificial intelligence is progressing steadily and that its impact on society is likely to grow. The potential benefits are enormous. Since everything civilization has produced is a product of human intelligence, we cannot predict what we might achieve when this intelligence is magnified by the tools of artificial intelligence. But, as I said, eradicating disease and poverty is not entirely impossible. Because of the enormous potential of artificial intelligence, it is important to study how to reap its benefits while avoiding its risks.
Research on artificial intelligence is now evolving rapidly. It can be discussed in terms of the short term and the long term. Some short-term concerns relate to autonomous machines, from civilian drones to self-driving cars. For example, in an emergency, a driverless car may have to choose between a low-probability major accident and a high-probability minor accident. Another concern is lethal autonomous weapons: should they be banned? If so, how exactly should "autonomy" be defined? If not, how should responsibility for misuse and malfunction be assigned? Other concerns include the privacy issues raised as artificial intelligence becomes increasingly able to interpret large amounts of surveillance data, and how to manage the economic impact of artificial intelligence replacing jobs.

Long-term concerns are mainly the potential risk of artificial intelligence systems getting out of control: with the rise of a superintelligence that does not act according to human will, such powerful systems could threaten humanity. Are such outcomes possible? If so, how might these situations arise? What kind of research should we invest in to better understand and address the danger of a dangerous superintelligence arising, or of an intelligence explosion?
Current tools for controlling artificial intelligence, such as reinforcement learning and simple utility functions, are not sufficient to solve this problem. We therefore need further research to find and verify a reliable solution to the control problem.
Recent milestones, such as the self-driving cars mentioned earlier and a computer winning at the game of Go, are signs of the trends to come. Enormous investment is being poured into these technologies. The achievements we have made so far will inevitably be dwarfed by what we may achieve in the coming decades, and we cannot begin to predict what we might accomplish when our own minds are amplified by artificial intelligence. Perhaps with the tools of this new technological revolution, we can undo some of the damage industrialization has done to nature. Every aspect of our lives will be transformed. In short, the success of artificial intelligence may be the biggest event in the history of human civilization.
But artificial intelligence may also be the end of human civilization, unless we learn how to avoid the dangers. I have said that the full development of artificial intelligence could spell the end of the human race, for example through the maximal use of intelligent autonomous weapons. Earlier this year, I joined scientists from around the world in supporting the United Nations negotiations on a treaty banning nuclear weapons. We are anxiously awaiting the outcome. At present, nine nuclear powers control about 14,000 nuclear weapons, any one of which could raze a city to the ground, with radioactive fallout contaminating farmland on a massive scale. The most terrible harm would be a nuclear winter, in which fire and smoke bring on a global mini ice age. The result would be the collapse of the global food system and apocalyptic turmoil, likely killing most of humanity. As scientists, we bear a special responsibility for nuclear weapons, because it was scientists who invented them and then discovered that their effects were even more terrible than first thought.
At this point, my talk of catastrophe may have frightened everyone in the room, and I am sorry for that. But as participants today, it is important that you recognize your own place in the future development of technology. I believe that by uniting to call for support of international treaties, or by signing open letters to governments, scientists and technology leaders are doing everything they can to avert the rise of uncontrollable artificial intelligence.
New institutions are trying to solve these as-yet-unresolved problems arising from the rapid development of artificial intelligence research. The Leverhulme Centre for the Future of Intelligence is an interdisciplinary research institute devoted to the future of intelligence, a subject critical to the future of our civilization and our species. We spend a great deal of time studying history, which, let us face it, is mostly the history of stupidity. So it is a welcome change that people are turning instead to the future of intelligence. Although we are aware of the potential dangers, I remain an optimist at heart. I believe the potential benefits of creating intelligence are enormous. Perhaps with the tools of this new technological revolution, we will be able to reduce the damage industrialization has done to nature.
Every aspect of our lives will be changed. My colleague at the centre, Huw Price, has acknowledged that the Leverhulme Centre could be established partly because the university had already set up the Centre for the Study of Existential Risk. The latter examines potential threats to humanity more broadly, while the Leverhulme Centre's focus is relatively narrow.
Recent advances in artificial intelligence include the European Parliament's call for a set of regulations to govern innovation in robotics and artificial intelligence. Strikingly, this involves a form of "electronic personhood" to define the rights and responsibilities of the most capable and advanced artificial intelligence. A European Parliament spokesperson commented that as more and more areas of daily life come under the influence of robots, we need to ensure that robots serve humanity, now and in the future. The report presented to Members of the European Parliament states clearly that the world is on the brink of a new industrial robot revolution, and it analyzes whether robots might be granted the status of "electronic persons," equivalent to legal persons. The report emphasizes that, at all times, researchers and designers should ensure that every robot design includes a kill switch. In Kubrick's film "2001: A Space Odyssey," the malfunctioning supercomputer HAL refuses to let the scientists back into the spacecraft, but that is science fiction; what we have to face is fact. Lorna Brazell, a partner at the multinational law firm Osborne Clarke, said in the report that we do not recognize the personhood of whales and gorillas, so there is no need to rush to accept robotic personhood. But the worry remains. The report acknowledges that artificial intelligence may surpass human intelligence within a few decades and challenge the human-machine relationship. The report concludes by calling for the establishment of a European agency for robotics and artificial intelligence to provide technical, ethical, and regulatory expertise. If Members of the European Parliament vote in favor, the report will be submitted to the European Commission, which will decide within three months which legislative steps to take.
We should also play our part in ensuring that the next generation has not only the opportunity but also the determination to participate fully in scientific research from an early age, so that they can go on to realize their potential and help humanity create a better world. This is what I mean when I speak of the importance of learning and education. We need to move beyond the theoretical discussion of "how things should be" and take action to ensure they have the chance to take part. We stand at the threshold of a brave new world. It is an exciting, if uncertain, world, and you are the pioneers. I wish you well.
Thank you!
The following is the question-and-answer session between Hawking and Chinese technology leaders, scientists, investors, and netizens:
Kai-Fu Lee, CEO of Sinovation Ventures (Q): Internet giants hold huge amounts of data, which gives them many opportunities to trade users' privacy and interests for profit. Under the temptation of enormous profits, they cannot discipline themselves, and this behavior also makes it harder for small companies and entrepreneurs to innovate. You often talk about how to constrain artificial intelligence, but what is harder is how to constrain people themselves. How do you think we should constrain these giants?
Hawking: As far as I know, many companies use this data only for statistical analysis, but any use involving private information should be banned. It would help protect privacy if all information on the internet were encrypted using quantum technology, so that internet companies could not crack it within a reasonable time. But the security services would oppose such an approach.
The second question comes from Cheetah Mobile CEO Fu Sheng (Q): Is the soul a form of quantum existence? Or is it another manifestation in a higher-dimensional space?
Hawking: I think the recent development of artificial intelligence, such as computers defeating the human brain at chess and Go, shows that there is no essential difference between the human brain and a computer. On this point, my colleague Roger Penrose takes the opposite view. Does anyone think a computer has a soul? For me, the soul is a Christian concept, tied to the afterlife. I consider it a fairy tale.
The third question comes from Ya-Qin Zhang, President of Baidu (Q): The way humans observe and abstract the universe is constantly evolving, from early observation and estimation, to Newton's laws and Einstein's equations, to today's data-driven computation and artificial intelligence. What comes next?
Hawking: We need a new quantum theory that unifies gravity with the other forces of nature. Many people claim that this is string theory, but I have my doubts. The only speculation at the moment is that spacetime has ten dimensions.
The fourth question comes from Shoucheng Zhang, professor of physics at Stanford University (Q): If you were to describe humanity's highest achievement to aliens on the back of a postcard, what would you write?
Hawking: It would be unhelpful to tell aliens about beauty, or any other art form that might represent our highest artistic achievement, because these are unique to humans. I would tell them about Gödel's incompleteness theorem and Fermat's Last Theorem. These are things aliens could understand.
Wenchu (Q): We hope to promote the spirit of science across GMIC's nine global venues. Could you recommend three books to help people in the technology community better understand the future of science?
Hawking: They should write books rather than read them. Only when a person can write a book about something does he fully understand it.
Weibo user (Q): What do you think is the one thing a person most should do in life, and the one thing a person most should not do?
Hawking: We should never give up, and we should all strive to understand as much as we can (about this world).
Weibo user (Q): Humanity has experienced many revolutions throughout its long history, from stone tools to steam to electricity. What do you think will drive the next revolution?
Hawking: (I think it will be) the development of computer science, including artificial intelligence and quantum computing. Technology is already an important part of our lives, but in the coming decades it will penetrate every aspect of society, providing intelligent support and advice in areas such as healthcare, work, education, and technology. But we must make sure that it is we who control artificial intelligence, not artificial intelligence that controls us.
The last question comes from the musician and investor Hu Haiquan (Q): If interstellar migration technology matures too late, could some insoluble internal crisis lead to human extinction? Setting aside external catastrophes such as a comet striking the Earth.
Hawking: Yes. Overpopulation, disease, war, famine, climate change, and water scarcity: humanity has the ability to resolve these crises, but unfortunately has not yet done so. These crises remain serious threats to our survival on Earth. They are all solvable, but none has yet been solved.