Machines will do jobs in new ways, not by copying humans

“There are many ways of being smart that aren’t smart like us.” These are the words of Patrick Winston, a leading voice in the field of artificial intelligence. Although his idea is simple, its significance has been lost on most people thinking about the future of work. Yet this is the feature of AI that ought to preoccupy us the most.

From the 1950s to the 1980s, during the “first wave” of AI research, it was generally thought that the best way to build systems capable of performing tasks to the level of human experts or higher was to copy the way that experts worked. But there was a problem: human experts often struggled to articulate how they performed many tasks.

Chess-playing was a good example. When researchers sat down with grandmasters and asked them to explain how they played such fine chess, the answers were useless. Some players appealed to “intuition”, others to “experience”. Many said they did not really know at all. How could researchers build a chess-playing system to beat a grandmaster if the best players themselves could not explain how they were so good?

A turning point came in 1997. Garry Kasparov, the then world chess champion, was beaten by IBM’s supercomputer, Deep Blue. What was most remarkable was how the system did it. Deep Blue did not share Mr Kasparov’s “intuition” or “experience”. It won by dint of sheer processing power and massive data storage capability.

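Deep Blue’s real evaluation function and purpose-built hardware were far more elaborate, but the style of computation it relied on (exhaustive game-tree search plus a hand-written scoring formula) can be sketched in a few lines. The toy game and scoring rule below are invented purely for illustration.

```python
# A minimal, self-contained sketch of exhaustive game-tree search (minimax with
# alpha-beta pruning). The point is the style of computation: score positions with
# a simple numeric function and search deeply, rather than reproduce a grandmaster's
# "intuition". The toy game and evaluation rule are invented for illustration only.

import math

def minimax(position, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Best achievable score from `position`, looking `depth` plies ahead."""
    legal = moves(position)
    if depth == 0 or not legal:
        return evaluate(position)                      # hand-written numeric score
    best = -math.inf if maximizing else math.inf
    for move in legal:
        score = minimax(apply_move(position, move), depth - 1,
                        alpha, beta, not maximizing, moves, apply_move, evaluate)
        if maximizing:
            best, alpha = max(best, score), max(alpha, score)
        else:
            best, beta = min(best, score), min(beta, score)
        if alpha >= beta:                              # prune branches that cannot matter
            break
    return best

# Toy usage: players alternately add 1 or 2 to a counter until it reaches 10;
# the "evaluation" is simply the final counter value modulo 3.
moves = lambda pos: [1, 2] if pos < 10 else []
apply_move = lambda pos, m: pos + m
evaluate = lambda pos: pos % 3
print(minimax(0, 6, -math.inf, math.inf, True, moves, apply_move, evaluate))
```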

There then followed AI’s “second wave”, which we are in today. Google’s AI, AlphaGo, has just finished a five-game series of Go against Lee Se-dol, perhaps the best player of the game alive. Until recently, most researchers thought we were at least ten years away from a machine victory. Yet AlphaGo beat Mr Lee in four of the five games. It did not have his genius or strategic insight; it relied on what are known as “deep neural networks”, driven, once again, by processing power and data storage. Like Deep Blue, AlphaGo was in a sense playing a different game.

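As a rough illustration of one ingredient of such a system, here is a hypothetical, heavily simplified “policy network” that maps a board encoding to a probability over moves. The real networks are vastly larger, trained on enormous amounts of data and combined with tree search; the weights and board encoding below are random placeholders.

```python
# A heavily simplified, hypothetical sketch of a tiny "policy network" mapping a
# board encoding to a probability over moves. The weights and board encoding here
# are random placeholders; real systems use far larger networks, trained on vast
# data and combined with tree search.

import numpy as np

rng = np.random.default_rng(0)
BOARD_POINTS = 19 * 19                                 # a Go board has 361 points

# Randomly initialised weights stand in for parameters learned from millions of positions.
W1 = rng.normal(scale=0.05, size=(BOARD_POINTS, 128))
W2 = rng.normal(scale=0.05, size=(128, BOARD_POINTS))

def policy(board_vector):
    """Return a probability distribution over the 361 board points."""
    hidden = np.maximum(board_vector @ W1, 0.0)        # one ReLU hidden layer
    logits = hidden @ W2
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                             # softmax over candidate moves

board = rng.choice([-1.0, 0.0, 1.0], size=BOARD_POINTS)   # -1/0/+1 = white/empty/black
probs = policy(board)
print("most favoured point under these random weights:", int(probs.argmax()))
```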

In retrospect, we can see that early researchers made the mistake we now call the “AI fallacy”: they assumed that the only way to perform a task to the standard of a human expert is to replicate the approach of human specialists. Today, many commentators are repeating the same mistake in thinking about the future of work. They fail to realise that in the future systems will out-perform human beings not by copying the best human experts, but by performing tasks in very different ways.

Consider the legal world. Daniel Martin Katz, a law professor, has designed a system to predict the voting behaviour of the US Supreme Court. It can perform as well as most specialists, but it does not mirror the judgement of a human being. Instead it draws on data that captures six decades of Court behaviour.

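Katz’s actual model is considerably more sophisticated, but the general recipe (learn statistical patterns from coded features of decades of past cases, rather than imitate legal reasoning) can be illustrated with a standard off-the-shelf classifier. The features and labels below are synthetic placeholders, not the real Supreme Court record.

```python
# Illustration only, not Katz's actual system: fit a standard classifier to coded
# features of past cases and use it to predict votes. The features and labels below
# are synthetic placeholders, so the printed accuracy is meaningless; the point is
# the workflow of learning from recorded behaviour rather than imitating a lawyer.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 5000

# Hypothetical coded features: issue area, lower-court direction, petitioner type, term.
X = np.column_stack([
    rng.integers(0, 14, n_cases),        # issue_area
    rng.integers(0, 2, n_cases),         # lower_court_direction
    rng.integers(0, 10, n_cases),        # petitioner_category
    rng.integers(1946, 2006, n_cases),   # term (roughly six decades of cases)
])
y = rng.integers(0, 2, n_cases)          # 1 = vote to reverse, 0 = affirm (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy on synthetic data:", model.score(X_test, y_test))
```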

We see similar developments in other parts of the economy. Millions of people in the US use online tax preparation software, not a personal interaction with an accountant, to file their returns. Autodesk’s “Project Dreamcatcher” generates computerised designs, not by mimicking the creativity of an architect, but by sifting through a vast number of possible designs and selecting the best option. And IBM’s Watson helps to diagnose cancer, not by copying the reasoning of a doctor, but by trawling enormous bodies of medical data.

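Autodesk’s real pipeline is far more advanced, but the generate-and-select pattern described above can be sketched simply: propose a large number of candidate designs, score each against an objective, and keep the best. The design variables and scoring rule below are invented for illustration.

```python
# A toy sketch of the generate-and-select pattern described above: propose many
# candidate designs at random, score each against an objective, keep the best.
# The "design" (a rectangular beam cross-section) and the scoring rule are invented
# placeholders, not Autodesk's actual method.

import random

def score(width, height):
    """Hypothetical objective: a stiffness proxy minus a penalty for material used."""
    stiffness = width * height ** 3 / 12.0     # second moment of area of a rectangle
    material = width * height
    return stiffness - 5.0 * material

random.seed(0)
candidates = [(random.uniform(1, 10), random.uniform(1, 10)) for _ in range(100_000)]
best = max(candidates, key=lambda wh: score(*wh))
print(f"best of 100,000 candidates: width={best[0]:.2f}, height={best[1]:.2f}, "
      f"score={score(*best):.1f}")
```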

All this does not herald the “end of work”. Rather, it points to a future that is very different from the one most experts are predicting. It is often said that because machines cannot “think” like human beings, they can never be creative; that because they cannot “reason” like human beings, they can never exercise judgement; or that because they cannot “feel” like human beings they can never be empathetic. For these reasons, it is claimed, there are a great many tasks that will always require human beings to perform them.

But this is to fail to grasp that tomorrow’s systems will handle many tasks that today require creativity, judgement or empathy, not by copying us, but by working in entirely different, unhuman ways. The set of tasks reserved exclusively for human beings is likely to be much smaller than many expect.
