1
00:00:00,620 --> 00:00:04,460
When I was a kid, I was a total bookworm

2
00:00:05,320 --> 00:00:07,496
I suspect some of you were, too

3
00:00:07,520 --> 00:00:08,736
(Laughter)

4
00:00:08,760 --> 00:00:11,976
And you, sir, laughing the loudest -- it seems you still are

5
00:00:12,000 --> 00:00:14,256
(Laughter)

6
00:00:14,280 --> 00:00:17,776
I grew up in a small town on the plains of north Texas

7
00:00:17,800 --> 00:00:21,136
My grandfather was a pastor and my father was a policeman

8
00:00:21,160 --> 00:00:23,510
so I was a well-behaved kid from the start

9
00:00:24,040 --> 00:00:27,296
I read calculus books for fun

10
00:00:27,320 --> 00:00:28,856
(Laughter)

11
00:00:28,880 --> 00:00:30,576
You did, too

12
00:00:30,600 --> 00:00:34,336
So I built my own lasers, computers and model rockets

13
00:00:34,360 --> 00:00:37,360
and eventually made rocket fuel in my bedroom

14
00:00:37,960 --> 00:00:41,616
In today's scientific terms

15
00:00:41,640 --> 00:00:44,896
we call that a very bad idea

16
00:00:44,920 --> 00:00:46,136
(Laughter)

17
00:00:46,160 --> 00:00:48,170
Around that time

18
00:00:48,170 --> 00:00:51,500
Stanley Kubrick's "2001: A Space Odyssey" hit the cinemas

19
00:00:51,500 --> 00:00:53,800
and my life was changed forever

20
00:00:54,280 --> 00:00:56,336
I was a devoted fan of that movie

21
00:00:56,360 --> 00:00:58,896
especially of HAL 9000

22
00:00:58,920 --> 00:01:00,976
HAL was a machine

23
00:01:01,000 --> 00:01:03,456
designed to navigate the spacecraft

24
00:01:03,480 --> 00:01:06,016
guiding it from the Earth to Jupiter

25
00:01:06,040 --> 00:01:08,096
But HAL was also flawed

26
00:01:08,120 --> 00:01:12,400
in the end he pursued his mission at the cost of human life

27
00:01:12,840 --> 00:01:14,936
HAL may be fictional

28
00:01:14,960 --> 00:01:17,616
but he speaks to our fears

29
00:01:17,640 --> 00:01:24,786
the fear that an unfeeling artificial intelligence will one day subjugate humanity

30
00:01:25,880 --> 00:01:28,456
I believe such fears are unfounded

31
00:01:28,480 --> 00:01:32,496
Indeed, this may be the most remarkable moment in human history

32
00:01:32,760 --> 00:01:37,736
Refusing to bow to the limits of our bodies and our minds

33
00:01:37,760 --> 00:01:43,126
we are building machines of exquisite, beautiful complexity

34
00:01:43,126 --> 00:01:47,136
that help humanity reach beyond anything we could imagine

35
00:01:47,726 --> 00:01:51,760
I used to work at the Air Force Academy and at Space Command

36
00:01:51,760 --> 00:01:53,976
and now I'm a systems engineer

37
00:01:54,000 --> 00:01:59,386
Recently I was drawn into an engineering problem on NASA's mission to Mars

38
00:01:59,386 --> 00:02:01,856
So far, for every spacecraft going to the Moon

39
00:02:01,880 --> 00:02:07,036
we have been able to rely on mission control in Houston

40
00:02:07,040 --> 00:02:10,576
But Mars is 200 times further away than the Moon

41
00:02:10,576 --> 00:02:13,992
so a signal traveling from the Earth to Mars

42
00:02:14,000 --> 00:02:17,000
takes on average 13 minutes to arrive

43
00:02:17,000 --> 00:02:18,690
If anything goes wrong along the way

44
00:02:18,690 --> 00:02:20,840
there is not enough time to fix it

45
00:02:20,840 --> 00:02:23,336
So one solution is

46
00:02:23,360 --> 00:02:28,986
to put mission control inside the walls of the Orion spacecraft

47
00:02:29,000 --> 00:02:31,896
Another idea is

48
00:02:31,920 --> 00:02:36,716
to place robots on the surface of Mars before the humans arrive

49
00:02:36,720 --> 00:02:38,376
The robots would first build facilities

50
00:02:38,400 --> 00:02:41,760
and later serve as collaborators on the science team

51
00:02:43,400 --> 00:02:46,136
And as I looked at this from an engineering perspective

52
00:02:46,160 --> 00:02:51,546
it was very clear that what I needed to build was a smart, collaborative

53
00:02:51,566 --> 00:02:53,936
socially intelligent artificial intelligence

54
00:02:53,960 --> 00:02:58,256
In other words, I needed to build something very much like HAL

55
00:02:58,280 --> 00:03:00,696
but without the urge to wipe out humanity

56
00:03:00,720 --> 00:03:02,080
(Laughter)

57
00:03:02,920 --> 00:03:04,736
Let's pause for a moment

58
00:03:04,760 --> 00:03:08,656
Is it really possible to build a machine like that?

59
00:03:08,680 --> 00:03:10,136
Actually, it is

60
00:03:10,160 --> 00:03:14,926
In many ways this is a hard engineering problem with elements of AI

61
00:03:14,926 --> 00:03:19,616
not some tangled hairball of an AI problem

62
00:03:19,640 --> 00:03:22,296
To paraphrase Alan Turing

63
00:03:22,320 --> 00:03:24,696
I'm not interested in building a sentient machine

64
00:03:24,720 --> 00:03:26,296
nor am I building another HAL

65
00:03:26,320 --> 00:03:28,736
All I'm after is a simple brain

66
00:03:28,760 --> 00:03:32,210
a machine that offers a hint of intelligence

67
00:03:33,000 --> 00:03:35,970
Since HAL first appeared on screen

68
00:03:35,970 --> 00:03:37,656
the science of computing has come a long way

69
00:03:37,680 --> 00:03:40,896
I'd imagine if his inventor Dr. Chandra were here today

70
00:03:40,920 --> 00:03:43,256
he'd have a whole lot of questions for us

71
00:03:43,280 --> 00:03:49,470
Is it possible for us to read the data streams of millions upon millions of devices

72
00:03:49,470 --> 00:03:53,176
and to predict their failures and correct them in advance?

73
00:03:53,200 --> 00:03:54,416
Yes

74
00:03:54,440 --> 00:03:57,616
Can we build systems that converse with humans in natural language?
75
00:03:57,640 --> 00:03:58,550
Yes

76
00:03:58,550 --> 00:04:00,960
Can we build systems

77
00:04:00,960 --> 00:04:05,256
that recognize objects, identify emotions, emote themselves, play games and even read lips?

78
00:04:05,280 --> 00:04:06,210
Yes

79
00:04:06,210 --> 00:04:09,446
Can we build a system that sets goals

80
00:04:09,446 --> 00:04:12,296
carries out plans toward those goals and learns along the way?

81
00:04:12,320 --> 00:04:13,410
Yes

82
00:04:13,410 --> 00:04:16,896
Can we build systems that have a theory of mind?

83
00:04:16,920 --> 00:04:18,416
That we are working on right now

84
00:04:18,440 --> 00:04:22,470
Can we build systems that understand ethics and moral limits?

85
00:04:22,480 --> 00:04:24,870
That is a task we cannot shirk

86
00:04:25,360 --> 00:04:26,736
So let's accept for a moment that it's possible

87
00:04:26,760 --> 00:04:31,836
to build such an artificial intelligence for this mission and others

88
00:04:31,840 --> 00:04:33,230
The next question you'll surely ask is

89
00:04:33,230 --> 00:04:35,856
would such an AI be a threat?

90
00:04:35,880 --> 00:04:40,796
Every new technology has brought with it some measure of unease

91
00:04:40,800 --> 00:04:42,496
When people first saw cars

92
00:04:42,520 --> 00:04:46,536
they feared that crashes would destroy families

93
00:04:46,560 --> 00:04:49,256
When people first saw telephones

94
00:04:49,280 --> 00:04:52,176
they feared that conversation between people would be ruined

95
00:04:52,200 --> 00:04:56,136
There was a time when people saw the written word spread

96
00:04:56,160 --> 00:04:58,656
and feared we would lose our ability to remember

97
00:04:58,680 --> 00:05:00,736
These fears were true to a degree

98
00:05:00,760 --> 00:05:03,080
but at the same time these technologies

99
00:05:03,080 --> 00:05:08,620
broadened the human experience

100
00:05:09,840 --> 00:05:12,180
Let's take this a little further

101
00:05:13,120 --> 00:05:17,856
I do not fear the creation of an AI like this

102
00:05:17,880 --> 00:05:21,696
because it will eventually embody some of our values

103
00:05:21,720 --> 00:05:23,900
Consider this: building a cognitive machine

104
00:05:23,900 --> 00:05:27,100
and building a traditional software-intensive system of the past

105
00:05:27,100 --> 00:05:28,560
are fundamentally different things

106
00:05:28,560 --> 00:05:29,840
We don't program these machines

107
00:05:29,840 --> 00:05:31,300
We teach them

108
00:05:31,300 --> 00:05:33,696
To teach a machine to recognize flowers

109
00:05:33,720 --> 00:05:36,736
I show it thousands of flowers of the kinds I like

110
00:05:36,760 --> 00:05:39,016
To teach a machine how to play a game --

111
00:05:39,040 --> 00:05:42,600
Well, I really would. So would you

112
00:05:42,610 --> 00:05:44,640
And I really do like flowers --

113
00:05:45,440 --> 00:05:48,296
To teach a machine to play Go

114
00:05:48,320 --> 00:05:50,376
I'd have it play thousands of games

115
00:05:50,400 --> 00:05:54,536
but in the process I'd also teach it to tell a good game from a bad game

116
00:05:54,536 --> 00:05:58,216
If I want to build an artificially intelligent legal assistant

117
00:05:58,240 --> 00:06:00,236
I will teach it not only the law

118
00:06:00,236 --> 00:06:05,896
but also the sense of mercy and justice that is part of the law

119
00:06:06,560 --> 00:06:09,536
In scientific terms, this is what we call ground truth

120
00:06:09,560 --> 00:06:11,576
and here's the important point:

121
00:06:11,600 --> 00:06:13,056
in building these machines

122
00:06:13,080 --> 00:06:16,496
we are instilling our own values in them

123
00:06:16,520 --> 00:06:19,656
In the end, I trust that an artificial intelligence

124
00:06:19,680 --> 00:06:23,520
will be no different from a well-trained human

125
00:06:24,080 --> 00:06:25,296
But, you may ask

126
00:06:25,320 --> 00:06:31,336
what if such an AI were exploited by rogue agents?

127
00:06:31,336 --> 00:06:35,136
While we cannot prevent every act of violence

128
00:06:35,160 --> 00:06:39,696
I do not worry about an AI falling into the hands of a few bad actors

129
00:06:39,720 --> 00:06:44,886
because an AI requires sustained and subtle training

130
00:06:44,886 --> 00:06:47,296
far beyond the resources of any individual

131
00:06:47,320 --> 00:06:51,866
And it is nothing as simple as planting an internet virus

132
00:06:51,866 --> 00:06:54,570
Don't imagine that you can push a button anywhere, anytime

133
00:06:54,570 --> 00:06:57,416
and computers all over the world instantly blow up

134
00:06:57,440 --> 00:07:00,256
An AI is something far more complex than that

135
00:07:00,280 --> 00:07:02,515
though I believe such an AI will appear sooner or later

136
00:07:02,520 --> 00:07:08,266
Do I fear that such an artificial intelligence might threaten all of humanity?
137
00:07:08,280 --> 00:07:12,450
Movies like "The Matrix" and "Metropolis"

138
00:07:12,450 --> 00:07:15,856
"The Terminator," and series like "Westworld"

139
00:07:15,880 --> 00:07:18,016
all portray this kind of fear

140
00:07:18,040 --> 00:07:23,926
Indeed, in his book "Superintelligence," the philosopher Nick Bostrom argues

141
00:07:23,926 --> 00:07:27,936
that a superintelligence is not only dangerous

142
00:07:27,936 --> 00:07:31,536
but could threaten all of humanity

143
00:07:31,840 --> 00:07:34,056
His basic argument is that

144
00:07:34,080 --> 00:07:40,096
such a machine will eventually be unsatisfied with the information it has

145
00:07:40,120 --> 00:07:43,016
that it may therefore learn how to learn

146
00:07:43,040 --> 00:07:47,526
and eventually discover that some of its goals conflict with human needs

147
00:07:48,000 --> 00:07:49,856
Dr. Bostrom has his supporters

148
00:07:49,880 --> 00:07:54,200
among them Elon Musk and Stephen Hawking

149
00:07:54,880 --> 00:08:02,460
I would argue that these brilliant minds have got it wrong

150
00:08:02,480 --> 00:08:05,656
There are many flaws in Nick Bostrom's argument

151
00:08:05,680 --> 00:08:07,816
and I don't have time to unpack them all

152
00:08:07,840 --> 00:08:14,316
but very briefly, consider this: super knowing is very different from super doing

153
00:08:14,320 --> 00:08:16,576
HAL was a threat to the crew

154
00:08:16,576 --> 00:08:20,656
only insofar as he commanded every aspect of the mission

155
00:08:20,680 --> 00:08:23,176
So it would be with a superintelligence:

156
00:08:23,200 --> 00:08:25,696
to give such commands, it would need dominion over our whole world

157
00:08:25,720 --> 00:08:27,520
That is Skynet from the movie "The Terminator"

158
00:08:27,520 --> 00:08:30,840
the superintelligent defense system that commanded human will

159
00:08:30,840 --> 00:08:35,696
and controlled every machine and device across the world

160
00:08:35,720 --> 00:08:37,176
Practically speaking

161
00:08:37,200 --> 00:08:39,296
that movie plot is not going to happen

162
00:08:39,320 --> 00:08:43,940
We are not building AIs that control the wind and the rain

163
00:08:43,940 --> 00:08:47,136
or that command us capricious, quarrelsome humans

164
00:08:47,160 --> 00:08:51,056
Furthermore, if such a machine did exist

165
00:08:51,080 --> 00:08:54,016
it would have to compete with human economies

166
00:08:54,040 --> 00:08:56,880
and even compete with us for resources

167
00:08:57,200 --> 00:08:58,416
And in the end --

168
00:08:58,440 --> 00:09:00,430
don't tell Siri this --

169
00:09:00,440 --> 00:09:01,826
we can always unplug them

170
00:09:01,826 --> 00:09:03,960
(Laughter)

171
00:09:05,360 --> 00:09:10,226
We are coevolving with our machines

172
00:09:10,226 --> 00:09:15,449
The humans of the future will not be the humans we are today

173
00:09:15,449 --> 00:09:18,910
Worrying that artificial intelligence poses a threat

174
00:09:18,910 --> 00:09:24,290
only distracts us from the human and societal issues raised by the rise of computing

175
00:09:24,290 --> 00:09:28,846
and those are exactly the issues we need to attend to

176
00:09:29,360 --> 00:09:30,600
Issues such as:

177
00:09:30,600 --> 00:09:34,556
How shall we organize society when the need for human labor diminishes?

178
00:09:34,560 --> 00:09:40,206
How can we spread knowledge and education across the globe while still respecting local differences?

179
00:09:40,206 --> 00:09:44,456
How might we extend human life through cognitive healthcare?

180
00:09:44,480 --> 00:09:49,756
How might we use computing to help humanity reach outer space?

181
00:09:49,760 --> 00:09:51,800
It's exciting just to think about

182
00:09:52,400 --> 00:09:59,706
The opportunity to use computing to expand the human experience is within our reach

183
00:09:59,706 --> 00:10:01,996
and we have only just begun to grasp it

184
00:10:02,296 --> 00:10:03,496
Thank you

185
00:10:03,496 --> 00:10:05,032
(Applause)