
What Does AI Need? Empathy and Regulation

McKenna Moore, December 18, 2018

As the technology gains broader adoption, regulation and empathy should be at the forefront.


Translator: Charlie

Proofreader: 夏林

The AI revolution is upon us.

Machine learning, one of the key artificial intelligence technologies, has already been deployed within more companies than you would expect. As it gains even greater adoption, regulation and empathy should be at the forefront.

Rana el Kaliouby, co-founder and CEO of emotional AI company Affectiva, said at Fortune's Most Powerful Women Next Gen 2018 in Laguna Niguel, Calif., last Wednesday that EQ is just as important in technology as IQ. Because of the frequency with which people interact with technology and its growing impact on our lives, it's important that empathy be built into it, she said.

One way to do that, el Kaliouby said, is to have diverse teams work on the technology. As an example of the problem, she said that middle-aged white men usually create and train face recognition AI using images of people who look like themselves, which means the technology often doesn't work as well, if at all, on women of color.

“It goes back to the teams designing these algorithms, and if your team isn’t diverse they aren’t going to be thinking about how this will work on a woman wearing hijab,” she said. “You solve for the problems you know.”

Navrina Singh, principal product lead of Microsoft AI, said that a perfect example of building technology with empathy in mind came to her during a project with an e-commerce site that was trying to make it easier for customers in India to buy its products. Due to the low literacy rate in the country, the company built speech-to-text functionality for users who couldn't read. Beforehand, the company made a concerted effort to train its AI in dialects and cultures from all around India, because the intent and meaning of speech varies based on background. Deciphering intent is one of the greatest challenges and opportunities in AI right now, Inhi Cho Suh, general manager of customer engagement at IBM Watson, said.

Regulation is another big topic in machine learning at the moment. With bots and other related technology becoming more sophisticated, laws are necessary to check that power, the panelists agreed. Suh said that technology and regulation should be used to prevent misuse, while el Kaliouby stressed the need for mandatory ethics training for college computer science and engineering majors.

Singh shared the acronym F.A.T.E., which stands for fairness, accountability, transparency and ethics, to sum up the key ideas to keep in mind when creating and regulating this technology. Although there is a lot of bad news about technology, like the Cambridge Analytica scandal, in which a British political firm accessed personal data on up to 87 million Facebook users, we must not let fear guide the debate, said Vidhya Ramalingham, founder of counter-terrorism technology company Moonshot CVE.

“Policy should not be written out of fear, it should be written in an educated and informed manner,” she said.
