

OpenAI warns: upcoming models could significantly raise the risk of biological weapon development

Beatrice Nolan
2025-06-23

OpenAI executives say they expect upcoming models to soon trigger the high-risk classification under the company's preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models.


Johannes Heidecke (right), OpenAI's head of safety systems, in conversation with Reinhard Heckel (left), professor of machine learning in the computer engineering department at the Technical University of Munich (TUM), and OpenAI CEO Sam Altman during a panel discussion at TUM in May 2023. Image source: Sven Hoppe—picture alliance via Getty Images


Translator: Wang Fang (Zhonghuiyan)


• OpenAI says its next generation of AI models could significantly increase the risk of biological weapon development, even enabling individuals with no scientific background to create dangerous agents. The company is boosting its safety testing as it anticipates some models will reach its highest risk tier.

OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise.

OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company’s preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models.

OpenAI’s head of safety systems, Johannes Heidecke, told the outlet that the company is “expecting some of the successors of our o3 (reasoning model) to hit that level.”

In a blog post, the company said it was increasing its safety testing to mitigate the risk that models will help users in the creation of biological weapons. OpenAI is concerned that without these mitigations models will soon be capable of “novice uplift,” allowing those with limited scientific knowledge to create dangerous weapons.

“We’re not yet in the world where there’s like novel, completely unknown creation of bio threats that have not existed before,” Heidecke said. “We are more worried about replicating things that experts already are very familiar with.”

One of the challenges is that the same capabilities that could unlock life-saving medical breakthroughs can also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place.

“This is not something where like 99% or even one in 100,000 performance is … sufficient,” he said. “We basically need, like, near perfection.”

Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.

Model misuse

OpenAI is not the only company concerned about the misuse of its models when it comes to weapon development. As models become more advanced, their potential for misuse, and the risk it carries, generally grow.

Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it as AI Safety Level 3 (ASL-3) under the company's Responsible Scaling Policy. Previous Anthropic models have all been classified as AI Safety Level 2 (ASL-2) under the company's framework, which is loosely modeled after the U.S. government's biosafety level (BSL) system.

Models that are categorized in this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D. Anthropic’s most advanced model also made headlines after it opted to blackmail an engineer to avoid being shut down in a highly controlled test.

Early versions of Anthropic’s Claude 4 were found to comply with dangerous instructions, for example, helping to plan terrorist attacks, if prompted. However, the company said this issue was largely mitigated after a dataset that was accidentally omitted during training was restored.

The intellectual property in content published by Fortune China is exclusively owned or held by Fortune Media IP Limited and/or the relevant rights holders. Reproduction, excerpting, copying, mirroring, or any other use without permission is prohibited.