AI Falters In Identifying Depression Cues From Facebook Posts Of Black Americans: Report

Benzinga · 03/29 07:51

Artificial intelligence (AI) may not be as effective in detecting signs of depression in the social media posts of Black Americans as it is in those of their white counterparts, according to a new study.

What Happened: The study, conducted by U.S. researchers, found that an AI model trained to detect depression was more than three times less predictive when applied to Black users of Meta Platforms Inc.'s (NASDAQ:META) Facebook than to white users.

The findings were published in the Proceedings of the National Academy of Sciences.

Previous research suggested that people who frequently use first-person pronouns and certain categories of words are at a higher risk for depression. However, this study found that these language associations were related to depression exclusively for white individuals.

"Race seems to have been especially neglected in work on language-based assessment of mental illness," the study said.

Lead author Sharath Chandra Guntuku of the Center for Insights to Outcomes at Penn Medicine stated, "We were surprised that these language associations found in numerous prior studies didn't apply across the board."

Guntuku acknowledged that social media data cannot be used to diagnose a patient with depression, but said it could be used to assess the depression risk of an individual or group.

Subscribe to the Benzinga Tech Trends newsletter to get all the latest tech developments delivered to your inbox.

Why It Matters: Meta's chief AI scientist Yann LeCun, often called one of the "godfathers of AI," previously said it is "absolutely not" possible to create an unbiased AI system. Venture capitalist Marc Andreessen has likewise warned of "bias" in AI chatbots.

Indeed, this is not the first time AI has been found to exhibit bias.

For instance, a Harvard University article from 2020 highlighted the racial bias in face recognition technology, particularly against Black Americans.

The study's findings also underscore the importance of inclusive technology, such as Google's Real Tone, which aims to bring accuracy to cameras and the images they produce, particularly for diverse skin tones.

Check out more of Benzinga's Consumer Tech coverage by following this link.

Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

Photo courtesy: Shutterstock
