Baker Tilly's Insight on How AI Is Revolutionizing the Healthcare and Life Sciences Industry

Accesswire ·  03/25 09:10

NORTHAMPTON, MA / ACCESSWIRE / March 25, 2024 / Baker Tilly:

Authored by Arun Parekkat

The use of artificial intelligence (AI) in the life sciences, including applications such as machine learning (ML), in which software is trained to form its own decision-making criteria from previous examples of a particular task, has the potential to transform how we improve human health and conduct medical research. According to Stanford University's AI Index Report 2023, medical and healthcare was the AI focus area that attracted the most investment in 2022, at $6.1 billion.

To better understand the potential for how AI can revolutionize the life sciences industry, let's first explore the concept of AI.

What is AI? A definitional treatment

AI is a term that most of us are now familiar with, but its interpretation varies widely. At a high level, the term "artificial intelligence" encompasses the use of technology to perform tasks typically associated only with human beings, such as learning and decision making. Interpretations range from "strong AI," or artificial general intelligence (AGI), in which a machine would have intelligence equivalent to a human's, to "weak AI," the version we are most familiar with from voice assistants and driverless cars, in which software is trained to perform focused, specific tasks.

According to the AI Act, the EU's 2021 proposed regulatory framework for AI, AI is defined as "software that is developed with techniques and approaches that can generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with." Annex I of the act outlines approaches such as ML, explicit logic-based approaches, as well as more general statistical techniques.

The current U.S. Food and Drug Administration (FDA) definition of AI describes it as the "science and engineering of making intelligent machines, especially intelligent computer programs." Under this definition, AI can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on if-then statements, and ML. The FDA also states that ML is a subset of AI techniques that can be used to design and train software algorithms to learn from and act on data. Software developers can use ML to create an algorithm that is "locked," so that its function does not change, or "adaptive," so that its behavior can change over time based on new data.
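
The locked-versus-adaptive distinction can be sketched in a few lines of code. This is a toy illustration, not any actual regulated algorithm: a locked model's decision rule is frozen after training, while an adaptive one folds each new observation back into the model.

```python
# Toy sketch of the FDA's "locked" vs. "adaptive" algorithm distinction.
# The classifier, threshold rule and data below are all hypothetical.

class ThresholdClassifier:
    """Flags a reading as abnormal if it exceeds a learned threshold."""

    def __init__(self, adaptive=False):
        self.adaptive = adaptive
        self.readings = []
        self.threshold = None

    def fit(self, readings):
        # Toy decision rule: threshold = mean of the training readings.
        self.readings = list(readings)
        self.threshold = sum(self.readings) / len(self.readings)

    def predict(self, reading):
        is_abnormal = reading > self.threshold
        if self.adaptive:
            # Adaptive: update the model with each new observation.
            self.readings.append(reading)
            self.threshold = sum(self.readings) / len(self.readings)
        return is_abnormal

locked = ThresholdClassifier(adaptive=False)
adaptive = ThresholdClassifier(adaptive=True)
for model in (locked, adaptive):
    model.fit([1.0, 2.0, 3.0])  # both start with threshold 2.0

for reading in [10.0, 10.0, 10.0]:
    locked.predict(reading)
    adaptive.predict(reading)

print(locked.threshold)    # unchanged: 2.0
print(adaptive.threshold)  # drifted upward as new data arrived: 6.0
```

The regulatory interest follows directly from the code: the locked model behaves identically forever, while the adaptive model's outputs depend on everything it has seen since market release.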

What does AI mean in medical technology (medtech) and in pharmaceutical terms?

Given the significant expenditure associated with drug development and delivery for burgeoning global populations, it is unsurprising that AI is sought as a tool to increase productivity and efficiency in healthcare. We must go back to 1995 to find the first ML technology approved by the FDA. Since then, more than 500 AI-led software devices have gained 510(k) clearance, aiding image analysis and the diagnosis of diseases such as cancer, or seeking to optimize the delivery of surgery and post-operative care for orthopedic patients receiving an implant such as an artificial hip.

Within pharmaceutical research and development, AI is being used at multiple points across the discovery and development pipeline. The first drugs to have been developed "in silico," or by computer, are now entering human clinical trials. In its broadest sense, AI is being used to improve the identification of candidate molecules and to aid the recruitment and retention of patients for Phase I to III clinical trials. For marketed drugs, AI technologies such as large language model (LLM) chatbots are being used as symptom assessment tools to improve awareness of rare diseases amongst the public and primary care providers.

Nature of managing the risk of AI - an assessment

While the promise of AI in life sciences is undeniable, potential issues and ethical considerations warrant close attention. Data privacy and security are a concern, as AI relies on vast amounts of sensitive patient data; ensuring the confidentiality and protection of this information is paramount. The potential for data breaches or unauthorized access poses significant risks to patient privacy and could erode public trust in AI-driven healthcare solutions.

Another ethical challenge pertains to the "black box" nature of some AI algorithms, especially given how important it is in the life sciences industry to be able to consistently explain the clinical benefit and value of a product with clear, understandable evidence. Complex ML models may arrive at conclusions without providing transparent explanations for their decisions. In the medical world, where accountability and transparency are crucial, potential biases and a lack of interpretability could pose ethical problems. Clinicians and regulators need to understand how AI arrives at its conclusions to ensure patient safety and maintain high ethical standards.

Several solutions have been proposed to address the data privacy issues that may arise from AI in healthcare and life sciences. These include de-identification of data, controlling data access based on patient consent, and tracking usage purposes over time. Additional strategies such as encryption, differential privacy (sharing group-level attributes without revealing individual ones), federated learning (avoiding centralized data aggregation) and data minimization (limiting personal data to the application's scope) may also address the concern.
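
As an illustration of one of these strategies, differential privacy can be sketched with the classic Laplace mechanism. The record layout, function names and epsilon value below are hypothetical: a group-level count is released with calibrated noise, so any single patient's record has only a bounded effect on the published output.

```python
import math
import random

def laplace_noise(scale):
    """Sample a Laplace(0, scale) random variable via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon=1.0):
    """Release a patient count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one patient
    changes the count by at most 1), so Laplace noise with scale
    1 / epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical patient records: (patient_id, has_condition)
records = [(i, i % 3 == 0) for i in range(300)]
noisy = private_count(records, lambda r: r[1], epsilon=0.5)
print(round(noisy))  # close to the true count (100), but never exact by design
```

Smaller epsilon values add more noise and give stronger privacy, at the cost of less accurate group statistics; that trade-off is the core design decision of the technique.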

A key development has been the European Union's AI Act, a first-of-its-kind regulatory framework in which AI systems are analyzed and classified according to the risk they pose to users. It creates a risk pyramid: an outright ban for certain AI applications, stringent requirements for AI systems classified as high risk, and a more limited set of (transparency) requirements for lower-risk AI applications. The stated goal of the EU's AI Act is "a balanced and proportionate approach limited to the minimum necessary requirements to address the risks linked to AI without unduly constraining technological development."
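
The risk pyramid can be summarized as a simple lookup. The tier names follow the act's general scheme; the example applications are illustrative assumptions, not quotations from this article.

```python
# Sketch of the AI Act's risk pyramid (illustrative only; not legal guidance).
# Each tier maps to the kind of obligation the act attaches to it.
RISK_TIERS = {
    "unacceptable": {
        "obligation": "prohibited outright",
        "examples": ["social scoring of citizens"],
    },
    "high": {
        "obligation": "stringent requirements, e.g., conformity assessment",
        "examples": ["AI used as a safety component of a medical device"],
    },
    "limited": {
        "obligation": "transparency requirements",
        "examples": ["chatbots that must disclose they are AI"],
    },
}

def obligations_for(tier):
    """Look up what the act requires for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligations_for("unacceptable"))  # prints "prohibited outright"
```

Under this scheme, most AI-enabled healthcare and medtech products would sit in the high-risk tier, which is why the conformance measures discussed below matter.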

Risk assessments in AI-enabled healthcare products

While there is considerable interest from clinicians and regulators in understanding the degree to which AI is utilized within a product, particularly where it concerns the long-term efficacy and accuracy of 'adaptive' technologies, fundamental product development and engineering principles apply equally to devices that use 'true' AI and to those that do not.

Assessing whether a product uses AI under the various regulations in place and under consideration across the globe, and determining the measures needed to robustly assess its readiness to meet market approval requirements, will need to cover the underlying software prediction models, the data used to train and validate them, and the way the product is delivered to relevant end users (whether healthcare professionals or individual patients) within a clearly described patient journey. Recent FDA guidance has provided greater clarity around how manufacturers can safely build adaptive products that have the capacity to learn and improve as they are exposed to increasing volumes of data once placed on the market.

Developers can take comfort in the fact that clear intended-use definitions, robust evidence-based methodologies and total product lifecycle approaches (including Agile), applied during development and post-launch, will remain the cornerstones of the regulatory standard.

The task of identifying and assessing risks that could arise specifically from AI technologies is not trivial. A follow-up paper will explore this domain in more detail, with examples that can help manufacturers ensure safety and efficacy, as well as benefits unique to this technology such as personalization and autonomy, and mitigations for bias, including healthcare inequalities.

For more insights, visit Baker Tilly's healthcare & life sciences page.

View additional multimedia and more ESG storytelling from Baker Tilly on 3blmedia.com.

Contact Info:

Spokesperson: Baker Tilly

SOURCE: Baker Tilly
