Friday, July 2, 2021

AI Weekly: WHO outlines steps for creating inclusive AI health care systems

Where does your company fall on the AI adoption curve? Take our AI survey to find out.

This week the World Health Organization (WHO) released its first global report on AI in health care, along with six guiding principles for its design, development, and deployment. The result of two years of consultations with WHO-appointed experts, the report warns against overestimating the benefits of AI while showing how it could be used to improve early detection of disease, support clinical care, and more.

The healthcare industry produces enormous amounts of data. An IDC study estimates that the amount of health data generated annually, which exceeded 2,000 exabytes in 2020, will continue to grow at a rate of 48% year over year. The trend has enabled significant advances in AI and machine learning that rely on large data sets to make predictions ranging from the capacity of hospital beds to the presence of malignant tumors in MRIs. But unlike other areas where AI has been applied, the sensitivity and scope of health data make collecting and using that data a daunting challenge.
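For a sense of scale, here is a quick back-of-the-envelope projection based on the IDC figures quoted above (a 2,000-exabyte baseline in 2020 and 48% compound annual growth); the numbers are illustrative arithmetic, not an IDC forecast:

    # Illustrative projection only, assuming the IDC estimate above:
    # ~2,000 exabytes of health data generated in 2020, growing ~48% per year.
    base_exabytes = 2_000
    growth_rate = 0.48

    for year in range(2020, 2026):
        volume = base_exabytes * (1 + growth_rate) ** (year - 2020)
        print(f"{year}: ~{volume:,.0f} EB generated")

At that rate, annual volume roughly doubles every two years, passing 14,000 exabytes by 2025.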

The WHO report acknowledges this, noting that the opportunities AI offers come with risks. There is the harm that bias encoded in algorithms can cause patients, communities, and caregivers. Systems trained primarily on data from people in high-income countries, for example, may not perform well for patients in low- and middle-income countries. Furthermore, unregulated use of AI could subordinate patients’ rights to commercial interests or to governments engaged in surveillance.

The datasets used to train AI systems that can predict the onset of diseases such as Alzheimer’s, diabetes, diabetic retinopathy, breast cancer, and schizophrenia come from a number of sources. However, in many cases, patients are not fully aware that their information is included. In 2017, UK regulators concluded that the Royal Free London NHS Foundation Trust, a division of the UK’s National Health Service based in London, had provided Google DeepMind with data from 1.6 million patients without their consent.

Regardless of the source, this data can contain biases that perpetuate inequalities in AI algorithms trained to diagnose disease. A team of British scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, which means that algorithms for diagnosing eye diseases are less reliable for racial groups from underrepresented countries. In another study, researchers from the University of Toronto, the Vector Institute, and MIT showed that widely used chest X-ray datasets contain racial, gender, and socioeconomic biases.
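One straightforward way to surface this kind of imbalance is to tally a training set's records by region or demographic group before any model is trained. The snippet below is a minimal, hypothetical sketch (the column names and values are invented for illustration) using pandas:

    import pandas as pd

    # Hypothetical patient-level metadata for an imaging dataset; in practice
    # this would be loaded from the dataset's own records.
    records = pd.DataFrame({
        "patient_id": range(8),
        "region": ["North America", "Europe", "China", "North America",
                   "Europe", "China", "Sub-Saharan Africa", "Europe"],
    })

    # Share of training records contributed by each region.
    composition = records["region"].value_counts(normalize=True)
    print(composition)

    # Flag regions that fall below an arbitrary representation threshold.
    underrepresented = composition[composition < 0.15]
    print("Underrepresented regions:", list(underrepresented.index))

An audit like this won't fix a skewed dataset, but it makes the skew visible before it is baked into a diagnostic model.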

To further illustrate the point, Stanford researchers found that some AI-powered medical devices approved by the U.S. Food and Drug Administration (FDA) are prone to data drift and bias against underrepresented patients. Even as AI is incorporated into more and more medical devices (the FDA approved over 65 AI devices last year), the accuracy of these algorithms is not necessarily well studied, because they are not evaluated through prospective studies.
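Data drift of this kind can be monitored with simple statistical checks that compare the inputs a deployed device actually sees against the data it was validated on. A minimal sketch, assuming a single scalar feature (say, patient age) is available from both cohorts; the numbers here are simulated for illustration:

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Simulated feature values (e.g., patient age) from the validation cohort
    # versus the population the device encounters after deployment.
    validation_ages = rng.normal(loc=55, scale=10, size=5_000)
    deployment_ages = rng.normal(loc=48, scale=14, size=5_000)  # shifted

    # Two-sample Kolmogorov-Smirnov test: a small p-value suggests the deployed
    # population no longer matches the data the device was evaluated on.
    stat, p_value = ks_2samp(validation_ages, deployment_ages)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
    if p_value < 0.01:
        print("Possible data drift: re-evaluate the model on current data.")

Checks like this do not replace prospective evaluation, but they can flag when a device's assumptions about its patient population have stopped holding.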

Experts argue that prospective studies, which collect test data before deployment rather than concurrently with it, are particularly necessary for AI medical devices, because their actual use can differ from their intended use. For example, most computer-aided diagnostic systems are designed as decision-support tools rather than primary diagnostic tools. A prospective study might reveal that clinicians are misusing a device for primary diagnosis, leading to outcomes that differ from what was expected.

Beyond the dataset challenges, models lacking sufficient peer review can run into obstacles when deployed in the real world. Scientists at Harvard found that algorithms trained to recognize and classify CT scans can become biased toward the scan formats of certain CT machine manufacturers. Meanwhile, a whitepaper published by Google revealed challenges in implementing an eye disease-predicting system in Thai hospitals, including issues with scan accuracy.
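Confounds like the CT-scanner effect can often be caught by stratifying a model's performance by acquisition metadata instead of reporting a single overall score. A minimal sketch, with simulated labels, model scores, and manufacturer tags standing in for real scan metadata:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)

    # Simulated evaluation set: ground truth, model scores, and the scanner
    # manufacturer recorded for each study.
    y_true = rng.integers(0, 2, size=1_000)
    scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1_000), 0, 1)
    manufacturer = rng.choice(["VendorA", "VendorB", "VendorC"], size=1_000)

    # A single overall AUC can hide large per-vendor gaps, so report both.
    print(f"Overall AUC: {roc_auc_score(y_true, scores):.3f}")
    for vendor in np.unique(manufacturer):
        mask = manufacturer == vendor
        print(f"{vendor}: AUC = {roc_auc_score(y_true[mask], scores[mask]):.3f}")

If the per-vendor numbers diverge sharply, the model may be keying on scanner format rather than pathology.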

To limit the risks and maximize the health benefits of AI, the WHO recommends taking steps to protect human autonomy; ensure transparency, explainability, and intelligibility; foster responsibility and accountability; and work toward inclusiveness and equity. The recommendations also cover promoting human well-being, safety, and the public interest, as well as AI that is responsive and sustainable.

The WHO says that redress should be available to people affected by decisions based on algorithms, and that designers should “continuously” assess AI applications to determine whether they meet expectations and requirements. The WHO also recommends that both governments and companies address workplace disruptions caused by automated systems, including by training health workers to adapt to the use of AI.

“AI systems should … be carefully designed to reflect the diversity of socio-economic and health-care settings,” the WHO said in a press release. “They should be accompanied by training in digital skills, community engagement, and awareness-raising, especially for millions of healthcare workers who will require digital literacy or retraining if their roles and functions are automated, and who must contend with machines that could challenge the decision-making and autonomy of providers and patients.”

As new examples of problematic AI in health care emerge, from widely adopted but untested algorithms to skewed dermatological datasets, it is becoming critical that stakeholders adhere to guidelines like those outlined by the WHO. Doing so would not only build trust in AI systems but could also improve care for the millions of people who may be exposed to AI-powered diagnostic systems in the future.

“Machine learning is really a powerful tool when it is properly designed, when problems are properly formalized and methods are identified that really provide new insights into understanding these diseases,” Mihaela van der Schaar, a Turing Fellow and professor of machine learning, AI, and health at the University of Cambridge and UCLA, said during a keynote at the ICLR conference in May 2020. “Of course, we are at the beginning of this revolution, and there is still a long way to go. But it’s an exciting time. And it is an important time to focus on such technologies.”

For AI coverage, send news tips to Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thank you for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat




source https://dailyhealthynews.ca/ai-weekly-who-outlines-steps-for-creating-inclusive-ai-health-care-systems/
