Google System Could Improve Breast Cancer Detection

The study is the latest to prove the accuracy of screening for the pathology that affects one in eight females globally. | Photo: Reuters

Published 1 January 2020
Opinion

The AI system identified cancers with a similar degree of accuracy to expert radiologists while reducing the number of false-positive results.

Research conducted by Alphabet Inc.’s DeepMind AI unit together with Google Health has shown the potential of artificial intelligence in the early detection of breast cancer in women, according to an article published Wednesday in the journal Nature.

The study is the latest to prove the accuracy of screening for the pathology that affects one in eight females globally. Radiologists miss about 20 percent of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false-positive result.
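
That "half of all women" figure follows from how per-exam false positives compound across repeat screenings. As a back-of-envelope illustration in Python, assuming a hypothetical per-exam false-positive rate of 7 percent (a number chosen for illustration, not taken from the article), ten annual screens leave roughly a coin-flip chance of at least one false alarm:

```python
# Assumed, illustrative per-exam false-positive rate; not a figure from the article.
per_exam_fp = 0.07
screens = 10  # one screening per year for a decade

# Probability of at least one false positive across independent screens.
p_at_least_one = 1 - (1 - per_exam_fp) ** screens
print(f"{p_at_least_one:.0%}")  # prints 52%
```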

The research team, which included researchers at Imperial College London and Britain’s National Health Service, trained the system to identify breast cancer on tens of thousands of mammograms and then compared the system’s performance with the actual results from a set of 25,856 mammograms in the United Kingdom and 3,097 from the United States.

As a result, the AI system identified cancers with a similar degree of accuracy to expert radiologists while reducing the number of false-positive results by 5.7 percent in the U.S.-based group and by 1.2 percent in the British-based group.
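
To make that comparison concrete, the sketch below shows, with entirely hypothetical numbers, how sensitivity and false-positive rate would be computed for a human reader and an AI reader on the same exams against biopsy-confirmed outcomes. The prevalence and error rates are assumptions for illustration; nothing here comes from the study's data.

```python
import numpy as np

def reader_metrics(flagged, has_cancer):
    """Sensitivity and false-positive rate for binary screening decisions.

    flagged:    boolean array, True where the reader recalled the exam
    has_cancer: boolean array, True where cancer was later confirmed
    """
    flagged = np.asarray(flagged, dtype=bool)
    has_cancer = np.asarray(has_cancer, dtype=bool)
    sensitivity = (flagged & has_cancer).sum() / max(has_cancer.sum(), 1)
    false_positive_rate = (flagged & ~has_cancer).sum() / max((~has_cancer).sum(), 1)
    return sensitivity, false_positive_rate

# Purely illustrative cohort: assumed ~1% prevalence and made-up reader error rates.
rng = np.random.default_rng(0)
has_cancer = rng.random(25_856) < 0.01
human = has_cancer | (rng.random(25_856) < 0.10)  # human reader, ~10% false positives
model = has_cancer | (rng.random(25_856) < 0.05)  # AI reader, ~5% false positives

_, human_fpr = reader_metrics(human, has_cancer)
_, model_fpr = reader_metrics(model, has_cancer)

# The reductions reported in the paper are absolute percentage-point differences.
print(f"false-positive reduction: {100 * (human_fpr - model_fpr):.1f} points")
```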

Connie Lehman, chief of the Breast Imaging Department at Harvard’s Massachusetts General Hospital, explained that using computers to improve cancer diagnostics is decades old and that computer-aided detection (CAD) systems are commonplace in mammography clinics, yet CAD programs have not improved performance in clinical practice.

Those CAD programs were trained to identify things human radiologists can see, whereas with artificial intelligence, computers learn to spot cancers based on the actual results of thousands of mammograms. This has the potential to “exceed human capacity to identify subtle cues that the human eye and brain aren’t able to perceive,” Lehman added.
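
Lehman's distinction can be sketched in code. The following is a rough, hypothetical illustration rather than the study's model (which used deep neural networks): a CAD-style scorer applies fixed rules to cues a human designer chose, while a learned model's parameters are fitted directly to confirmed outcomes. A simple logistic model stands in for the learned approach only to keep the contrast visible in a few lines.

```python
import numpy as np

# CAD style: hand-picked cues scored with fixed, designer-chosen weights.
def cad_score(image):
    brightness = image.mean()  # stand-in for a "density" cue a designer encoded
    contrast = image.std()     # stand-in for a "mass/calcification" cue
    return 0.6 * brightness + 0.4 * contrast  # weights fixed by hand, never learned

# Learned style: parameters fitted to confirmed outcomes, not human-chosen cues.
def train_on_outcomes(images, outcomes, lr=0.1, steps=500):
    X = images.reshape(len(images), -1)  # raw pixels in, no hand-crafted features
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted cancer probability
        err = p - outcomes
        w -= lr * (X.T @ err) / len(outcomes)   # weights shaped by actual results
        b -= lr * err.mean()
    return w, b

# Tiny synthetic demo: 100 fake 8x8 "mammograms" with made-up labels.
rng = np.random.default_rng(0)
images = rng.random((100, 8, 8))
outcomes = (rng.random(100) < 0.3).astype(float)
w, b = train_on_outcomes(images, outcomes)
```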

Nevertheless, the study has some limitations: most of the tests were done using the same type of imaging equipment, and the U.S. group contained a large number of patients with confirmed breast cancers.

The Hidden Risks Behind AI in Cancer Research

At the same time, experts and other scientists worry that the private expansion of healthcare research through AI systems might lead to ethical and legal violations, especially given the vast amount of data needed to operate such systems.

“It is critical that health systems and clinicians require AI providers to demonstrate explicitly what values are encoded in the development choices they have made, including the goals they have set for algorithms,” according to another academic article to be published in February 2020 in the scientific journal The Breast.

The experts, from the National Health and Medical Research Council and the National Breast Cancer Foundation, among other organizations, argue that although AI can bring positive results, a legal and ethical framework must be set up first.

“Once artificial intelligence becomes institutionalized, it may be difficult to reverse: a proactive role for government, regulators and professional groups will help ensure introduction in robust research contexts and the development of a sound evidence base regarding real-world effectiveness,” the document reads.

For the physicians, the worry comes from the use of non-explainable AI, meaning “black box” algorithms whose reasoning cannot be examined or made public; such systems, they argue, should be prohibited in healthcare, where medicolegal and ethical requirements to inform patients are already high.

Since building useful AI requires access to vast quantities of high-quality data, the research argues that this data sharing creates significant opportunities for data breaches, harm, and a failure to deliver public goods in return, risks that are exacerbated by a private market.

Significant conflicts of interest, for both clinicians and proprietary developers, also have the potential to skew AI use in several ways. On one side, doctors and professional bodies might see their own interests threatened by better and more accurate technologies and might seek to restrict or eliminate such advances. On the other side, current financial interests in AI are hyping its potential benefits, and tying radiology infrastructure to particular proprietary algorithms risks creating intractable conflicts of interest in the future.

“Rather than simply accepting what is offered, detailed public discussion about the acceptability of different options is required,” the experts conclude.
