Rofo 2019; 191(01): 73-78
DOI: 10.1055/a-0808-7772
100. Deutscher Röntgenkongress
© Georg Thieme Verlag KG Stuttgart · New York

Artificial Intelligence with Radiology as a Trailblazer for Super-Diagnostics: An Essay

Michael Forsting

Correspondence

Prof. Dr. med. Michael Forsting
Director of the Institute of Diagnostic and Interventional Radiology and Neuroradiology, Medical Director of Central IT of Universitätsmedizin Essen
Universitätsklinikum Essen
Hufelandstraße 55
45147 Essen

Publication History

Publication Date:
13 December 2018 (online)

 

There are currently two major trends in medicine. The first is digitalization: In radiology and conventional laboratory medicine, the daily routine has been digital for quite some time. Patient files, pathology, microbiology, and virology have been digitalized to varying degrees, but complete digitalization can be expected soon.

The second major trend in medicine is “personalization”: At present, this primarily refers to personalized pharmacotherapy which takes the individual physiological constitution as well as molecular-biological constellations into consideration.

If you look outside of the medical field, there is a further megatrend: Artificial intelligence (AI). For reasons that will be explained below, this trend has not yet properly arrived in the medical field but has become indispensable in industrial manufacturing and in the processing of large amounts of data. Facebook, Amazon and Google would not function without AI.

The challenge now is to implement digitalization and AI in medicine in such a way that personalized medicine becomes routine. Personalized medicine cannot become a reality without AI: The amount of data is so immense that it will no longer be possible to rely on the personal knowledge of a few experts.

Why an essay?

You were probably expecting a scientific review describing the possibilities for using AI in radiology and medicine. I selected the essay format because black swans cannot be convincingly presented on a scientific basis. The black swan is a symbol for the unexpected (if you would like to learn more about this topic, read the book “The Black Swan” by Taleb). Let’s look at the terror attacks on September 11, 2001 as an example of a black swan: Any security expert could have predicted this terror scenario in an academic paper. It was possible to hijack a plane. It was possible for a hijacker to have a pilot’s license. It was possible to switch off the airplane’s transponder so that its altitude, call sign, and speed were no longer being recorded. Even if a security expert had predicted this scenario in a paper, it probably would not have been accepted for publication. Thinking the unthinkable causes too great a disruption of our world view.

The automobile was also a black swan. The route from Mannheim to Pforzheim would have been traveled much faster by horse while avoiding the hassle of having to buy fuel at a pharmacy. An article regarding the potential benefits of the automobile would have been rejected by reviewers as implausible. Bertha Benz’s historic journey was the more important story.

Other black swans include the demise of Siemens’ telephone business, the disappearance of typewriter manufacturers from the market, the CDU-driven abandonment of nuclear power, the ascent of Google, and the downfall of Blackberry. What can we learn from this? Black swans are actually not all that rare. They are repeatedly underestimated by science and can turn the world upside-down.

OK, but we are in the medical field. Black swans are not taboo here either. Let’s take a look at the gastric and duodenal ulcer. It was considered psychosomatic until the mid-1980s. A hypothesis was developed, confirmation was sought, and eventually everyone believed it – until two Australian colleagues proved their theory of the bacterial origin of the ulcer in a self-experiment, in a thoroughly unmedical, almost essayistic manner. Or let’s talk about anorexia, a highly monomorphic disease caused, according to current understanding, by a combination of mental and social factors. At some point, we will find the true cause with the help of hypothesis-free research and AI.

By the end of this section you should be mentally prepared to recognize the existence of black swans and consider it possible for black swans to change medicine. And you should be prepared to accept an essay in a scientific journal.



What is the difference between CAD and AI?

With digitalization, radiology began developing algorithms to make diagnosis easier. Computer-aided diagnosis (CAD) is the keyword here. Many AI skeptics refer to the lack of breakthrough of CAD systems as justification for their skepticism.

As a rule, CAD systems require a hypothesis, and this hypothesis must then be mathematically modeled. Let’s use CAD systems for MR mammography as an example. The hypothesis is that contrast enhancement and washout are the decisive characteristics of breast cancer. Therefore, techniques were developed to clearly visualize these contrast kinetics on MR, standard populations were examined, and the differences between normal tissue and cancerous tissue were mathematically modeled. If the hypothesis is correct, good results can be achieved with such CAD systems.

An AI system, in contrast, learns without hypotheses: not only the contrast kinetics but also many other types of image data (T2, SWI, diffusion, everything the system contains) are fed to the algorithm. Clinical data, laboratory results, and genetic information can also be entered into the system for learning purposes. The algorithm then uses all of this data to determine the parameters that indicate or rule out breast cancer with the greatest probability. Contrast kinetics will certainly still play a role in the decision, but the algorithm will also find that other parameters are of prognostic importance, with varying degrees of weighting. Without a hypothesis.

My intention here is not to discuss the various forms of AI with keywords like “machine learning”, “deep learning”, “supervised”, and “unsupervised”. The point is that AI learns without a hypothesis. And, as I will discuss later, the quality of AI applications depends on the quality of the data used to train the system.
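To make this contrast concrete, here is a minimal sketch in Python on purely synthetic data: a fixed, hypothesis-driven rule that looks only at one contrast-kinetics feature, next to a learner that is given all features at once and determines their weighting itself. All feature names, thresholds, and data are hypothetical, not any real CAD or AI product.

    # A minimal sketch (not a clinical tool) contrasting the two approaches
    # on synthetic data. Feature names and thresholds are hypothetical.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n = 1000
    # Hypothetical per-lesion features: contrast kinetics plus other data.
    # Columns: washout, enhancement, T2 signal, ADC, lab marker.
    X = rng.normal(size=(n, 5))
    y = (0.8 * X[:, 0] + 0.5 * X[:, 3] + 0.3 * X[:, 4]
         + rng.normal(0, 0.5, n)) > 0

    # CAD-style rule: a fixed, hypothesis-driven model of kinetics only.
    def cad_rule(x):
        return x[0] > 0.5  # "strong washout implies malignancy"

    # AI-style: learn from all features at once, with no prior weighting.
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    cad_acc = np.mean([cad_rule(x) == yi for x, yi in zip(X, y)])
    print(f"rule-based accuracy: {cad_acc:.2f}")
    print(f"learned accuracy:    {clf.score(X, y):.2f}")
    print("learned feature weights:", np.round(clf.feature_importances_, 2))

The point of the sketch is only the division of labor: in the rule, a human encoded the hypothesis; in the learner, the weighting of washout against all other parameters emerges from the training data.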



Routine uses of AI in radiology

An advantage of computers is that they do not tire and can perform routine tasks in a highly reliable manner. What are some of these tasks in radiology? There are none? Well, in my opinion, counting MS plaques, determining ventricular size following subarachnoid hemorrhage (SAH), measuring liver metastases during treatment, assessing hand bone age, and measuring angles in scoliosis are not particularly exciting tasks. AI will quickly take over such routine tasks. All we need are algorithms trained to reliably detect the target organ and the lesion. And then the algorithms must be able to quantify. These applications are already available but largely still in the initial stages. OK, it’s not perfect, but we must remind ourselves that the first road trip was also slow. Ultimately, instead of a flowery description of MS plaques at various locations in the brain with or without subtle contrast enhancement, the finding will simply be “plus 19” (treatment is not going well) or “minus 4” (looking good). The radiologist can then concentrate on minimizing one of the most common errors in radiology – satisfaction of search: an aneurysm of the anterior communicating artery or a pituitary adenoma is overlooked because of all the plaque counting.

Another application that is currently being pursued in many places around the world is the evaluation of conventional chest X-rays. An incorrect endotracheal tube position, pacemaker electrodes, a central venous catheter, a pneumothorax, tuberculosis, and pneumonia in the middle lobe, as well as 40 other pathologies, can already be reliably detected. There are algorithms that can detect and automatically interpret 70 or even 90 of these pathologies. One problem with these applications is that they are currently poorly integrated into the workflow of the radiology department or the radiologist. But this can quickly improve with our help. Finally, a small company presented a product capable of diagnosing 40 typical pathologies on conventional chest X-rays. The algorithm was perfect for simple things (central venous catheter, endotracheal tube, etc.) but catastrophic in the case of pulmonary fibrosis. Why? The training data came from a third-class hospital. Therefore, another and much more serious problem with respect to establishing AI in radiology is the training data, as I will discuss in the following.
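As a toy illustration of the “plus 19”/“minus 4” idea, here is a hedged Python sketch that assumes a segmentation algorithm has already produced binary lesion masks for a baseline and a follow-up exam; the random masks below are placeholders, not a validated MS pipeline.

    # A toy sketch of lesion counting and change reporting on binary masks
    # (hypothetical segmentation output). Not a validated MS pipeline.
    import numpy as np
    from scipy import ndimage

    def count_lesions(mask: np.ndarray, min_voxels: int = 3) -> int:
        """Count connected components above a minimum size in a binary mask."""
        labeled, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
        return int(np.sum(np.asarray(sizes) >= min_voxels))

    # Placeholder baseline and follow-up masks (random toy volumes).
    rng = np.random.default_rng(1)
    baseline = rng.random((64, 64, 32)) > 0.995
    followup = rng.random((64, 64, 32)) > 0.993

    delta = count_lesions(followup) - count_lesions(baseline)
    print(f"lesion count change: {delta:+d}")  # e.g. "+19" or "-4"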

I will conclude this section with a brief discussion of screening. Imagine that China started a mammography screening program. It could never be run by radiologists. There are far too few radiologists and far too many women requiring screening. Of course, such screening in China would have to be performed by properly trained algorithms. And now imagine a lung screening program in the USA and Germany. What radiologist would enjoy spending the day looking at screening CT scans of the lung? During a presentation of AI applications at the ECR 2018, a discussion was held as to whether it would be better to use the “AI plus 1 radiologist” principle instead of the “four eyes principle” with 2 radiologists in mammography screening. In other words, the black swan was discussed. Of course, we can resist. But will that make a difference?



Added diagnostic value of AI: Radiomics

Image data may be able to provide more information than previously thought – i.e., a black swan. However, the amount of data is so large that this additional information is missed by the radiologist. In radiomics, an algorithm is again trained with known data. Let’s use glioblastomas as an example. The algorithm receives all MR image data (from native T1 to T2, SWI, DWI, and contrast-enhanced images) as well as molecular-genetic information. The algorithm can then detect, in unknown MR datasets and without the administration of contrast agent, not only the molecular differentiation of the tumor but also the extent of the blood-brain barrier dysfunction. Utopia? No, it’s already a reality. Not everywhere, but in some departments, yes.
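What the simplest form of such a radiomics pipeline could look like is sketched below, again in Python and on purely synthetic data: plain first-order intensity features are extracted from each MR sequence within the tumor mask, and a classifier is trained against a molecular label. Real radiomics pipelines use far richer feature sets (texture, shape, wavelets); everything here is illustrative.

    # A minimal radiomics-style sketch on synthetic data: first-order
    # features per MR sequence inside a tumor mask -> molecular label.
    import numpy as np
    from scipy.stats import skew
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    def first_order_features(volume, mask):
        """Mean, std, skewness, and 90th percentile inside the mask."""
        vals = volume[mask]
        return [vals.mean(), vals.std(), skew(vals), np.percentile(vals, 90)]

    def features_for_case(sequences, mask):
        # Concatenate features across all sequences (T1, T2, SWI, DWI, ...).
        return np.concatenate([first_order_features(v, mask) for v in sequences])

    # Synthetic cohort: 200 cases, 4 sequences each, binary molecular label.
    X, y = [], []
    for _ in range(200):
        label = rng.integers(0, 2)
        mask = np.zeros((32, 32, 16), dtype=bool)
        mask[12:20, 12:20, 6:10] = True  # toy tumor region
        seqs = [rng.normal(loc=label * 0.3 * (s + 1), size=mask.shape)
                for s in range(4)]
        X.append(features_for_case(seqs, mask))
        y.append(label)

    clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
    print("training accuracy:", clf.score(np.array(X), np.array(y)))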

Let me give you a few more examples: We trained an algorithm to detect with over 95% probability whether a patient with cervical cancer has organ metastases simply by viewing the tumor on PET-MRI scans. With appropriate training data, AI can detect the molecular-biological typing of tumors without biopsy, predict whether a tumor will respond to radiotherapy, and predict the probability of response to chemotherapies. All just a Utopian dream? At this point allow me to briefly deviate from the essay style and refer to the list of references: It provides an alphabetized list of various examples of the use of AI in radiology [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21] [22] [23] [24] [25] [26] [27] [28] [29] [30] [31] [32] [33] [34] [35] [36] [37] [38] [39].



Why has AI not yet become properly established in medicine?

AI is already the standard in many areas of manufacturing and robotics. But we only recently started talking about AI in medicine. Why is the medical field lagging behind when it comes to AI? Perhaps because medicine (including radiology) is much too complicated? Certainly not. Pattern recognition – and this applies to more than just radiology, as I will explain later – is a strength of AI. Military applications of AI are largely based on the recognition of patterns. A fundamental requirement for training intelligent algorithms is the use of valid training datasets. If we take a look at the players on the AI scene (Google, Microsoft, Amazon, etc.), it becomes clear that they do not actually have access to valid medical datasets. Google probably knows better than any public health officer where the next flu epidemic will occur because people increasingly use the search engine to search for symptoms. However, valid radiology datasets are not so easy to obtain. But what about Siemens, GE, and Philips? OK, they could theoretically look at numerous CT and MRI datasets (for our purposes here we will disregard the issue of data protection) but, put simply, the attached diagnoses are not necessarily correct. Neither the “old economy” nor the “new economy” can introduce AI in medicine (and radiology) without some effort.

OK then: Who has the necessary knowledge and materials? Textbook publishers would be one answer. For many years now, publishers have required authors to illustrate a disease with more than one image. Authors are expected to illustrate an epidural hematoma with a 3D dataset. Therefore, publishers have textbook knowledge, also known as “ground truth” in IT speak. We don’t need “big data” to train intelligent algorithms but rather only valid data and well-annotated images (N > 1). However, publishers probably simply do not have enough data to train an algorithm, for example, to determine bone age in children. And that brings us to the solution: Large hospitals and especially university hospitals have such data. Unfortunately, that data is disorganized, poorly sorted, and sometimes even incorrect. Therefore, we need something like “data validation” departments in large hospitals to check data (radiology, lab, pathology, electronic patient file, etc.) for validity and add it to a data pool with a structure that is accessible for AI. Not possible? We’ll see. When hospitals were still reimbursed by health insurance based on the number of occupied beds, we didn’t need any coding departments.

And now? DRG made it possible. OK, you can’t forget about the economic pressure. True. But AI creates economic opportunity! Imagine that you have significant data regarding prostate cancer and you train an algorithm with image data, lab results, and molecular-biological, genetic, and pathological data. And then you also have the follow-up data from hundreds of patients. If there is a pattern for the benign and malignant prostate cancer variants, AI will find it in the sea of data. And then you have an algorithm that no longer contains patient data but only the results of the training and can be sold as a product together with a strategic partner. Great idea, but what if everyone does it? We can’t all do everything! We could do the “liver” but not “rheumatism”. Everyone has their area of expertise. Moreover, there is more than one company manufacturing cars and MRI units. In the end, one product is perhaps better integrated into the workflow, another is faster, the third is trained with a different number of parameters and a varying degree of data validation. Therefore, there are sufficient features that differentiate individual applications from one another. And independent of economic opportunity: AI will be a major research area in medicine and radiology. A noteworthy secondary effect for university hospitals.

One last thing to conclude this section: Above I said that Google and Amazon have difficulty obtaining valid medical datasets. So far this is true. However, in the future, both of these companies could operate their own hospitals, perhaps initially only for employees and only on an outpatient basis. Then they’d have data structured according to their needs. And the health care market is so enormous that the Googles of this world are going to enter the market. You don’t think anyone will go to an Amazon hospital because data protection could be a problem? How many people buy from Amazon every day?
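What might a single entry in such a validated, AI-accessible data pool look like? A hedged sketch follows; the record layout and every field name are illustrative assumptions, not an existing standard.

    # A sketch of what a "data validation" department could produce: one
    # structured, AI-readable record per case. All field names illustrative.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ValidatedCase:
        case_id: str
        modality: str                  # e.g. "MR", "CT", "CR"
        image_uri: str                 # de-identified image location
        diagnosis_code: str            # coded ground-truth diagnosis (e.g. ICD-10)
        annotation_uri: Optional[str]  # segmentation / bounding boxes, if any
        validated_by: str              # who confirmed the ground truth
        lab_results: dict = field(default_factory=dict)
        genetics: dict = field(default_factory=dict)

    record = ValidatedCase(
        case_id="0001", modality="CR", image_uri="s3://pool/0001.dcm",
        diagnosis_code="J18.9", annotation_uri=None,
        validated_by="thoracic board",
    )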



Super diagnostics

Can you now imagine the major effect AI will have on radiology? It will independently handle boring tasks and, with radiomics, usher in a new era from detection to prediction. Let’s take a quick look at other diagnostic areas: AI will learn to look at the microbiologist’s petri dish and identify that the dish contains streptococci. And AI will learn to predict resistance to HIV medication. Digital pathology is ultimately like radiology with color – pattern recognition as well. AI will learn the routines quickly. “Pathomics” will be the term for the deeper analyses. AI will take genetic analysis to another level. And in the future, it will be able to diagnose diseases from certain constellations of laboratory results.

In a nutshell: We will soon have AI in all diagnostic disciplines and the correct personalized (see above) treatment decision will increasingly come from these super diagnostic centers. Is your head spinning? Data validation department and super diagnostic center.

Too much? But if you already have a data validation department, the step to becoming a super diagnostics (SD) center is very small. It needs to be structured, but in a way it is like a tumor board, albeit a scaled-down version, because some diagnoses and treatment recommendations will be made automatically, e.g., the designated standard therapy will always be recommended for a glioblastoma with a specific mutation. Of course, this applies not only to tumors but also to a broad range of diseases. Who can create such super diagnostic centers? Only university hospitals, because as a rule only they provide all diagnostic disciplines and are thus able to merge the data in a structured manner. Perhaps super diagnostics will become a main feature of university medicine.



Ethical and legal dimension of AI

When I occasionally give a lecture on AI, there are not infrequently people in the audience who have studied both medicine and law, and there is always someone who raises a warning finger and mentions the word “ethics”.

Let’s discuss ethics. We agree that medicine always has an ethical dimension. Therefore, it is important to determine the specific ethical dimension of AI. If the topic of the presentation were “Making medicine better with more intelligence”, there wouldn’t be any ethical concerns. The issue arises because it is “artificial intelligence”. Let’s replace “artificial intelligence” with “computer intelligence”. Did anyone mention “ethics” when computed tomography replaced palpation of the liver? I can’t remember. In fact, I find it strange that an ethicist has not yet questioned the ethics of not using an AI technique when it can ensure faster and more reliable diagnoses. And now we need to talk about the law. Following one of my lectures, a judge sitting in the audience stood up and told me – well, not me personally, but certainly everyone with such heretical thoughts – that there would be major legal problems because judges would not be able to understand how AI determines a diagnosis. And what would happen if AI is wrong? Major consequences.

Time for some de-escalation. Google has been planning for some time to make online mammography interpretation possible. For one dollar. This is not yet a reality. Maybe it will cost 5 dollars. But it’s coming. Now imagine the following scenario: Two German radiologists interpret a mammogram as normal, and half a year later the woman is diagnosed with liver metastases of undiagnosed breast cancer. The husband of the patient then uploads the scans to Google, and the algorithm says that breast cancer was indeed present. Both radiologists and two reviewers stand by the initial assessment. And this happens 100 times. Of course, that would never happen because, based on the results of countless studies investigating this very issue, we radiologists would already be using the algorithm to support our findings. And then it would be the radiologists who do not use the algorithm who have a “serious legal problem”. Whether, and to what extent, judges understand the algorithm will not be the determining factor. The problem of misdiagnosis by the algorithm remains. But there will be fewer misdiagnoses, because the algorithm was always trained with expert knowledge. Not all physicians are experts in everything, and not all radiologists are experts. Let’s look at an area of medicine in which results have long been determined fully automatically: laboratory medicine. How do you determine that a machine is out of control? Quality control. And so the entire AI diagnostic process is subject to strict quality control: The system must constantly process and correctly diagnose image data and microbiological or pathological samples whose results are known.
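Borrowing from laboratory medicine, such quality control could be as simple as regularly re-running sentinel cases with known results and halting automated reporting when accuracy drifts below a control limit. A minimal sketch, with a placeholder model and an assumed threshold:

    # A sketch of lab-style quality control for a diagnostic model: run
    # sentinel cases with known answers and flag the system on drift.
    # The "model" and the control limit are placeholders.
    import numpy as np

    def qc_check(model, sentinel_inputs, sentinel_truth, limit=0.95):
        """Return (passed, accuracy) for one QC run on known-result cases."""
        preds = [model(x) for x in sentinel_inputs]
        acc = float(np.mean(np.array(preds) == np.array(sentinel_truth)))
        return acc >= limit, acc

    # Placeholder "model" and sentinel set for illustration only.
    model = lambda x: x > 0.5
    inputs = np.linspace(0, 1, 40)
    truth = inputs > 0.5

    passed, acc = qc_check(model, inputs, truth)
    if not passed:
        print(f"QC FAILED: accuracy {acc:.2f} below limit; halt reporting")
    else:
        print(f"QC passed: accuracy {acc:.2f}")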

In summary: AI can have an ethical component but only if it is not used. And the law books don’t need to be rewritten.



Will only “technical areas” be affected by AI?

At the beginning of the year, I gave a lecture in a big city, and the moderator, a neurologist and psychiatrist, introduced my presentation with a few glowing remarks about the value of narrative-based medicine and the impending demise of radiology as a result of AI. What can I say? I was well prepared. Why are we primarily talking about radiology with respect to AI? Because we have been digital for a long time and have data that AI can read. If you approach AI and medicine in an almost philosophical manner, it is necessary to ask where AI will provide the greatest added value. The answer is: where the most mistakes are made. And relatively few mistakes are made in radiology! This is really true. When we perform a CT or MRI examination of the liver and find a 3-cm lesion, then a lesion of approximately this size will in fact be present. Of course, the differential diagnosis can be wrong depending on the radiologist’s qualifications, but we know there is a problem in the liver. Most errors occur in narrative-based medicine. This usually causes a stir among the neurologists and psychiatrists in the audience.

Let’s look at the reality: A woman in her mid-50s has back pain and visits a general practitioner, an orthopedist, and then a physiotherapist. In this scenario, the patient almost always undergoes more than one MRI examination of the lumbar spine over the course of 2 years (the lumbar spine shows age-typical changes that we – unfortunately – always also describe in detail), and she receives countless mud pack and fascia treatments and perhaps even undergoes lumbar spine surgery with fusion of 4 segments. But the back pain is still there. We are all familiar with such cases. And then, after 2 years (on average), the patient is diagnosed with “depression”. If you analyze the photos this woman uploads to Instagram or, better yet, look at her “daily activity” on Facebook, you can determine the diagnosis immediately. Or you use an algorithm trained to detect the speech melody and pitch of voice of depressed patients. You have your answer in 2 minutes. Even for a radiologist. And as soon as digital data from narrative-based medicine is available – Google is feverishly working on it – narrative-based medicine will completely change.



But radiology will remain otherwise unchanged?

No, of course not. You have accepted that AI is pushing radiology into the radiomics dimension and that super diagnostic centers are the future. But the radiology business model will also change. When I started in radiology in 1987, everything was under one roof. The physician sat at the unit and viewed the images on the monitor. And there was film that had to be developed and was used for reporting. The images were interpreted in the radiology rooms. Today, some of my colleagues work from home. And today, MR units can even be operated remotely: highly qualified technologists sit in a room far away from all of the MRI units and simultaneously run two or three MRI examinations. At the unit and in the waiting area, non-technologists perform patient management, including positioning. Positioning is less complicated with new surface coils designed as blankets, and thanks to AI, the scanner increasingly detects the region to be examined on its own.

And then there is also Amazon’s business model – the platform. You could also look at Uber or Airbnb. Images are uploaded to a broker platform, and any radiologist with the time and desire can interpret them in a quality-controlled manner. This represents a huge opportunity for hospitals that can’t find radiologists, and for large collaborative practices and hospital departments to grow without making an investment. And perhaps the indication for an MRI examination of the lumbar spine in back pain will no longer be determined exclusively by orthopedists but by AI, which accesses the patient history and uses the results of the physical examination entered by the physician. Or perhaps this is done by a physician assistant. And then the patient goes to the MRI unit that is located at the Aldi around the corner and is remotely operated. And the images are then interpreted on the platform.

OK, you’re right. This sounds like a Utopian dream and perhaps won’t become a reality in the near future. Or maybe things will happen a bit differently. But think about the black swan at the beginning. Don’t think about why it’s not possible (laws, ethics, data protection, unions, medical associations, power outages). I know that searching for and finding problems is viewed as a highly intellectual pursuit in Germany. The introduction of RIS-PACS in 2001 was greeted almost exclusively with skepticism and predictions of problems, both within and outside radiology. And the transition phase – previous analog images in the archive, current images in the PACS – was not fun. It will be the same with AI. But the technology will come, and you have the opportunity to actively help shape it. So start thinking of solutions.

Michael Forsting


References

  • 1 Abajian A. et al. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept. J Vasc Interv Radiol 2018; 29: 850-857 e851
  • 2 Abdollahi H. et al. Cochlea CT radiomics predicts chemoradiotherapy induced sensorineural hearing loss in head and neck cancer patients: A machine learning and multi-variable modelling study. Phys Med 2018; 45: 192-197
  • 3 Al’Aref SJ. et al. Clinical applications of machine learning in cardiovascular disease and its relevance to cardiac imaging. Eur Heart J 2018; DOI: 10.1093/eurheartj/ehy404.
  • 4 Al-Masni MA. et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput Methods Programs Biomed 2018; 157: 85-94
  • 5 Al Ajmi E. et al. Spectral multi-energy CT texture analysis with machine learning for tissue classification: an investigation using classification of benign parotid tumours as a testing paradigm. Eur Radiol 2018; 28: 2604-2611
  • 6 AlBadawy EA. et al. Deep learning for segmentation of brain tumors: Impact of cross-institutional training and testing. Med Phys 2018; 45: 1150-1158
  • 7 Arbabshirani MR. et al. Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration. Npj Digital Medicine 2018; DOI: 10.1038/s41746-017-0015-z.
  • 8 Baessler B. et al. Texture analysis and machine learning of non-contrast T1-weighted MR images in patients with hypertrophic cardiomyopathy-Preliminary results. Eur J Radiol 2018; 102: 61-67
  • 9 Bektas CT. et al. Clear Cell Renal Cell Carcinoma: Machine Learning-Based Quantitative Computed Tomography Texture Analysis for Prediction of Fuhrman Nuclear Grade. Eur Radiol 2018; DOI: 10.1007/s00330-018-5698-2.
  • 10 Bluemke DA. Radiology in 2018: Are You Working with AI or Being Replaced by AI?. Radiology 2018; 287: 365-366
  • 11 Caballo M. et al. An unsupervised automatic segmentation algorithm for breast tissue classification of dedicated breast computed tomography images. Med Phys 2018; 45: 2542-2559
  • 12 Chilamkurthy S. et al. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018; 392: 2388-2396. DOI: 10.1016/S0140-6736(18)31645-3.
  • 13 Choi W. et al. Radiomics analysis of pulmonary nodules in low-dose CT for early detection of lung cancer. Med Phys 2018; 45: 1537-1549
  • 14 Choy G. et al. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018; 288: 318-328
  • 15 Chung SW. et al. Automated detection and classification of the proximal humerus fracture by using deep learning algorithm. Acta Orthop 2018; 89: 468-473
  • 16 Couture HD. et al. Image analysis with deep learning to predict breast cancer grade, ER status, histologic subtype, and intrinsic subtype. NPJ Breast Cancer 2018; 4: 30
  • 17 Del Gaizo J. et al. Using machine learning to classify temporal lobe epilepsy based on diffusion MRI. Brain and Behavior 2017; 7. DOI: 10.1002/brb3.801.
  • 18 Dreyer KJ, Geis JR. When Machines Think: Radiology’s Next Frontier. Radiology 2017; 285: 713-718
  • 19 Hosny A. et al. Artificial intelligence in radiology. Nat Rev Cancer 2018; 18: 500-510
  • 20 Katzen J, Dodelzon K. A review of computer aided detection in mammography. Clin Imaging 2018; 52: 305-309
  • 21 Krupinski EA. Deep Learning of Radiology Reports for Pulmonary Embolus: Is a Computer Reading My Report?. Radiology 2018; 286: 853-855
  • 22 Kumar V. et al. Automated and real-time segmentation of suspicious breast masses using convolutional neural network. PLoS One 2018; 13: e0195816
  • 23 Lee H. et al. Deep feature classification of angiomyolipoma without visible fat and renal cell carcinoma in abdominal contrast-enhanced CT images with texture image patches and hand-crafted feature concatenation. Med Phys 2018; 45: 1550-1561
  • 24 Liu F. et al. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging. Radiology 2018; 286: 676-684
  • 25 Montoya JC. et al. 3D Deep Learning Angiography (3D-DLA) from C-arm Conebeam CT. Am J Neuroradiol 2018; 39: 916-922
  • 26 Nichols JA. et al. Machine learning: applications of artificial intelligence to imaging and diagnosis. Biophys Rev 2018; DOI: 10.1007/s12551-018-0449-9.
  • 27 Nishio M. et al. Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization. PLoS One 2018; 13: e0195875
  • 28 Orooji M. et al. Combination of computer extracted shape and texture features enables discrimination of granulomas from adenocarcinoma on chest computed tomography. J Med Imaging (Bellingham) 2018; 5: 024501
  • 29 Park SH. Regulatory Approval versus Clinical Validation of Artificial Intelligence Diagnostic Tools. Radiology 2018; 288: 910-911
  • 30 Pesapane F. et al. Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights Imaging 2018; 9: 745-753
  • 31 Salem M. et al. A supervised framework with intensity subtraction and deformation field features for the detection of new T2-w lesions in multiple sclerosis. Neuroimage Clin 2018; 17: 607-615
  • 32 Savadjiev P. et al. Demystification of AI-driven medical image interpretation: past, present and future. Eur Radiol 2018; DOI: 10.1007/s00330-018-5674-x.
  • 33 Tajmir SH. et al. Artificial intelligence-assisted interpretation of bone age radiographs improves accuracy and decreases variability. Skeletal Radiol 2018; DOI: 10.1007/s00256-018-3033-2.
  • 34 Ting DSW. et al. Clinical Applicability of Deep Learning System in Detecting Tuberculosis with Chest Radiography. Radiology 2018; 286: 729-731
  • 35 Titano JJ. et al. Automated deep-neural-network surveillance of cranial images for acute neurologic events. Nat Med 2018; 24: 1337-1341
  • 36 van Rosendael AR. et al. Maximization of the usage of coronary CTA derived plaque information using a machine learning based algorithm to improve risk stratification; insights from the CONFIRM registry. J Cardiovasc Comput Tomogr 2018; 12: 204-209
  • 37 Yasaka K. et al. Deep learning for staging liver fibrosis on CT: a pilot study. Eur Radiol 2018; DOI: 10.1007/s00330-018-5499-7.
  • 38 Yepes-Calderon F. et al. Automatically measuring brain ventricular volume within PACS using artificial intelligence. PLoS One 2018; 13: e0193152
  • 39 Zhao X. et al. Agile convolutional neural network for pulmonary nodule classification using CT images. Int J Comput Assist Radiol Surg 2018; 13: 585-595
