DOI: 10.1055/s-0044-1778694
Artificial Intelligence-Based Prediction of Contrast Medium Doses for Computed Tomography Angiography Using Optimized Clinical Parameter Sets
Abstract
Objectives In this paper, an artificial intelligence-based algorithm for predicting the optimal contrast medium dose for computed tomography (CT) angiography of the aorta is presented and evaluated in a clinical study. The prediction of the contrast dose reduction is modelled as a classification problem using the image contrast as the main feature.
Methods This classification is performed by random decision forests (RDF) and k-nearest-neighbor (KNN) methods. For the selection of optimal parameter subsets, all possible combinations of the 22 clinical parameters (age, blood pressure, etc.) are considered, using the classification accuracy and precision of the KNN and RDF classifiers as quality criteria. Subsequently, the results are further optimized by means of feature transformation using regression neural networks (RNN). These are used both for a direct classification based on regressed Hounsfield units and as preprocessing for a subsequent KNN classification.
Results For feature selection, an RDF model achieved the highest accuracy of 84.42% and a KNN model achieved the best precision of 86.21%. The most important parameters include age, height, and hemoglobin. The feature transformation using an RNN considerably exceeded these values, with an accuracy of 90.00% and a precision of 97.62% using all 22 parameters as input. However, the feasibility of the parameter sets in clinical routine must also be considered, because some of the 22 parameters are not measured routinely and their acquisition requires an additional 15 to 20 minutes per patient. Using only the standard feature set available in clinical routine, the best accuracy of 86.67% and precision of 93.18% were achieved by the RNN.
Conclusion We developed a reliable hybrid system that helps radiologists determine the optimal contrast dose for CT angiography based on patient-specific parameters.
Keywords
clinical decision support system - contrast medium - CT angiography - deep learning - machine learning

Introduction
Clinical decision support systems (CDSS) play an increasingly important role in clinical practice, supporting physicians in diagnostic and therapeutic decisions.[1] A strong focus of CDSS lies on the application of machine learning methods.[2] Such artificial intelligence (AI)-based algorithms are able to make connections between the recorded data and the objective of the given query without an explicit representation of expert knowledge. Recently, the focus of machine learning methods has shifted toward deep learning, encompassing algorithms like convolutional neural networks (CNN),[3] which contain convolutional layers to extract features from image data and are used, for instance, to detect diabetic retinopathy.[4] Other frequently used deep learning networks are autoencoders and variational autoencoders,[5] which allow a low-dimensional feature representation that can serve as the input of decision support systems. However, established approaches such as support vector machines,[6] logistic regression,[7] random decision forests (RDF), and k-nearest-neighbor (KNN) methods also hold great potential, as shown by a decision support system for the early detection of neonatal sepsis based on electronic medical records.[8] A big advantage compared with deep learning algorithms is that the interpretability of the data can be preserved, giving a more intuitive comprehension of the decision-making process.
Objectives
In this work, a CDSS is developed that assists radiologists in optimizing the contrast medium (CM) dose for computed tomography angiography (CTA) of the aorta. CTA is an imaging technique that can be used to examine all vascular territories of the human body. A sufficiently high CM dose is required to provide adequate contrast between the vessel lumen and the surrounding tissue. Many CMs contain iodine as the enhancing agent; iodine strongly absorbs X-rays, leading to high Hounsfield unit (HU) values in the computed tomography (CT) image data and thereby visualizing the targeted vessels. However, iodine is known for causing side effects like allergic reactions, hyperthyroidism, and deterioration of renal function up to contrast-induced nephropathy.[9] Therefore, the patient-specific adjustment of the CM dose offers risk-reducing potential. The optimal CM dose depends on numerous technical and physiological parameters. Some approaches propose a lower tube voltage in order to increase the contrast at reduced CM doses.[10] [11] Others vary the dose for each patient individually based on physiological parameters such as body surface area.[12] [13] [14] Similar to these approaches, our CDSS is based on physiological parameters; however, laboratory values and other clinical parameters are also taken into consideration, so that, in contrast to the aforementioned works, 22 relevant parameters are acquired and used. Our system classifies a patient into nonexcessive image contrast/CM dose (class 1) or excessive image contrast/CM dose (class 2). If class 2 is predicted for a patient, the system recommends using a reduced CM dose rather than the standard dose, as shown in [Fig. 1]. To generate the ground truth for the classification, we implemented a semiautomatic evaluation in previous work,[15] which assesses the quality of the image contrast based on mean HU values taken in regions of interest (ROI). This work establishes the subsequent classification. As an extension of earlier work,[16] in which we focused solely on a KNN approach, we investigate deep learning-based, RDF, and KNN models. All these methods have been used successfully as predictive models with different clinical parameters.[17] [18] [19] [20] For example, RDFs were used to predict the survival of breast cancer patients,[21] whereas Wong et al[22] used KNNs to predict early biochemical recurrence after prostatectomy; both models were trained with a small number of parameters (23 and 19, respectively). We evaluated both KNN and RDF models on all possible clinical parameter combinations to generate optimal parameter sets by feature selection. To quantify the influence of each clinical parameter on the classification result, the impurity- and permutation-based feature importances were also calculated using random forests. In addition to the quantitative analysis, feasibility is assessed by differentiating the clinical parameters into two groups, routinely recorded and nonroutinely recorded, in order to optimize the use and acceptance of our CDSS in a clinical workspace. Deep learning-based approaches are used for feature transformation: a regression neural network (RNN) in the form of a fully connected network encodes certain parameter sets. These networks were evaluated, on the one hand, by a direct classification and, on the other hand, indirectly as preprocessing of the input features for the already established KNN classification.
Methods and Experiments
The optimization of the clinical parameter sets was implemented as a feature selection approach using both RDF and KNN classifiers. Both methods offer an interpretation of the input variables, which is primarily intended to increase the acceptance of the method and of its results in a clinical environment.
Additionally, a feature transformation was implemented, for which an RNN was trained that had performed favorably in our previous work.[23] As input for the feature transformation, the optimized parameter sets as well as the entire parameter set (for comparison) were used.
Clinical Study for Data Acquisition
As a result of a clinical study (UKSH Lübeck), we received 77 CT angiographies encompassing the aorta and the major branch vessels, together with the corresponding clinical parameters, from the Department for Radiology and Nuclear Medicine. All included patients received a CM dose of 100 mL with a concentration of 300 mg iodine/mL. To increase the quality of the training data, patients whose images were affected by other factors, such as a suboptimal timing scan, were removed in advance so as not to obscure possible correlations between image contrast and clinical parameters. The 22 clinical parameters are listed in [Table 1]. In addition, radiologists have indicated which parameters are recorded routinely (rr). As ground truth for the dose optimization, we implemented an image contrast quality measure in earlier work,[15] similar to Behrendt et al.[24] As shown in [Fig. 2], under the guidance of radiological experts three ROIs were defined and placed in each CTA volume, in the aorta and the arteria femoralis communis, respectively. The mean HU values of the ROIs were taken and compared against thresholds to result in either class 1 (nonexcessive image contrast/CM dose) or class 2 (excessive image contrast/CM dose). This assessment yielded a class distribution of 21 patients in class 1 and 56 patients in class 2.
Abbreviations: CM, contrast medium; rr, recorded routinely.
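As a rough illustration of this labelling step (not the original implementation), the sketch below assumes a single hypothetical HU threshold and placeholder ROI masks; the actual expert-defined thresholds and their combination rule are those of the semiautomatic evaluation from the earlier work[15] and are not reproduced here.

```python
import numpy as np

# Hypothetical threshold above which the image contrast counts as excessive;
# the real expert-defined thresholds are not reproduced here.
HU_THRESHOLD_EXCESSIVE = 400.0

def ground_truth_class(ct_volume: np.ndarray, roi_masks: list) -> int:
    """Return 1 (nonexcessive contrast/CM dose) or 2 (excessive contrast/CM dose).

    ct_volume : 3D array of HU values
    roi_masks : three boolean masks (aorta and both arteriae femorales communes)
    """
    mean_hus = [float(ct_volume[mask].mean()) for mask in roi_masks]
    # Assumed combination rule for this sketch: excessive if every ROI exceeds the threshold.
    return 2 if all(hu > HU_THRESHOLD_EXCESSIVE for hu in mean_hus) else 1
```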
Ethical Considerations
The acquisition of patient data within the study was conducted with ethical considerations in mind. The study was approved by the Ethics Committee of the University of Lübeck under the direction of Prof. Dr. med. Alexander Katalinic for the file numbers 18-114 and 18-202.
Feature Selection
Random Decision Forests
RDFs[25] belong to the supervised learning methods and are mainly used for classification. RDFs implement the concept of ensemble learning and consist of several binary decision trees (BDT). During training, the input feature vectors are split at the internal nodes based on a split criterion (e.g., the Shannon entropy[26]). After training, each BDT contains split thresholds at the internal nodes and a class label at the leaf nodes based on the ground truth. To classify a new instance, it is passed through each BDT, the resulting labels from the leaf nodes are gathered, and a majority vote determines the final class. To obtain a meaningful ensemble and to avoid overfitting, randomness is introduced during training. The most common method is bagging, which uses a different subset of the training data to train each BDT.
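A minimal scikit-learn sketch of such an ensemble is shown below; `X_train`, `y_train`, and `X_test` are placeholders for the clinical parameter vectors and ground-truth classes, and the hyperparameters anticipate those reported in the Experiments section.

```python
from sklearn.ensemble import RandomForestClassifier

# Ensemble of binary decision trees; bagging (bootstrap=True) trains every tree
# on a different bootstrap sample, and Shannon entropy is the split criterion.
rdf = RandomForestClassifier(
    n_estimators=20,      # 20 trees in the ensemble
    criterion="entropy",  # Shannon-entropy-based split criterion
    max_depth=8,          # maximum depth of each tree
    bootstrap=True,       # bagging
    random_state=0,
)
rdf.fit(X_train, y_train)      # X_train: clinical parameters, y_train: classes 1/2
y_pred = rdf.predict(X_test)   # majority vote over the leaf labels of all trees
```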
K-Nearest-Neighbor
The KNN method[27] belongs to the group of supervised learning methods and thus requires a ground truth to classify incoming instances. KNN is considered a "lazy learner" because it does not build a parametric model as a classifier during training; instead, instance-based learning is implemented. The basic assumption of KNN is that the distance between instances of the same ground-truth class in the feature space is smaller than the distance between instances of different classes. Therefore, to classify a new instance, the distance to all available, already classified instances is calculated using a chosen metric. Of these distances, the k smallest are selected and a majority vote determines the class. Special attention should be paid to the choice of k: a value that is too low can lead to vulnerability to outliers, whereas a value that is too high tends to ignore classes with a low number of instances.
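A corresponding scikit-learn sketch (again with placeholder training data) using the settings reported later in the Experiments section:

```python
from sklearn.neighbors import KNeighborsClassifier

# "Lazy" instance-based classifier: no parametric model is fitted; a new
# instance receives the majority class of its k nearest training instances.
knn = KNeighborsClassifier(
    n_neighbors=5,        # k = 5
    metric="euclidean",   # Euclidean distance in the clinical-parameter space
    weights="uniform",    # every neighbor contributes equally to the vote
)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
```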
#
Experiments
With the goal of finding the most suitable classifier, we chose a brute-force approach over the clinical input parameters for both KNN and RDF. It should be noted that for the KNN we left out two parameters because they are categorical. With m = 22 clinical parameters, the number of trained and evaluated classifiers per method amounted to the number of nonempty parameter subsets, i.e., 2^m − 1 = 2^22 − 1 = 4,194,303 (2^20 − 1 = 1,048,575 for the KNN with the two categorical parameters excluded).
Optimization of the hyperparameters was carried out in advance using initial experiments. The RDF models have a depth of d = 8, use the Shannon entropy as the cost function of a split node and consist of 20 estimators each. For KNN, the number of neighbors was chosen as k = 5, the metric used is the Euclidean distance, and the distances were uniformly weighted. The evaluation was performed using an 11-fold cross validation.
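The brute-force search over parameter subsets can be sketched as follows; `X`, `y`, and `param_names` are placeholders, and in practice the roughly 4.2 million subsets make this loop computationally expensive and call for parallelization.

```python
from itertools import combinations

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def brute_force_selection(X, y, param_names):
    """Evaluate one RDF per non-empty parameter subset with 11-fold cross-validation."""
    best = (0.0, None)
    m = len(param_names)
    for size in range(1, m + 1):
        for subset in combinations(range(m), size):
            clf = RandomForestClassifier(
                n_estimators=20, criterion="entropy", max_depth=8, random_state=0
            )
            acc = cross_val_score(clf, X[:, subset], y, cv=11, scoring="accuracy").mean()
            if acc > best[0]:
                best = (acc, [param_names[i] for i in subset])
    return best  # (best mean accuracy, corresponding parameter set)
```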
Feature Importance
One advantage of RDFs is the interpretability of the input features. In addition, various methods for the evaluation of the importance of the individual features are available. As an additional quantitative analysis we use the impurity-based feature importance and the permutation-based feature importance.
Impurity-Based Feature Importance
One way to determine the feature importance is an impurity-based approach, often referred to as mean decrease impurity.[25] This feature importance can be calculated directly during training. The importance refers to how successfully the data are split into two subsets at a split node based on a particular feature. Therefore, the impurity measure is first calculated for the incoming data. The training then determines the feature and the threshold of the optimal split; the impurity after the split is available anyway, since the impurity measure is equivalent to the split criterion. In our case, we used the Shannon entropy-based impurity gain. The decrease of the impurity measure is then calculated as the difference of the values before and after the split. A high decrease indicates a feature that splits the data meaningfully according to the ground truth. For each feature, the resulting decreases are summed over all trees and averaged. The advantage of this procedure is that no extra calculations are needed to obtain the mean decrease impurity. The disadvantage, however, is that features with a high cardinality are often rated as more important, because continuous variables, for example, provide more candidate thresholds during node optimization, so that the chance of one of them being chosen is higher.
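Continuing the earlier sketch, the mean decrease impurity is available after training without extra computation; scikit-learn normalizes the values so that they sum to one over all features.

```python
# Mean decrease impurity, accumulated over all split nodes and averaged over the
# 20 trees of the trained forest (rdf and param_names as in the sketches above).
importances = rdf.feature_importances_
ranking = sorted(zip(param_names, importances), key=lambda pair: pair[1], reverse=True)
for name, value in ranking:
    print(f"{name}: {value:.3f}")
```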
Permutation-Based Feature Importance
To reveal a possible bias of the impurity-based feature importance, the permutation-based feature importance is also evaluated (Section "Evaluation Measure and Results"). The technique was introduced as mean decrease accuracy in the same article.[25] This method calculates the feature importance using an already trained model and available test data. After determining the model loss for the test data, a feature is selected and its values are permuted among all test data; the model loss is then calculated again. The difference between the new model loss and the former one provides information about the importance of the specific feature: the greater this difference, the more important the feature. This process is repeated per feature for a chosen number of times and averaged. One advantage of the permutation-based feature importance is that permuting a feature directly breaks its relationship with the ground truth, so the importance reflects how much the model actually relies on that feature. As a disadvantage, it can underestimate the importance of correlated features: when two correlated features are both used at split nodes rather than one being preferred, permuting one of them can be compensated by the other, causing both features to be rated lower.
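A corresponding sketch using scikit-learn's permutation importance on held-out data, again reusing `rdf` and `param_names` from above together with placeholder test data:

```python
from sklearn.inspection import permutation_importance

# Each feature is permuted n_repeats times in the test data; the mean drop in
# the score relative to the unpermuted data is reported as its importance.
result = permutation_importance(
    rdf, X_test, y_test, scoring="accuracy", n_repeats=30, random_state=0
)
for name, mean_drop in zip(param_names, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")
```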
Experiments
To compare the importance of each clinical parameter used for training the KNN and RDF classifiers, we gathered the top 100 models for each of the four configurations: RDF (A: optimized with respect to accuracy), RDF (P: optimized with respect to the precision of class 2), KNN (A), and KNN (P). Based on these models, we counted the number of times each parameter was used as input. To allow a comparison between the use of all parameters and the use of only the rr parameters, the best parameter sets were also selected among the rr-only models. In addition, the evaluation measures are reported for the cases in which all parameters, and all parameters labelled rr, were used as input.
Feature Transformation
Regression Neural Network
Deep learning methods like autoencoders are often used for feature transformation or compression but have the disadvantage of not considering the problem-specific properties of the task during training.[23] To avoid this, RNNs have been considered as a task-specific means of feature transformation.[28] The basic structure of the neural network is formed by successive layers containing weights that are applied to their respective inputs, until a task-specific loss function is calculated at the last layer. During training, back-propagation through the network updates the weights with respect to the gradient of the loss function. In our case, the regression of the mean HU values of the three ROIs ([Fig. 2]) was implemented. The classification itself was pursued with two different approaches. In the direct variant, the expert rules (ground truth) are applied to the regressed HU means. For an indirect classification, the features from the last hidden layer were used as input for a KNN, since experience shows that intermediate features may contain more meaningful information. In contrast to most work with a medical-radiological focus, no CNNs are used here, as no image data are available for prediction. Given the relatively small dimension of the input features, a simple fully connected architecture is used. The network consists of four fully connected layers with rectified linear unit activations in between. The channel number is decreased to five in the last hidden layer. The outputs are the three mean HUs of the respective ROIs, trained with a mean squared error loss for 100 epochs.
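A minimal PyTorch sketch of such a regression network is given below; the widths of the first two hidden layers, the optimizer, and the learning rate are assumptions, as the text only fixes four fully connected layers, five channels in the last hidden layer, three outputs, the MSE loss, and 100 epochs. `x_train` and `hu_train` stand for the clinical parameters and the ground-truth mean HU values.

```python
import torch
import torch.nn as nn

class HURegressionNet(nn.Module):
    """Fully connected network regressing the mean HU of the three ROIs."""

    def __init__(self, n_params: int):
        super().__init__()
        self.hidden = nn.Sequential(
            nn.Linear(n_params, 16), nn.ReLU(),   # assumed width
            nn.Linear(16, 8), nn.ReLU(),          # assumed width
            nn.Linear(8, 5), nn.ReLU(),           # last hidden layer: 5 channels
        )
        self.out = nn.Linear(5, 3)                # three regressed mean HU values

    def forward(self, x):
        return self.out(self.hidden(x))

model = HURegressionNet(n_params=22)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer
loss_fn = nn.MSELoss()

for epoch in range(100):                          # trained for 100 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), hu_train)      # x_train: parameters, hu_train: mean HUs
    loss.backward()
    optimizer.step()
```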
Experiments
To optimize the feature selection results, the feature transformation using an RNN was applied to selected parameter combinations. The set of all 22 parameters, the 11 parameters marked as "rr," and the parameter sets optimized by the previous feature selection with the KNN and RDF approaches were evaluated. For the actual classification, two variants were tested: in the direct variant, the regressed HU means were classified using the threshold values also used for the ground truth; in the indirect variant, the last hidden layer was used as input of a KNN.
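Both variants can be sketched as follows, reusing the trained network from the sketch above; `apply_expert_rules` is a hypothetical helper standing in for the ground-truth threshold rules, and `x_train`, `x_test`, and `y_train` are placeholders.

```python
import torch
from sklearn.neighbors import KNeighborsClassifier

with torch.no_grad():
    hu_pred = model(x_test)              # regressed mean HUs, shape (n_patients, 3)
    feats_train = model.hidden(x_train)  # 5-dimensional last-hidden-layer features
    feats_test = model.hidden(x_test)

# Direct variant: apply the expert ground-truth thresholds to the regressed HU means.
direct_classes = [apply_expert_rules(hu) for hu in hu_pred.numpy()]

# Indirect variant: the last-hidden-layer features become the input of a KNN.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(feats_train.numpy(), y_train)
indirect_classes = knn.predict(feats_test.numpy())
```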
Evaluation Measure and Results
In the following, the results concerning the feature selection/transformation and the associated optimization of the parameter sets are described. For the evaluation, the following values based on the number of true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN) per class are considered:
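Written out with the standard definitions, where precision, recall, and the F-One score are computed per class (treating the respective class as positive) and accuracy over all instances:

\[
\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}
\]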
A particular focus is on the precision of class 2 (excessive image contrast/CM dose): a recommendation for action is only given for class 2, and a misclassification could lead to unusable CTA image data and a subsequent repetition of the scan, so we aim to achieve the highest possible precision for this class.
Feature Selection and Feature Importance
As shown in [Fig. 3], the ankle brachial index (ABI) is one of the most important parameters for KNN (A) and KNN (P). In contrast to KNN (P), where hematocrit (HC), hemoglobin (HB), and oxygen saturation (OS) clearly stood out, it can be seen that with KNN (A) a high accuracy can be achieved with a variety of different parameter sets. Both configurations RDF (A) and RDF (P) favor the same clinical parameters: ABI, age (A), height (H), and HB. The overlap between KNN and RDF lies mainly in the occurrence of ABI and HB. The results of the impurity- and permutation-based feature importance analysis ([Fig. 4]) support these findings. According to this analysis, H is the most important clinical parameter, followed by HB, ABI, HC, and A. It is noticeable that, for the impurity-based feature importance, HC, HB, and the gamma-glutamyl transferase (GGT) are clearly more important in the decisions of the RDFs optimized with respect to precision than in those evaluated with respect to accuracy. The same can be seen for the permutation-based feature importance with the input parameter gender.
The best models and their clinical input parameters according to the evaluation are shown in [Table 2] for the RDF and KNN models in the configurations mentioned above. The best accuracy and precision of class 2 by an RDF were achieved with the combination of ABI, A, H, BB, HB, and GGT. This parameter set achieved an accuracy of 84.42% and a precision of 85.48%; it also achieved the highest values in the other evaluation measures. No model trained only with combinations of the rr parameters could outperform it, with the largest drop occurring in the class 1 precision, from 80.00 to 66.67%. In contrast, the precision of class 2 decreased only slightly, to 84.75%. Using all parameters as well as only the rr parameters performed worse in comparison, with an accuracy of 70.13% and a maximum precision of class 2 of 76.19%. Two parameter sets achieved the best accuracy of the KNNs with 81.82%: the combination of A, glomerular filtration rate, and GGT, and the combination of blood pressure, OS, HB, HC, and creatinine (C). The latter also resulted in the highest precision of class 2, at 86.21%, higher than the best of the RDFs. The other evaluation measures also reached their highest values for these two parameter sets. As with the RDFs, no model restricted to the rr parameters could exceed these values. Using all parameters and the rr-limited set showed lower values, especially for the precision and the F-One score of class 1. It should be noted that each parameter set of the different configurations includes at least two of the clinical parameters previously evaluated as most important; among them, HB is the most frequently occurring parameter.
Abbreviations: A, age; ABI, ankle brachial index; BB, beta blocker; BMI, body mass index; BPS, blood pressure systolic; C, creatinine; GFR, glomerular filtration rate; GGT, gamma-glutamyl transferase; H, height; HB, hemoglobin; HC, hematocrit; KNN, k-nearest-neighbor methods; OS, oxygen saturation; RDF, random decision forests; rr, recorded routinely.
Note: Bold values mark the best results of the RDFs and the KNNs.
Feature Transformation
The results of the direct classification are shown in the upper part of [Table 3]. The highest values in all evaluation measures were obtained using all 22 clinical parameters: the accuracy was 90.00% and the class 2 precision reached 97.62%; the F-One score was also very high at 93.18%. Using the 11 rr parameters yielded lower values for accuracy (86.67%) and class 2 precision (93.18%). All evaluated parameter sets of RDF and KNN achieved considerably lower values in all evaluation measures in comparison. The lower half of [Table 3] shows the evaluation results of the indirect classification with RNN and KNN. Here, the highest values in all evaluation measures were achieved using the rr parameters; however, with an accuracy of 86.67% and a class 2 precision of 89.58%, the results of the direct classification could not be surpassed. The values when using all parameters as input are only slightly lower. Again, when using the parameter sets of RDF and KNN, all evaluation measures decreased noticeably. Compared with the feature selection methods, the feature transformation also scored particularly well on class 2 precision. While RDF and KNN are ahead for their specifically selected parameter sets, the feature transformation using an RNN is ahead for larger sets of clinical parameters as input. It is noticeable that, while 22 parameters lead to a decline compared with the rr parameters in the indirect variant of the feature transformation, the direct variant benefits from the total number of parameters.
Abbreviations: KNN, k-nearest-neighbor methods; RDF, random decision forests; rr, recorded routinely.
Note: Bold values mark the best results of the indirect and direct method.
Discussion
In this work, we aim to establish an AI-based, patient-specific prediction that identifies an excessive CM dose in order to recommend a dose lower than the standard dose for patients undergoing CTA of the aorta. RDF and KNN models were trained and evaluated on all possible clinical parameter sets. Feature transformation by means of RNNs was assessed as a preprocessing of the input features. Key clinical parameters included A, ABI, H, HB, HC, and C. While A can influence the stroke volume, H is related to a person's total blood volume. The ABI can provide indications of circulatory disorders. Together with HB and HC, these are all parameters that can have a possible influence on the distribution of CM in the blood flow. Rather unexpected is the position of C, which can also be considered in the assessment of kidney function. The optimal parameter sets of the RDFs in terms of accuracy include both the ABI and the GGT. However, these parameters can cause problems in routine clinical practice: the ABI requires finding the pulse, which can vary in difficulty and can lead to a delay of about 5 to 10 minutes, and the GGT can cause additional costs if it is ordered later in addition to the standard analysis. A good alternative for the actual application would therefore be the RDF model with A, H, OS, and HB as input parameters. Using the feature transformation, the precision of class 2 and the accuracy were considerably increased. While the KNN and RDF benefited from a preselection of clinical parameters, the RNN achieved superior values with many parameters; the best precision of 97.62% was achieved when all 22 parameters were used. The inclusion of all parameters, however, results in a delay of 15 to 20 minutes per patient, which is unsuitable for the clinical workflow. Another open point is the loss of interpretability in the feature transformation and whether this is sufficiently compensated by the higher values of the evaluation measures.
Clinical Health Implications
Overall, the study showed that a large proportion of patients receive an unnecessarily high dose of CM (according to radiological expertise). In addition, it was shown that some previously neglected clinical parameters could help in determining the optimal dose.
Conclusion
In conclusion, we developed a system that can assess with a high degree of accuracy, based on patient-specific clinical parameters, whether a standard dose of CM is too high for a given patient. It is limited in the sense that the medical practitioner has to reduce the standard dose according to their own assessment, prompted by the recommendation of the system. In future work, the application will be extended to other scan regions besides the aorta to broaden the spectrum of the application, for example, CTA images of the lung or the pelvic-leg region. Furthermore, through further acquisition activities, we plan to collect the new data necessary to calculate an accurate dose reduction of the CM based on our machine learning models.
Conflict of Interest
None declared.
References
- 1 Middleton B, Sittig DF, Wright A. Clinical decision support: a 25 year retrospective and a 25 year vision. Yearb Med Inform 2016; 25 (Suppl. 01) S103-S116
- 2 Peiffer-Smadja N, Rawson TM, Ahmad R. et al. Machine learning for clinical decision support in infectious diseases: a narrative review of current applications. Clin Microbiol Infect 2020; 26 (05) 584-595
- 3 Li Z, Liu F, Yang W, Peng S, Zhou J. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans Neural Netw Learn Syst 2021
- 4 Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digit Med 2018; 1 (01) 39
- 5 Kingma DP, Welling M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- 6 Cortes C, Vapnik V. Support-vector networks. Mach Learn 1995; 20 (03) 273-297
- 7 Tolles J, Meurer WJ. Logistic regression: relating patient characteristics to outcomes. JAMA 2016; 316 (05) 533-534
- 8 Mani S, Ozdas A, Aliferis C. et al. Medical decision support using machine learning for early detection of late-onset neonatal sepsis. J Am Med Inform Assoc 2014; 21 (02) 326-336
- 9 Pannu N, Wiebe N, Tonelli M. Alberta Kidney Disease Network. Prophylaxis strategies for contrast-induced nephropathy. JAMA 2006; 295 (23) 2765-2779
- 10 Mourits MM, Nijhof WH, van Leuken MH, Jager GJ, Rutten MJ. Reducing contrast medium volume and tube voltage in CT angiography of the pulmonary artery. Clin Radiol 2016; 71 (06) 615.e7-615.e13
- 11 Szucs-Farkas Z, Schibler F, Cullmann J. et al. Diagnostic accuracy of pulmonary CT angiography at low tube voltage: intraindividual comparison of a normal-dose protocol at 120 kVp and a low-dose protocol at 80 kVp using reduced amount of contrast medium in a simulation study. AJR Am J Roentgenol 2011; 197 (05) W852-9
- 12 Seifarth H, Puesken M, Kalafut JF. et al. Introduction of an individually optimized protocol for the injection of contrast medium for coronary CT angiography. Eur Radiol 2009; 19 (10) 2373-2382
- 13 Jin L, Gao Y, Sun Y. et al. Contrast medium administration with a body surface area protocol in step-and-shoot coronary computed tomography angiography with dual-source scanners. Sci Rep 2020; 10 (01) 16690
- 14 Bae KT. Intravenous contrast medium administration and scan timing at CT: considerations and approaches. Radiology 2010; 256 (01) 32-61
- 15 Pallenberg R, Fleitmann M, Soika K. et al. Automatic quality measurement of aortic contrast-enhanced CT angiographies for patient-specific dose optimization. Int J CARS 2020; 15 (10) 1611-1617
- 16 Fleitmann M, Soika K, Stroth AM. et al. Computer-assisted quality assessment of aortic CT angiographies for patient-individual dose adjustment. Stud Health Technol Inform 2020; 270: 123-127
- 17 Campillo-Gimenez B, Jouini W, Bayat S, Cuggia M. Improving case-based reasoning systems by combining k-nearest neighbour algorithm with logistic regression in the prediction of patients' registration on the renal transplant waiting list. PLoS One 2013; 8 (09) e71991
- 18 Heo J, Yoon JG, Park H, Kim YD, Nam HS, Heo JH. Machine learning–based model for prediction of outcomes in acute stroke. Stroke 2019; 50 (05) 1263-1265
- 19 Yang L, Wu H, Jin X. et al. Study of cardiovascular disease prediction model based on random forest in eastern China. Sci Rep 2020; 10 (01) 5245
- 20 Kamel H, Navi BB, Parikh NS. et al. Machine learning prediction of stroke mechanism in embolic strokes of undetermined source. Stroke 2020; 51 (09) e203-e210
- 21 Ganggayah MD, Taib NA, Har YC, Lio P, Dhillon SK. Predicting factors for survival of breast cancer patients using machine learning techniques. BMC Med Inform Decis Mak 2019; 19 (01) 48
- 22 Wong NC, Lam C, Patterson L, Shayegan B. Use of machine learning to predict early biochemical recurrence after robot-assisted prostatectomy. BJU Int 2019; 123 (01) 51-57
- 23 Fleitmann M, Uzunova H, Stroth AM. et al. Deep-learning-based feature encoding of clinical parameters for patient specific CTA dose optimization. In: International Conference on Wireless Mobile Communication and Healthcare 2020: 315-322
- 24 Behrendt FF, Rebière M, Goedicke A. et al. Contrast medium injection protocol adjusted for body surface area in combined PET/CT. Eur Radiol 2013; 23 (07) 1970-1977
- 25 Breiman L. Random forests. Mach Learn 2001; 45: 5-32
- 26 Shannon CE. A mathematical theory of communication. Bell Syst Tech J 1948; 27: 379-423
- 27 Cover T, Hart P. Nearest neighbor pattern classification. IEEE Trans Inf Theory 1967; 13: 21-27
- 28 Lai Z, Deng H. Medical image classification based on deep features extracted by deep model and statistic feature fusion with multilayer perceptron. Comput Intell Neurosci 2018; 2018: 2061516
Publication History
Received: 26 April 2022
Accepted: 28 November 2023
Article published online:
23 January 2024
© 2024. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany