CC BY 4.0 · J Neuroanaesth Crit Care
DOI: 10.1055/s-0044-1787844
Review Article

The Promise of Artificial Intelligence in Neuroanesthesia: An Update

Zhenrui Liao
1   Department of Neuroscience, Columbia University, New York, New York, United States
,
Niharika Mathur
2   School of Interactive Computing, College of Computing, Georgia Institute of Technology, Atlanta, Georgia, United States
,
Vidur Joshi
3   Department of Biomedical Engineering, Stevens Institute of Technology, Hoboken, New Jersey, United States
,
Shailendra Joshi
4   Department of Anesthesiology, Columbia University, New York, New York, United States
 

Abstract

Artificial intelligence (AI) is poised to transform health care across medical specialties. Although the application of AI to neuroanesthesiology is just emerging, it will undoubtedly affect neuroanesthesiologists in foreseeable and unforeseeable ways, with potential roles in preoperative patient assessment, airway assessment, predicting intraoperative complications, and monitoring and interpreting vital signs. It will advance the diagnosis and treatment of neurological diseases due to improved risk identification, data integration, early diagnosis, image analysis, and pharmacological and surgical robotic assistance. Beyond direct medical care, AI could also automate many routine administrative tasks in health care, assist with teaching and training, and profoundly impact neuroscience research. This article introduces AI and its various approaches from a neuroanesthesiology perspective. A basic understanding of the computational underpinnings, advantages, limitations, and ethical implications is necessary for using AI tools in clinical practice and research. The update summarizes recent reports of AI applications relevant to neuroanesthesiology. Providing a holistic view of AI applications, this review shows how AI could usher in a new era in the specialty, significantly improving patient care and advancing neuroanesthesiology research.



Introduction

From its inception, research in artificial intelligence (AI) has challenged the human monopoly on knowledge and creativity.[1] Descriptions of AI invariably refer to mimicking different aspects of human cognition in a machine—in recent iterations, particular interest has been devoted to learning from experience without explicit programming. Turing (1950) first distilled this mimetic conception of AI in his now-famous Test, which he called “the imitation game.” Turing proposed that a machine should be deemed capable of “thinking” if it can “trick” a human into believing it is another human. We might expect such devices, like humans, to learn, reason, self-correct, and create independently by processing vast amounts of visual, textual, and speech data. Based on these capabilities, many definitions of AI have emerged over time.[2] John McCarthy coined “artificial intelligence,” defining AI as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to biologically observable methods.” [3]

Health care relies strongly on perception, cognition, and judgment, which human intelligence alone has been able to provide for most of history; introducing any substitute will fundamentally change the delivery of health care. Since the time of Hippocrates, health care has relied on trust in other humans and often their subjective judgments. How will we evaluate life-impacting decisions made by algorithms?[4] [5] A high-level understanding of the modern field of AI, including a frank assessment of its capabilities and limitations, is necessary to answer this question. While reports of the impact of AI on anesthesiology are rapidly emerging,[6] [7] [8] [9] there are very few reports of AI applications to neuroanesthesiology.[10] The role of AI in neuroanesthesiology was addressed by a review in this journal in 2020.[11] This article provides greater insights into underlying computational models, enumerates the advantages and disadvantages of AI, and includes novel reports of AI-anesthesiology applications developed since then. AI will have a transformative impact on neuroanesthesiology directly and indirectly, but also raises serious legal and ethical questions regarding transparency, privacy, data security, trust calibration, concealed bias, intellectual property, and professional liability that must be acknowledged.[5] Neuroanesthesiology should develop a tempered optimism for AI, embracing enthusiasm while acknowledging significant limitations ([Fig. 1]).

Fig. 1 Anticipated impact of AI on neuroanesthesiology patient care. AI, artificial intelligence.


An Overview of AI and Data Processing

Central to cognition, biological or engineered, is acquiring, processing, and acting on information from the environment. [Fig. 2] shows the data preparation needed for AI analysis and the overlap between various AI fields. Chae recently provided a detailed technical review of AI in anesthesiology.[12] We first remark that three closely related terms in AI are often (erroneously) used interchangeably:

Fig. 2 Data processing for artificial intelligence, machine learning, and deep learning.
  • AI is an umbrella term for computationally replicating human cognition and problem-solving.

  • Machine learning (ML) is a subset of AI. It describes the approach of using large datasets to learn patterns implicitly that enable a system to perceive, reason, predict, or interact with its environment. The entity in ML that learns from the data and holds the learned information is called the model.

  • Deep learning (DL) is a subfield of ML that uses a specific class of algorithms, called neural networks, to learn input–output relationships from large amounts of data. It is the most successful paradigm in modern AI. Neural networks can learn functions from sufficient raw data (e.g., images, text, speech) that would be very difficult to program manually. Nonetheless, human involvement is still required in designing the architecture of the network, selecting the data on which it will be trained, and evaluating its performance.

Non-ML approaches, such as search algorithms and symbolic reasoning, have played a significant role in the history of AI and have experienced a resurgence in some domains. Nonetheless, as medicine is a data-driven field, most of the questions of interest can be reframed as ML questions; hence, this review is devoted to discussing techniques that broadly fall within the ML subset of AI.

Input and Output

Modern AI systems encode their inputs and outputs as vectors. Many forms of data can be encoded as vectors; for example, binary data can be encoded as 0 or 1; laboratory values can be encoded as a list of numbers; radiographic images or pathology slides can be encoded as lists of pixel values; multimodal data can be created simply by concatenating these lists together. The input unit to a model, which may correspond to a set of laboratory values, a pathology slide, or a magnetic resonance image, is called a “sample.” A sample usually consists of multiple “features”: the individual values in a basic metabolic profile or each pixel in an image or video, which must be used together to produce the desired output. The input and output data sizes are predetermined before training the model. The steps in data processing are shown in [Figs. 2] and [3]. For example, a model may take a fluoroscopic image from an angiogram as input and extract the features to produce an output, such as whether the image shows a normal vessel, distal vasospasm, or proximal vasospasm.
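As a concrete illustration, the following minimal sketch (not taken from the article; all clinical values are hypothetical) shows how heterogeneous inputs can be concatenated into a single feature vector:

```python
# Minimal sketch (not from the article): encoding heterogeneous clinical data as
# one feature vector. All values below are hypothetical.
import numpy as np

prior_stroke = np.array([1.0])                                # binary finding encoded as 0/1
labs = np.array([140.0, 4.1, 102.0, 24.0, 14.0, 0.9, 95.0])   # basic metabolic profile
image_patch = np.random.rand(8, 8).flatten()                  # tiny grayscale patch as pixel values

# Multimodal sample: concatenate the individual feature lists into one vector
sample = np.concatenate([prior_stroke, labs, image_patch])
print(sample.shape)   # (72,) = 1 + 7 + 64 features
```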

Fig. 3 The concept of an electronic neuron is shown at the top, and that of a fully connected convolutional neural network showing the stages of an angiographic image analysis is shown at the bottom.


Organization of Neural Networks

Neural networks have become the most popular model in ML due to their capacity for learning highly complex input–output relationships. It has been mathematically proven that a sufficiently large neural network can approximate essentially any function to arbitrary accuracy (the universal approximation theorem). The simplest neural network, the perceptron, consists of a single neuron that accepts a vector of features as input and returns a binary (0 or 1) output. Deep neural networks have many such connected neurons ([Fig. 3]).

Biological neurons inspired perceptron development. A perceptron receives a set of input features (its “dendrites”). Different synaptic “weights” are applied to each feature, representing the importance that the neuron places on the feature. The sum of these weighted features passes through a nonlinear activation function and generates an output value (in the simplest case, 0 or 1: no spike or spike) carried along its “axon.” To train the perceptron, pairs of inputs and known outputs are provided, and the weights and bias are iteratively adjusted so that the output matches the ground truth. A limitation of the perceptron is that it can learn only linearly separable functions of its inputs. This limitation can be circumvented by networking many perceptrons in a neural network.
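The sketch below (illustrative only, using the logical AND problem as a toy example) shows such a single neuron with a threshold activation trained by the classic perceptron learning rule:

```python
# Illustrative perceptron sketch (not from the article): a single neuron with a
# threshold activation trained by the perceptron learning rule on logical AND,
# a linearly separable toy problem.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # input features ("dendrites")
y = np.array([0, 0, 0, 1])                       # ground-truth outputs (AND)

w, b, lr = np.zeros(2), 0.0, 0.1                 # synaptic weights, bias, learning rate

for epoch in range(20):
    for xi, target in zip(X, y):
        output = 1 if np.dot(w, xi) + b > 0 else 0   # thresholded weighted sum ("axon")
        error = target - output
        w += lr * error * xi                         # nudge weights toward the ground truth
        b += lr * error

print(w, b)   # learned weights and bias that separate the two classes
```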

Neural Networks

In a neural network, many neurons are linked, drawing inspiration from the human cortex.[13] Like the cortex, the neurons in the network are organized in layers, with the neurons of each layer feeding input to the next, as described in an earlier review in this journal.[11] Each layer of a neural network can be described by the number of neurons it contains (its width), and the neural network as a whole can be characterized by the number of layers it has (its depth).[11] [12] [Table 1] describes several types of neural networks. By convention, the first layer is referred to as the input layer, the final layer as the output layer, and any intermediate layers are called hidden layers. A neural network is called deep if it has many hidden layers; as the depth of a network grows, it gains the ability to learn more complex functions but also requires more data to train effectively.

Table 1

Types of neural networks

| Type | Characteristic | Advantage | Disadvantage | Application | Example |
| --- | --- | --- | --- | --- | --- |
| Perceptron or threshold logic unit | Single node; supervised learning; binary classifier | Efficient for simple logic (AND, OR, NAND) | Not suited to nonlinear problems such as the Boolean XOR function | Binary data classification | Emergence from anesthesia[88] |
| Multilayer perceptron (MLP) | Fully connected nodes; multiple hidden layers; weights assigned by backpropagation | The “standard” neural network, trained easily with backpropagation | Limited ability to handle problems with a temporal or spatial component; difficult to encode context, with many parameters to train | Classification; often used as part of a more complex modular network | Emergence from anesthesia[88] |
| Convolutional NN (CNN) | The convolutional layer, inspired by biological visual processing, shares weights across space, enabling piecemeal image processing, pooling, conversion, flattening, and MLP output | Effective for image analysis; permits deep learning with few parameters | Complicated design, challenging to maintain; requires specialized hardware (GPU acceleration) for performance | Image processing and vision, speech, and translation; medical imaging and anomaly detection | Landmark identification for regional and spinal blocks[89] |
| Recurrent NN (RNN) | Inspired by biological networks, the recurrent layer allows information to flow in loops rather than simply forward, endowing the network with memory that persists over time | Sequential data modeling; can be combined with MLP or convolutional layers | Training is difficult; gradients vanish or explode when backpropagated to early layers | Text processing, including grammar and language suggestions; text to speech; translation; handwriting recognition | Assessing the depth of anesthesia[59] |
| Long short-term memory NN (LSTM) | A subtype of RNN in which nodes have memory cells with input, output, and forget gates | Effective in classifying and processing sequential data | Complicated; requires extensive training; slow to train | Speech recognition; language modeling; drug design; medical apps | Predicting anesthetic events[90] |
| Sequence-to-sequence model | Two RNNs work simultaneously to encode the input and decode the output | Handles long sequences of variable length | Problems handling context information and long-term dependencies | Chatbots; language translation; Q/A systems; text summaries | Generating medical reports from patient–doctor conversations[91] |
| Modular NN | Several networks function independently to achieve the output | The modular approach decreases crosstalk and increases efficiency | Problems can arise while fragmenting the inputs | High-level input data compression; character recognition; market predictions | Predicting surgery duration[92] |
| Transformers | Use an “attention mechanism” to process sequential data (e.g., text, audio) in a way that is aware of context | Large encoders and decoders of sequential data that identify contextual relationships; can be trained in a self-supervised manner | Are replacing CNNs and RNNs but have high computational demands; vast amounts of data and compute required for training | Text and speech analysis; simultaneous translation; gene analysis | Decoding EEG; predicting the depth of anesthesia[93] |
| Self-organizing maps | Simplify higher dimensional data while preserving the topographical structure by determining the best matching unit | Based on competitive learning, using unsupervised training to generate a lower dimensional map space that is then applied to the input data | Need large datasets; computation time and costs | Visualization of large datasets—applications for fraud detection and market analytics that could be useful for health care | Surgical skill assessment[94] |
| Generative adversarial networks | Fake data produced by the generator are assessed by the discriminator and refined until the fake can no longer be detected | Creates novel data indistinguishable from real data; flexible data generation of any size | The generator–discriminator conflict makes training difficult and time-consuming | Image creation; text-to-image translation; image editing | Pain management[95] |

Abbreviation: EEG, electroencephalogram.




Information Flow through a Neural Network

In the simplest deep neural network, each neuron in a layer simultaneously applies a nonlinear activation function to the weighted sum of its inputs. The output of every neuron in the layer is then fed to every neuron in the next layer via a layer of synaptic weights. In a fully connected network, if one layer has N neurons and the next has M neurons, the two layers are connected by N × M weights represented by a matrix. This process is repeated for each subsequent layer until the output layer is reached. Thus, information from the input can be recombined several times before reaching the output, allowing complex relationships among the input features to be learned. The number of weights grows rapidly with the size of the network: a fully connected network with L layers of N neurons each has N × N weights between every pair of adjacent layers, or roughly N² × (L − 1) weights in total.
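A minimal numerical sketch of this forward pass (illustrative, with random weights) for a 4-feature input, a 3-neuron hidden layer, and a 2-neuron output layer:

```python
# Illustrative forward pass (random weights) through a small fully connected
# network: 4 input features -> 3 hidden neurons -> 2 output neurons.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                              # one input sample with 4 features

W1, b1 = rng.random((4, 3)), rng.random(3)     # 4 x 3 = 12 weights into the hidden layer
W2, b2 = rng.random((3, 2)), rng.random(2)     # 3 x 2 = 6 weights into the output layer

relu = lambda z: np.maximum(z, 0)              # nonlinear activation function

h = relu(x @ W1 + b1)     # each hidden neuron: nonlinearity applied to its weighted inputs
out = h @ W2 + b2         # the output layer recombines the hidden-layer features
print(out)
```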



Training a Neural Network

Neural networks are trained using the backpropagation algorithm. The mathematical details of this algorithm are outside the scope of this review but are available in web tutorials.[14] However, the intuition of the backpropagation algorithm is similar to the intuition of the perceptron training algorithm: pairs of inputs and ground-truth outputs are presented, and the weights in the network are adjusted to minimize the loss, the difference between the network output and the desired ground-truth output. This procedure requires modifying the weight layers of the network in reverse order (i.e., from output to input, calculating a new error at each layer to be used as the loss of the previous layer), hence the name. Because this procedure requires weights in the network to be bidirectional, most neuroscientists do not consider backpropagation to be plausible in biological networks.
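The following toy sketch (illustrative only, not the article's code) conveys this intuition: a two-layer network is trained on the XOR problem, and the error computed at the output layer is propagated backward to adjust the earlier layer's weights.

```python
# Illustrative backpropagation sketch (not the article's code): a two-layer
# network learns XOR, which a single perceptron cannot. The error at the output
# layer is propagated backward, layer by layer, to adjust all weights.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # ground-truth outputs (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)         # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)         # hidden -> output weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for epoch in range(10000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)                      # difference from the ground truth

    # Backward pass: compute gradients from the output layer back to the input
    d_out = 2 * (p - y) / len(X) * p * (1 - p)
    dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
    d_hid = (d_out @ W2.T) * (1 - h ** 2)             # error passed back to the hidden layer
    dW1, db1 = X.T @ d_hid, d_hid.sum(axis=0)

    # Adjust weights to reduce the loss
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(loss, p.ravel().round(2))   # loss shrinks; predictions typically approach 0, 1, 1, 0
```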



Steps in Applying AI to a New Problem

Planning

The first step in applying AI to a new problem is to define the project's objective and scope with key participants and experts. These projects are resource- and time-intensive and should be periodically assessed to ensure success. From the beginning, major AI projects should involve all stakeholders, including investigators, data scientists, patients, ethicists, and health care professionals. Trained personnel should be available to troubleshoot technical problems and assess progress. Additionally, at the planning stage, the entire impact of the project has to be reviewed holistically, keeping the performance of the final product in mind. The assessment of AI/ML models includes problem fit; data availability, quality, and quantity; model testing frequency and scope; scalability and integration; interpretability and explainability; flexibility and customization; ethical and regulatory concerns; and marketing support and monitoring.



Data Acquisition and Processing

AI's main strength is its ability to learn from vast amounts of data with or without human intervention to discover patterns humans cannot articulate or program explicitly. The volume of data needed to train DL applications is sometimes in the petabyte range, or approximately 500 billion printed pages. As shown in [Fig. 2], many data types and sources are used for health care purposes. Insufficient data, mislabeled or noisy data, incomplete data, obsolete data, or biased data can all adversely affect the performance of an AI model. The resulting predictions may be incorrect or invisibly reproduce undesirable biases present in the training data.

Once the data are identified from a source, they must be standardized and cleaned, and assessed for outliers, errors, missing values, and overlapping features. Features in a dataset may be deleted because of lack of significance, redundancy, sparsity, or missing values, or replaced through dimensionality reduction techniques (discussed later) that can represent the same data using fewer features.[13]
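A hypothetical preprocessing sketch of these steps follows; the file name and column names are invented for illustration and are not from any cited dataset.

```python
# Hypothetical preprocessing sketch: the file name and column names are invented
# for illustration and are not from any cited dataset.
import pandas as pd

df = pd.read_csv("preop_records.csv")                       # hypothetical source data
df = df.drop_duplicates()                                   # remove duplicate samples
df = df.dropna(axis=1, thresh=int(0.7 * len(df)))           # drop mostly-missing features
df["creatinine"] = df["creatinine"].fillna(df["creatinine"].median())  # impute remaining gaps

# Flag and remove outliers (here, values >3 standard deviations from the mean)
z = (df["creatinine"] - df["creatinine"].mean()) / df["creatinine"].std()
df = df[z.abs() < 3]

# Standardize numeric features to zero mean and unit variance
numeric = df.select_dtypes("number")
df[numeric.columns] = (numeric - numeric.mean()) / numeric.std()
```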



Types of ML Model

Most ML models can be categorized as supervised, unsupervised, or reinforcement learning. Some newer models, particularly transformers (on which large language models such as ChatGPT are based), do not easily fit into this traditional categorization. The technical details of the underlying algorithms are beyond the scope of this review but are discussed by Bishop (2006), Hastie et al (2009), and Goodfellow et al (2016).[15] [16] [17]

  • Supervised learning: supervised learning models are trained on labeled datasets, where each input is associated with a corresponding ground-truth output. The model's parameters (for example, the weights of a neural network) are updated using these ground-truth input–output pairs to improve subsequent predictions. The goal of training is to find a set of parameters that generalize well to unseen data—a model that performs well on the training data but poorly on new data is said to be overfitted to the training set (akin to a student who has memorized an exam answer key but is unable to apply the information to answer different questions). To detect overfitting, the available data may be divided into a training set (used to train the model) and a test set (used to evaluate the model's performance after training). Because the model has not seen the test data during training, the test data can be used to determine how the model might perform in the real world (a brief code sketch contrasting supervised and unsupervised learning follows this list).

    • Classification is the subcategory of supervised learning that assigns a discrete label (such as yes/no or a choice from a list of possibilities) to each data sample. Techniques used for classification include decision trees, random forests, support vector machines, logistic regression, and neural networks.

    • Regression is the subcategory of supervised learning that assigns a continuous numerical label to each sample. Regression techniques include linear and nonlinear models.[18]

  • Unsupervised learning: unsupervised learning uses unlabeled data and lets the algorithm determine the underlying patterns without human intervention or guidance. Unsupervised techniques are helpful for data exploration or hypothesis generation, anomaly detection, dimensionality reduction, and data clustering. Clinically, unsupervised learning could identify clusters of patients based on genetic markers that may have different responses to treatments.

    • Clustering of the data refers to discovering groups based on its natural distribution rather than on external labels. Clusters can then be examined to determine what drives the differences between those groups.

    • Dimensionality reduction refers to transforming data with hundreds or thousands of features to data with a small number of transformed features (often only two or three for visualization) while preserving essential features such as distances between samples. The projection of a 3D globe onto a 2D map is an everyday example of dimensionality reduction, although, in practice, dimensionality reduction is generally much more dramatic. Algorithms for dimension reduction include principal component analysis, factor analysis, and nonnegative matrix factorization.

  • Reinforcement learning: reinforcement learning attempts to learn optimal actions based on real-time observations to achieve a goal (i.e., maximize reward and/or minimize punishment). It is also used in agent-based robotics to process images of the environment.

  • Beyond supervised/unsupervised: since the early 2000s, DL has fueled the rapid growth of AI technologies. DL models include, but are not limited to, recurrent neural networks, convolutional neural networks, and transformers, some of which are briefly described in [Table 1].[13] This approach has greatly advanced image, text, and speech processing. However, generating sufficient labeled data to train such models is extremely expensive and time-consuming. Self-supervised learning uses data that have not been manually labeled to train models on complex tasks; for example, a model may be asked to predict masked words in a sentence based on context. The masking needed for the model can be performed programmatically without manual labeling.

    • Large language models (LLMs): LLMs use large datasets to comprehend, summarize, create, and predict text contents. These models require training on unlabeled data in the petabyte range. Transformer models that can assess context are generally used for LLMs. Examples of LLMs are:

      • ▪ ChatGPT.

      • ▪ Google Bard.

    • Generative AI: generative AI uses vast amounts of unlabeled data to create new data, such as an essay, report, or a piece of music, using techniques such as stable diffusion and autoregressive modeling. Generative AI that creates images from text often consists of an image generation network layered on top of an already-trained LLM. Examples of generative AI include:

      • ▪ Dall-E.

      • ▪ Midjourney.
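The minimal sketch below (using synthetic data; the model choices are arbitrary illustrative examples, not recommendations from the article) contrasts a supervised classifier evaluated on a held-out test set with unsupervised clustering and dimensionality reduction:

```python
# Sketch with synthetic data: a supervised classifier evaluated on held-out test
# data versus unsupervised clustering and dimensionality reduction. The specific
# models (logistic regression, k-means, PCA) are arbitrary illustrative choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Supervised: learn from labeled training data, evaluate on unseen test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy :", clf.score(X_test, y_test))    # a large gap suggests overfitting

# Unsupervised: discover structure without any labels
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)             # 20 features reduced to 2
print(clusters[:10], X_2d.shape)
```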



Evaluation of Supervised Learning Models

ML models must be tested after initial validation, and periodic testing is needed to check for drift after new features are added or hyperparameters are adjusted. To test a model, its predictions are compared with labeled data. The familiar true-positive, true-negative, false-positive, and false-negative rates from statistics can also be used to evaluate the performance of ML models. Depending on the objective, tests are undertaken as listed in [Table 2].[12]

Table 2

Assessment of AI and ML models

| Test parameter | Characteristics |
| --- | --- |
| Accuracy | Number of correct predictions/total predictions |
| Precision | True positives/all predicted positives (true positives + false positives) |
| Recall | True positives/all actual positives (true positives + false negatives) |
| Area under the curve | Compares models by plotting the false positive rate (x-axis) against the true positive rate (y-axis); the closer the curve lies to the upper left corner (and the area to 1), the better the model |
| Log loss | Compares the predicted probabilities with the actual labels; 0 indicates perfect predictions, and larger values indicate poorer ones—a suitable parameter for comparing models |
| F1 score | Harmonic mean of precision and recall, ideally >0.9 |
| Confusion matrix | A graphic plot of a multiclass classification of actual versus predicted outcomes for each class; easy to comprehend, as the diagonal values show correct results |

Abbreviations: AI, artificial intelligence; ML, machine learning.
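As an illustration (hypothetical labels and probabilities; scikit-learn is assumed to be available), the metrics in Table 2 can be computed as follows:

```python
# Illustrative computation of the metrics in Table 2 on hypothetical labels and
# predicted probabilities, assuming scikit-learn is available.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, log_loss, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                     # ground-truth labels
y_prob = [0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3]     # predicted probabilities
y_pred = [1 if p > 0.5 else 0 for p in y_prob]        # thresholded predictions

print("accuracy :", accuracy_score(y_true, y_pred))   # correct predictions / total
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1 score :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
print("AUC      :", roc_auc_score(y_true, y_prob))    # area under the ROC curve
print("log loss :", log_loss(y_true, y_prob))         # lower is better; 0 is perfect
print(confusion_matrix(y_true, y_pred))               # diagonal entries are correct results
```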




Capabilities and Limitations of AI

  • Capabilities: AI can:

    • Assess and quickly analyze vast amounts of language, image, and text data beyond human, statistical, or traditional programming capabilities.

    • Find unforeseen patterns and solutions using large datasets.

    • Self-learn and self-correct to find better solutions over time.

    • Control devices in remote and extreme situations such as deep space, nuclear sites, weapons disposal, and toxic spills.

    • Enhance personal and organizational capabilities.

    • Advance rapidly as AI software and hardware improve.

  • Limitations: AI:

    • Has a high cost for developing and training new models.

    • Needs large amounts of current and reliable data that are not easy to obtain and may create security, privacy, and ethical concerns.

    • May perform worse than human decision-making.

    • Lacks the ability to process outliers.

    • Is affected by systemic biases in the data, leading to biased output.

    • Lacks true creativity, as the AI solutions require human input.

    • Lacks abstraction and generalization, which limits its use in atypical situations.

    • Needs human oversight in decision-making.

    • Has high energy requirements. Current hardware, with its separation of CPU and memory, is energy inefficient and consumes considerable power. Better hardware, such as architectures with integrated CPU and memory, is needed to decrease energy consumption.

    • Raises the question of trust due to a lack of transparency/interpretability and contextual awareness.

    • Can “hallucinate,” resulting in grossly erroneous output unexplainable by the data.

    • Lacks emotive factors while assessing complex situations, particularly in health care.

    • Raises concerns regarding liability and accountability when the technology fails.

    • Is subject to regulatory oversight that is poorly developed, variable, and lagging.

    • Is often developed by for-profit entities whose interests are not necessarily aligned with the public's.



Applications of AI Relevant to Neuroanesthesiology

Very little literature assesses the direct impact of AI on neuroanesthesiology. One recent report evaluated how accurately ChatGPT could report the Society of Neuroanesthesiology and Critical Care guidelines and recommendations and found it inadequate.[10] Without more original research publications selectively relevant to neuroanesthesiology, one has to take a more holistic view of the field because changes in neuroanesthesiology practice will be a subset of the changes in the overall practice of medicine, such as in health care delivery, hospital operations, neurology, anesthesiology, and research.

  • Health care delivery applications: AI's role in health care management can be divided into three phases [19]:

    • Early phase: administration, health information, and electronic data acquisition and analysis.

    • Intermediate phase: telemedicine, where current deficiencies justify the risks.

    • Late phase: AI-based medical applications directly involved in clinical decision-making, which carries the risk of medical liability.

One advantage of AI is the ability to develop dynamic management strategies with built-in social and ethical considerations[20] that can ensure a fair allocation of resources across genders and races.[21] [22]

  • AI's impact on hospital-wide operations[4]: AI can significantly improve hospital management, including personnel oversight, scheduling, secretarial assistance, patient outreach, and satisfaction surveys.[23] [24] By voice-to-text transcription alone, AI could reduce physicians' work time by 17% and nurses' by 51%.[5]

  • AI's impact on neurology:

    • AI-enhanced neuroimaging: contextual interpretation of imaging data using radiomics for better diagnosis and prognostication.[25] [26] [27] [28]

    • AI-based automated handwriting and gait analysis and rapid electroencephalogram (EEG) signal processing will assist in diagnosing neurological diseases.[29] [30]

    • Building the connectome for understanding cortical functioning[31] and how it is affected by aging.[32] [33]

    • AI-assisted diagnosis of diseases that require neuroanesthesia care, including stroke,[34] Parkinson's disease,[35] epilepsy,[35] [36] and intracranial hemorrhage.[37] [38] Early and reliable diagnosis and classification of brain tumors.[27] [39] [40]

  • AI's impact on anesthesiology: the application of AI in anesthesiology was recently reviewed by several authors.[41] [42] [43] [44] [45] [46] [47] The applications range from scheduling the operating room to reviewing the electronic medical record (EMR) to predicting outcomes, as recently described by Cascella et al.[45] AI can direct anesthetic drug dosing to maintain a targeted level of EEG activity,[48] predict the likelihood of perioperative complications,[44] assess difficult intubation,[49] [50] predict drug doses and delivery,[43] anticipate intraoperative complications such as hypoxia[51] and hypotension[52] [53] by closely analyzing vital signs and/or EMR data, improve the reliability of vital sign alarms,[54] and help with postoperative monitoring,[55] pain treatment,[47] mortality prediction,[56] and anesthesia training.[57] Despite concerns regarding privacy and liability, there is optimism that AI will improve decision-making in the operating room and during perioperative care.[42] In particular, assessing the airway, predicting hypotension, and improving EEG monitoring are areas of technological development of immense relevance to neuroanesthesiology.

  • AI and EEG analysis: EEG is a powerful neurological tool that provides real-time information on the functioning of the cortex. AI-assisted advances in EEG monitoring are highly relevant to treating epilepsy by establishing the nature, source, and severity of seizures. EEG is routinely monitored during neurovascular surgery and helps assess the depth of anesthesia.[58] Li et al found that a long short-term memory network with a sparse denoising autoencoder could predict the depth of anesthesia during sevoflurane anesthesia better than conventional EEG parameters such as the α ratio or permutation entropy.[59] Since the early 1990s, EEG monitoring has been simplified to numerical values to assess the depth of anesthesia and help titrate anesthetic drugs.[48] Wang et al reported that with ML input of eight EEG parameters and demographic features, they could predict Bispectral Index (BIS) changes during propofol infusion.[60] Recently, EEG, electrocardiographic (ECG), and electromyographic data have been used with ML to assess the depth of anesthesia. Nsugbe et al reported that in some situations, ECG analysis could reflect the depth of anesthesia better than EEG analysis.[61]

  • AI in assessing the airway: AI has been used to predict difficulty in intubation using photographic images. Using DL, Hayasaka et al found that photographs taken in the supine-side-mouth-base position best predicted difficult intubation.[50] Lin et al, aware of the DL approach's lack of explainability, used ML to determine difficult intubation while mimicking a clinical protocol.[49] Wang et al applied semi-supervised DL to satisfactorily predict difficulties in mask ventilation and intubation using head and neck images from nine viewpoints.[62] Yamanaka et al determined that ML models could predict successful first-pass intubation and difficult intubation in the emergency department better than the modified LEMON (look, evaluate Mallampati, obstruction, neck mobility) criteria.[63] The technology for robotic intubation has also advanced in recent years.[64] Wang et al successfully used a remote intubation device in pigs.[65] Biro et al successfully demonstrated robotic navigation of an endoscope to intubate.[66] Robotic intubation that enables remote intubation today could function autonomously in the future. Once the tube has been placed, AI can verify correct placement on chest X-ray films.[67]

  • AI can predict intraoperative events such as hypoxia and hypotension: one of AI's important applications is accurately predicting the outcome of surgery and anticipated intraoperative complications. Lundberg et al reviewed data from 50,000 EMRs to anticipate the risk of hypoxia during surgery; with ML, anticipation of hypoxemia events roughly doubled based on patient and procedure characteristics.[68] Park et al used vital signs and ventilatory data to predict episodes of intraoperative hypoxia in pediatric patients with three ML approaches.[51] Kendale et al reviewed EMRs for comorbidities, drug treatment, and vital signs to reliably predict postinduction hypotension with ML.[52] Hatib et al successfully predicted the same with high-fidelity analysis of arterial pressure waveforms.[69] One concern about using AI in clinical settings is the lack of transparency. To circumvent this issue, van der Ven et al used a standardized patient management algorithm to predict hypotension.[70] Hypotension and hypoxia are two critical concerns during neuroanesthesia, which are further compounded by variations in the patient's position during surgery. The ability to predict such complications is, therefore, clinically highly relevant (a schematic sketch of this type of prediction pipeline follows this list).

  • Impact of AI on neuroscience research:

    • Bench research: AI can significantly enhance drug discovery,[71] [72] [73] improve the design of peptide carriers that deliver drugs to the brain,[74] [75] and help develop better pharmacokinetic models.[76]

    • Clinical trial planning and monitoring: AI will significantly impact future clinical trial design by assisting with patient selection[40] and ensuring protocol compliance.[77]

    • Radiomics is an AI application that quantifies tissue characteristics from images. Such image features have been correlated with cellular characteristics, and changes in the features can be tracked over time to assess therapeutic response.[78] [79] Thus, radiomics-based research could generate new insights through the ability to accurately predict outcomes for a given patient and the response to interventions.

      • Stroke: combining radiomics features with clinical characteristics best predicts the outcome of stroke patients, such as with endovascular interventions.[80] [81]

      • Gliomas: radiomics, in combination with genomics, can better predict treatment outcomes. Radiomics and tumor tissue characterization could help assess drug delivery and the extent of pseudo-progression of the tumor.[82] [83]

      • Traumatic brain injuries: AI-assisted outcome assessment of traumatic brain injury with multiparameter image assessment will improve patient classification for better outcome predictions and clinical trials.[84] [85] [86] [87]
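Purely as an illustration of the kind of supervised prediction pipeline described in the hypotension and hypoxia studies above, the following sketch uses synthetic data and invented feature names; it does not reproduce any cited model.

```python
# Purely illustrative sketch of a supervised intraoperative-hypotension predictor:
# the data are synthetic and the feature names are invented; this does not
# reproduce any model from the cited studies.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
features = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "baseline_map": rng.normal(90, 12, n),            # mean arterial pressure before induction
    "heart_rate": rng.normal(75, 15, n),
    "asa_class": rng.integers(1, 5, n),
    "induction_dose_mg_kg": rng.normal(2.0, 0.5, n),
})
# Synthetic label: hypotension made more likely by low baseline MAP and higher dose
risk = 0.04 * (85 - features["baseline_map"]) + 0.8 * (features["induction_dose_mg_kg"] - 2.0)
label = (risk + rng.normal(0, 1, n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, label, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```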



Conclusion

AI use in anesthesiology is likely to advance rapidly. Few publications currently address neuroanesthesia applications directly, but that is likely to change. This review summarizes the underlying concepts and describes AI's potential and concerns. An overview of AI is necessary to meaningfully understand AI-generated publications, research analysis, and clinical applications. While there is growing enthusiasm about AI, significant concerns, such as privacy, lack of transparency, legal liability, and unrecognized bias, have to be carefully addressed. It is a dangerous fallacy to think of AI as transcending humanity; AI reflects humanity for good and ill. AI mimics our human faculties to perceive, reason, and interact with the world. AI learns and reproduces the human biases present in the data it is trained on. It is reasonable to surmise that the future impacts of AI on health care and society will mirror our collective choices about how health care and society are organized: concentrating control over this technology in the hands of a few will worsen existing inequality, but democratizing it could improve the lives of many. From a narrower perspective, one can safely conclude that AI will play a significant role in neuroanesthesia practice and research well into the future.



Conflict of Interest

None declared.

  • References

  • 1 Chalmers DJ. The singularity: a philosophical review. J Conscious Stud 2010; 17: 7-65
  • 2 Garg PK. Chapter 1: Overview of artificial intelligence. In: Sharma L, Garg PK. Artificial Intelligence Technologies, Applications, and Challenges. Boca Raton, FL: CRC Press, Taylor & Francis Group; 2022: 3-18
  • 3 McCarthy J. What is Artificial Intelligence. Accessed 6 June 2024 at: http://www-formal.stanford.edu/jmc/
  • 4 Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020; 22 (06) e15154
  • 5 Hazarika I. Artificial intelligence: opportunities and implications for the health workforce. Int Health 2020; 12 (04) 241-245
  • 6 Feinstein M, Katz D, Demaria S, Hofer IS. Remote monitoring and artificial intelligence: outlook for 2050. Anesth Analg 2024; 138 (02) 350-357
  • 7 Wu YH, Huang KY, Tseng AC. Development of an artificial intelligence-based image recognition system for time-sequence analysis of tracheal intubation. Anesth Analg 2024; (e-pub ahead of print). DOI: 10.1213/ANE.0000000000006934.
  • 8 Fritz BA, Pugazenthi S, Budelier TP. et al. User-centered design of a machine learning dashboard for prediction of postoperative complications. Anesth Analg 2024; 138 (04) 804-813
  • 9 Nathan N. Robotics and the future of anesthesia. Anesth Analg 2024; 138 (02) 238
  • 10 Blacker SN, Kang M, Chakraborty I. et al. Utilizing artificial intelligence and chat generative pretrained transformer to answer questions about clinical scenarios in neuroanesthesiology. J Neurosurg Anesthesiol 2023; (e-pub ahead of print). DOI: 10.1097/ANA.0000000000000949.
  • 11 Rajagopalan V, Kulkarni DK. Artificial intelligence in neuroanesthesiology and neurocritical care. J Neuroanesthesiology Critical Care 2020; 7: 11-18
  • 12 Chae D. Data science and machine learning in anesthesiology. Korean J Anesthesiol 2020; 73 (04) 285-295
  • 13 Anonymous. Types of Neural Networks and Definition of Neural Networks. Updated 23 Nov 2022 . Accessed 6 June 2024 at: https://www.mygreatlearning.com/blog/types-of-neural-networks/
  • 14 Logunova I. Backpropagation in Neural Networks. Updated 18 Dec 2023 . Accessed 6 June 2024 at: https://serokell.io/blog/understanding-backpropagation
  • 15 Bishop CM. Pattern Recognition and Machine Learning. New York, NY: Springer; 2006
  • 16 Hastie T, Tibshirani R, Friedman JH. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Vol 2. New York, NY: Springer; 2009
  • 17 Goodfellow IJ, Bengio Y, Courville A. Deep Learning. Cambridge, MA: MIT Press; 2016
  • 18 Anonymous. Machine learning techniques. Accessed 6 June 2024 at: https://www.javatpoint.com/machine-learning-techniques
  • 19 Spatharou A, Solveigh H, Jenkins J. Transforming healthcare with AI: The impact on the workforce and organisations. McKinsey & Company. 2020 . Accessed 6 June 2024 at: https://www.mckinsey.com/industries/healthcare/our-insights/transforming-healthcare-with-ai
  • 20 Blobel B, Ruotsalainen P, Brochhausen M, Prestes E, Houghtaling MA. Designing and managing advanced, intelligent and ethical health and social care ecosystems. J Pers Med 2023; 13 (08) 1209
  • 21 Özdemir V. Digital is political: why we need a feminist conceptual lens on determinants of digital health. OMICS 2021; 25 (04) 249-254
  • 22 Hernandez-Boussard T, Siddique SM, Bierman AS, Hightower M, Burstin H. Promoting equity in clinical decision making: dismantling race-based medicine. Health Aff (Millwood) 2023; 42 (10) 1369-1373
  • 23 Davenport TH, Glaser JP. Factors governing the adoption of artificial intelligence in healthcare providers. Discov Health Syst 2022; 1 (01) 4
  • 24 Seah J, Boeken T, Sapoval M, Goh GS. Prime time for artificial intelligence in interventional radiology. Cardiovasc Intervent Radiol 2022; 45 (03) 283-289
  • 25 Henssen D, Meijer F, Verburg FA, Smits M. Challenges and opportunities for advanced neuroimaging of glioblastoma. Br J Radiol 2023; 96 (1141) 20211232
  • 26 de Godoy LL, Chawla S, Brem S, Mohan S. Taming glioblastoma in “real time”: integrating multimodal advanced neuroimaging/AI tools towards creating a robust and therapy agnostic model for response assessment in neuro-oncology. Clin Cancer Res 2023; 29 (14) 2588-2592
  • 27 Asif S, Zhao M, Chen X, Zhu Y. BMRI-NET: a deep stacked ensemble model for multi-class brain tumor classification from MRI images. Interdiscip Sci 2023; 15 (03) 499-514
  • 28 Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: a review. NMR Biomed 2023; 36 (12) e5014
  • 29 Vinny PW, Vishnu VY, Padma Srivastava MV. Artificial intelligence shaping the future of neurology practice. Med J Armed Forces India 2021; 77 (03) 276-282
  • 30 Davey Z, Gupta PB, Li DR, Nayak RU, Govindarajan P. Rapid response EEG: current state and future directions. Curr Neurol Neurosci Rep 2022; 22 (12) 839-846
  • 31 Rabinowitch I. What would a synthetic connectome look like?. Phys Life Rev 2020; 33: 1-15
  • 32 Harms MP, Somerville LH, Ances BM. et al. Extending the human connectome project across ages: imaging protocols for the lifespan development and aging projects. Neuroimage 2018; 183: 972-984
  • 33 Sun L, Zhao T, Liang X. et al. Functional connectome through the human life span. bioRxiv . Sep 19 2023; DOI: 10.1101/2023.09.12.557193.
  • 34 Corrias G, Mazzotta A, Melis M. et al. Emerging role of artificial intelligence in stroke imaging. Expert Rev Neurother 2021; 21 (07) 745-754
  • 35 Raghavendra U, Acharya UR, Adeli H. Artificial intelligence techniques for automated diagnosis of neurological disorders. Eur Neurol 2019; 82 (1–3): 41-64
  • 36 Hakeem H, Feng W, Chen Z. et al. Development and validation of a deep learning model for predicting treatment response in patients with newly diagnosed epilepsy. JAMA Neurol 2022; 79 (10) 986-996
  • 37 Cortés-Ferre L, Gutiérrez-Naranjo MA, Egea-Guerrero JJ, Pérez-Sánchez S, Balcerzyk M. Deep learning applied to intracranial hemorrhage detection. J Imaging 2023; 9 (02) 37
  • 38 Warman R, Warman A, Warman P. et al. Deep learning system boosts radiologist detection of intracranial hemorrhage. Cureus 2022; 14 (10) e30264
  • 39 Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. Convergence of artificial intelligence and neuroscience towards the diagnosis of neurological disorders-a scoping review. Sensors (Basel) 2023; 23 (06) 3062
  • 40 Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91: 110-123
  • 41 Song B, Zhou M, Zhu J. Necessity and importance of developing AI in anesthesia from the perspective of clinical safety and information security. Med Sci Monit 2023; 29: e938835
  • 42 Lopes S, Rocha G, Guimaraes-Pereira L. Artificial intelligence and its clinical application in Anesthesiology: a systematic review. J Clin Monit Comput 2024; 38 (02) 247-259
  • 43 Singh M, Nath G. Artificial intelligence and anesthesia: a narrative review. Saudi J Anaesth 2022; 16 (01) 86-93
  • 44 Yoon HK, Yang HL, Jung CW, Lee HC. Artificial intelligence in perioperative medicine: a narrative review. Korean J Anesthesiol 2022; 75 (03) 202-215
  • 45 Cascella M, Tracey MC, Petrucci E, Bignami EG. Exploring artificial intelligence in anesthesia: a primer on ethics, and clinical applications. Surgeries (Basel) 2023; 4: 264-274
  • 46 Mathis MR, Kheterpal S, Najarian K. Artificial intelligence for anesthesia: what the practicing clinician needs to know: more than black magic for the art of the dark. Anesthesiology 2018; 129 (04) 619-622
  • 47 Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Anesthesiology 2020; 132 (02) 379-394
  • 48 Lee HC, Ryu HG, Chung EJ, Jung CW. Prediction of bispectral index during target-controlled infusion of propofol and remifentanil: a deep learning approach. Anesthesiology 2018; 128 (03) 492-501
  • 49 Lin Q, Chng C-B, Too J. et al. Towards artificial intelligence-enabled medical pre-operative airway assessment. Paper presented at: 2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom); 17–19 October 2022, Genoa, Italy
  • 50 Hayasaka T, Kawano K, Kurihara K, Suzuki H, Nakane M, Kawamae K. Creation of an artificial intelligence model for intubation difficulty classification by deep learning (convolutional neural network) using face images: an observational study. J Intensive Care 2021; 9 (01) 38
  • 51 Park JB, Lee HJ, Yang HL. et al. Machine learning-based prediction of intraoperative hypoxemia for pediatric patients. PLoS One 2023; 18 (03) e0282303
  • 52 Kendale S, Kulkarni P, Rosenberg AD, Wang J. Supervised machine-learning predictive analytics for prediction of postinduction hypotension. Anesthesiology 2018; 129 (04) 675-688
  • 53 Lee S, Lee M, Kim SH, Woo J. Intraoperative hypotension prediction model based on systematic feature engineering and machine learning. Sensors (Basel) 2022; 22 (09) 3108
  • 54 Maciąg TT, van Amsterdam K, Ballast A, Cnossen F, Struys MM. Machine learning in anesthesiology: detecting adverse events in clinical practice. Health Informatics J 2022; 28 (03) 14604582221112855
  • 55 Bellini V, Valente M, Gaddi AV, Pelosi P, Bignami E. Artificial intelligence and telemedicine in anesthesia: potential and problems. Minerva Anestesiol 2022; 88 (09) 729-734
  • 56 Lee CK, Hofer I, Gabel E, Baldi P, Cannesson M. Development and validation of a deep neural network model for prediction of postoperative in-hospital mortality. Anesthesiology 2018; 129 (04) 649-662
  • 57 Angel MC, Rinehart JB, Canneson MP, Baldi P. Clinical knowledge and reasoning abilities of AI large language models in anesthesiology: a comparative study on the ABA exam. medRxiv . May 16 2023; DOI: 10.1101/2023.05.10.23289805.
  • 58 Roy S, Kiral I, Mirmomeni M. et al; IBM Epilepsy Consortium. Evaluation of artificial intelligence systems for assisting neurologists with fast and accurate annotations of scalp electroencephalography data. EBioMedicine 2021; 66: 103275
  • 59 Li R, Wu Q, Liu J, Wu Q, Li C, Zhao Q. Monitoring depth of anesthesia based on hybrid features and recurrent neural network. Front Neurosci 2020; 14: 26
  • 60 Wang Y, Zhang H, Fan Y. et al. Propofol anesthesia depth monitoring based on self-attention and residual structure convolutional neural network. Comput Math Methods Med 2022; 2022: 8501948
  • 61 Nsugbe E, Connelly S, Mutanga I. Towards an affordable means of surgical depth of anesthesia monitoring: an EMG-ECG-EEG case study. BioMedInformatics 2023; 3: 769-790
  • 62 Wang G, Li C, Tang F. et al. A fully-automatic semi-supervised deep learning model for difficult airway assessment. Heliyon 2023; 9 (05) e15629
  • 63 Yamanaka S, Goto T, Morikawa K. et al. Machine learning approaches for predicting difficult airway and first-pass success in the emergency department: multicenter prospective observational study. Interact J Med Res 2022; 11 (01) e28366
  • 64 Khan MJ, Karmakar A. Emerging robotic innovations and artificial intelligence in endotracheal intubation and airway management: current state of the art. Cureus 2023; 15 (07) e42625
  • 65 Wang X, Tao Y, Tao X. et al. An original design of remote robot-assisted intubation system. Sci Rep 2018; 8 (01) 13403
  • 66 Biro P, Hofmann P, Gage D. et al. Automated tracheal intubation in an airway manikin using a robotic endoscope: a proof of concept study. Anaesthesia 2020; 75 (07) 881-886
  • 67 Brown MS, Wong KP, Shrestha L. et al. Automated endotracheal tube placement check using semantically embedded deep neural networks. Acad Radiol 2023; 30 (03) 412-420
  • 68 Lundberg SM, Nair B, Vavilala MS. et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nat Biomed Eng 2018; 2 (10) 749-760
  • 69 Hatib F, Jian Z, Buddi S. et al. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology 2018; 129 (04) 663-674
  • 70 van der Ven WH, Veelo DP, Wijnberge M, van der Ster BJP, Vlaar APJ, Geerts BF. One of the first validations of an artificial intelligence algorithm for clinical use: the impact on intraoperative hypotension prediction and clinical decision-making. Surgery 2021; 169 (06) 1300-1303
  • 71 Lavecchia A. Deep learning in drug discovery: opportunities, challenges and future prospects. Drug Discov Today 2019; 24 (10) 2017-2032
  • 72 Stephenson N, Shane E, Chase J. et al. Survey of machine learning techniques in drug discovery. Curr Drug Metab 2019; 20 (03) 185-193
  • 73 Carracedo-Reboredo P, Liñares-Blanco J, Rodríguez-Fernández N. et al. A review on machine learning approaches and trends in drug discovery. Comput Struct Biotechnol J 2021; 19: 4538-4558
  • 74 de Oliveira ECL, da Costa KS, Taube PS, Lima AH, Junior CSS. Biological membrane-penetrating peptides: computational prediction and applications. Front Cell Infect Microbiol 2022; 12: 838259
  • 75 Sharma S, Borski C, Hanson J. et al. Identifying an optimal neuroinflammation treatment using a nanoligomer discovery engine. ACS Chem Neurosci 2022; 13 (23) 3247-3256
  • 76 Gupta S, Basant N, Singh KP. Qualitative and quantitative structure-activity relationship modelling for predicting blood-brain barrier permeability of structurally diverse chemicals. SAR QSAR Environ Res 2015; 26 (02) 95-124
  • 77 Getz K, Smith Z, Shafner L, Hanina A. Assessing the scope and predictors of intentional dose non-adherence in clinical trials. Ther Innov Regul Sci 2020; 54 (06) 1330-1338
  • 78 Beig N, Bera K, Tiwari P. Introduction to radiomics and radiogenomics in neuro-oncology: implications and challenges. Neurooncol Adv 2021; 2 (Suppl. 04) iv3-iv14
  • 79 McCague C, Ramlee S, Reinius M. et al. Introduction to radiomics for a clinical audience. Clin Radiol 2023; 78 (02) 83-98
  • 80 Dragoș HM, Stan A, Pintican R. et al. MRI radiomics and predictive models in assessing ischemic stroke outcome-a systematic review. Diagnostics (Basel) 2023; 13 (05) 857
  • 81 Ramos LA, van Os H, Hilbert A. et al. Combination of radiological and clinical baseline data for outcome prediction of patients with an acute ischemic stroke. Front Neurol 2022; 13: 809343
  • 82 Mammadov O, Akkurt BH, Musigmann M. et al. Radiomics for pseudoprogression prediction in high grade gliomas: added value of MR contrast agent. Heliyon 2022; 8 (08) e10023
  • 83 Sakly H, Said M, Seekins J, Guetari R, Kraiem N, Marzougui M. Brain tumor radiogenomic classification of O6-methylguanine-DNA methyltransferase promoter methylation in malignant gliomas-based transfer learning. Cancer Contr 2023; 30: 10732748231169149
  • 84 Åkerlund CAI, Holst A, Stocchetti N. et al; CENTER-TBI Participants and Investigators. Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study. Crit Care 2022; 26 (01) 228
  • 85 Brossard C, Grèze J, de Busschère JA. et al. Prediction of therapeutic intensity level from automatic multiclass segmentation of traumatic brain injury lesions on CT-scans. Sci Rep 2023; 13 (01) 20155
  • 86 Pease M, Arefan D, Barber J. et al; TRACK-TBI Investigators. Outcome prediction in patients with severe traumatic brain injury using deep learning from head CT scans. Radiology 2022; 304 (02) 385-394
  • 87 Luo X, Lin D, Xia S. et al. Machine learning classification of mild traumatic brain injury using whole-brain functional activity: a radiomics analysis. Dis Markers 2021; 2021: 3015238
  • 88 Huang L, Chen X, Liu W, Shih PC, Bao J. Automatic surgery and anesthesia emergence duration prediction using artificial neural networks. J Healthc Eng 2022; 2022: 2921775
  • 89 Hetherington J, Lessoway V, Gunka V, Abolmaesumi P, Rohling R. SLIDE: automatic spine level identification system using a deep convolutional neural network. Int J CARS 2017; 12 (07) 1189-1198
  • 90 Miyaguchi N, Takeuchi K, Kashima H, Morita M, Morimatsu H. Predicting anesthetic infusion events using machine learning. Sci Rep 2021; 11 (01) 23648
  • 91 Enarvi S, Amoia S, Del-Agua Teba M. et al. Generating Medical Reports from Patient-Doctor Conversations using Sequence-to-Sequence Models. Association for Computational Linguistics; 2020: 22-30
  • 92 Jiao Y, Xue B, Lu C, Avidan MS, Kannampallil T. Continuous real-time prediction of surgical case duration using a modular artificial neural network. Br J Anaesth 2022; 128 (05) 829-837
  • 93 He Y, Peng S, Chen M, Yang Z, Chen Y. A transformer-based prediction method for depth of anesthesia during target-controlled infusion of propofol and remifentanil. IEEE Trans Neural Syst Rehabil Eng 2023; 31: 3363-3374
  • 94 Dresp B, Liu R, Wandeto J. Surgical task expertise detected by a self-organizing neural network map. 2021
  • 95 Cascella M, Scarpati G, Bignami EG. et al. Utilizing an artificial intelligence framework (conditional generative adversarial network) to enhance telemedicine strategies for cancer pain management. J Anesth Analg Crit Care 2023; 3 (01) 19

Address for correspondence

Shailendra Joshi, MD
Department of Anesthesiology, Columbia University, College of Physicians and Surgeons
630 West 168th Street, P&S Box 46, New York, NY 10032
United States   

Publication History

Article published online:
06 August 2024

© 2024. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)

Thieme Medical and Scientific Publishers Pvt. Ltd.
A-12, 2nd Floor, Sector 2, Noida-201301 UP, India

  • References

  • 1 Chalmers DJ. The singularity: a philosophical review. J Conscious Stud 2010; 17: 7-65
  • 2 Garg PK. Chapter 1: Overview of artificial intelligence. In: Sharma L, Garg PK. Artificial Intelligence Technologies, Applications, and Challenges. Boca Raton, FL:: CRC Press, Taylor & Francis Group;; 2022: 3-18
  • 3 McCarthy J. What is Artificial Intelligence. Accessed 6 June 2024 at: http://www-formal.stanford.edu/jmc/
  • 4 Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020; 22 (06) e15154
  • 5 Hazarika I. Artificial intelligence: opportunities and implications for the health workforce. Int Health 2020; 12 (04) 241-245
  • 6 Feinstein M, Katz D, Demaria S, Hofer IS. Remote monitoring and artificial intelligence: outlook for 2050. Anesth Analg 2024; 138 (02) 350-357
  • 7 Wu YH. Huang KY, Tseng AC. Development of an artificial intelligence-based image recognition system for time-sequence analysis of tracheal intubation. Anesth Analg 2024; (e-pub ahead of print). DOI: 10.1213/ANE.0000000000006934.
  • 8 Fritz BA, Pugazenthi S, Budelier TP. et al. User-centered design of a machine learning dashboard for prediction of postoperative complications. Anesth Analg 2024; 138 (04) 804-813
  • 9 Nathan N. Robotics and the future of anesthesia. Anesth Analg 2024; 138 (02) 238
  • 10 Blacker SN, Kang M, Chakraborty I. et al. Utilizing artificial intelligence and chat generative pretrained transformer to answer questions about clinical scenarios in neuroanesthesiology. J Neurosurg Anesthesiol 2023; (e-pub ahead of print). DOI: 10.1097/ANA.0000000000000949.
  • 11 Rajagopalan V, Kulkarni DK. Artificial intelligence in neuroanesthesiology and neurocritical care. J Neuroanesthesiology Critical Care 2020; 7: 11-18
  • 12 Chae D. Data science and machine learning in anesthesiology. Korean J Anesthesiol 2020; 73 (04) 285-295
  • 13 Anonymous. Types of Neural Networks and Definition of Neural Networks. Updated 23 Nov 2022. Accessed 6 June 2024 at: https://www.mygreatlearning.com/blog/types-of-neural-networks/
  • 14 Logunova I. Backpropagation in Neural Networks. Updated 18 Dec 2023. Accessed 6 June 2024 at: https://serokell.io/blog/understanding-backpropagation
  • 15 Bishop CM. Pattern Recognition and Machine Learning. New York, NY: Springer Inc.; 2006
  • 16 Hastie T, Tibshirani R, Friedman JH. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Vol 2. New York, NY: Springer Inc.; 2009
  • 17 Goodfellow IJ, Bengio Y, Courville A. Deep Learning. Cambridge, MA: MIT Press; 2016
  • 18 Anonymous. Machine learning techniques. Accessed 6 June 2024 at: https://www.javatpoint.com/machine-learning-techniques
  • 19 Spatharou A, Hieronimus S, Jenkins J. Transforming healthcare with AI: The impact on the workforce and organisations. McKinsey & Company. 2020. Accessed 6 June 2024 at: https://www.mckinsey.com/industries/healthcare/our-insights/transforming-healthcare-with-ai
  • 20 Blobel B, Ruotsalainen P, Brochhausen M, Prestes E, Houghtaling MA. Designing and managing advanced, intelligent and ethical health and social care ecosystems. J Pers Med 2023; 13 (08) 1209
  • 21 Özdemir V. Digital is political: why we need a feminist conceptual lens on determinants of digital health. OMICS 2021; 25 (04) 249-254
  • 22 Hernandez-Boussard T, Siddique SM, Bierman AS, Hightower M, Burstin H. Promoting equity in clinical decision making: dismantling race-based medicine. Health Aff (Millwood) 2023; 42 (10) 1369-1373
  • 23 Davenport TH, Glaser JP. Factors governing the adoption of artificial intelligence in healthcare providers. Discov Health Syst 2022; 1 (01) 4
  • 24 Seah J, Boeken T, Sapoval M, Goh GS. Prime time for artificial intelligence in interventional radiology. Cardiovasc Intervent Radiol 2022; 45 (03) 283-289
  • 25 Henssen D, Meijer F, Verburg FA, Smits M. Challenges and opportunities for advanced neuroimaging of glioblastoma. Br J Radiol 2023; 96 (1141) 20211232
  • 26 de Godoy LL, Chawla S, Brem S, Mohan S. Taming glioblastoma in “real time”: integrating multimodal advanced neuroimaging/AI tools towards creating a robust and therapy agnostic model for response assessment in neuro-oncology. Clin Cancer Res 2023; 29 (14) 2588-2592
  • 27 Asif S, Zhao M, Chen X, Zhu Y. BMRI-NET: a deep stacked ensemble model for multi-class brain tumor classification from MRI images. Interdiscip Sci 2023; 15 (03) 499-514
  • 28 Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: a review. NMR Biomed 2023; 36 (12) e5014
  • 29 Vinny PW, Vishnu VY, Padma Srivastava MV. Artificial intelligence shaping the future of neurology practice. Med J Armed Forces India 2021; 77 (03) 276-282
  • 30 Davey Z, Gupta PB, Li DR, Nayak RU, Govindarajan P. Rapid response EEG: current state and future directions. Curr Neurol Neurosci Rep 2022; 22 (12) 839-846
  • 31 Rabinowitch I. What would a synthetic connectome look like?. Phys Life Rev 2020; 33: 1-15
  • 32 Harms MP, Somerville LH, Ances BM. et al. Extending the human connectome project across ages: imaging protocols for the lifespan development and aging projects. Neuroimage 2018; 183: 972-984
  • 33 Sun L, Zhao T, Liang X. et al. Functional connectome through the human life span. bioRxiv. Sep 19, 2023. DOI: 10.1101/2023.09.12.557193.
  • 34 Corrias G, Mazzotta A, Melis M. et al. Emerging role of artificial intelligence in stroke imaging. Expert Rev Neurother 2021; 21 (07) 745-754
  • 35 Raghavendra U, Acharya UR, Adeli H. Artificial intelligence techniques for automated diagnosis of neurological disorders. Eur Neurol 2019; 82 (1–3): 41-64
  • 36 Hakeem H, Feng W, Chen Z. et al. Development and validation of a deep learning model for predicting treatment response in patients with newly diagnosed epilepsy. JAMA Neurol 2022; 79 (10) 986-996
  • 37 Cortés-Ferre L, Gutiérrez-Naranjo MA, Egea-Guerrero JJ, Pérez-Sánchez S, Balcerzyk M. Deep learning applied to intracranial hemorrhage detection. J Imaging 2023; 9 (02) 37
  • 38 Warman R, Warman A, Warman P. et al. Deep learning system boosts radiologist detection of intracranial hemorrhage. Cureus 2022; 14 (10) e30264
  • 39 Surianarayanan C, Lawrence JJ, Chelliah PR, Prakash E, Hewage C. Convergence of artificial intelligence and neuroscience towards the diagnosis of neurological disorders-a scoping review. Sensors (Basel) 2023; 23 (06) 3062
  • 40 Luo J, Pan M, Mo K, Mao Y, Zou D. Emerging role of artificial intelligence in diagnosis, classification and clinical management of glioma. Semin Cancer Biol 2023; 91: 110-123
  • 41 Song B, Zhou M, Zhu J. Necessity and importance of developing AI in anesthesia from the perspective of clinical safety and information security. Med Sci Monit 2023; 29: e938835
  • 42 Lopes S, Rocha G, Guimaraes-Pereira L. Artificial intelligence and its clinical application in Anesthesiology: a systematic review. J Clin Monit Comput 2024; 38 (02) 247-259
  • 43 Singh M, Nath G. Artificial intelligence and anesthesia: a narrative review. Saudi J Anaesth 2022; 16 (01) 86-93
  • 44 Yoon HK, Yang HL, Jung CW, Lee HC. Artificial intelligence in perioperative medicine: a narrative review. Korean J Anesthesiol 2022; 75 (03) 202-215
  • 45 Cascella M, Tracey MC, Petrucci E, Bignami EG. Exploring artificial intelligence in anesthesia: a primer on ethics, and clinical applications. Surgeries (Basel) 2023; 4: 264-274
  • 46 Mathis MR, Kheterpal S, Najarian K. Artificial intelligence for anesthesia: what the practicing clinician needs to know: more than black magic for the art of the dark. Anesthesiology 2018; 129 (04) 619-622
  • 47 Hashimoto DA, Witkowski E, Gao L, Meireles O, Rosman G. Artificial intelligence in anesthesiology: current techniques, clinical applications, and limitations. Anesthesiology 2020; 132 (02) 379-394
  • 48 Lee HC, Ryu HG, Chung EJ, Jung CW. Prediction of bispectral index during target-controlled infusion of propofol and remifentanil: a deep learning approach. Anesthesiology 2018; 128 (03) 492-501
  • 49 Lin Q, Chng C-B, Too J. et al. Towards artificial intelligence-enabled medical pre-operative airway assessment. Paper presented at: 2022 IEEE International Conference on E-health Networking, Application & Services (HealthCom); 17–19 October 2022, Genoa, Italy
  • 50 Hayasaka T, Kawano K, Kurihara K, Suzuki H, Nakane M, Kawamae K. Creation of an artificial intelligence model for intubation difficulty classification by deep learning (convolutional neural network) using face images: an observational study. J Intensive Care 2021; 9 (01) 38
  • 51 Park JB, Lee HJ, Yang HL. et al. Machine learning-based prediction of intraoperative hypoxemia for pediatric patients. PLoS One 2023; 18 (03) e0282303
  • 52 Kendale S, Kulkarni P, Rosenberg AD, Wang J. Supervised machine-learning predictive analytics for prediction of postinduction hypotension. Anesthesiology 2018; 129 (04) 675-688
  • 53 Lee S, Lee M, Kim SH, Woo J. Intraoperative hypotension prediction model based on systematic feature engineering and machine learning. Sensors (Basel) 2022; 22 (09) 3108
  • 54 Maciąg TT, van Amsterdam K, Ballast A, Cnossen F, Struys MM. Machine learning in anesthesiology: detecting adverse events in clinical practice. Health Informatics J 2022; 28 (03) 14604582221112855
  • 55 Bellini V, Valente M, Gaddi AV, Pelosi P, Bignami E. Artificial intelligence and telemedicine in anesthesia: potential and problems. Minerva Anestesiol 2022; 88 (09) 729-734
  • 56 Lee CK, Hofer I, Gabel E, Baldi P, Cannesson M. Development and validation of a deep neural network model for prediction of postoperative in-hospital mortality. Anesthesiology 2018; 129 (04) 649-662
  • 57 Angel MC, Rinehart JB, Cannesson MP, Baldi P. Clinical knowledge and reasoning abilities of AI large language models in anesthesiology: a comparative study on the ABA exam. medRxiv. May 16, 2023. DOI: 10.1101/2023.05.10.23289805.
  • 58 Roy S, Kiral I, Mirmomeni M. et al; IBM Epilepsy Consortium. Evaluation of artificial intelligence systems for assisting neurologists with fast and accurate annotations of scalp electroencephalography data. EBioMedicine 2021; 66: 103275
  • 59 Li R, Wu Q, Liu J, Wu Q, Li C, Zhao Q. Monitoring depth of anesthesia based on hybrid features and recurrent neural network. Front Neurosci 2020; 14: 26
  • 60 Wang Y, Zhang H, Fan Y. et al. Propofol anesthesia depth monitoring based on self-attention and residual structure convolutional neural network. Comput Math Methods Med 2022; 2022: 8501948
  • 61 Nsugbe E, Connelly S, Mutanga I. Towards an affordable means of surgical depth of anesthesia monitoring: an EMG-ECG-EEG case study. BioMedInformatics 2023; 3: 769-790
  • 62 Wang G, Li C, Tang F. et al. A fully-automatic semi-supervised deep learning model for difficult airway assessment. Heliyon 2023; 9 (05) e15629
  • 63 Yamanaka S, Goto T, Morikawa K. et al. Machine learning approaches for predicting difficult airway and first-pass success in the emergency department: multicenter prospective observational study. Interact J Med Res 2022; 11 (01) e28366
  • 64 Khan MJ, Karmakar A. Emerging robotic innovations and artificial intelligence in endotracheal intubation and airway management: current state of the art. Cureus 2023; 15 (07) e42625
  • 65 Wang X, Tao Y, Tao X. et al. An original design of remote robot-assisted intubation system. Sci Rep 2018; 8 (01) 13403
  • 66 Biro P, Hofmann P, Gage D. et al. Automated tracheal intubation in an airway manikin using a robotic endoscope: a proof of concept study. Anaesthesia 2020; 75 (07) 881-886
  • 67 Brown MS, Wong KP, Shrestha L. et al. Automated endotracheal tube placement check using semantically embedded deep neural networks. Acad Radiol 2023; 30 (03) 412-420
  • 68 Lundberg SM, Nair B, Vavilala MS. et al. Explainable machine-learning predictions for the prevention of hypoxaemia during surgery. Nat Biomed Eng 2018; 2 (10) 749-760
  • 69 Hatib F, Jian Z, Buddi S. et al. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology 2018; 129 (04) 663-674
  • 70 van der Ven WH, Veelo DP, Wijnberge M, van der Ster BJP, Vlaar APJ, Geerts BF. One of the first validations of an artificial intelligence algorithm for clinical use: the impact on intraoperative hypotension prediction and clinical decision-making. Surgery 2021; 169 (06) 1300-1303
  • 71 Lavecchia A. Deep learning in drug discovery: opportunities, challenges and future prospects. Drug Discov Today 2019; 24 (10) 2017-2032
  • 72 Stephenson N, Shane E, Chase J. et al. Survey of machine learning techniques in drug discovery. Curr Drug Metab 2019; 20 (03) 185-193
  • 73 Carracedo-Reboredo P, Liñares-Blanco J, Rodríguez-Fernández N. et al. A review on machine learning approaches and trends in drug discovery. Comput Struct Biotechnol J 2021; 19: 4538-4558
  • 74 de Oliveira ECL, da Costa KS, Taube PS, Lima AH, Junior CSS. Biological membrane-penetrating peptides: computational prediction and applications. Front Cell Infect Microbiol 2022; 12: 838259
  • 75 Sharma S, Borski C, Hanson J. et al. Identifying an optimal neuroinflammation treatment using a nanoligomer discovery engine. ACS Chem Neurosci 2022; 13 (23) 3247-3256
  • 76 Gupta S, Basant N, Singh KP. Qualitative and quantitative structure-activity relationship modelling for predicting blood-brain barrier permeability of structurally diverse chemicals. SAR QSAR Environ Res 2015; 26 (02) 95-124
  • 77 Getz K, Smith Z, Shafner L, Hanina A. Assessing the scope and predictors of intentional dose non-adherence in clinical trials. Ther Innov Regul Sci 2020; 54 (06) 1330-1338
  • 78 Beig N, Bera K, Tiwari P. Introduction to radiomics and radiogenomics in neuro-oncology: implications and challenges. Neurooncol Adv 2021; 2 (Suppl. 04) iv3-iv14
  • 79 McCague C, Ramlee S, Reinius M. et al. Introduction to radiomics for a clinical audience. Clin Radiol 2023; 78 (02) 83-98
  • 80 Dragoș HM, Stan A, Pintican R. et al. MRI radiomics and predictive models in assessing ischemic stroke outcome-a systematic review. Diagnostics (Basel) 2023; 13 (05) 857
  • 81 Ramos LA, van Os H, Hilbert A. et al. Combination of radiological and clinical baseline data for outcome prediction of patients with an acute ischemic stroke. Front Neurol 2022; 13: 809343
  • 82 Mammadov O, Akkurt BH, Musigmann M. et al. Radiomics for pseudoprogression prediction in high grade gliomas: added value of MR contrast agent. Heliyon 2022; 8 (08) e10023
  • 83 Sakly H, Said M, Seekins J, Guetari R, Kraiem N, Marzougui M. Brain tumor radiogenomic classification of O6-methylguanine-DNA methyltransferase promoter methylation in malignant gliomas-based transfer learning. Cancer Contr 2023; 30: 10732748231169149
  • 84 Åkerlund CAI, Holst A, Stocchetti N. et al; CENTER-TBI Participants and Investigators. Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study. Crit Care 2022; 26 (01) 228
  • 85 Brossard C, Grèze J, de Busschère JA. et al. Prediction of therapeutic intensity level from automatic multiclass segmentation of traumatic brain injury lesions on CT-scans. Sci Rep 2023; 13 (01) 20155
  • 86 Pease M, Arefan D, Barber J. et al; TRACK-TBI Investigators. Outcome prediction in patients with severe traumatic brain injury using deep learning from head CT scans. Radiology 2022; 304 (02) 385-394
  • 87 Luo X, Lin D, Xia S. et al. Machine learning classification of mild traumatic brain injury using whole-brain functional activity: a radiomics analysis. Dis Markers 2021; 2021: 3015238
  • 88 Huang L, Chen X, Liu W, Shih PC, Bao J. Automatic surgery and anesthesia emergence duration prediction using artificial neural networks. J Healthc Eng 2022; 2022: 2921775
  • 89 Hetherington J, Lessoway V, Gunka V, Abolmaesumi P, Rohling R. SLIDE: automatic spine level identification system using a deep convolutional neural network. Int J CARS 2017; 12 (07) 1189-1198
  • 90 Miyaguchi N, Takeuchi K, Kashima H, Morita M, Morimatsu H. Predicting anesthetic infusion events using machine learning. Sci Rep 2021; 11 (01) 23648
  • 91 Enarvi S, Amoia S, Del-Agua Teba M. et al. Generating Medical Reports from Patient-Doctor Conversations using Sequence-to-Sequence Models. Association for Computational Linguistics; 2020: 22-30
  • 92 Jiao Y, Xue B, Lu C, Avidan MS, Kannampallil T. Continuous real-time prediction of surgical case duration using a modular artificial neural network. Br J Anaesth 2022; 128 (05) 829-837
  • 93 He Y, Peng S, Chen M, Yang Z, Chen Y. A transformer-based prediction method for depth of anesthesia during target-controlled infusion of propofol and remifentanil. IEEE Trans Neural Syst Rehabil Eng 2023; 31: 3363-3374
  • 94 Dresp B, Liu R, Wandeto J. Surgical task expertise detected by a self-organizing neural network map. 2021
  • 95 Cascella M, Scarpati G, Bignami EG. et al. Utilizing an artificial intelligence framework (conditional generative adversarial network) to enhance telemedicine strategies for cancer pain management. J Anesth Analg Crit Care 2023; 3 (01) 19

Fig. 1 Anticipated impact of AI on neuroanesthesiology patient care. AI, artificial intelligence.
Fig. 2 Data processing for artificial intelligence, machine learning, and deep learning.
Fig. 3 The concept of an electronic neuron (top) and a fully connected convolutional neural network showing the stages of angiographic image analysis (bottom).