Google AI proposes a deep learning-based algorithm to improve medical ventilator control for invasive ventilation


Mechanical ventilators help patients breathe during surgery or when they cannot breathe independently due to acute illness. A hollow tube (artificial airway) is inserted into the patient’s mouth and down into their primary airway, or trachea, to connect them to the ventilator.

The ventilator follows a respiratory waveform prescribed by the clinician based on a patient’s respiratory measurements (e.g., airway pressure, tidal volume). Ventilator control is therefore a difficult task: the system must be robust to variations in patients’ lungs while adhering closely to the prescribed waveform to avoid injury. This is one reason ventilators are closely monitored by highly experienced clinicians, who verify that performance meets patient needs and does not cause lung damage.

Most ventilators are controlled using PID (Proportional, Integral, Differential) methods, which regulate a system based on the history of errors between observed and desired states. For ventilator control, a PID controller uses three terms:

  • Proportional (“P”), which compares the measured pressure to the target pressure
  • Integral (“I”), which sums past errors
  • Differential (“D”), the difference between the two most recent measurements.
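The three terms above can be sketched as a minimal PID loop. Everything below is illustrative: the gains and the one-state first-order "lung" are hypothetical stand-ins, not values or dynamics from the study.

```python
class PID:
    """Textbook PID controller: u = kp*e + ki*sum(e*dt) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, target, measured, dt):
        error = target - measured        # "P": current error
        self.integral += error * dt      # "I": accumulated past error
        # "D": difference between the two most recent errors
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv


# Drive a toy first-order "lung" (dp/dt = u - 0.5*p, hypothetical) to a
# target pressure of 20 cmH2O.
pid = PID(kp=2.0, ki=5.0, kd=0.0)
p, target, dt = 0.0, 20.0, 0.01
for _ in range(1000):
    u = pid.step(target, p, dt)
    p += dt * (u - 0.5 * p)
```

With these (assumed) gains the integral term removes the steady-state error, so the simulated pressure settles at the target.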

PID control provides a solid baseline, relying on the high responsiveness of P control to rapidly raise lung pressure during inspiration, and on the stability of I control to hold the breath before it is expelled. However, ventilators must be fine-tuned for each patient, often multiple times, to balance the “ringing” of overzealous P control against the inefficiently slow rise in lung pressure of dominant I control.

A recent Google study proposes a neural network-based controller that improves control of medical ventilators by balancing these properties, reducing the risk of injury to patients’ lungs. The new control algorithm observes airway pressure, using signals from an artificial lung, and calculates the changes in airflow needed to reach the prescribed values. This method requires less manual intervention from clinicians and shows higher robustness and performance than existing systems.

The proposed controller is based on deep neural networks and therefore requires training data, whereas the coefficients of a PID controller can be tuned with a limited number of repeated trials. At the same time, popular model-free techniques such as Q-learning and policy gradient methods are data-intensive and therefore unsuitable here. Moreover, those approaches ignore the inherent differentiability of the ventilator dynamical system, which is deterministic, continuous, and contact-free.

The researchers instead adopt a model-based strategy: a DNN-based, data-driven simulator of the ventilator-patient dynamical system, a more accurate alternative to physics-based models.

To create a training dataset, the team explored the control space and the resulting pressures while keeping physical safety in mind (e.g., not over-inflating a test lung and causing injury). They used PID controllers with varied control coefficients to generate control-pressure trajectory data for simulator training, safely exploring the system’s behavior, and added random deviations to the PID controllers to ensure the dynamics were captured accurately.
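The exploration procedure described above can be sketched as a sweep over PID coefficients with random perturbations added to the control signal. The grid of gains, the noise scale, the safety clamp, and the one-state discrete "lung" (p ← 0.9·p + 0.5·u) are all illustrative assumptions standing in for the physical test-lung setup.

```python
import random

random.seed(0)
TARGET, STEPS = 20.0, 200

def rollout(kp, ki, noise=0.5):
    """Run one PI-controlled episode on a toy lung, logging (pressure, control) pairs."""
    p, integral, traj = 0.0, 0.0, []
    for _ in range(STEPS):
        error = TARGET - p
        integral += error
        u = kp * error + ki * integral
        u += random.gauss(0.0, noise)   # random deviation for exploration
        u = max(0.0, min(u, 100.0))     # crude safety clamp on the control
        traj.append((p, u))
        p = 0.9 * p + 0.5 * u           # hypothetical ventilator-lung dynamics
    return traj

# Sweep the control coefficients to cover the control space.
dataset = [rollout(kp, ki)
           for kp in (0.5, 1.0, 2.0)
           for ki in (0.05, 0.2, 0.5)]
```

Each trajectory pairs observed pressures with the controls that produced them, which is the raw material a learned simulator needs.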

They used an open-source ventilator built by Princeton University’s People’s Ventilator Project to perform mechanical ventilation tasks on a physical test lung, assembling a ventilator farm of ten ventilator-lung systems on a server rack. The farm covers a range of airway resistance and compliance settings spanning a variety of patient lung conditions, as required for practical applications of ventilation systems.

In the simulator, they model the state of the system at each instant as a set of past pressure observations and control actions. This data is fed into a DNN, which predicts the subsequent system pressure. The control-pressure trajectory data collected on the test lung is used to train this simulator.
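The idea of mapping a short history window of pressures and controls to the next pressure can be sketched as follows. To keep the example dependency-free, a linear model trained by stochastic gradient descent stands in for the paper's deep network, and the data comes from a hypothetical one-state lung rather than the test-lung hardware.

```python
import random

random.seed(0)

# Generate a trajectory from a toy lung driven by random controls.
pressures, controls = [0.0], []
for _ in range(600):
    u = random.uniform(0.0, 4.0)
    controls.append(u)
    pressures.append(0.9 * pressures[-1] + 0.5 * u)   # hypothetical dynamics

# State at time t = two past pressures + two past controls (plus a bias term).
def features(t):
    return [pressures[t], pressures[t - 1], controls[t], controls[t - 1], 1.0]

samples = [(features(t), pressures[t + 1]) for t in range(1, 600)]

# Train the linear "simulator" by SGD (a stand-in for training the DNN).
w = [0.0] * 5
for _ in range(50):                                   # epochs
    for x, y in samples:
        err = y - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + 0.002 * err * xi for wi, xi in zip(w, x)]

mse = sum((y - sum(wi * xi for wi, xi in zip(w, x))) ** 2
          for x, y in samples) / len(samples)
```

Because the toy target is exactly linear in the history window, the training error shrinks toward zero; the real simulator needs a DNN because patient-lung dynamics are not linear.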

The researchers then use the learned simulator to train a DNN-based controller entirely offline, which allows quick updates during controller training. Moreover, the differentiable nature of the simulator permits the stable application of the direct policy gradient, which analytically computes the gradient of the loss with respect to the DNN parameters; this is significantly more efficient than model-free alternatives.

The researchers compared the best-performing PID controller to their proposed method across several lung settings. The results show that the proposed controller performs better than the PID controller, exhibiting 22% lower mean absolute error (MAE) between the target and actual pressure waveforms.

They also compared the performance of the best single PID controller across the full set of lung parameters to that of their controller trained on a set of simulators over the same parameters. The team’s controller wins by up to 32% in MAE between target and actual pressure waveforms, which suggests the proposed controller would require fewer manual interventions between patients or when a patient’s condition changes.

The team also assessed the viability of model-free and other popular RL methods (PPO, DQN), comparing them to a policy gradient trained directly on the simulator. Their results show that the simulator-trained direct policy gradient achieves slightly better scores while using a more stable training method, orders of magnitude fewer training samples, and a considerably smaller hyperparameter search space.