Machine learning for cardiovascular modelling
Predicting time-dependent, aortic blood flow
Abstract
The increasing complexity of mathematical models that attempt to approximate reality, combined with the desire for near real-time results, has emphasized the need for fast numerical simulations. Machine learning techniques can be of particular help in areas where classical numerical methods struggle to produce valid solutions in reasonable computational time because of complex behaviour on multiple temporal and spatial scales, such as (cardiovascular) fluid modelling. The aim of this study is to construct a single reduced-order model, based on neural networks, for time-dependent incompressible blood flow through the aorta that can account for varying inlet velocity conditions, material parameters and geometries (computational domains). The objectives are to minimize the loss of accuracy relative to time series simulated with OpenFOAM, and to obtain a speedup compared to the OpenFOAM simulations.
In this study, aortic blood flow during one heartbeat is modelled by the Navier-Stokes equations for incompressible, Newtonian fluids. Data for training and testing the neural networks were generated using the finite volume method. The network architecture consists of a convolutional encoder and decoder for dimensionality reduction, with an additional neural network for time evolution inserted between the encoder and decoder. The network takes one time step as input and predicts five consecutive time steps ahead. Three different networks for time evolution were tested, and the best performing one was evaluated on increasingly complex computational domains. Thereafter, the network's ability to generalize to varying inlet velocity patterns and to new variations of the computational domain was tested; for this purpose, the time-evolution network was adapted to take varying inlet velocity patterns into account. The performance of the networks was evaluated on predicting one to five time steps ahead from a single input time step, and on recursively applying the one-step-ahead prediction to obtain an entire heartbeat.
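For reference, the governing equations are the incompressible Navier-Stokes equations for a Newtonian fluid, which for velocity $\mathbf{u}$, pressure $p$, constant density $\rho$ and kinematic viscosity $\nu$ read

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\,\nabla^{2}\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0,
\]

where the kinematic viscosity $\nu$ is the material parameter varied in this study.

To make the architecture concrete, the following is a minimal sketch in PyTorch; the framework, layer sizes, channel counts and latent dimension are all illustrative assumptions, not the thesis implementation. The time-evolution component is shown as the LSTM variant reported below as the best performer:

```python
# Minimal sketch of the encoder / time-evolution / decoder model described in
# the abstract. All sizes (64x64 grid, 2 velocity components, latent dim 128)
# are illustrative assumptions, not the values used in the thesis.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_channels=2, latent_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # 16 -> 8
        )
        self.fc = nn.Linear(64 * 8 * 8, latent_dim)

    def forward(self, x):                    # x: (batch, 2, 64, 64)
        return self.fc(self.conv(x).flatten(1))

class Decoder(nn.Module):
    def __init__(self, out_channels=2, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, out_channels, 4, stride=2, padding=1),   # 32 -> 64
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 64, 8, 8))

class TimeEvolution(nn.Module):
    """LSTM that unrolls a latent state five steps ahead."""
    def __init__(self, latent_dim=128, n_steps=5):
        super().__init__()
        self.n_steps = n_steps
        self.lstm = nn.LSTMCell(latent_dim, latent_dim)

    def forward(self, z):
        h, c = torch.zeros_like(z), torch.zeros_like(z)
        outputs, inp = [], z
        for _ in range(self.n_steps):
            h, c = self.lstm(inp, (h, c))
            outputs.append(h)
            inp = h                          # feed the prediction back in
        return torch.stack(outputs, dim=1)   # (batch, n_steps, latent_dim)

class ROM(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc, self.evo, self.dec = Encoder(), TimeEvolution(), Decoder()

    def forward(self, x):                    # one time step in, five out
        z_seq = self.evo(self.enc(x))        # (batch, 5, latent_dim)
        b, t, d = z_seq.shape
        return self.dec(z_seq.reshape(b * t, d)).view(b, t, 2, 64, 64)

model = ROM()
u0 = torch.randn(1, 2, 64, 64)               # one velocity snapshot
u_next = model(u0)                           # five predicted snapshots
print(u_next.shape)                          # torch.Size([1, 5, 2, 64, 64])
```

For a full heartbeat, the one-step-ahead prediction would be fed back recursively as the next input, as described above.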
The best performing architecture was the combination of an encoder, a recurrent neural network with long short-term memory (LSTM) units for time evolution, and a decoder. Generalizability over the tested material parameter, blood viscosity, with the inlet velocity pattern kept fixed, appeared quite good: a relative mean absolute error (MAE) of around 10% was obtained for straight, bent and bifurcated channel geometries. When the network was tested on varying inlet velocity patterns, the relative MAE in the domain increased considerably, to around 25%. In general, the relative errors were significantly higher than the absolute errors, driven mainly by low-velocity parts of the domain and of the time series. In addition, the full height of the velocity peak was not always captured, and a spike in the domain MAE was observed for rapidly decreasing inlet velocities. The network was also unable to generalize to new geometries.
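A plausible reason the relative errors exceed the absolute ones is that a relative MAE divides by the local velocity magnitude, which is small in slow-flow regions and during low-velocity phases of the heartbeat. The exact error definition used in the thesis is not given in the abstract; the sketch below shows one common form, with a small constant eps (an assumption) to avoid division by zero:

```python
import torch

def relative_mae(pred, target, eps=1e-8):
    """Mean absolute error normalized by the mean target magnitude.

    One common definition; the thesis may normalize differently
    (e.g. pointwise, or by the peak inlet velocity).
    """
    return (pred - target).abs().mean() / (target.abs().mean() + eps)

# Toy illustration: the same absolute error looks much worse
# relative to a low-velocity field than to a high-velocity one.
target_fast = torch.full((10,), 1.0)
target_slow = torch.full((10,), 0.1)
print(relative_mae(target_fast + 0.02, target_fast))  # ~0.02 (2%)
print(relative_mae(target_slow + 0.02, target_slow))  # ~0.2  (20%)
```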
Various attempts to improve the accuracy, including altering the loss function and augmenting the data, were not successful. However, many choices of data augmentation, normalization technique and loss function remain untested and could improve the performance. The generalizability to different computational domains could be improved by adding more geometries, and more variation within the geometries, to the training data. Nonetheless, the prediction of one heartbeat was obtained in approximately 20 s, which corresponds to a speedup of around 20 compared to the OpenFOAM simulations. In conclusion, this research should be seen as a basis for time-dependent blood flow predictions in the aorta, from which a multitude of different paths can be explored.