When a prediction is issued, one can ask a human or an artificial intelligence how sure she or it is about that prediction. Without dwelling on the dramatic consequences of overconfident artificial agents in settings such as Autonomous Driving, this article will focus on Neural Concept's core competence: building predictive models that act as surrogates of more resource-intensive CAE models.
CAE simulations are physics-based and solve numerical equations on computational domains, mostly of industrial interest (CAD shapes). We will assume that CAE yields a reasonable estimate of real-life behaviour and can therefore be treated as "ground truth" (where it does not, it can be supplemented by experimental data). Like the human brain, deep learning data-driven models learn by seeing examples and extracting information from their results. In our case, the training inputs are CAD geometries together with other parameters (materials, boundary conditions, etc.), and the training targets are CAE (or experimental) results. Hence, when facing a new, unseen example, the model predicts the result in real time, based on the data used for its training.
The same question applies to our neural network models: how sure are they about their predictions?
The deep learning implementation of Neural Concept Shape is a tool to build and deploy surrogate models. The surrogate model (a trained neural network) undergoes extensive training and testing phases, with measurables such as L1 (the mean absolute error) and R2 (the coefficient of determination). However, R2 is a "backwards-looking" measurable, computed over the training/testing phase. During the deployment phase, engineers would like a "dynamic" way to assess whether a given prediction is reliable. If the neural network issues a signal of uncertainty above a predefined level, the human or artificial agent can activate remediations such as submitting the neural network to new training (for efficiency, starting from the previous neural network and using Transfer Learning technology).
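For reference, both "backwards-looking" measurables are straightforward to compute on a held-out test set; a minimal sketch follows (the arrays are placeholders, not real CAE data):

```python
import numpy as np

def mae(y_true, y_pred):
    # L1 metric: mean absolute error.
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(mae(y_true, y_pred), r2(y_true, y_pred))
```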
Therefore, Neural Concept has also worked extensively on uncertainty estimation, to help engineers facing epistemic uncertainty during the deployment phase (epistemic uncertainty being the degree of variation due to a lack of knowledge about the process we are trying to predict). As the example in Figure 1 shows, after uploading a given CAD geometry (here called "geometry_0001.stl"), the engineer receives from the neural network real-time predictions of the values of interest, together with a confidence index from the model.
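Purely as an illustration of such a gating rule (the function name, the 0.8 threshold, and the returned fields are hypothetical assumptions, not Neural Concept Shape's actual API):

```python
def handle_prediction(prediction, confidence, threshold=0.8):
    # Accept the prediction only if the model's confidence index is high enough.
    if confidence >= threshold:
        return {"value": prediction, "status": "trusted"}
    # Below threshold: flag the case for remediation, e.g. re-training the
    # network via Transfer Learning with additional CAE samples.
    return {"value": prediction, "status": "needs_retraining"}

print(handle_prediction(prediction=42.7, confidence=0.93))
print(handle_prediction(prediction=13.1, confidence=0.55))
```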
Another typical application is generative design. In this case, Neural Concept Shape acts as an agent that may create geometric shapes far outside the initial design envelope. We wish to know whether the predictions associated with the newly generated shapes are still reliable, or whether more input/output samples are needed to re-train the neural network.
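One way to picture this workflow is the loop below, where generate_shape, predict_with_uncertainty, and SIGMA_MAX are hypothetical stand-ins for the generator, an uncertainty-aware surrogate, and a reliability threshold:

```python
import random

SIGMA_MAX = 0.1  # illustrative reliability threshold on predicted uncertainty

def generate_shape():
    # Stand-in for a generative-design step proposing a candidate geometry.
    return {"id": random.randrange(10_000)}

def predict_with_uncertainty(shape):
    # Stand-in for an uncertainty-aware surrogate: (prediction, sigma).
    return random.gauss(0.0, 1.0), random.uniform(0.0, 0.3)

trusted, to_resimulate = [], []
for _ in range(100):
    shape = generate_shape()
    value, sigma = predict_with_uncertainty(shape)
    if sigma < SIGMA_MAX:
        trusted.append((shape, value))   # prediction considered reliable
    else:
        to_resimulate.append(shape)      # run CAE here, then re-train on it
```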
Masksembles: A New Methodology to Compute Uncertainty in Prediction
Neural Concept's staff collaborates on research topics on top of the available software capabilities, mainly with EPFL (Lausanne, Switzerland). This final section reports on work carried out on a novel, promising methodology called Masksembles.
The name "Masksembles" carries the explanation of the technology within itself, as the following breakdown details:
An "ensemble", in general, is a set of virtual copies of something (in physics, a vessel under pressure; in AI, a neural network; in society, a collection of individuals), where the extension over multiple copies allows for several different physical states or network configurations. In our case, the variation across copies is what makes epistemic uncertainty measurable.
The "Deep Ensembles" technique consists of training an ensemble of deep neural networks on the same data, with a different random initialization for each network in the ensemble. By running all the networks and aggregating their predictions, one obtains best-in-class uncertainty estimation, at the cost of a proportional computational investment.
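A minimal, self-contained sketch of this recipe in PyTorch (the toy architecture, random data, and ensemble size of 5 are illustrative assumptions, not Neural Concept Shape's internals):

```python
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

def train(model, x, y, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(x), y).backward()
        opt.step()
    return model

torch.manual_seed(0)
x, y = torch.randn(256, 8), torch.randn(256, 1)

# Same data for every member; diversity comes only from random initialization.
ensemble = [train(make_model(), x, y) for _ in range(5)]

# Aggregate at inference: the mean is the prediction, the spread across
# members is the epistemic-uncertainty estimate.
x_new = torch.randn(10, 8)
with torch.no_grad():
    preds = torch.stack([m(x_new) for m in ensemble])  # shape (5, 10, 1)
mean, std = preds.mean(dim=0), preds.std(dim=0)
```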
A "mask" is, in deep learning technology, a way to drop (hide) artificial neurons, thus generating several slightly different model architectures and allowing a single model to mimic ensemble behaviour.
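As a concrete instance of masking, here is a minimal MC-Dropout sketch in PyTorch: keeping the dropout masks active at inference turns every forward pass into a sample from a slightly different sub-network (the architecture, dropout rate, and 32 samples are illustrative assumptions):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

model.train()  # deliberately NOT eval(): keeps the random masks switched on
x_new = torch.randn(10, 8)
with torch.no_grad():
    samples = torch.stack([model(x_new) for _ in range(32)])  # 32 random masks
mean, std = samples.mean(dim=0), samples.std(dim=0)
```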
Masksembles can generate a range of models within which MC-Dropout and Deep Ensembles are the two extreme cases. It combines MC-Dropout's light computational overhead with Deep Ensembles' performance: using many overlapping masks approximates MC-Dropout, while using a small set of completely disjoint masks yields Ensembles-like behaviour.
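A heavily simplified sketch of the idea follows: a small fixed set of binary masks is drawn once and reused, one per ensemble "member". Note that the actual method of Durasov et al. generates masks with a controlled overlap parameter and rescales activations; the layer below only conveys the core mechanism:

```python
import torch
import torch.nn as nn

class MasksemblesLayer(nn.Module):
    def __init__(self, features, n_masks=4, keep_prob=0.5):
        super().__init__()
        masks = (torch.rand(n_masks, features) < keep_prob).float()
        self.register_buffer("masks", masks)  # fixed for the model's lifetime

    def forward(self, x, k):
        return x * self.masks[k]  # apply the k-th fixed mask

backbone = nn.Sequential(nn.Linear(8, 64), nn.ReLU())
mask_layer = MasksemblesLayer(64, n_masks=4)
head = nn.Linear(64, 1)

# One forward pass per mask; aggregation mirrors an ensemble. Many strongly
# overlapping masks behave like MC-Dropout; a few disjoint masks behave like
# Deep Ensembles.
x_new = torch.randn(10, 8)
with torch.no_grad():
    preds = torch.stack([head(mask_layer(backbone(x_new), k)) for k in range(4)])
mean, std = preds.mean(dim=0), preds.std(dim=0)
```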
Conclusion
Neural Concept already provides the possibility, even for non-specialists, of obtaining confidence levels for predictions. The kind of advanced research presented in the last section will bring further benefits to engineers in terms of high performance and low computational cost.
Thus, we will continue to support engineers facing an ever-relevant question: "Am I sure about my predictions?"
Bibliography
Nikita Durasov, Timur Bagautdinov, Pierre Baqué, Pascal Fua (Computer Vision Laboratory, EPFL, and Neural Concept): "Masksembles for Uncertainty Estimation", arXiv:2012.08334v1 [cs.LG], 15 Dec 2020.