Protect your AI Solutions from Runtime Errors
Most deep learning (DL) solutions are not designed to detect runtime errors automatically once their models have been trained and deployed. A common failure mode is blindly classifying an input that lies outside the distribution of the original training data set. If not caught in time, this type of error can cause unintended consequences.
To guard against such failures, EpiSci developed SafeguardAI to identify when what the model observes is unfamiliar and to communicate to a human supervisor, "I'm not sure what to do."
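The source does not describe SafeguardAI's internal method, but the idea of flagging unfamiliar inputs can be illustrated with a minimal, hypothetical sketch: thresholding a classifier's top-class softmax confidence, a common baseline for out-of-distribution detection. The function name `flag_unfamiliar` and the threshold value are illustrative assumptions, not part of the product.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def flag_unfamiliar(logits, threshold=0.7):
    """Return True when top-class confidence falls below the threshold,
    i.e. the input looks unlike the training distribution and should be
    escalated to a human supervisor ("I'm not sure what to do")."""
    confidence = softmax(np.asarray(logits)).max(axis=-1)
    return bool(confidence < threshold)

# A confident prediction vs. a near-uniform (unfamiliar) one.
print(flag_unfamiliar([8.0, 0.5, 0.2]))   # confident: not flagged
print(flag_unfamiliar([1.1, 1.0, 0.9]))   # uncertain: flagged
```

In practice, confidence thresholding is only one of several detection strategies; whichever is used, the key design choice is the same as described above: the system defers to a human rather than acting on an input it does not recognize.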