Protect your AI Solutions from Runtime Errors

Why SafeguardAI?

Most Deep Learning (DL) solutions are not designed to automatically detect runtime errors once their models have been trained and deployed. A common failure mode is blindly classifying an input that falls outside the distribution of the original training data set. If not caught in time, this type of error may cause unintended consequences.
To guard against such failures, EpiSci developed SafeguardAI to identify when what the model observes is unfamiliar, and to communicate to a human supervisor, ‘I’m not sure what to do.’

How does SafeguardAI work?

The key insight is to embed a set of well-positioned intelligent agents inside the neural network during the DL training process. These agents continue to live inside the trained DL model at runtime and report out-of-distribution inputs as surprises or unusual behaviors. DL solutions can use these intelligent agents to safeguard their decision-making processes.
Deployed alongside your trained and tested DL models, SafeguardAI autonomously monitors your DL solutions for unexpected errors at runtime. Using SafeguardAI in your decision-making applications produces two types of output (a usage sketch follows below):
1) the DL model’s normal classification
2) SafeguardAI’s numerical score from 0 (normal) to 1 (anomaly), indicating the likelihood that the model has encountered an abnormal input (i.e., a new data point that likely falls outside the original distribution of your training data)
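As a rough illustration of how these two outputs might be consumed together, the sketch below wraps a classifier so each call returns both a prediction and a surprise score in [0, 1]. The MonitoredModel wrapper and the max-softmax confidence heuristic are assumptions for illustration only, not SafeguardAI’s actual API or its proprietary agent-based scoring mechanism.

```python
import numpy as np

class MonitoredModel:
    """Illustrative wrapper pairing a classifier with a surprise score.

    The max-softmax confidence heuristic below is a stand-in for
    SafeguardAI's proprietary agent-based scoring, which is not public.
    """

    def __init__(self, predict_logits):
        # predict_logits: any callable mapping an input to class logits
        self.predict_logits = predict_logits

    def __call__(self, x):
        logits = np.asarray(self.predict_logits(x), dtype=float)
        probs = np.exp(logits - logits.max())   # numerically stable softmax
        probs /= probs.sum()
        prediction = int(np.argmax(probs))
        surprise = float(1.0 - probs.max())     # low confidence -> high surprise
        return prediction, surprise

# Toy usage: fixed logits stand in for a trained network's forward pass
model = MonitoredModel(lambda x: [2.0, 0.1, -1.3])
label, surprise = model(x=None)
print(label, round(surprise, 3))  # -> 0 0.157
```

In a real deployment, the surprise score would come from SafeguardAI’s embedded agents rather than a confidence heuristic; the two-value return shape is the point of the sketch.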

Demo video coming soon.

Enable your AI Model to Monitor Itself

SafeguardAI provides fully autonomous runtime monitoring of your AI models with no engineering overhead.

Increase the Safety Net of your AI Models

SafeguardAI identifies WHAT is new and WHEN the model is observing something different from what it has seen before, and sends an alert on the fly.

Know when your AI is Unsure

Depending on the strength and rate of surprise, SafeguardAI knows when to disengage the AI and request human assistance for an appropriate response, as sketched below.
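A minimal sketch of such a disengage policy, assuming surprise scores arrive as a stream; the SurpriseGate class and its thresholds are hypothetical illustrations, not SafeguardAI’s shipped interface or defaults.

```python
from collections import deque

class SurpriseGate:
    """Illustrative disengage policy: hand control to a human when the
    surprise score is strong or rising quickly. The thresholds are
    assumptions, not SafeguardAI's shipped defaults."""

    def __init__(self, strength_threshold=0.8, rate_threshold=0.3, window=5):
        self.strength_threshold = strength_threshold
        self.rate_threshold = rate_threshold
        self.history = deque(maxlen=window)

    def should_disengage(self, surprise):
        self.history.append(surprise)
        rate = self.history[-1] - self.history[0]  # rise over the window
        return surprise >= self.strength_threshold or rate >= self.rate_threshold

# Toy usage: a slowly drifting scene followed by a sharp surprise spike
gate = SurpriseGate()
for score in (0.05, 0.10, 0.20, 0.55, 0.90):
    if gate.should_disengage(score):
        print(f"surprise={score}: disengage AI, request human assistance")
```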

Improve the Quality of your Training Data Sets

SafeguardAI automatically tags data the model was not trained on as surprises in real time; these flagged inputs are valuable data points for retraining, as the sketch below illustrates.
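One way such tagging might feed a retraining pipeline is sketched below; the tag_for_retraining helper, the threshold value, and the JSONL log format are all assumptions for illustration, not part of the SafeguardAI product.

```python
import json
import time

SURPRISE_THRESHOLD = 0.7  # illustrative cutoff, chosen here as an assumption

def tag_for_retraining(x, prediction, surprise, log_path="surprises.jsonl"):
    """Illustrative logger: persist surprising inputs so they can be
    labeled and folded into the next training set."""
    if surprise >= SURPRISE_THRESHOLD:
        record = {
            "timestamp": time.time(),
            "input": list(x),              # assumes a flat numeric feature vector
            "model_prediction": prediction,
            "surprise": surprise,
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

# Toy usage with a hypothetical flagged input
tag_for_retraining([0.2, 1.7, -0.4], prediction=2, surprise=0.91)
```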
