Integration of Semantic Information Towards Explainable Novelty Detection
The transformation of industrial environments is progressing rapidly as more and more autonomous systems are installed and operated. Safe and explainable AI algorithms are therefore essential, especially for collaborative interactive systems that operate in human spaces. Almost all machine-learning-based systems operate under a "closed-world" assumption, presuming a complete model of their environment. In practice, however, AI-based systems may be exposed to a constantly evolving environment and must adapt to new settings in a trustworthy manner. First, I will present our NovelCraft dataset, which contains multi-modal episodic data of the images and symbolic world states seen by an agent completing a pogo-stick assembly task within a video game world. We benchmark state-of-the-art novelty detection and generalized category discovery models on this dataset. Next, I will describe our current work on the "Semantic Encoder", a 2D-vision-based CNN model trained on a purely synthetic dataset. Our concept addresses explainability by extracting semantic descriptions of real objects from their visual appearance. The extracted semantic information can be used to describe depicted samples or to distinguish normal from novel samples, with the possibility of explaining what caused the novelty detection. The semantic description can further be used to sort samples by classifying them or to retrieve a sample with specific semantic properties.
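As a minimal sketch of the idea, the following compares a predicted semantic attribute vector against those of known classes and reports which attributes deviate, yielding an explainable novelty decision. The attribute names, class list, and thresholds here are illustrative assumptions, not the actual Semantic Encoder model.

```python
import numpy as np

# Hypothetical semantic attribute vectors for known object classes;
# values in [0, 1] indicate how strongly each attribute is present.
ATTRIBUTE_NAMES = ["metallic", "round", "has_handle", "red"]
KNOWN_CLASSES = {
    "bolt":   np.array([1.0, 1.0, 0.0, 0.0]),
    "hammer": np.array([1.0, 0.0, 1.0, 0.0]),
    "cup":    np.array([0.0, 1.0, 1.0, 1.0]),
}

def detect_novelty(predicted_attrs, threshold=0.5):
    """Compare predicted attributes against all known classes.

    Returns (is_novel, nearest_class, explanation), where the
    explanation lists the attributes deviating most from the nearest
    known class -- the basis for explaining the novelty decision.
    """
    distances = {name: np.linalg.norm(predicted_attrs - attrs)
                 for name, attrs in KNOWN_CLASSES.items()}
    nearest = min(distances, key=distances.get)
    is_novel = distances[nearest] > threshold
    deviations = np.abs(predicted_attrs - KNOWN_CLASSES[nearest])
    explanation = [ATTRIBUTE_NAMES[i]
                   for i in np.argsort(-deviations)
                   if deviations[i] > 0.25]
    return is_novel, nearest, explanation

# A sample whose attributes match no known class well:
pred = np.array([1.0, 1.0, 1.0, 0.0])  # metallic, round, has a handle
novel, nearest, why = detect_novelty(pred)
```

Here the sample is flagged as novel, and the explanation names the attribute (a handle on an otherwise bolt-like object) that no known class accounts for.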
Research areas: Computer Vision, Novelty Detection, Open Worlds