Learning to detect single cells in complex microscopes – In this paper, we propose an algorithm that infers and corrects single-cell predictions under a wide variety of conditions, such as varying cell size and number of targets. We demonstrate this on both synthetic datasets and real-world experiments.
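The abstract leaves the infer-and-correct pipeline unspecified; below is a minimal sketch of one plausible reading, using scikit-image's blob detector as a stand-in for the learned detector, followed by a size-based correction pass. All thresholds and function names here are illustrative assumptions, not the authors' method.

import numpy as np
from skimage.feature import blob_log

def detect_cells(image, min_sigma=2, max_sigma=15, threshold=0.05):
    """Infer candidate cells as (y, x, radius) blobs.

    blob_log is a stand-in for the learned detector described
    in the abstract; parameters are assumed for the sketch.
    """
    blobs = blob_log(image, min_sigma=min_sigma, max_sigma=max_sigma,
                     threshold=threshold)
    blobs[:, 2] *= np.sqrt(2)  # convert LoG sigma to approximate radius
    return blobs

def correct_predictions(blobs, min_radius=2.0, max_radius=20.0):
    """Correction step: drop detections with implausible cell sizes."""
    radii = blobs[:, 2]
    keep = (radii >= min_radius) & (radii <= max_radius)
    return blobs[keep]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))  # stand-in for a microscopy frame
    cells = correct_predictions(detect_cells(image))
    print(f"{len(cells)} cells retained after size correction")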
Robust Decomposition Based on Robust Compressive Bounds
Learning and Inference with Predictive Models from Continuous Data
A deep-learning-based ontology to guide ontological research
Structural Matching through Reinforcement Learning – This paper addresses supervised learning of visual attention networks by casting them as reinforcement learning tasks and applying deep reinforcement learning (DRL). DRL is an end-to-end learning approach that does not require the user to specify any particular visual scene. In particular, DRL can learn to capture visual dependencies and to adapt to the scene's different visual cues at different levels of its complexity, in a global way. We propose a novel DRL model, trained with a task-dependent visual cue, that learns to predict the next action sequence over the entire network. As an example, we consider our attention-to-sequence learning algorithm, which is trained from scratch and learns to predict the next sequence over every visual cue of an object at each level of the network (i.e., after training the supervised models only on the task-dependent visual cue). We demonstrate that our DRL model outperforms state-of-the-art attention-based vision models in accuracy on an unstructured object detection task.
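The abstract sketches an attention-to-sequence model but gives no architectural details; the following is a minimal PyTorch sketch of one plausible reading, in which self-attention over per-cue features feeds a recurrent head that predicts the next action. All dimensions, layer choices, and names are assumptions for illustration, not the paper's model.

import torch
import torch.nn as nn

class AttentionToSequence(nn.Module):
    """Illustrative attention-to-sequence model: attend over visual-cue
    features, then predict the next action. Structure is assumed for
    the sketch; this is not the paper's architecture."""

    def __init__(self, cue_dim=64, hidden_dim=128, num_actions=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(cue_dim, num_heads=4,
                                          batch_first=True)
        self.rnn = nn.GRU(cue_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_actions)

    def forward(self, cues):
        # cues: (batch, num_cues, cue_dim), one row per visual cue
        attended, _ = self.attn(cues, cues, cues)  # self-attention over cues
        _, h = self.rnn(attended)                  # summarize the cue sequence
        return self.head(h[-1])                    # logits for the next action

model = AttentionToSequence()
cues = torch.randn(2, 5, 64)          # 2 scenes, 5 visual cues each
next_action_logits = model(cues)      # shape: (2, 10)
print(next_action_logits.shape)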