Automating and Explaining Polygraph-based Translation in Sub-Territorial Corpora Using Wikidata – Converting given data into semantic sentences is a difficult task for the machine-learning community, as it requires the human ability to understand and interpret a set of variables. In this paper, a novel convolutional neural network (CNN) is proposed to facilitate the interpretation of a given sentence by means of a deep learning technique. The system, called Multi-task, is trained on a variety of data sets covering a range of semantic topics, each with a large number of sentences per topic. A series of experiments shows that the proposed network correctly classifies both the semantic and sentence parts of a given text and outperforms state-of-the-art CNNs in both the number of semantic sentences recovered and the accuracy of sentence comprehension. Further, the proposed model is particularly effective when a large corpus is used to study complex sentence structures.
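The multi-task classifier described above can be sketched as a forward pass: embed tokens, apply 1-D convolution filters over the token sequence, max-pool over time, then feed the pooled features to two output heads (one per task). This is a minimal illustrative sketch, not the paper's actual model; all sizes, weight names, and the choice of the two heads (semantic topic vs. sentence part) are assumptions.

```python
import numpy as np

# Illustrative multi-task text-CNN forward pass (untrained random weights).
# VOCAB, EMB, FILTERS, WIDTH and the two heads are assumed, not from the paper.
rng = np.random.default_rng(0)
VOCAB, EMB, FILTERS, WIDTH = 100, 16, 8, 3
N_TOPICS, N_PARTS = 4, 2          # two heads: semantic topic, sentence part

embedding = rng.normal(size=(VOCAB, EMB))
conv_w = rng.normal(size=(FILTERS, WIDTH, EMB))  # 1-D conv filters over tokens
head_topic = rng.normal(size=(FILTERS, N_TOPICS))
head_part = rng.normal(size=(FILTERS, N_PARTS))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(token_ids):
    """Embed -> valid 1-D convolution -> ReLU + max-pool -> two softmax heads."""
    x = embedding[token_ids]                        # (T, EMB)
    T = len(token_ids)
    acts = np.array([[np.sum(conv_w[f] * x[t:t + WIDTH])
                      for t in range(T - WIDTH + 1)]
                     for f in range(FILTERS)])      # (FILTERS, T-WIDTH+1)
    pooled = np.maximum(acts, 0).max(axis=1)        # one feature per filter
    return softmax(pooled @ head_topic), softmax(pooled @ head_part)

topic_probs, part_probs = forward(rng.integers(0, VOCAB, size=20))
```

Both heads share the convolutional feature extractor, which is the usual way a multi-task text CNN amortizes one encoder across several labeling tasks.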
In this paper, we propose a general framework for the analysis of hierarchical visual data as part of a semantic representation. The framework consists of two components: a rich prior-based knowledge representation extracted from visual data, and supervised learning methods developed to exploit that representation in the learning process. Based on the prior, we consider the structure of the visual data and, in addition, explore that structure under different shapes, levels, and orientations (visual or non-visual). The presented framework unifies visual and symbolic data.
A Unified Hypervolume Function for Fast Search and Retrieval
Learning to detect single cells in complex microscopes
Robust Decomposition Based on Robust Compressive Bounds
Learning to See through the Box: Inducing Contours through Hidden Representation