SoLiD is the Social Linked Data platform developed by Tim Berners-Lee's Distributed Information Group
at MIT, which extends the web to move us from a hypertext browser web to a hyperdata application web. Henry will
give an overview of the architecture of the project, the philosophical issues that spawned it, and the technical issues he
discovered while building both server and client applications for it.
We present new approaches to the problems of named entity linking and question answering over linked data. Named entity linking consists in linking a name in a text to an entity in a knowledge base that represents its denotation. Question answering over linked data consists in interpreting a natural language question in terms of a SPARQL query that can be evaluated over a given RDF dataset.
We model both problems as statistical learning problems and present undirected probabilistic graphical models for each. Inference is performed approximately via Markov chain Monte Carlo, in particular the Metropolis-Hastings algorithm. Parameter optimization is carried out with a learning-to-rank approach based on SampleRank.
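Abstracted away from the graphical models discussed in the talk, the Metropolis-Hastings step itself is compact. The following sketch is an assumption-laden illustration only: it samples from a toy one-dimensional standard normal target with a random-walk proposal, not from the talk's actual models.

```python
import math
import random

def metropolis_hastings(log_target, proposal, x0, n_steps, seed=0):
    """Generic Metropolis-Hastings sampler with a symmetric proposal.

    log_target: unnormalized log-density of the distribution to sample.
    proposal:   function mapping (current state, rng) to a candidate state.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        candidate = proposal(x, rng)
        # Acceptance ratio; proposal terms cancel for a symmetric proposal.
        log_alpha = log_target(candidate) - log_target(x)
        if math.log(rng.random()) < log_alpha:
            x = candidate          # accept the move
        samples.append(x)          # otherwise keep the current state
    return samples

# Toy target: standard normal, up to an additive constant.
log_normal = lambda x: -0.5 * x * x
walk = lambda x, rng: x + rng.gauss(0.0, 1.0)

samples = metropolis_hastings(log_normal, walk, x0=0.0, n_steps=20000)
burned = samples[5000:]            # discard burn-in
mean = sum(burned) / len(burned)   # should be close to 0
```

The same accept/reject loop carries over to discrete state spaces such as candidate entity assignments, where the proposal mutates one variable of the current configuration.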
Fueling many recent advances in NLP, continuous word embeddings have received much attention in recent years. However, the popular word2vec implementation assumes a single vector per word type, an approach that ignores polysemy and homonymy. This talk will give an overview of extensions to this model that support multiple senses per word and discuss which modeling flaws still remain and whether they can be solved.
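A common building block of such multi-sense extensions is to keep several vectors per word and, at each occurrence, pick the sense vector closest to the centroid of the context vectors. The sketch below illustrates only that selection step; the two-dimensional vectors and the word "bank" are toy values, not output of any real embedding model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pick_sense(sense_vectors, context_vectors):
    """Return the index of the sense vector closest to the context centroid."""
    dim = len(context_vectors[0])
    centroid = [sum(v[i] for v in context_vectors) / len(context_vectors)
                for i in range(dim)]
    sims = [cosine(s, centroid) for s in sense_vectors]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy 2-d sense vectors for the ambiguous word "bank":
senses = [
    [1.0, 0.0],   # financial-institution sense
    [0.0, 1.0],   # river-side sense
]
context = [[0.1, 0.9], [0.2, 0.8]]      # context words like "river", "water"
chosen = pick_sense(senses, context)    # picks the river-side sense (index 1)
```

Models differ mainly in how the sense vectors are learned (clustering contexts during training versus a probabilistic mixture), but the disambiguation at inference time looks much like this.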
In recent years, the task of updating ontologies has gained increasing interest, which among other things led to the introduction of SPARQL Update. However, up to now the proposed approaches do not allow TBoxes beyond DL-Lite. Can reasoners help to support more expressive TBoxes?
Extending business processes with semantic annotations has recently gained attention. This comprises relating process elements to ontology elements in order to create a shared conceptual and terminological understanding. In business process modeling, processes may have to adhere to a multitude of rules. A common way to detect compliance automatically is to study the artifact of the process model itself.
The development and standardization of semantic web technologies has resulted in an unprecedented volume of data being published on the Web as Linked Data (LD). However, we observe widely varying data quality ranging from extensively curated datasets to crowdsourced and extracted data of relatively low quality.
On March 30, 2017, Rene Pickhardt will present the results of the fOERder award, which he won one year earlier, in March 2016. Together with Sebastian Schlicht, he developed the MOOC extension for MediaWiki ( https://www.mediawiki.org/wiki
Open Information Extraction (OIE) is a well-performing intermediate step for tasks like summarization, text comprehension, relation extraction, or knowledge base construction. However, there is surprisingly little work on evaluating and comparing different methods. How can we compare the results of existing OIE systems?
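One simple baseline for such a comparison is to score each system's extracted (subject, relation, object) triples against a gold standard. The sketch below assumes exact string match, which is a deliberate simplification; real OIE benchmarks typically use softer matching, and the triples here are invented for illustration.

```python
def triple_scores(system, gold):
    """Precision, recall, and F1 of extracted triples under exact match."""
    sys_set, gold_set = set(system), set(gold)
    tp = len(sys_set & gold_set)                       # true positives
    precision = tp / len(sys_set) if sys_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented gold triples and one hypothetical system's output:
gold = [("Dylan", "born in", "Duluth"), ("Dylan", "won", "Nobel Prize")]
system_a = [("Dylan", "born in", "Duluth"), ("Dylan", "won", "award")]

p, r, f = triple_scores(system_a, gold)   # p = 0.5, r = 0.5
```

Running the same scorer over the outputs of several systems on a shared benchmark is what makes the systems directly comparable.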
Textual data is a core source of information in the enterprise. Example demands arise from sales departments (monitoring and identifying leads), human resources (identifying professionals with capabilities in “xyz”), market research (campaign monitoring from the social web), product development (incorporating feedback from customers), and supply chain management.
Cognitive disabilities may lead to learning difficulties, problems with problem solving, concentration and attention, and poor orientation capability. Accessibility for users with cognitive disabilities is a great challenge. So far, in our work on the interactive Web and human computing, we have focused on eye-brain interfaces and how they can assist people with motor disabilities.