
Talks

Dr. Jeff Z. Pan

Approximate reasoning is regarded as one of the most promising approaches to reasoning with ontologies and knowledge graphs in applications. This talk has two parts. In the first part, I will explain why approximate reasoning might work and how to perform faithful approximate reasoning, i.e., approximate reasoning with some level of quality control. In the second part, I will share some further thoughts towards a new roadmap for approximate reasoning in the era of Knowledge Graphs.

21.06.2017 - 10:00 h

The concept of relevance was proposed to model temporal effects in networks (e.g. the "aging effect", i.e. how the interest in nodes decays over time) that the traditional preferential attachment (PA) model fails to explain. We analyze the citation data provided by the American Physical Society (APS). We group papers by their final in-degrees (the number of papers citing them) and do not observe an obvious decline of citations for the most cited papers (in-degree > 1000). This might be because the size of the network (the total number of papers) grows exponentially, which compensates for the decay of papers' relevance. As a next step, we want to analyze different citation networks (ACM, DBLP).
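The grouping step described above can be sketched in a few lines of Python. This is a minimal illustration on a made-up toy edge list; the paper IDs and citations are assumptions, not the APS data:

```python
from collections import Counter, defaultdict

# Toy citation edge list: (citing_paper, cited_paper).
# The APS dataset is far larger; these IDs are illustrative only.
citations = [
    ("p4", "p1"), ("p5", "p1"), ("p6", "p1"),
    ("p5", "p2"), ("p6", "p2"),
    ("p6", "p3"),
]

# Final in-degree = number of papers citing each paper.
in_degree = Counter(cited for _, cited in citations)

# Group papers into buckets by their final in-degree.
groups = defaultdict(list)
for paper, deg in in_degree.items():
    groups[deg].append(paper)

print(dict(in_degree))  # {'p1': 3, 'p2': 2, 'p3': 1}
```

On real data one would bucket in-degrees into ranges (e.g. >1000) rather than exact values, and then track citations per bucket over time.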

08.06.2017 - 10:15 h

Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, backspace usage, etc. However, in contrast to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human cognition. Employing eye gaze as an input could lead to excessive mental demand, and in this work we argue the need to include cognitive load as an eye-typing evaluation measure.

01.06.2017 - 10:15 h

The detection and handling of misinformation is one of today's hot topics, discussed and researched intensively since the rise of social networks and the exponential spread of political fake news on the web. The main points are to understand (1) what misinformation is, (2) what structure it has, (3) how it can be discovered (misinformation prediction), and (4) how its spreading can be prevented (misinformation prevention). I would subsume all of this under the general term "Misinformation Analytics (MA)". The first step in MA is to understand what and how machine learning algorithms could support autonomous fake news detection and handling. Unsupervised learning is one candidate solution in this regard.

18.05.2017 - 10:15 h
Henry Story

SoLiD is the Social Linked Data platform developed by Tim Berners-Lee's Distributed Information Group at MIT, which extends the web to move us from a hypertext browser web to a hyperdata application web. Henry will give an overview of the architecture of this project, the philosophical issues that spawned it, and the technical issues that he discovered while building both server and client applications for it.

11.05.2017 - 10:15 h
Philipp Cimiano

We present new approaches to the problems of named entity linking and question answering over linked data. Named entity linking consists in linking a name in a text to an entity in a knowledge base that represents its denotation. Question answering over linked data consists in interpreting a natural language question in terms of a SPARQL query that can be evaluated over a given RDF dataset.

We model both problems as statistical learning problems and present undirected probabilistic graphical models for each. Inference is performed approximately via Markov chain Monte Carlo methods, in particular the Metropolis-Hastings algorithm. Parameter optimization is performed via a learning-to-rank approach based on SampleRank.
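As a minimal illustration of the Metropolis-Hastings algorithm mentioned above, the following sketch samples from a standard normal distribution via a Gaussian random-walk proposal. This is a toy on a one-dimensional density, not the factor-graph models from the talk; all names and parameters are assumptions for illustration:

```python
import math
import random

def metropolis_hastings(log_target, n_samples, x0=0.0, step=1.0, seed=42):
    """Draw samples from an unnormalized density given by log_target,
    using a symmetric Gaussian random-walk proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal density, up to a normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With a symmetric proposal the Hastings correction term cancels, which is why only the target ratio appears in the acceptance test.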

10.05.2017 - 16:00 h

Fueling many recent advances in NLP, continuous word embeddings have received much attention in recent years. However, the popular word2vec implementation assumes a single vector per word type, an approach that ignores polysemy and homonymy. This talk will give an overview of extensions to this model that support multiple senses per word and discuss which modeling flaws still remain and whether they can be solved.
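The single-vector flaw can be made concrete with a tiny example. The 2-d "embeddings" below are hand-picked for illustration (no word2vec training involved): collapsing two senses of "bank" into one prototype leaves a vector that is only moderately similar to either sense.

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

# Hand-crafted vectors for the two senses of "bank".
bank_river = [1.0, 0.0]  # the riverside sense
bank_money = [0.0, 1.0]  # the financial sense

# A single-prototype model averages the contexts of both senses.
bank_single = [(a + b) / 2 for a, b in zip(bank_river, bank_money)]

# The merged vector sits between the senses at similarity ~0.707,
# while a multi-sense model could represent each sense exactly.
sim = round(cosine(bank_single, bank_river), 3)
print(sim)  # 0.707
```

A multi-sense extension would instead keep one vector per sense and disambiguate at usage time.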
 

04.05.2017 - 10:15 h

In recent years, the task of updating ontologies has gained increasing interest, which, among other things, has led to the introduction of SPARQL Update. However, the approaches proposed so far do not allow TBoxes beyond DL-Lite. Can reasoners help to support more expressive TBoxes?
 

27.04.2017 - 10:15 h
Carl Corea

Extending business processes with semantic annotations has recently gained attention. This comprises relating process elements to ontology elements in order to create a shared conceptual and terminological understanding. In business process modeling, processes may have to adhere to a multitude of rules. A common way to detect compliance automatically is to study the artifact of the process model itself.

20.04.2017 - 10:15 h
Dr. Amrapali Zaveri

The development and standardization of semantic web technologies has resulted in an unprecedented volume of data being published on the Web as Linked Data (LD). However, we observe widely varying data quality, ranging from extensively curated datasets to crowdsourced and extracted data of relatively low quality.

06.04.2017 - 10:15 h
