Talks

Tobias Käfer

Abstract:

The Linked Data best practices for data publishing encourage the use of RDF to describe URI-identified resources on the Web. As those resources reflect things in the real world, which is without a doubt dynamic, the dynamics of Linked Data should not be neglected. In this talk I report on experimental work on dynamic Linked Data based on the Dynamic Linked Data Observatory, a long-term collection of Linked Data from the Web. Moreover, I cover formal work on capturing the dynamics of Linked Data with the aim of specifying agents on the Linked Data web using rules. Finally, I showcase applications of these topics from the area of cyber-physical systems and the Web of Things.
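As a hedged illustration of what "dynamics" means for such a long-term collection, the sketch below compares two hypothetical snapshots of a Linked Data source and counts added and removed triples. The file names and the use of rdflib are assumptions for illustration, not the Observatory's actual tooling.

```python
# Hypothetical sketch: quantifying change between two snapshots of a
# Linked Data source, in the spirit of the Dynamic Linked Data Observatory.
from rdflib import Graph

def snapshot_diff(path_old, path_new, fmt="nt"):
    """Return (added, removed) triple sets between two RDF snapshots."""
    old, new = Graph(), Graph()
    old.parse(path_old, format=fmt)
    new.parse(path_new, format=fmt)
    old_triples, new_triples = set(old), set(new)
    return new_triples - old_triples, old_triples - new_triples

# Assumed snapshot files from two crawls of the same source:
added, removed = snapshot_diff("week01.nt", "week02.nt")
print(f"{len(added)} triples added, {len(removed)} triples removed")
```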


13.07.2017 - 10:15 h
Zeyd Boukhers

For the semantic analysis of activities and events in videos, it is important to capture the spatio-temporal relations among objects in 3D space. This presentation introduces a probabilistic method that extracts 3D trajectories of objects from 2D videos captured by a monocular moving camera. Compared to existing methods that rely on restrictive assumptions, the presented method can extract 3D trajectories with far fewer restrictions by adopting new example-based techniques that compensate for the lack of information. Here, the focal length of the camera is estimated from similar candidates and is then used to compute the depths of detected objects.
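To make the last step concrete, here is a minimal sketch of the pinhole-camera relation that turns an estimated focal length into an object depth. The actual method in the talk is probabilistic and example-based; this only illustrates the underlying geometry, and all numbers are assumptions.

```python
def depth_from_focal_length(focal_px, real_height_m, bbox_height_px):
    """Pinhole-camera relation Z = f * H / h for an object of known real-world height."""
    return focal_px * real_height_m / bbox_height_px

# e.g. a pedestrian (~1.7 m tall) whose detection box is 120 px high,
# seen by a camera with an estimated focal length of 800 px:
print(depth_from_focal_length(800.0, 1.7, 120.0))  # ~11.3 m
```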

22.06.2017 - 10:15 h

One such approach to reasoning in Dung's abstract argumentation frameworks is given by backtracking algorithms that enumerate extensions. To optimize these algorithms, different heuristics are to be compared.
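As a hedged illustration of the kind of algorithm whose branching heuristics can be compared, the following is a naive backtracking enumerator for stable extensions; the example framework and the fixed argument ordering are assumptions, not the heuristics studied in the talk.

```python
def stable_extensions(arguments, attacks):
    """Enumerate all stable extensions of (arguments, attacks) by backtracking.

    attacks is a set of (attacker, attacked) pairs.
    """
    arguments = list(arguments)  # the branching order is exactly the tunable heuristic
    results = []

    def conflict(a, chosen):
        # a conflicts with the chosen set if any attack goes either way
        return any((a, b) in attacks or (b, a) in attacks for b in chosen)

    def backtrack(i, chosen):
        if i == len(arguments):
            # stable: every argument outside the set is attacked by the set
            outside = set(arguments) - chosen
            if all(any((b, a) in attacks for b in chosen) for a in outside):
                results.append(frozenset(chosen))
            return
        a = arguments[i]
        if not conflict(a, chosen):          # branch 1: include a
            backtrack(i + 1, chosen | {a})
        backtrack(i + 1, chosen)             # branch 2: exclude a

    backtrack(0, set())
    return results

# Example: a attacks b, b attacks c; the only stable extension is {a, c}.
print(stable_extensions({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```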

22.06.2017 - 10:15 h
Dr. Jeff Z. Pan

Approximate reasoning is regarded as one of the most convincing approaches to reasoning with ontologies and knowledge graphs in applications. This talk has two parts. In the first part, I will explain why approximate reasoning might work and how to perform faithful approximate reasoning, i.e., approximate reasoning with some level of quality control. In the second part, I will share some further thoughts towards a new roadmap for approximate reasoning in the era of Knowledge Graphs.

21.06.2017 - 10:00 h

The concept of relevance was proposed to model different temporal effects in networks (e.g., the "aging effect", i.e., how the interest in nodes decays over time) that the traditional preferential attachment (PA) model fails to explain. We analyze the citation data provided by the American Physical Society (APS). We group papers by their final in-degrees (the number of papers citing them) and do not observe an obvious decline of citations for the most cited papers (in-degree > 1000). This might be due to the fact that the size of the network (the total number of papers) grows exponentially, which compensates for the decay of papers' relevance. As a next step, we want to analyze different citation networks (ACM, DBLP).
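The grouping step could look roughly like the sketch below, which buckets papers by final in-degree and counts the citations each bucket receives per year. The input format (paper_id, year, list of cited paper ids) and the bucket thresholds are assumptions; the APS/ACM/DBLP dumps would first need to be parsed into this shape.

```python
from collections import Counter, defaultdict

def citations_per_year_by_bucket(papers, thresholds=(10, 100, 1000)):
    """papers: list of (paper_id, year, cited_ids) tuples."""
    papers = list(papers)
    # final in-degree = how often each paper is cited overall
    indegree = Counter(c for _, _, cited in papers for c in cited)

    def bucket(paper_id):
        degree = indegree[paper_id]
        for t in thresholds:
            if degree < t:
                return f"<{t}"
        return f">={thresholds[-1]}"

    per_year = defaultdict(Counter)            # bucket label -> year -> citations received
    for _, year, cited in papers:
        for c in cited:
            per_year[bucket(c)][year] += 1
    return per_year
```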

08.06.2017 - 10:15 h

Gaze-based virtual keyboards provide an effective interface for text entry by eye movements. The efficiency and usability of these keyboards have traditionally been evaluated with conventional text entry performance measures such as words per minute, keystrokes per character, backspace usage, etc. However, in comparison to traditional text entry approaches, gaze-based typing involves natural eye movements that are highly correlated with human brain cognition. Employing eye gaze as an input can lead to excessive mental demand, and in this work we argue for the need to include cognitive load as an eye-typing evaluation measure.
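For reference, the conventional measures mentioned above can be computed as in the short sketch below (with one "word" taken as five characters, a common text-entry convention); the cognitive-load measure argued for in the talk is deliberately not captured here, and the example values are assumptions.

```python
def words_per_minute(transcribed_text, seconds):
    """WPM with the convention that one 'word' equals five characters."""
    return (len(transcribed_text) / 5.0) / (seconds / 60.0)

def keystrokes_per_character(keystrokes, transcribed_text):
    """KSPC: key selections needed per character in the transcribed text."""
    return keystrokes / max(len(transcribed_text), 1)

# e.g. 23 characters produced with 30 key selections in half a minute:
text = "hello world how are you"
print(words_per_minute(text, 30.0))            # ~9.2 wpm
print(keystrokes_per_character(30, text))      # ~1.30
```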

01.06.2017 - 10:15 h

The detection and handling of misinformation is one of today's hot topics, discussed and researched intensively given the prevalence of social networks and the exponential spread of political fake news on the web. The main point is first to understand (1) what misinformation is, (2) what structure it has, (3) how it can be discovered (misinformation prediction), and (4) how its spread can be prevented (misinformation prevention). I would subsume all of this under the general term "Misinformation Analytics (MA)". The first step in MA is to understand what and how machine learning algorithms could support autonomous fake news detection and handling. Unsupervised learning is one of the candidate solutions in this regard.

18.05.2017 - 10:15 h
Henry Story

SoLiD is the Social Linked Data platform developed by Tim Berners-Lee's Distributed Information Group at MIT, which extends the web to move us from a hypertext browser web to a hyperdata application web. Henry will give an overview of the architecture of this project, the philosophical issues that spawned it, and the technical issues he discovered while building both server and client applications for it.
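As a small, hedged taste of the client side: SoLiD servers expose resources over HTTP (in the LDP style), so a client can read profile data as RDF with a plain GET and content negotiation. The URL below is hypothetical, and authenticated writes (WebID, access control) are omitted entirely.

```python
import urllib.request
from rdflib import Graph

def read_profile(url):
    """Fetch a resource as Turtle and parse it into an RDF graph."""
    req = urllib.request.Request(url, headers={"Accept": "text/turtle"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read().decode("utf-8")
    g = Graph()
    g.parse(data=data, format="turtle")
    return g

# e.g. read_profile("https://example.org/profile/card")  # hypothetical pod URL
```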

11.05.2017 - 10:15 h
Philipp Cimiano

We present new approaches to the problems of named entity linking and question answering over linked data. Named entity linking consists in linking a name in a text to an entity in a knowledge base that represents its denotation. Question answering over linked data consists in interpreting a natural language question in terms of a SPARQL query that can be evaluated over a given RDF dataset.

We model both problems as statistical learning problems and present undirected probabilistic graphical models for each. Inference is done via approximate methods using Markov Chain Monte Carlo, in particular the Metropolis-Hastings algorithm. Parameter optimization is performed via a learning-to-rank approach based on SampleRank.
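To illustrate the inference step, here is a hedged sketch of Metropolis-Hastings over entity-linking assignments: the state maps each mention to one candidate entity, a proposal re-links a single mention, and the move is accepted with probability min(1, score(new)/score(old)). The scoring function and candidate lists are placeholders, not the factor graphs from the talk.

```python
import random

def metropolis_hastings(mentions, candidates, score, steps=1000):
    """mentions: list of mention ids; candidates: mention -> list of entities;
    score: assignment dict -> positive unnormalized probability."""
    state = {m: random.choice(candidates[m]) for m in mentions}
    for _ in range(steps):
        m = random.choice(mentions)                 # propose: re-link one mention
        proposal = dict(state)
        proposal[m] = random.choice(candidates[m])
        ratio = score(proposal) / max(score(state), 1e-12)
        if random.random() < min(1.0, ratio):       # accept or reject the move
            state = proposal
    return state
```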

10.05.2017 - 16:00 h

Fueling many recent advances in NLP, continuous word embeddings have received much attention in recent years. However, the popular word2vec implementation assumes a single vector per word type, an approach that ignores polysemy and homonymy. This talk will give an overview of extensions to this model that support multiple senses per word and discuss whether modeling flaws still remain and whether they can be solved.
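The single-vector-per-type limitation can be seen directly in a toy example: however "bank" is used, a standard word2vec model returns exactly one embedding for it. gensim is used here purely as an example implementation, and the two-sentence corpus is an assumption.

```python
from gensim.models import Word2Vec

# Tiny toy corpus mixing the financial and the riverside sense of "bank".
corpus = [
    ["the", "bank", "approved", "the", "loan"],
    ["we", "sat", "on", "the", "river", "bank"],
]
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, seed=1)

# Both usages map to the same single vector:
print(model.wv["bank"].shape)   # (50,) -- one vector for all senses
```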
 

04.05.2017 - 10:15 h
