Sergei Koltcov

Entropy models are widely used in various scientific fields, such as statistical physics, biology, economics, and machine learning. However, while the models developed in statistical physics have mostly been based on deformed entropies, including the entropies of Rényi, Tsallis, and Sharma-Mittal, machine learning has mainly relied on the Boltzmann-Gibbs-Shannon entropy. This type of entropy is often used as a regularizer of topic models (TM) or for their diagnosis. Topic modeling is a class of algorithms that restore a multidimensional distribution as a mixture of hidden distributions. One of the unsolved problems in TM is the choice of the number of distributions in the mixture; another is the semantic stability of the resulting topics.
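
The two entropy families mentioned in the abstract can be compared on a toy topic distribution. The sketch below (an illustration, not the speaker's method; the example distribution is invented) computes the Boltzmann-Gibbs-Shannon entropy and the Rényi entropy of order q, which recovers the Shannon case in the limit q → 1:

```python
import numpy as np

def shannon_entropy(p):
    """Boltzmann-Gibbs-Shannon entropy of a discrete distribution."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi_entropy(p, q):
    """Renyi entropy of order q (q != 1); tends to Shannon as q -> 1."""
    p = p[p > 0]
    return np.log(np.sum(p ** q)) / (1.0 - q)

# A toy topic-word distribution: a peaked topic has low entropy,
# the uniform distribution maximises it.
topic = np.array([0.7, 0.1, 0.1, 0.05, 0.05])
print(shannon_entropy(topic))
print(renyi_entropy(topic, q=2.0))
```

Varying q deforms the entropy's sensitivity to rare versus frequent words, which is what makes the deformed entropies attractive as TM regularizers.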

26.11.2018 - 16:15 h

Convolutional neural networks (CNNs) have achieved great success in many fields of machine learning. It is, however, not obvious how convolution can be performed on non-Euclidean structures such as graphs. Starting from a simple diffusion model, we examine different concepts, namely the graph Laplacian matrix and the Fourier transform, and show the relations between them. The convolution on graphs can be defined naturally once these relations become clear.
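
The chain of concepts in the abstract can be sketched on a tiny graph (the graph and filter below are invented for illustration): build the Laplacian, use its eigenvectors as the graph Fourier basis, and filter a node signal in that basis, which for the diffusion filter g(λ) = exp(−tλ) is exactly one step of heat diffusion:

```python
import numpy as np

# Toy undirected path graph on 4 nodes, given by its adjacency matrix A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # (unnormalised) graph Laplacian

# Eigendecomposition: the eigenvectors U play the role of Fourier modes,
# the eigenvalues lam are the graph "frequencies".
lam, U = np.linalg.eigh(L)

x = np.array([1.0, 0.0, 0.0, 0.0])   # a signal living on the nodes
x_hat = U.T @ x                      # graph Fourier transform of x

# Spectral convolution: multiply by a filter g(lam) in the spectral
# domain and transform back; g(lam) = exp(-t*lam) is the heat kernel.
t = 0.5
y = U @ (np.exp(-t * lam) * x_hat)
```

Because the constant eigenvector has eigenvalue 0, diffusion conserves the total mass of the signal while smoothing it over neighbouring nodes.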

22.11.2018 - 10:15 h

Sensory data in sequential format can be obtained from different sensors describing different events. As a clear example, a smartphone has several built-in sensors such as an accelerometer, a gyroscope, and a magnetometer. Independently, each sensor continuously measures an action value (e.g. acceleration) at each time stamp. However, interpreting a series of instantaneous actions as higher-level events is complicated by the lack of information in the one-dimensional series and the high similarity among different events. Inspired by image processing, sensory words are new descriptors of sequential data, which capture the magnitude and orientation of data points and present them in a frequency histogram.
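
A minimal sketch of this idea, assuming a uniformly sampled one-dimensional signal (the function name and the toy accelerometer series are illustrative, not the authors' implementation): treat each consecutive pair of samples like an image gradient, compute its orientation and magnitude, and accumulate them in a normalised frequency histogram:

```python
import numpy as np

def sensory_word_histogram(signal, n_bins=8):
    """Sketch of a 'sensory word' descriptor: for each consecutive pair
    of samples, compute the orientation and magnitude of the local change
    (as in image gradient histograms) and bin them by orientation."""
    signal = np.asarray(signal, dtype=float)
    dt = 1.0                              # assumed uniform sampling step
    dy = np.diff(signal)
    orientation = np.arctan2(dy, dt)      # angle of the local slope
    magnitude = np.hypot(dy, dt)          # size of the local change
    bins = np.linspace(-np.pi / 2, np.pi / 2, n_bins + 1)
    hist, _ = np.histogram(orientation, bins=bins, weights=magnitude)
    return hist / hist.sum()              # normalised frequency histogram

acc = [0.0, 0.2, 0.9, 1.0, 0.4, 0.1]     # toy accelerometer series
h = sensory_word_histogram(acc)
```

Two events with similar raw values but different dynamics then yield different histograms, which is what makes the descriptor discriminative.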

08.11.2018 - 10:15 h

The aim of the research project GazeMining by WeST and EYEVIDO GmbH is to capture Web sessions semantically and thus obtain a comprehensive, rich picture of the visual content presented to users, alongside their attention and interactions. This talk will provide the overall motivation, the current status, and the future plans of the project.

18.10.2018 - 10:15 h

People widely use online social media to search for authoritative information and to disseminate and communicate during breaking events such as natural disasters and political elections. On the other hand, along with verified information, social media are also used to spread rumours, which can have undesirable consequences. Early detection of emerging rumours is therefore a crucial task, and a challenging one due to the lack of sufficient information about a circulating rumour.

11.10.2018 - 10:15 h

In recent years, scalable RDF stores in the cloud have been developed, in which graph data is distributed over compute and storage nodes to scale with the demands of query processing and memory.

04.10.2018 - 10:15 h

With the popularity of RDF as an independent data model came the need for specifying constraints on RDF graphs, and for mechanisms to detect violations of such constraints. One of the most promising schema languages for RDF is SHACL, a recent W3C recommendation. Unfortunately, the specification of SHACL leaves open the problem of validation against recursive constraints. This omission is important because SHACL by design favors constraints that reference other ones, which in practice may easily yield reference cycles. In this paper, we propose a concise formal semantics for the so-called “core constraint components” of SHACL. This semantics handles arbitrary recursion, while being compliant with the current standard.

27.09.2018 - 10:15 h
Takashi Matsubara

Since deep learning is a very flexible framework, it works well for various tasks without expert knowledge, but it has difficulty leveraging explicit knowledge. Deep learning also requires massive datasets and is applicable only to limited tasks. I introduce deep generative models, which are Bayesian networks implemented on deep neural networks. By expressing our knowledge as the network structure, a deep generative model works on a small dataset and provides interpretable results.
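
The core idea can be sketched with a minimal two-variable model (the decoder weights and dimensions below are invented and untrained, purely to show the structure): the Bayesian network p(x, z) = p(z) p(x | z) is realised by sampling a latent cause z from its prior and pushing it through a small neural decoder to generate an observation x:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny generative model p(x, z) = p(z) p(x | z): the latent cause z is
# sampled first, then the observation x is generated from it through a
# small neural decoder. This mirrors a Bayesian network whose conditional
# distributions are parameterised by neural networks.
W1 = rng.normal(size=(2, 8))   # decoder weights (untrained, illustrative)
W2 = rng.normal(size=(8, 4))

def decode(z):
    h = np.tanh(z @ W1)        # hidden layer
    return h @ W2              # mean of p(x | z)

def sample(n):
    z = rng.normal(size=(n, 2))                    # prior p(z) = N(0, I)
    x = decode(z) + 0.1 * rng.normal(size=(n, 4))  # observation noise
    return z, x

z, x = sample(100)
```

Encoding domain knowledge then amounts to choosing which latent variables exist and which arrows connect them, rather than hand-crafting features.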

07.09.2018 - 10:15 h

We present a family of stochastic local search algorithms for finding a single stable extension in an abstract argumentation framework. These incomplete algorithms work on random labellings of arguments, iteratively selecting a random mislabelled argument and flipping its label. We present a general version of this approach and an optimisation that allows for greedy selection of arguments. We conduct an empirical evaluation with benchmark graphs from the previous two ICCMA competitions and further random instances. Our results show that our approach is competitive in general and significantly outperforms previous direct and reduction-based approaches for the Barabási-Albert graph model.
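
The basic loop can be sketched as follows (a simplified illustration of the general approach, not the authors' implementation; the toy framework at the end is invented). For stable semantics, an "in" argument is correctly labelled iff none of its attackers is "in", and an "out" argument iff at least one attacker is "in"; the search flips random violators of this condition:

```python
import random

def sls_stable(args, attacks, max_flips=10000, seed=0):
    """Stochastic local search for a stable extension: start from a
    random in/out labelling and repeatedly flip the label of a random
    mislabelled argument until no violation remains."""
    rng = random.Random(seed)
    attackers = {a: [b for (b, c) in attacks if c == a] for a in args}

    def mislabelled(a, label):
        has_in_attacker = any(label[b] for b in attackers[a])
        # "in" arguments must have no "in" attacker;
        # "out" arguments must have at least one.
        return label[a] == has_in_attacker

    label = {a: rng.random() < 0.5 for a in args}   # True means "in"
    for _ in range(max_flips):
        bad = [a for a in args if mislabelled(a, label)]
        if not bad:
            return {a for a in args if label[a]}    # a stable extension
        flip = rng.choice(bad)
        label[flip] = not label[flip]
    return None   # flip budget exhausted; restart in practice

# Toy framework: a and b attack each other, and both attack c.
args = ["a", "b", "c"]
attacks = [("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")]
ext = sls_stable(args, attacks)
```

For this framework the search returns one of the two stable extensions, {a} or {b}; being incomplete, the algorithm cannot certify that no stable extension exists when the flip budget runs out.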

06.09.2018 - 10:15 h
Muhammad Ahmad

In both conversation and writing, grammar gives us the opportunity to avoid articulating parts of a sentence that are overtly expressed in the preceding linguistic context. For instance, in the sentence /I wanted to play football but I couldn't/, the phrase /play football/ can be dropped after /couldn't/ because it can be understood from the context. In linguistics, this phenomenon is known as verb phrase (VP) ellipsis. Detecting and resolving ellipsis leads to a proper understanding of the text, which can help improve language understanding systems. Since this phenomenon is optional, the challenge was to find a way to systematically distinguish the auxiliaries and modals that indicate VP ellipsis from those that do not.
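
As a naive illustration of the detection side of this task (not the speaker's method; the word list and clause-boundary heuristic are invented simplifications), one can flag auxiliaries and modals that are not followed by a verb phrase, e.g. those ending a clause, as candidate VP-ellipsis sites:

```python
import re

AUXILIARIES = {"do", "does", "did", "can", "could", "will", "would",
               "shall", "should", "may", "might", "must",
               "have", "has", "had"}

def vpe_candidates(sentence):
    """Naive sketch: flag auxiliaries/modals followed by punctuation or a
    conjunction as potential VP-ellipsis sites. A real detector would use
    POS tags and richer context, not a word list."""
    tokens = re.findall(r"[\w']+|[.,!?;]", sentence.lower())
    enders = {".", ",", "!", "?", ";", "but", "and", "or"}
    sites = []
    for i, tok in enumerate(tokens):
        base = tok[:-3] if tok.endswith("n't") else tok   # strip negation
        if base in AUXILIARIES and (i + 1 == len(tokens)
                                    or tokens[i + 1] in enders):
            sites.append(tok)
    return sites

print(vpe_candidates("I wanted to play football but I couldn't."))
```

On the example sentence this flags /couldn't/, while /can/ in "I can play football." is not flagged because a full verb phrase follows it.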

23.08.2018 - 10:15 h