A compendium of applications of tailor-made network-theoretic tools, devised and implemented in a data-driven fashion, will be presented. In the first part, a (formerly) novel centrality metric, aptly named “bridgeness” and based on a decomposition of the standard betweenness centrality, will be introduced. A prominent feature is that it is agnostic to any prior assumption about community structure. A second application aims at describing dynamic features of temporal graphs that emerge at the mesoscopic level. A dataset comprising 40 years' worth of selected scientific publications is used to highlight the appearance and evolution in time of a specific field of study: “wavelets”.
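As a hedged sketch (the talk's exact decomposition may differ), one common way to build such a metric is to keep only the betweenness contributions of shortest paths whose endpoints lie outside the focal node's direct neighbourhood; all names below are illustrative:

```python
from collections import deque
from itertools import combinations

def all_shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths via BFS layering plus backtracking."""
    dist, parents, queue = {s: 0}, {s: []}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], parents[v] = dist[u] + 1, [u]
                queue.append(v)
            elif dist[v] == dist[u] + 1:
                parents[v].append(u)
    if t not in dist:
        return []
    paths = []
    def backtrack(v, suffix):
        if v == s:
            paths.append([s] + suffix)
        else:
            for p in parents[v]:
                backtrack(p, [v] + suffix)
    backtrack(t, [])
    return paths

def betweenness_and_bridgeness(adj):
    """Unnormalised betweenness, plus a bridgeness-like score that keeps
    only contributions from endpoint pairs outside the focal node's
    direct neighbourhood (illustrative definition)."""
    bet = {v: 0.0 for v in adj}
    bri = {v: 0.0 for v in adj}
    for s, t in combinations(sorted(adj), 2):
        paths = all_shortest_paths(adj, s, t)
        if not paths:
            continue
        counts = {}
        for path in paths:
            for v in path[1:-1]:          # interior nodes only
                counts[v] = counts.get(v, 0) + 1
        for v, c in counts.items():
            frac = c / len(paths)
            bet[v] += frac
            if s not in adj[v] and t not in adj[v]:
                bri[v] += frac            # both endpoints far from v
    return bet, bri

# Two triangles joined through node 3: node 3 is the global bridge,
# nodes 2 and 4 are the "local" gateways of their communities.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (4, 6), (5, 6)]
adj = {v: set() for v in range(7)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
bet, bri = betweenness_and_bridgeness(adj)
```

On this toy graph, the gateway nodes 2 and 4 score high betweenness but zero bridgeness, while the pure bridge node 3 remains highest under both, showing how the decomposition separates global bridges from local hubs without any community detection step.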
Anomalous diffusion processes, both in the superdiffusive and subdiffusive regimes, have spurred considerable theoretical research effort, along with experimental validation, for decades. Their description, however, relies strongly on the existence of a metric in continuous space. Complex networks lack an intrinsic metric and, in this talk, I will present some theoretical "recipes" to work around this issue and recover such regimes on networks as well. On the applied side, some machine learning algorithms, like the celebrated PageRank, exploit diffusion for classification and ranking tasks. I will therefore show how, through enhanced diffusion regimes, it is possible to address some shortcomings of those algorithms and improve classification performance.
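For reference, standard PageRank is itself plain diffusion with uniform teleportation; the enhanced regimes the talk alludes to modify the transition step. A minimal power-iteration sketch of the baseline (variable names are illustrative, not from the talk):

```python
import numpy as np

def pagerank(adj, alpha=0.85, tol=1e-12, max_iter=1000):
    """Power iteration for PageRank: repeated diffusion along a
    row-stochastic transition matrix plus uniform teleportation."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out_deg = A.sum(axis=1)
    # Row-normalise; dangling nodes (no out-links) jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 A / np.where(out_deg == 0, 1, out_deg)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_next = alpha * (r @ P) + (1 - alpha) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# Undirected 3-node chain 0 - 1 - 2: the middle node ranks highest.
chain = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]])
ranks = pagerank(chain)
```

Replacing `P` with a kernel allowing long-range hops (a superdiffusive regime) changes how mass spreads while the same iteration applies, which is one natural place to plug in the modifications discussed in the talk.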
Knowledge-based authentication methods are vulnerable to shoulder surfing. The widespread use of these methods, combined with a failure to address this limitation, could result in users' information being compromised.
At WeST we have had prior discussions on how eye-tracking studies can be useful in other projects of our group. We currently have 8 eye trackers in our lab, and we should plan to extend the scope of our eye-tracking expertise and resources. In this direction, I will take up a topic common to most of our group members, namely how eye tracking has been used to evaluate ontologies. I will take a simple example from related work to evaluate two commonly used ontology visualization techniques, namely the indented list and the graph. I will discuss the eye-tracking experiment and analysis procedure and how it complements the set of existing evaluation protocols for ontology visualization.
Graph-based data models allow for flexible data representation. In particular, semantic data based on RDF and OWL fuels use cases ranging from general knowledge graphs to domain-specific knowledge. The flexibility of these approaches, however, makes programming with semantic data tedious and error-prone. In particular, the logic-based data descriptions used in OWL are problematic for existing error-detection techniques such as type systems. In the LISeQ project, we investigate the integration of such data descriptions and their associated query languages into programming languages. In this presentation, we discuss the first publication (currently under submission) of this project: ScaSpa.
Voice User Interfaces (VUIs) are mostly limited to smart agents like Google Assistant or Alexa. These agents are capable of interpreting commands and executing instructions on the particular platform on which they run. However, for Web interaction, where the context is already known to the user, voice commands beyond the navigation and selection of links have not been designed effectively. In this talk, I will discuss an approach for the efficient use of voice commands for domain-specific interaction and present the challenges I am currently encountering.
This thesis deals with the solvability of planning problems. To this end, unsolvability will be mapped to the range from 0 to 1 with the aid of inconsistency measures. After the problem has been explored, its solvability will be analysed. If there is no solution, different measurement methods can be applied to the problem itself and to its partially explored solution.
Entities are commonly modeled by explicitly describing their features and their relations to other entities. The most extensive collections of such entity-centric information are large-scale Knowledge Graphs (KGs) like DBpedia that describe the interdependency of millions of real-world entities and abstract concepts.
One benefit of KG entities is that they serve as universal identifiers and thus provide a way to link content across languages and modalities once their occurrences in images and (multilingual) text have been annotated.
Argumentation networks and Bayesian networks are formalisms used in AI that serve different purposes. However, they do share some conceptual similarities: both are based on directed graphs, and edges represent relationships of influence among variables. In this talk I explore these similarities and, based on them, propose a unifying perspective that forms the basis for a new approach to probabilistic argumentation.
I revisit the notion of "Bose-Einstein Condensation in Complex Networks" based on . In the fitness model for evolving networks, the rate at which existing nodes acquire new links is proportional to the node's fitness and degree. Networks described by the fitness model can be mapped to an equilibrium Bose gas, thus allowing us to "reuse" conclusions from the well-studied field of thermodynamics, in particular Bose statistics.
Akin to their counterparts in physical systems, complex networks can undergo Bose-Einstein condensation, which neatly predicts the "winner-takes-all" phenomena observed in reality.
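The growth rule described above can be sketched in a minimal simulation, in the spirit of the Bianconi-Barabási fitness model: each new node attaches to m existing nodes chosen with probability proportional to fitness × degree (all parameter names here are illustrative):

```python
import random

def fitness_model(n, m=2, fitness=None, seed=0):
    """Grow a network in which each new node attaches to m existing
    nodes, chosen with probability proportional to fitness * degree."""
    rng = random.Random(seed)
    if fitness is None:
        fitness = [rng.random() for _ in range(n)]
    # Seed the growth process with a complete graph on m + 1 nodes.
    deg = {i: 0 for i in range(m + 1)}
    edges = []
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.append((i, j))
            deg[i] += 1
            deg[j] += 1
    for new in range(m + 1, n):
        weights = {i: fitness[i] * deg[i] for i in deg}
        targets = set()
        while len(targets) < m:
            # Roulette-wheel selection over the not-yet-chosen nodes.
            total = sum(w for i, w in weights.items() if i not in targets)
            pick = rng.random() * total
            chosen = None
            for i, w in weights.items():
                if i in targets:
                    continue
                pick -= w
                if pick <= 0:
                    chosen = i
                    break
            if chosen is None:  # guard against float rounding
                chosen = next(i for i in weights if i not in targets)
            targets.add(chosen)
        for t in targets:
            edges.append((new, t))
            deg[t] += 1
        deg[new] = m
    return deg, edges, fitness

deg, edges, fit = fitness_model(200, m=2, seed=42)
```

Depending on the shape of the fitness distribution, rerunning this at larger n either yields a fit-get-rich phase or the condensed phase, where the highest-fitness node captures a finite fraction of all links, i.e. the "winner-takes-all" signature.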