Microtask crowdsourcing has proven to be a successful method for engaging humans in tasks that, even when simple and repetitive, cannot be carried out as effectively by fully automatic techniques. With an increasing supply of micro-labor and a growing available workforce, new microtask platforms have emerged, providing an extensive list of marketplaces where requesters offer microtasks and crowd workers complete them. One aspect of microwork that remains under discussion is quality assurance. Besides methods to identify inappropriate behavior (e.g., scamming), finding a suitable match between available microtasks and crowd workers has been acknowledged as a promising way to improve the quality of crowd work.
Images and photos play a major role in today's media and social networks. With the growing prevalence of recording devices such as smartphones and digital cameras, there is hardly a moment that is not captured or documented. This produces large volumes of images which, if they are to be organized, must be annotated with respect to their content. In this discipline, humans remain far ahead of the known automated techniques.
We introduce a novel approach for building language models based on a systematic, recursive exploration of skip n-gram models, which are interpolated using modified Kneser-Ney smoothing. Our approach generalizes language models, since it contains the classical interpolation with lower-order models as a special case.
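The smoothing and interpolation scheme is not reproduced here, but the notion of a skip n-gram can be illustrated with a small sketch. Assuming the common convention of replacing skipped inner positions with a wildcard token (the function name, wildcard symbol, and exact convention are illustrative, not taken from the abstract):

```python
from itertools import product

def skip_ngrams(tokens, n, skip_token="_"):
    """Enumerate the n-grams of `tokens` together with their 'skip'
    variants, in which any subset of the inner positions (neither the
    first nor the last word) is replaced by a wildcard token."""
    grams = []
    for i in range(len(tokens) - n + 1):
        window = tokens[i:i + n]
        inner = range(1, n - 1)
        # one boolean per inner position: skip it or keep it
        for mask in product([False, True], repeat=n - 2):
            gram = list(window)
            for pos, skip in zip(inner, mask):
                if skip:
                    gram[pos] = skip_token
            grams.append(tuple(gram))
    return grams
```

For example, the trigrams of "a b c d" yield both the classical grams ("a", "b", "c") and ("b", "c", "d") and their skip variants ("a", "_", "c") and ("b", "_", "d"); counts over such generalized grams are what a skip n-gram model would then smooth and interpolate.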
In this talk I will give a brief overview of my past research activity and delineate my future research at WeST.
This talk looks at how the parameters of a complex network model may be estimated. Specifically, I examine the application of Approximate Bayesian Computation (ABC) as an alternative to likelihood-based or link-based techniques.
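The talk's concrete model and summary statistics are not given here; as an illustration only, the simplest ABC variant, rejection sampling, can be sketched on a toy network model (the function names, the Erdős–Rényi-style toy model, the tolerance, and the prior are all assumptions for this example):

```python
import random
import statistics

def abc_rejection(observed_stat, simulate, prior_sample, n_draws=4000, tol=0.5):
    """ABC rejection sampling: keep parameter draws whose simulated
    summary statistic lies within `tol` of the observed statistic."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()          # draw a parameter from the prior
        stat = simulate(theta)          # simulate a network and summarize it
        if abs(stat - observed_stat) <= tol:
            accepted.append(theta)
    return accepted

# Toy example: recover the edge probability p of a G(N, p) random graph
# from its mean degree, without ever evaluating a likelihood.
random.seed(0)
N = 30

def simulate(p):
    # mean degree of a G(N, p) graph, simulated edge by edge
    edges = sum(1 for i in range(N) for j in range(i + 1, N)
                if random.random() < p)
    return 2 * edges / N

observed = simulate(0.1)                # stand-in for observed data
posterior = abc_rejection(observed, simulate, lambda: random.uniform(0, 0.3))
estimate = statistics.mean(posterior)   # posterior mean, roughly near 0.1
```

The appeal for complex network models is that `simulate` can be any generative procedure, however intractable its likelihood; only forward simulation and a distance between summary statistics are required.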
In the context of open and distributed innovation, companies make use of technological developments that occur beyond their legal boundaries. To facilitate these developments, firms dedicate resources, e.g. to open source software projects such as the Linux kernel project, through their employees, who are active in and part of the community.
Manually selecting subsets of photos from large collections in order to present them to friends or colleagues or to print them as photo books can be a tedious task.
KONECT (The Koblenz Network Collection) is a project that collects diverse network datasets and, using tools of network analysis, computes network statistics, renders representative plots, and implements various link prediction algorithms.
We introduce an algorithm, currently under study, for relationship queries over large relationship graphs, which from an algorithmic point of view can be considered a Steiner tree problem. The algorithm runs in two phases. In the first phase, it quickly builds an initial tree that interconnects all query nodes of the graph.
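The algorithm itself is not published in this abstract; purely as an illustration of the first phase's idea, a quick initial Steiner tree can be grown with a standard shortest-path heuristic that repeatedly attaches the next query node to the partial tree (all function names and the unweighted-BFS choice are assumptions for this sketch):

```python
from collections import deque

def bfs_path(adj, src, targets):
    """Shortest unweighted path from `src` to the nearest node in `targets`,
    returned as a list of nodes from that target back to `src`."""
    parent = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u in targets:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None  # target set unreachable

def quick_steiner_tree(adj, terminals):
    """Phase-one heuristic: start from one query node and repeatedly
    attach each remaining query node via a shortest path to the tree."""
    terminals = list(terminals)
    tree_nodes = {terminals[0]}
    tree_edges = set()
    for t in terminals[1:]:
        if t in tree_nodes:
            continue
        path = bfs_path(adj, t, tree_nodes)
        for a, b in zip(path, path[1:]):
            tree_edges.add((min(a, b), max(a, b)))  # undirected edge
        tree_nodes.update(path)
    return tree_edges
```

On the path graph 1-2-3 with an extra branch 2-4-5 and query nodes {1, 3, 5}, this returns the four edges connecting all three query nodes through nodes 2 and 4. Heuristics of this family give a fast first tree that a second phase can then improve.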
We introduce an algorithm for generating graphs that reproduces the properties of an input graph with very high precision, using only six subgraph counts as information.