Loss of voluntary muscular control while cognitive functions are preserved is a common symptom of neuromuscular disorders, leading to a variety of functional deficits, including the inability to operate software tools that require conventional interfaces such as a mouse, keyboard, or touchscreen. As a result, affected individuals are marginalized and unable to keep up with the rest of society in a digitized world.
MAMEM's goal is to integrate these people back into society by increasing their potential for communication and exchange in leisure (e.g., social networks) and non-leisure contexts (e.g., the workplace). To this end, MAMEM delivers the technology to enable interface channels that can be controlled through eye movements and mental commands. This is accomplished by extending the core API of current operating systems with advanced function calls for accessing the signals captured by an eye-tracker, an EEG recorder, and bio-measurement sensors. Pattern recognition and tracking algorithms are then employed to jointly translate these signals into meaningful control and to enable a set of novel paradigms for multimodal interaction. These paradigms will allow for low-level (e.g., move a mouse), meso-level (e.g., tick a box), and high-level (e.g., select n-out-of-m items) control of interface applications through eyes and mind. A set of persuasive design principles, together with profiles modeling the users' (dis-)abilities, will also be employed for designing interfaces adapted to disabled users. MAMEM will engage three different cohorts of disabled people (i.e., people with Parkinson's disease, muscular disorders, and tetraplegia) who will be asked to test a set of prototype applications for multimedia authoring and management. MAMEM's final objective is to assess the impact of this technology in making these people more socially integrated by, for instance, becoming more active in sharing content through social networks and communicating with their friends and family.
For more details, please visit the project website.
At WeST we are currently developing different eye-controlled interfaces as part of the MAMEM project. Below are some example demonstrations of 1) eye-controlled Web browsing, 2) eye-controlled social media browsing, and 3) eye-controlled gaming.
May 2015 - July 2018
Source of funding:
EU Project Horizon 2020 - The EU Framework Programme for Research and Innovation
- CERTH - Centre for Research & Technology Hellas
- EB Neuro S.p.A (EBN)
- SIM GmbH
- Eindhoven University of Technology (TUe)
- Muscular Dystrophy Association (MDA) Hellas
- AUTH - School of Medicine
- Sheba Medical Center (SMC)
Project home page
I studied computer science and computational linguistics at the Universität Erlangen-Nürnberg and at the University of Pennsylvania. I worked in the former computational linguistics research group at the Universität Freiburg and did my Ph.D. in computer science in the Faculty for Technology in 1998. Afterwards I joined Universität Stuttgart, Institute IAT & Fraunhofer IAO, before I moved on to the Universität Karlsruhe (now: KIT), where I progressed from project lead to lecturer and senior lecturer and did my habilitation in 2002. In 2004 I became professor for databases and information systems at Universität Koblenz-Landau, where I founded the Institute for Web Science and Technologies (WeST) in 2009. In parallel, I have held a Chair for Web and Computer Science at the University of Southampton since March 2015.
Data represent the world on our computers. While the world is very intriguing, data can be quite boring if one does not know what they mean. I am interested in making data more meaningful in order to find interesting insights into the world outside.
How does meaning arise?
- One can model data and information. Conceptual models and ontologies are the foundations for knowledge networks that enable the computer to treat data in a meaningful way.
- Text and data mining as well as information extraction find meaningful patterns in data (e.g., using ontology learning or text clustering) as well as connections between data and its use in context (e.g., using smartphones). Hence, knowledge networks may be found in data.
- Humans communicate information. In order to understand what data and information mean, one has to understand social interactions. In the context of social networks, knowledge networks become meaningful for human consumption.
- Ultimately, meaning does not exist in a void. Data and information must be communicated to people who can make use of the insights they offer. Interaction between humans and computers must happen in a way that matches the meaning of data and information.
The World Wide Web is the largest information construct made by mankind to convey meaningful data. Web Science is the discipline that considers how networks of people and knowledge arise in the Web, how humans deal with them, and what consequences this has for all of us. The Web is a meaning machine that I want to understand through my research.