In this master thesis, you would develop an approach that caters to the specific needs of these individuals. The focus is on interpreting information from a combination of user input modes (those most convenient for a patient) and building a multimodal application interface for him or her. Multimodal interaction means interaction via several channels, such as speech, gesture, eye tracking, and graphics. The mode of interaction and the interface design would cater to the specific user's requirements.
Eye tracking devices capture the user's point of interest through gaze control. The eyes reveal where the user is focusing attention, and the computer translates every change of focus into a pointing action. At WeST we already have a project group and expertise in working with eye tracking applications (e.g., a browser). The ultimate goal is to achieve ease and robustness of user communication by integrating further methods, such as automatic speech recognition, to improve the output of a multimodal application. Several research questions can be addressed in the master thesis, e.g.: When should certain interaction modalities be used? How should multiple interaction modalities (speech and eye tracking) be combined? How can modalities be adapted to the context of use for a certain user with limited abilities?
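To make the combination of modalities concrete, the fusion of gaze and speech described above could be sketched as follows. This is a minimal illustration, not part of the MAMEM prototype: all names (GazeFixation, fuse, the dwell threshold, the command vocabulary) are hypothetical, and the fusion rule (act only when the gaze has settled on a target and a command word was recognized) is one simple possibility among many.

```python
# Minimal sketch of gaze + speech fusion (all names hypothetical).
# The gaze fixation supplies the target; the speech recognizer
# supplies the action ("look and speak" selection).
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeFixation:
    x: float          # screen coordinates of the current fixation
    y: float
    duration_ms: int  # how long the user has dwelt on this point

def fuse(fixation: GazeFixation, spoken_command: str,
         dwell_threshold_ms: int = 300) -> Optional[dict]:
    """Combine the two modalities: act only when the gaze has settled
    (dwell time above a threshold) and a known command was spoken."""
    if fixation.duration_ms < dwell_threshold_ms:
        return None  # gaze not stable enough to disambiguate a target
    command = spoken_command.strip().lower()
    if command in {"click", "open", "scroll"}:
        return {"action": command, "target": (fixation.x, fixation.y)}
    return None  # unrecognized command: no action

# Example: the user dwells on (420, 310) and says "Open".
event = fuse(GazeFixation(420, 310, duration_ms=450), "Open")
```

A design question this sketch leaves open, and which the thesis would investigate, is how to choose and adapt the dwell threshold and command vocabulary for a user with limited abilities.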
You would involve patients in the requirements analysis, user-centered design, and evaluation. We will provide you with contacts to the target users (e.g., our contact at the University of Cologne), the MAMEM project resources, a working prototype with eye tracking interaction, and a state-of-the-art eye tracking device. Some relevant papers in the field are attached [4,5,6].