This research line focuses on applying technologies to achieve natural interaction between humans, machines, and their environment, as well as on improving the user experience through gamification techniques (and, in turn, using interaction technologies to enrich gamified applications).
This interaction comprises an input (human-to-machine) and an output (machine-to-human) channel. Regarding the former, we work on speech and speaker recognition; the analysis of gestures and expressions across different modalities (image, voice, text, and their combinations); and the identification of environmental sounds to adapt the interaction to both the environment and the user.
Regarding the output channel, we focus on developing a proprietary speech synthesis system (text-to-speech and voice transformation), the generation of virtual characters (avatars) with natural movements (derived from motion capture, MoCap), and the mining and indexing of multimedia data (the processing and integration of different information modalities such as speech, image, and text).
HCI has a myriad of potential applications, such as video games, animated TV series, and audiovisual productions, as well as broader application domains such as education (e.g., mEducation), Smart Cities (e.g., information points), and Health (e.g., ergonomics, tele-assistance, and applications for the elderly and people with disabilities).
Human Computer Interaction. Contact: Francesc Alías (falias@salle.url.edu)