This research line focuses on applying technology to achieve natural interaction between humans, machines and their environment, while also improving the user experience.
This interaction comprises an input (human-to-machine) and an output (machine-to-human) channel. On the input side, we work on speech and speaker recognition, the analysis of gestures and expressions across different modalities (image, voice, text, and their combination), and the identification of environmental sounds so that the interaction can adapt to both the environment and the user.
On the output side, we focus on developing a proprietary speech synthesis system (text-to-speech and voice transformation), together with the generation of virtual characters (avatars) with natural movements (from motion capture, MoCap), and the mining and indexing of multimedia data (the processing and integration of different information modalities such as speech, image and text).
Natural interaction has a myriad of potential applications, such as video games, animated TV series, audiovisual productions, collaborative virtual environments, Smart Cities (e.g. information points), Health (e.g. ergonomics, tele-assistance), and applications for elderly and disabled users.
Human Computer Interaction
Contact: Francesc Alías (falias@salle.url.edu)