Show simple item record

dc.contributor.author	Butepage, Judith
dc.contributor.author	Ghadirzadeh, Ali
dc.contributor.author	Karadağ, Özge Öztimur
dc.contributor.author	Bjorkman, Marten
dc.contributor.author	Kragic, Danica
dc.date.accessioned	2021-02-19T21:16:17Z
dc.date.available	2021-02-19T21:16:17Z
dc.date.issued	2020
dc.identifier.issn	2296-9144
dc.identifier.uri	https://doi.org/10.3389/frobt.2020.00047
dc.identifier.uri	https://hdl.handle.net/20.500.12868/360
dc.description	WOS: 000531230100001	en_US
dc.description	PubMed: 33501215	en_US
dc.description.abstract	To coordinate actions with an interaction partner requires a constant exchange of sensorimotor signals. Humans acquire these skills in infancy and early childhood mostly by imitation learning and active engagement with a skilled partner. These skills require the ability to predict and adapt to one's partner during an interaction. In this work, we explore these ideas in a human-robot interaction setting in which a robot is required to learn interactive tasks from a combination of observational and kinesthetic learning. To this end, we propose a deep learning framework consisting of a number of components for (1) human and robot motion embedding, (2) motion prediction of the human partner, and (3) generation of robot joint trajectories matching the human motion. As long-term motion prediction methods often suffer from the problem of regression to the mean, our technical contribution here is a novel probabilistic latent variable model which does not predict in joint space but in latent space. To test the proposed method, we collect human-human interaction data and human-robot interaction data of four interactive tasks: "hand-shake," "hand-wave," "parachute fist-bump," and "rocket fist-bump." We demonstrate experimentally the importance of predictive and adaptive components, as well as low-level abstractions, to successfully learn to imitate human behavior in interactive social tasks.	en_US
dc.description.sponsorship	EU through the project socSMCs (H2020-FETPROACT-2014); Swedish Foundation for Strategic Research; EnTimeMent [H2020-FETPROACT-824160]; Knut and Alice Wallenberg Foundation	en_US
dc.description.sponsorship	This work was supported by the EU through the project socSMCs (H2020-FETPROACT-2014), the Swedish Foundation for Strategic Research, EnTimeMent (H2020-FETPROACT-824160), and the Knut and Alice Wallenberg Foundation.	en_US
dc.language.iso	eng	en_US
dc.publisher	Frontiers Media SA	en_US
dc.rights	info:eu-repo/semantics/openAccess	en_US
dc.subject	imitation learning	en_US
dc.subject	human-robot interaction	en_US
dc.subject	generative models	en_US
dc.subject	deep learning	en_US
dc.subject	sensorimotor coordination	en_US
dc.subject	variational autoencoders	en_US
dc.title	Imitating by generating: Deep generative models for imitation of interactive tasks	en_US
dc.type	article	en_US
dc.contributor.department	ALKÜ	en_US
dc.contributor.institutionauthor	0-To be determined
dc.identifier.doi	10.3389/frobt.2020.00047
dc.identifier.volume	7	en_US
dc.relation.journal	Frontiers in Robotics and AI	en_US
dc.relation.publicationcategory	Article - International Refereed Journal - Institutional Faculty Member	en_US


Files in this item:

Files	Size	Format	View

There are no files associated with this item.

This item appears in the following collection(s).
