
Effect of grasping uniformity on estimation of grasping region from gaze data

Yucel Z.
2019-01-01

Abstract

This study explores estimation of the grasping region of objects from gaze data. Our study is distinguished from previous work by accounting for the "grasping uniformity" of the objects. In particular, we consider three types of graspable objects: (i) objects with a well-defined graspable part (e.g., a handle), (ii) objects without a grip but with an intuitive grasping region, and (iii) objects without any grip or intuitive grasping region. We assume that these types determine how "uniform" the grasping region is across different graspers. In our experiments, we use the "Learning to grasp" data set and apply the method of [5] to estimate the grasping region from gaze data. We compute the similarity between estimations and ground-truth annotations for the three types of objects for subjects who (a) perform free viewing and (b) view the images with the intention of grasping. In line with many previous studies, similarity is found to be higher for non-graspers. An interesting finding is that the difference in similarity (between free viewing and viewing with the intention of grasping) is larger for type-iii objects and comparable for type-i and type-ii objects. Based on this, we believe that estimation of the grasping region from gaze data offers particularly large potential for "learning" to grasp type-iii objects.
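The evaluation described in the abstract compares a gaze-based grasp-region estimate against ground-truth annotations, separately per object type and viewing condition. The following is a minimal illustrative sketch of such a comparison, not the authors' code: it assumes Pearson correlation as the similarity measure and assumes the estimate is a dense map and the annotation a binary mask; the actual estimation method is that of [5], and the paper's similarity metric is not specified here. All function and variable names are hypothetical.

```python
# Illustrative sketch (assumed details, not the authors' implementation):
# compare a gaze-based grasp-region estimate against a ground-truth mask,
# then aggregate similarity per object type and viewing condition.
import numpy as np

def similarity(estimate: np.ndarray, ground_truth: np.ndarray) -> float:
    """Pearson correlation between an estimated grasp-region map
    (e.g., a gaze-density map) and a binary ground-truth annotation mask."""
    e = estimate.astype(float).ravel()
    g = ground_truth.astype(float).ravel()
    e = (e - e.mean()) / (e.std() + 1e-12)  # guard against constant maps
    g = (g - g.mean()) / (g.std() + 1e-12)
    return float(np.mean(e * g))

def mean_similarity_by_condition(samples):
    """samples: iterable of (object_type, condition, estimate, ground_truth),
    where object_type is 'i', 'ii' or 'iii' and condition is 'free_viewing'
    or 'grasp_intent'. Returns mean similarity per (type, condition)."""
    sums, counts = {}, {}
    for obj_type, condition, est, gt in samples:
        key = (obj_type, condition)
        sums[key] = sums.get(key, 0.0) + similarity(est, gt)
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

With per-type means in hand, the comparison reported in the abstract reduces to the gap between the two conditions for each object type, which is where the type-iii difference would show up.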
HAI 2019 - Proceedings of the 7th International Conference on Human-Agent Interaction

Use this identifier to cite or link to this document: https://hdl.handle.net/10278/5079601
Citations: Scopus 1; Web of Science 0