Friction from vision

(If you are looking for code to predict friction from vision, please check this project instead)

This page contains a collection of datasets for benchmarking algorithms for visual prediction of friction, as well as for studying human perception of friction.

The datasets are described in the following publication:

  • M. Brandao, K. Hashimoto, and A. Takanishi, Friction from vision: a study of algorithmic and human performance with consequences for robot perception and teleoperation, in 16th IEEE-RAS International Conference on Humanoid Robots, 2016, pp. 428-435.

    Friction estimation from vision is an important problem for robot locomotion through contact. The problem is challenging due to its dependence on many factors such as material, surface conditions and contact area. In this paper we 1) conduct an analysis of image features that correlate with humans’ friction judgements; and 2) compare algorithmic to human performance at the task of predicting the coefficient of friction between different surfaces and a robot’s foot. The analysis is based on two new datasets which we make publicly available. One is annotated with human judgements of friction, illumination, material and texture; the other is annotated with the static coefficient of friction (COF) of a robot’s foot and human judgements of friction. We propose and evaluate visual friction prediction methods based on image features, material class and text mining. Finally, we draw conclusions regarding the robustness to COF uncertainty required by control and planning algorithms; the low performance of humans at the task compared to simple predictors based on material label; and the promising use of text mining to estimate friction from vision.

    @INPROCEEDINGS{Brandao2016friction,
    author = {Martim Brandao and Kenji Hashimoto and Atsuo Takanishi},
    title = {Friction from Vision: A Study of Algorithmic and Human Performance
    with Consequences for Robot Perception and Teleoperation},
    booktitle = {16th IEEE-RAS International Conference on Humanoid Robots},
    year = {2016},
    pages = {428-435},
    month = {Nov},
    doi = {10.1109/HUMANOIDS.2016.7803311},
    topic = {Friction from vision},
    url = {http://www.martimbrandao.com/papers/Brandao2016-humanoids-friction.pdf}
    }

You can download the data here: Friction from vision datasets. Please cite the publication above if you use the datasets.

Ground-Truth Coefficient of Friction dataset (GTF)

[Example surfaces: leaves_0, granite_marble_1, dirt_3, carpet_rug_0]

This dataset was originally targeted at studying algorithmic and human performance at friction prediction, for robot locomotion applications.

The dataset consists of several measurements on a set of 43 walkable surfaces:

  • Coefficient of friction (COF) between the surface and a robot foot
  • Picture of the surface and image region where COF was measured
  • Surface material label
  • Human predictions of robot foot friction made by visual inspection of the pictures (each surface was rated by 12 subjects)
  • Train/test set splits used in the paper
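
As a concrete starting point, the sketch below shows one way to compare the averaged human judgements against the measured COF. It is a minimal sketch assuming a hypothetical CSV layout (a gtf_annotations.csv file with measured_cof and mean_human_cof columns); the actual dataset files may be organized differently.

    import csv

    # Hypothetical file name and columns; the actual dataset layout may
    # differ. Assumed columns: surface_id, material, measured_cof,
    # mean_human_cof.
    with open("gtf_annotations.csv", newline="") as f:
        surfaces = list(csv.DictReader(f))

    # Mean absolute error of averaged human judgements vs. measured COF.
    errors = [abs(float(s["mean_human_cof"]) - float(s["measured_cof"]))
              for s in surfaces]
    print(f"Human MAE over {len(surfaces)} surfaces: {sum(errors)/len(errors):.3f}")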

OpenSurfaces and Friction dataset (OSA+F)

[Example surfaces: osaf_tile_6, osaf_metal_9, osaf_wood_2, osaf_concrete_0]

This dataset was originally targeted at studying human perception of friction from vision. It is based on a subset of the OpenSurfaces dataset of Bell et al. [1] and the additional texture attributes of Cimpoi et al. [2]. On top of the image, label and gloss data they provide, we collected human (visual) judgements of friction.

The dataset consists of several measurements on a set of 96 walkable surfaces:

  • Picture of the surface and image region corresponding to the surface [1]
  • Surface material, scene and texture labels from [1, 2]
  • Human predictions of shoe friction made by visual inspection of the pictures (each surface was rated by 14 subjects)
  • Train/test set splits used in the paper
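
The publication finds that simple predictors based on the material label alone can outperform human judgements. A minimal sketch of that kind of baseline follows; the training pairs used in the usage example are made-up values for illustration, not values from the dataset.

    from collections import defaultdict

    def fit_material_baseline(train_pairs):
        """Predict COF as the mean measured COF of a material label,
        falling back to the global training mean for unseen materials.
        train_pairs: iterable of (material_label, cof) tuples."""
        sums, counts = defaultdict(float), defaultdict(int)
        for material, cof in train_pairs:
            sums[material] += cof
            counts[material] += 1
        global_mean = sum(sums.values()) / sum(counts.values())
        means = {m: sums[m] / counts[m] for m in sums}
        return lambda material: means.get(material, global_mean)

    # Illustrative usage with made-up COF values:
    predict = fit_material_baseline([("wood", 0.45), ("wood", 0.55), ("tile", 0.30)])
    print(predict("wood"))   # 0.50 (mean of the two wood samples)
    print(predict("metal"))  # ~0.43 (global mean; material unseen in training)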

References:

  • S. Bell, P. Upchurch, N. Snavely, and K. Bala, OpenSurfaces: a richly annotated catalog of surface appearance, ACM Trans. on Graphics (SIGGRAPH), vol. 32, iss. 4, 2013.
    @ARTICLE{Bell2013,
    author = {Sean Bell and Paul Upchurch and Noah Snavely and Kavita Bala},
    title = {Open{S}urfaces: A Richly Annotated Catalog of Surface Appearance},
    journal = {ACM Trans. on Graphics (SIGGRAPH)},
    year = {2013},
    volume = {32},
    number = {4},
    doi = {10.1145/2461912.2462002},
    }

  • M. Cimpoi, S. Maji, and A. Vedaldi, Deep filter banks for texture recognition and segmentation, in IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 3828-3836.
    @INPROCEEDINGS{Cimpoi2015,
    author = {Mircea Cimpoi and Subhransu Maji and Andrea Vedaldi},
    title = {Deep filter banks for texture recognition and segmentation},
    booktitle = {{IEEE} Conference on Computer Vision and Pattern Recognition, {CVPR}
    2015, Boston, MA, USA, June 7-12, 2015},
    year = {2015},
    pages = {3828--3836},
    doi = {10.1109/CVPR.2015.7299007},
    }

Subjective ranking of material slipperiness

This dataset was originally targeted at understanding whether text mining of large text sources (e.g. Wikipedia) can predict humans’ intuitive ranking of materials by friction. Brandao et al. (2016) shows some promising results.

The dataset consists of 19 different materials, ordered from most to least slippery by 19 human subjects. The ranking was collected through an online survey without image support, from material names only.
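
To evaluate a predicted ordering (e.g. one mined from text) against a human ranking, Spearman's rank correlation is a natural measure. Below is a self-contained sketch; the four materials in the example are illustrative, whereas the dataset itself ranks 19.

    def spearman_rho(rank_a, rank_b):
        """Spearman rank correlation between two orderings of the same
        items (no ties). rank_a and rank_b list the items from most to
        least slippery."""
        n = len(rank_a)
        pos_b = {item: i for i, item in enumerate(rank_b)}
        d2 = sum((i - pos_b[item]) ** 2 for i, item in enumerate(rank_a))
        return 1 - 6 * d2 / (n * (n * n - 1))

    # Illustrative 4-material example (the dataset itself ranks 19):
    human_ranking = ["ice", "tile", "wood", "carpet"]
    mined_ranking = ["ice", "wood", "tile", "carpet"]
    print(spearman_rho(human_ranking, mined_ranking))  # 0.8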