Friction-from-vision

(If you are looking for code to predict friction from vision, please check this project instead)

This page contains a collection of datasets for benchmarking algorithms for visual prediction of friction, as well as for studying human perception of friction.

The datasets are described in the following publication:

You can download the data here: Friction from vision datasets. Please cite the publication above if you use the datasets.

Ground-Truth Coefficient of Friction dataset (GTF)

This dataset was originally targeted at studying algorithmic and human performance at visual friction prediction, for robot locomotion applications.

The dataset consists of several measurements on a set of 43 walkable surfaces.
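As a rough illustration of how the ground-truth measurements could be used to benchmark a friction-from-vision algorithm, the Python sketch below compares predicted and measured friction coefficients. The file name, column names, and pandas/NumPy usage are assumptions made for the example and do not reflect the released data format.

  import numpy as np
  import pandas as pd

  # Hypothetical layout: one row per surface, with the measured ground-truth
  # coefficient of friction and a model's prediction from the surface image.
  df = pd.read_csv("gtf_surfaces.csv")  # assumed columns: surface_id, mu_measured, mu_predicted

  errors = df["mu_predicted"] - df["mu_measured"]
  mae = np.abs(errors).mean()           # mean absolute error over the surfaces
  rmse = np.sqrt((errors ** 2).mean())  # root-mean-square error

  print(f"MAE:  {mae:.3f}")
  print(f"RMSE: {rmse:.3f}")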

OpenSurfaces and Friction dataset (OSA+F)

This dataset was originally targeted at studying human perception of friction from vision. It is based on a subset of the OpenSurfaces dataset of Bell et al. [1] and the additional texture attributes of Cimpoi et al. [2]. In addition to the image, label, and gloss data they provide, we obtained human (visual) judgements of friction.

The dataset consists of several measurements on a set of 96 walkable surfaces.

References:

  1. S. Bell, P. Upchurch, N. Snavely, and K. Bala, “OpenSurfaces: A Richly Annotated Catalog of Surface Appearance,” ACM Transactions on Graphics (SIGGRAPH), vol. 32, no. 4, 2013.
  2. M. Cimpoi, S. Maji, and A. Vedaldi, “Deep filter banks for texture recognition and segmentation,” in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, June 2015, pp. 3828–3836.

Subjective ranking of material slipperiness

This dataset was originally targeted at understanding whether text mining of large text sources (e.g. Wikipedia) can predict humans’ intuitive ranking of materials by friction. Brandao et al. (2016) show some promising results.

The dataset consists of a set of 19 different materials, each ordered from most to least slippery by 19 human subjects. The rankings were collected through an online survey without image support, using material names only.
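As a sketch of how agreement between a predicted ordering and the human rankings could be quantified, the snippet below computes a Spearman rank correlation per subject. The material names and orderings are placeholders, not values taken from the dataset.

  from scipy.stats import spearmanr

  # Placeholder orderings, most to least slippery (not values from the dataset).
  predicted_order = ["ice", "glass", "metal", "wood", "rubber"]
  subject_orders = [
      ["ice", "metal", "glass", "wood", "rubber"],
      ["glass", "ice", "metal", "rubber", "wood"],
  ]

  # Turn each ordering into a rank per material, then correlate with the prediction.
  materials = sorted(predicted_order)
  predicted_ranks = [predicted_order.index(m) for m in materials]
  for i, order in enumerate(subject_orders):
      subject_ranks = [order.index(m) for m in materials]
      rho, p_value = spearmanr(predicted_ranks, subject_ranks)
      print(f"subject {i}: Spearman rho = {rho:.2f} (p = {p_value:.2f})")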