Different legged robot locomotion controllers offer different advantages, ranging from speed of motion to energy use, computational demand, and safety. In this paper we propose a method for planning locomotion with multiple controllers and sub-planners, explicitly considering the multi-objective nature of the legged locomotion planning problem. The planner first obtains body paths extended with a choice of controller or sub-planner, and then fills the gaps by sub-planning. The method leads to paths with a mix of static and dynamic walking, in which footsteps are planned only where necessary. We show that it is faster than pure footstep-planning methods in both computation (2x) and mission time (1.4x), and safer than pure dynamic-walking methods. In addition, we propose two methods for aggregating the multiple objectives in search-based planning that reach desirable trade-offs without weight tuning. We show that they reach desirable Pareto-optimal solutions up to 8x faster than fairly tuned traditional weighted-sum methods. Our conclusions are drawn from a combination of planning, physics simulation, and real robot experiments.
In this paper we evaluate the age and gender bias in state-of-the-art pedestrian detection algorithms. These algorithms are used by mobile robots such as autonomous vehicles for locomotion planning and control. Therefore, performance disparities could lead to disparate impact in the form of biased crash outcomes. Our analysis is based on the INRIA Person Dataset extended with child, adult, male and female labels. We show that all of the 24 top-performing methods of the Caltech Pedestrian Detection Benchmark have higher miss rates on children. The difference is significant and we analyse how it varies with the classifier, features and training data used by the methods. Algorithms were also gender-biased on average but the performance differences were not significant. We discuss the source of the bias, the ethical implications, possible technical solutions and barriers to "solving" the issue.
Motivated by experiments showing that humans’ localization performance changes with walking parameters, in this paper we explore the effects of walking gait on biped humanoid localization. We focus on walking style (normal and gallop) and gait symmetry (one side slower), and we assess the performance of visual odometry (VO) and kinematic odometry algorithms for the robot’s localization. Changing the walking style from normal to gallop slightly improved the performance of the visual localization, which was related to a reduction in torques on the feet. Changing the gait temporal symmetry worsened the performance of the visual algorithms, which, according to an analysis of inertial data, is related to an increase of mechanical vibrations and camera rotations. Both changes of gait style and symmetry decreased the performance of the kinematic localization, caused by the increase of vertical ground reaction forces, to which kinematic odometry is very sensitive. These observations support our claim that gait and footstep planning could be used to improve the performance of localization algorithms in the future.
Motivated by experiments showing that humans regulate their walking speed in order to improve localization performance, in this paper we explore the effects of walking gait on biped humanoid localization. We focus on step length as a proxy for speed and because of its ready applicability to current footstep planners, and we compare the performance of three different sparse visual odometry (VO) algorithms as a function of step length: a direct, a semi-direct and an indirect algorithm. The direct algorithm’s performance decreased the longer the step lengths, which along with the analysis of inertial and force/torque data, point to a decrease in performance due to an increase of mechanical vibrations. The indirect algorithm’s performance decreased in an opposite way, i.e., showing more errors with shorter step lengths, which we show to be due to the effects of drift over time. The semi-direct algorithm showed a performance in-between the previous two. These observations show that footstep planning could be used to improve the performance of VO algorithms in the future.
Modeling heat transfer is an important problem in high-power electrical robots, as increased motor temperature leads both to lower energy efficiency and to the risk of motor damage. Power consumption itself is a strong restriction in these robots, especially for battery-powered robots such as those used in disaster response. In this paper, we propose to reduce power consumption and temperature for robots with high-power DC actuators without cooling systems, through motion planning alone. We first propose a parametric thermal model for brushless DC motors which accounts for the relationship between internal and external temperature and motor thermal resistances. Then, we introduce temperature variables and a thermal model constraint into a trajectory optimization problem, which allows for power consumption minimization or the enforcement of temperature bounds during motion planning. We show that the approach leads to qualitatively different motion compared to typical cost function choices, as well as energy consumption gains of up to 40%.
This paper addresses two issues with the development of ethical algorithms for autonomous vehicles. One is that of uncertainty in the choice of ethical theories and utility functions. Using notions of moral diversity, normative uncertainty, and autonomy, we argue that each vehicle user should be allowed to choose the ethical views by which the vehicle should act. We then deal with the issue of indirect discrimination in ethical algorithms. Here we argue that equality of opportunity is a helpful concept, which could be applied as an algorithm constraint to avoid discrimination on protected characteristics.
Complex robots such as legged and humanoid robots are often characterized by non-convex optimization landscapes with multiple local minima. Obtaining sets of these local minima has interesting applications in global optimization, as well as in smart teleoperation interfaces with automatic posture suggestions. In this paper we propose a new heuristic method to obtain sets of local minima, which is to run multiple minimization problems initialized around a local maximum. The method is simple, fast, and produces diverse postures from a single nominal posture. Results on the robot WAREC using a sum-of-squared-torques cost function show that our method quickly obtains lower-cost postures than typical random-restart strategies. We further show that the obtained postures are more diverse than when sampling around nominal postures, and that they are more likely to be feasible when compared to a uniform sampling strategy. We also show that its lack of completeness means the method is most useful when computation has to be fast, but not on very large computation-time budgets.
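The core heuristic above can be sketched in a few lines. The cost function, optimizer and all constants below are illustrative toys (the paper works with full robot postures and a sum-of-squared-torques cost): several local searches are seeded in a small neighborhood of a known local maximum so that they descend into distinct basins.

```python
import random

def cost(x):
    # Toy multimodal cost: a local maximum at x = 0, minima at x = -1 and x = 1.
    return (x * x - 1.0) ** 2

def grad(x, h=1e-6):
    # Central finite-difference gradient of the cost.
    return (cost(x + h) - cost(x - h)) / (2.0 * h)

def descend(x, lr=0.05, iters=500):
    # Plain gradient descent as the local minimizer.
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def minima_around_maximum(x_max, n_starts=8, spread=0.1, seed=0):
    # The heuristic: initialize several minimizations in a small
    # neighborhood of a local maximum so they fall into distinct basins.
    rng = random.Random(seed)
    starts = [x_max + rng.uniform(-spread, spread) for _ in range(n_starts)]
    return sorted({round(descend(x), 3) for x in starts})

print(minima_around_maximum(0.0))  # both minima recovered: [-1.0, 1.0]
```

Starting near the local maximum works because the basins of several distinct minima meet there, so even tiny perturbations send different runs to different minima.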
Trajectory optimization and posture generation are hard problems in robot locomotion, which can be non-convex and have multiple local optima. Progress on these problems is further hindered by a lack of open benchmarks, since comparisons of different solutions are difficult to make. In this paper we introduce a new benchmark for trajectory optimization and posture generation of legged robots, using a pre-defined scenario, robot and constraints, as well as evaluation criteria. We evaluate state-of-the-art trajectory optimization algorithms based on sequential quadratic programming (SQP) on the benchmark, as well as new stochastic and incremental optimization methods borrowed from the large-scale machine learning literature. Interestingly, we show that some of these stochastic and incremental methods, which are based on stochastic gradient descent (SGD), achieve higher success rates than SQP on tough initializations. Inspired by this observation we also propose a new incremental variant of SQP which updates only a random subset of the costs and constraints at each iteration. The algorithm is the best performer in both success rate and convergence speed, improving over SQP by up to 30% in both criteria. The benchmark’s resources and a solution evaluation script are made openly available.
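The incremental idea, updating against only a random subset of cost terms at each iteration, can be illustrated on a toy unconstrained least-squares problem. This sketch uses plain stochastic gradient steps rather than SQP, and every name and constant is illustrative:

```python
import random

# Toy problem: fit y = a*x + b to noise-free data, where each data point
# contributes one squared-residual cost term.  Only a random subset of the
# terms is used at each update, mirroring the incremental scheme above.
DATA = [(x, 2.0 * x + 1.0) for x in range(10)]

def step(a, b, batch, lr=0.01):
    # Gradient of the squared residuals in the sampled subset only.
    ga = gb = 0.0
    for x, y in batch:
        r = a * x + b - y
        ga += 2.0 * r * x
        gb += 2.0 * r
    return a - lr * ga / len(batch), b - lr * gb / len(batch)

def incremental_fit(iters=5000, batch_size=3, seed=1):
    rng = random.Random(seed)
    a = b = 0.0
    for _ in range(iters):
        a, b = step(a, b, rng.sample(DATA, batch_size))
    return a, b

a, b = incremental_fit()  # converges close to a = 2, b = 1
```

The full incremental SQP variant would additionally linearize a random subset of the constraints at each iteration; the subset sampling shown here is the shared ingredient.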
Friction estimation from vision is an important problem for robot locomotion through contact. The problem is challenging due to its dependence on many factors such as material, surface conditions and contact area. In this paper we 1) conduct an analysis of image features that correlate with humans’ friction judgements; and 2) compare algorithmic to human performance at the task of predicting the coefficient of friction between different surfaces and a robot’s foot. The analysis is based on two new datasets which we make publicly available. One is annotated with human judgements of friction, illumination, material and texture; the other is annotated with the static coefficient of friction (COF) of a robot’s foot and human judgements of friction. We propose and evaluate visual friction prediction methods based on image features, material class and text mining. Finally, we draw conclusions regarding the robustness to COF uncertainty required by control and planning algorithms; the low performance of humans at the task when compared to simple predictors based on material labels; and the promising use of text mining to estimate friction from vision.
In this paper we tackle the problem of visually predicting surface friction for environments with diverse surfaces, and integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion since diverse surfaces with varying friction abound in the real world, from wood to ceramic tiles, grass or ice, which may cause difficulties or huge energy costs for robot locomotion if not considered. We propose to estimate friction and its uncertainty from visual estimation of material classes using convolutional neural networks, together with probability distribution functions of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planning method for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain, which considers not only friction and its uncertainty, but also collision, stability and trajectory cost. We show promising friction prediction results in real pictures of outdoor scenarios, and planning experiments on a real robot facing surfaces with different friction.
Energy efficiency and robustness of locomotion to different terrain conditions are important problems for humanoid robots deployed in the real world. In this paper, we propose a footstep-planning algorithm for humanoids that is applicable to flat, slanted, and slippery terrain, which uses simple principles and representations gathered from the human gait literature. The planner optimizes a center-of-mass (COM) mechanical work model subject to motion feasibility and ground friction constraints using a hybrid A* search and optimization approach. Footstep placements and orientations are discrete states searched with an A* algorithm, while other relevant parameters are computed through continuous optimization on state transitions. These parameters are also inspired by the human gait literature and include footstep timing (double-support and swing time) and parameterized COM motion using knee flexion angle keypoints. The planner relies on work, required coefficient of friction (RCOF), and feasibility models that we estimate in a physics simulation. We show through simulation experiments that the proposed planner leads to both low electrical energy consumption and human-like motion in a variety of scenarios. Using the planner, the robot automatically decides between avoiding and (slowly) traversing slippery patches depending on their size and friction, and it chooses energy-optimal stairs and climbing angles on slopes. The obtained motion is also consistent with observations found in the human gait literature, such as human-like changes in RCOF, step length and double-support time on slippery terrain, and human-like curved walking on steep slopes. Finally, we compare COM work minimization with other choices of the objective function.
Stereo confidence measures are important functions for global reconstruction methods and some applications of stereo. In this article we evaluate and compare several models of confidence which are defined over the whole disparity range. We propose a new stereo confidence measure which we call the Histogram Sensor Model (HSM), and show that it is one of the best performing functions overall. We also introduce, for parametric models, a systematic method for estimating their parameters, which is shown to lead to better performance when compared to parameters as computed in previous literature. All models were evaluated when applied to two different cost functions at different window sizes and model parameters. Contrary to previous stereo confidence measure benchmark literature, we evaluate the models with criteria important not only to winner-take-all stereo, but also to global applications. To this end, we evaluate the models on a real-world application using a recent formulation of 3D reconstruction through occupancy grids which integrates stereo confidence at all disparities. We obtain and discuss our results on publicly available indoor and outdoor datasets.
In this paper we use an extended footstep planning algorithm to plan optimal humanoid locomotion trajectories subject to constraints on the maximum predicted Zero Moment Point (ZMP) tracking error. The approach can guarantee walking stability bounds with little extra computational burden, thus increasing safety of robots walking in challenging environments. This is done by estimating energy and stability models in simulation through Bayesian optimization, and smartly integrating the models into search-based planning.
Energy consumption and stability are two important problems for humanoid robots deployed in remote outdoor locations. In this paper we propose an extended footstep planning method to optimize energy consumption while considering motion feasibility and ground friction constraints. To do this we estimate models of energy, feasibility and slippage in physics simulation, and integrate them into a hybrid A* search and optimization-based planner. The graph search is done in footstep position space, while timing (leg swing and double-support times) and COM motion (parameterized height trajectory) are obtained by solving an optimization problem at each node. We conducted experiments to validate the obtained energy model on the real robot, as well as planning experiments showing 9 to 19% energy savings. In example scenarios, the robot can correctly plan to optimally traverse slippery patches or to avoid them depending on their size and friction, and it uses stairs with the most beneficial dimensions in terms of energy consumption.
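The hybrid structure described above, a discrete search over footsteps with a continuous optimization solved at each transition, can be sketched as follows. The 1D footstep space, the per-step energy model and all constants are illustrative stand-ins, not the paper's models:

```python
import heapq

def transition_cost(step_len):
    # Inner continuous optimization: choose the swing time minimizing a toy
    # per-step energy model (quadratic in foot speed, linear in duration).
    best = float("inf")
    for i in range(20, 101):
        t = i / 100.0                      # candidate swing time in [0.2, 1.0] s
        speed = step_len / t
        best = min(best, 0.5 * speed ** 2 + 0.3 * t)
    return best

def plan(start, goal, step_lens=(0.2, 0.3, 0.4)):
    # Outer discrete search over 1D footstep positions (uniform-cost search);
    # every edge cost comes from the inner optimization above.
    frontier = [(0.0, start, [start])]
    settled = {}
    while frontier:
        g, x, path = heapq.heappop(frontier)
        if x >= goal:
            return g, path
        key = round(x, 3)
        if settled.get(key, float("inf")) <= g:
            continue
        settled[key] = g
        for s in step_lens:
            heapq.heappush(frontier, (g + transition_cost(s), x + s, path + [x + s]))
    return None

best_cost, footsteps = plan(0.0, 1.0)
```

Because the inner optimizer prices each candidate step, the outer search trades step count against per-step cost automatically, which is the same mechanism that lets the full planner pick energy-optimal stair dimensions.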
The Uncanny valley hypothesis, which tells us that almost-human characteristics in a robot or a device could cause uneasiness in human observers, is an important research theme in the Human Robot Interaction (HRI) field. Yet, the phenomenon is still not well understood. Many have investigated the external design of humanoid robot faces and bodies, but only a few studies have focused on the influence of robot movements on our perception and feelings of the Uncanny valley. Moreover, no research has investigated the possible relation between our feelings of uneasiness and whether or not we would accept robots having a job in an office, a hospital or elsewhere. To better understand the Uncanny valley, we explore several factors which might have an influence on our perception of robots, be they related to the subjects, such as culture or attitude toward robots, or to the robot itself, such as the emotions and emotional intensity displayed in its motion. We asked 69 subjects (N = 69) to rate the motions of a humanoid robot (Perceived Humanity, Eeriness, and Attractiveness) and to state where they would rather see the robot performing a task. Our results suggest that, among the factors we chose to test, the attitude toward robots is the main influence on the perception of the robot related to the Uncanny valley. Robot occupation acceptability was affected only by Attractiveness, mitigating any Uncanny valley effect. We discuss the implications of these findings for the Uncanny valley and the acceptability of a robotic worker in our society.
We propose a new biped locomotion planning method that optimizes locomotion speed subject to friction constraints. For this purpose we use approximate models of the required coefficient of friction (RCOF) as a function of gait. The methodology is inspired by findings in human gait analysis, where subjects have been shown to adapt spatial and temporal variables of gait in order to reduce RCOF in slippery environments. Here we solve the friction problem similarly, by planning in gait parameter space: namely footstep placement, step swing time, double-support time and height of the center of mass (COM). We first used simulations of a 48-degree-of-freedom robot to estimate a model of how RCOF varies with these gait parameters. Then we developed a locomotion planning algorithm that minimizes the time the robot takes to reach a goal while keeping acceptable RCOF levels. Our physics simulation results show that RCOF-aware planning can drastically reduce slippage while still maximizing efficiency in terms of locomotion speed. Also, according to our experiments, human-like stretched-knee walking can reduce slippage more than bent-knee (i.e. crouch) walking at the same speed.
We present a grid-based 3D reconstruction method which integrates all costs given by stereo vision into what we call a Cost-Curve Occupancy Grid (CCOG). Occupancy probabilities of grid cells are estimated in a Bayesian formulation, from the likelihood of stereo cost measurements taken at all distance hypotheses. This is accomplished with only a small set of probabilistic assumptions which we discuss in the paper. We quantitatively characterize the method’s performance under different conditions of image noise and number of stereo pairs used, comparing it also to traditional algorithms. We complement the study with insights on design choices of CCOGs such as the likelihood model, the window size of the cost function and the use of a hole-filling method. Experiments were made on a real-world outdoor dataset with ground-truth data.
Humanoid robots have the formidable advantage of possessing a body quite similar in shape to that of humans. This body grants them not only locomotion but also a medium for expressing emotions without even needing a face. In this paper we propose to study the effects of emotional gaits of our biped humanoid robot on the subjects’ perception of the robot (recognition rate of the emotions, reaction time, anthropomorphism, safety, likeness, etc.). We made the robot walk towards the subjects with different emotional gait patterns. We assessed positive (Happy) and negative (Sad) emotional gait patterns on 26 subjects divided into two groups (according to whether or not they were familiar with robots). We found that even though the recognition of the different types of patterns does not differ between groups, the reaction time does. We found that emotional gait patterns affect the perception of the robot. The implications of the current results for Human Robot Interaction (HRI) are discussed.
We describe our recent developments in probabilistic modeling of 3D reconstruction with stereo vision, applied to planning strategies for locomotion and gaze. We first overview the use of probabilistic occupancy grids for 3D reconstruction, and the sensor models of stereo best suited to the problem. These grids are then used for robot navigation, which is tackled at two levels: 1) At the locomotion level, trajectories are computed from the grid using an A* search algorithm that minimizes the total probability of occupancy over the trajectory. 2) At the grid level, we propose two task-relevant active strategies which redirect the sensor to "maximum visible entropy" and "maximum visible occupancy" points along the planned locomotion trajectories. Steps 1) and 2) are executed alternately until the locomotion trajectory converges to a high certainty, safe solution. Results of the proposed gaze and locomotion planning strategies were obtained on simulated scenarios and a real robot. Estimates of the uncertainty that occupancy grids are subjected to in real outdoor scenarios were computed for different stereo sensor models. These estimates were used in active gaze simulations for an extensive comparison of gaze strategies across 400 randomly generated environments. The results show that careful modeling of stereo vision sensor uncertainty and the proposed task-relevant planning strategies lead to more complete and consequently collision-free reconstructions of the environment along planned robot trajectories.
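A minimal sketch of step 1) above, an A* search over an occupancy grid that minimizes the total probability of occupancy along the trajectory, can be written directly. The 4x4 grid, its cell values and the start/goal below are illustrative:

```python
import heapq
import math

# A 4x4 occupancy grid (probability each cell is occupied); a likely-occupied
# block forces the planner around the direct route.  Values are illustrative.
P = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]

def safest_path(start, goal):
    # Cost of entering a cell is -log(1 - p_occ), so the total path cost is
    # the negative log-probability that the whole trajectory is free.
    # Heuristic: Manhattan distance times the cheapest per-cell cost
    # (admissible and consistent).
    n = len(P)
    cmin = min(-math.log(1.0 - p) for row in P for p in row)
    def h(cell):
        return cmin * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))
    frontier = [(h(start), 0.0, start, [start])]
    settled = {}
    while frontier:
        _, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if settled.get(cell, math.inf) <= g:
            continue
        settled[cell] = g
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < n and 0 <= nc < n:
                g2 = g - math.log(1.0 - P[nr][nc])
                heapq.heappush(frontier, (g2 + h((nr, nc)), g2, (nr, nc), path + [(nr, nc)]))
    return None

path = safest_path((0, 0), (3, 3))
```

Summing negative log-probabilities makes products of per-cell free probabilities additive, which is what lets an additive shortest-path algorithm minimize the trajectory's collision probability.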
We describe a learning strategy that allows a humanoid robot to autonomously build a representation of its workspace: we call this representation Reachable Space Map. Interestingly, the robot can use this map to (i) estimate the Reachability of a visually detected object (i.e. judge whether the object can be reached for, and how well, according to some performance metric) and (ii) modify its body posture or its position with respect to the object to achieve better reaching. The robot learns this map incrementally during the execution of goal-directed reaching movements; reaching control employs kinematic models that are updated online as well. Our solution is innovative with respect to previous works in three aspects: the robot workspace is described using a gaze-centered motor representation, the map is built incrementally during the execution of goal-directed actions, learning is autonomous and online. We implement our strategy on the 48-DOFs humanoid robot Kobian and we show how the Reachable Space Map can support intelligent reaching behavior with the whole-body (i.e. head, eyes, arm, waist, legs).
Extensive literature has been written on occupancy grid mapping for different sensors. When stereo vision is applied to the occupancy grid framework it is common, however, to use sensor models that were originally conceived for other sensors such as sonar. Although sonar provides a distance to the nearest obstacle for several directions, stereo has confidence measures available for each distance along each direction. The common approach is to take the highest-confidence distance as the correct one, but such an approach disregards mismatch errors inherent to stereo. In this work, stereo confidence measures over the whole sensed space are explicitly integrated into 3D grids using a new occupancy grid formulation. Confidence measures themselves are used to model uncertainty, and their parameters are computed automatically in a maximum-likelihood approach. The proposed methodology was evaluated in both simulation and a real-world outdoor dataset which is publicly available. The mapping performance of our approach was compared with a traditional approach and shown to achieve fewer errors in the reconstruction.
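The idea of integrating the whole confidence curve along a ray, rather than only its winner-take-all peak, can be sketched with a standard log-odds occupancy update. The inverse sensor model below (normalizing the curve and squashing it into a probability) is a hypothetical stand-in for the maximum-likelihood models of the paper:

```python
import math

def logodds(p):
    # Log-odds of a probability, the standard occupancy-grid state variable.
    return math.log(p / (1.0 - p))

def fuse_ray(grid, confidence_curve, prior=0.5):
    # Instead of committing to the single highest-confidence distance,
    # convert the whole confidence curve along the ray into a per-cell
    # occupancy probability and fuse each cell with the Bayesian log-odds
    # update.  The squashing constants are illustrative.
    total = sum(confidence_curve)
    for i, c in enumerate(confidence_curve):
        p_occ = 0.05 + 0.9 * (c / total)   # keep probabilities in (0.05, 0.95)
        grid[i] += logodds(p_occ) - logodds(prior)
    return grid

cells = fuse_ray([0.0] * 5, [0.1, 0.1, 0.6, 0.1, 0.1])  # confidence peaks at the 3rd cell
```

After the update, the cell under the confidence peak has positive log-odds (likely occupied) while the others become slightly more likely free, so mismatch errors that spread confidence over several distances are preserved rather than discarded.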
Robots depend on a world map representation in order to navigate in it. Only part of the space around the agent can be sensed at any given time, so measures must be taken to reduce the uncertainty of this map and the likelihood of collision. In this work we propose the use of a probabilistic occupancy grid to guide the active gaze of the robot in the “walk to target” task. A map uncertainty measure is proposed, as is a method for choosing gaze points along the robot’s computed trajectory to anticipate the need for trajectory changes. Gaze points are chosen from the whole space volume the robot will traverse. Then, robot trajectories are computed directly on the probabilistic map in order to drive the robot towards free-space areas of high confidence. A preliminary evaluation of the approach is done in a real scenario using the humanoid robot KOBIAN for the preparatory gaze-exploration task necessary for safe trajectory planning to a target.
We present a novel control architecture for the integration of visually guided walking and whole-body reaching in a humanoid robot. We propose to use robot gaze as a common reference frame for both locomotion and reaching, as suggested by behavioral neuroscience studies in humans. A gaze controller allows the robot to track and fixate a target object, and motor information related to gaze control is then used to i) estimate the reachability of the target, ii) steer locomotion, iii) control whole-body reaching. The reachability is a measure of how well the object can be reached for, depending on the position and posture of the robot with respect to the target, and it is obtained from the gaze motor information using a mapping that has been learned autonomously by the robot through motor experience: we call this mapping Reachable Space Map. In our approach, both locomotion and whole-body movements are seen as ways to maximize the reachability of a visually detected object, thus i) expanding the robot workspace to the entire visible space and ii) exploiting the robot redundancy to optimize reaching. We implement our method on a full 48-DOF humanoid robot and provide experimental results in the real world.
Humanoid robots are complex sensorimotor systems where the existence of internal models is of utmost importance, both for control purposes and for predicting the changes in the world arising from the system’s own actions. This so-called expected perception relies on the existence of accurate internal models of the robot’s sensorimotor chains. We assume that the kinematic model is known in advance but that the absolute offsets of the different axes cannot be directly retrieved from encoders. We propose a method to estimate such parameters, the zero positions of the joints of a humanoid robotic head, by relying on proprioceptive sensors such as relative encoders, inertial sensing and visual input. We show that our method can estimate the correct offsets of the different joints (i.e. absolute positioning) in a continuous, online manner. Not only is the method robust to noise, but it can also cope with and adjust to abrupt changes in the parameters. Experiments with three different robotic heads are presented and illustrate the performance of the methodology as well as the advantages of using such an approach.
Tracking an object’s 3D position and orientation from a color image can be accomplished with particle filters if its color and shape properties are known. Unfortunately, initialization in particle filters is often manual or random, thus rendering the tracking recovery process slow or no longer autonomous. A method that uses image data to generate likely pose hypotheses for known objects is proposed. These generated pose hypotheses are then used to guide visual attention and computational resources in a “top-down” tracking system such as a particle filter, speeding up the tracking process and making it more robust to unpredictable movement.
Tracking an object’s 3D pose from a color image can be accomplished with particle filters if its color and shape properties are known a priori. Unfortunately, initialization in particle filters is often manual or random, thus rendering the tracking recovery process slow or no longer autonomous. A method that uses existing object information to better decide where to automatically start or recover the tracking process is proposed. Each 3D pose of an object is observed as a 2D shape, so a training stage is used to infer pose from image information. The object is first segmented by color, its shape is then described using geometric moments, and finally a learning stage maps 2D shapes to 3D poses with an associated likelihood measure.
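The final mapping stage can be sketched as a nearest-neighbor lookup in descriptor space: each stored (2D descriptor, 3D pose) pair votes with a likelihood that decays with descriptor distance. The descriptors, poses and kernel width below are all hypothetical, not the trained models of the paper:

```python
import math

# Hypothetical training table: 2D shape descriptors (e.g. a pair of
# normalized geometric moments) paired with the 3D pose that produced them.
TRAINING = [
    ((0.20, 0.01), (0.0, 0.0, 0.0)),    # frontal view
    ((0.35, 0.10), (0.0, 45.0, 0.0)),   # rotated 45 degrees about the Y axis
    ((0.50, 0.22), (0.0, 90.0, 0.0)),   # side view
]

def pose_hypotheses(descriptor, sigma=0.1):
    # Map an observed 2D shape descriptor to candidate 3D poses, each with a
    # Gaussian likelihood that decays with distance in descriptor space.
    # Such hypotheses can then seed (or re-seed) a particle filter.
    scored = []
    for d, pose in TRAINING:
        dist2 = sum((a - b) ** 2 for a, b in zip(descriptor, d))
        scored.append((pose, math.exp(-dist2 / (2.0 * sigma ** 2))))
    scored.sort(key=lambda t: -t[1])
    return scored

hyps = pose_hypotheses((0.34, 0.09))  # descriptor closest to the 45-degree entry
```

Returning the full ranked list, rather than only the best match, is what makes the output usable as a set of weighted initialization hypotheses instead of a single hard decision.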