Prof. Roland Johansson

Title: Tactile macrogeometric sensing supporting dexterous object manipulation

Abstract:

The populations of first order tactile afferent neurons that innervate the inside of the hand signal soft tissue transformations that occur when the hand interacts with objects, thus providing moment-to-moment information about the contact state between the object and the hand. A critical aspect of dexterous object manipulation is that the relevant spatiotemporal tactile information is sufficiently accurate and that it can be used quickly and efficiently by the brain. My talk addresses encoding and use of tactile afferent information in the control of manual dexterity. In particular, I discuss how the peripheral organization of first order tactile neurons in humans might promote rapid and automatic processing of macrogeometric tactile information required to control fingertips in fine manipulation tasks. The emphasis is on population coding and representation of object location and orientation within fingertips based on objects’ edge-based features.

Prof. Allison Okamura

Title: What can we learn from human touch in virtual and teleoperated environments?

Abstract:

When we design haptic feedback systems for humans in virtual and teleoperated environments, they are inherently limited in modality, degrees of freedom, and time response. Yet, humans can often use this limited feedback quite effectively to explore their environments, perform manipulation tasks, and even communicate emotion. I propose that observing and understanding “sensory deprived” human behaviors in these scenarios can be used to inspire new approaches for robot touch. In this talk, I will discuss recent developments in compelling haptic displays that could inform and drive methods for sensor design and perception algorithms for touch in autonomous robots.

Prof. Akihiko Yamaguchi

Title: Visuotactile sensor FingerVision for deformable and fragile object manipulation

Abstract:

FingerVision is a vision-based tactile sensor consisting of elastic, transparent skin and cameras. It provides robots with multimodal sensation, including force and slip distributions, as well as information about nearby objects such as position, orientation, and texture. In this talk, I will demonstrate the use of FingerVision in the manipulation of deformable and fragile objects. Its high-resolution slip detection increases the robustness of grasping, and it enables robots to grasp objects with a sense of touch, which improves the sample efficiency of learning to grasp. Another feature of FingerVision is that it provides additional modalities by analyzing the (proximity) vision channel; for example, one application is inspecting manipulated objects such as food products.
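The abstract does not spell out the slip-detection algorithm. As a rough, hedged illustration of how a marker-based optical tactile sensor can detect slip, the hypothetical sketch below flags slip when tracked marker displacements deviate from a common rigid translation; the function name and threshold are assumptions, not FingerVision's actual implementation.

```python
import numpy as np

def slip_score(markers_prev, markers_curr):
    """Residual (non-rigid) marker motion as a proxy for incipient slip.

    markers_prev, markers_curr: (N, 2) arrays of tracked marker positions.
    A pure translation of the whole marker field (object held firmly)
    leaves zero residual; partial slip makes markers move inconsistently.
    """
    disp = markers_curr - markers_prev    # per-marker displacement
    residual = disp - disp.mean(axis=0)   # remove the common translation
    return float(np.linalg.norm(residual, axis=1).mean())

# Synthetic example: uniform motion (no slip) vs. divergent motion (slip).
rng = np.random.default_rng(0)
p0 = rng.uniform(0, 100, size=(25, 2))
print(slip_score(p0, p0 + [0.5, 0.0]))        # ~0: whole field translates
print(slip_score(p0, p0 + 0.05 * (p0 - 50)))  # >0: markers diverging

SLIP_THRESHOLD = 0.5  # hypothetical; would be tuned per sensor
```

In practice the marker positions would come from tracking the skin's dot pattern in the camera image, and the threshold would be tuned against labeled grasp trials.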

Prof. Ravinder Dahiya

Title: Self-powered tactile skin

Abstract:

Tactile or electronic skin is critical for haptic perception in robots, prosthetics, and wearable electronics. The field has received significant attention in terms of developing various types of sensors and mimicking the morphology of human skin. However, practical issues, such as the energy needed to operate the large number of sensors and electronic components in e-skin, have not received much attention. The energy autonomy of skin is critical for portability and longer operation times. This talk will present recent progress toward energy-autonomous, or self-powered, skin. The talk will include examples of energy-autonomous flexible tactile skin made by integrating advanced materials such as graphene on top of flexible photovoltaic cells.

Dr. Robert Haschke

Title: Tactile sensing for manipulation and exploration

Abstract: 

First, I will give an overview of the zoo of tactile sensors developed at Bielefeld University, including MID fingertip sensors, a human-wearable tactile data glove, and a newly developed tactile-sensitive fingernail for the Shadow Robot hands. Subsequently, I will summarize our tactile servoing control framework, which allows for tactile-based manipulation and object exploration. Some new applications employing this framework will be presented.
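As a hedged illustration of the tactile servoing idea (not the Bielefeld implementation), the hypothetical sketch below computes a proportional fingertip correction that re-centers the contact on a pressure array and regulates the total normal force; all gains and names are invented for illustration.

```python
import numpy as np

def tactile_servo_step(pressure_img, f_des=1.0, gain_pos=0.01, gain_f=0.005):
    """One proportional tactile-servoing update.

    pressure_img: 2D array of taxel pressures.
    Returns a small fingertip motion (dx, dy, dz) that re-centers the
    contact on the array and drives total normal force toward f_des.
    """
    total = pressure_img.sum()
    if total <= 0:
        return np.zeros(3)  # no contact: do nothing (or start a search)
    ys, xs = np.indices(pressure_img.shape)
    cx = (xs * pressure_img).sum() / total  # pressure-weighted centroid
    cy = (ys * pressure_img).sum() / total
    ex = cx - (pressure_img.shape[1] - 1) / 2   # centroid error in x
    ey = cy - (pressure_img.shape[0] - 1) / 2   # centroid error in y
    ez = f_des - total                          # force error along normal
    return np.array([-gain_pos * ex, -gain_pos * ey, gain_f * ez])

img = np.zeros((16, 16))
img[2:5, 10:13] = 0.1   # contact patch off-center, total force 0.9
print(tactile_servo_step(img))
```

Iterating such updates slides the contact toward the sensor center while maintaining a desired contact force, which is the basic primitive behind tactile-guided exploration.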

Prof. Matei Ciocarlie

Title: A Touch Sensor Designed for Learning-based Use: Successes, Limitations and Pitfalls

Abstract:

This talk will present an approach to tactile sensing designed from the ground up for machine learning: our optics-based touch sensors aim to generate data sets as rich as possible, exhibiting strong variations as touch properties change, but with no attempt to build analytical models of how these variations occur. Specifically, we embed numerous terminals into our sensors, and hope for significant cross-talk for any touch that takes place. We then extract information from this rich data set in a purely learned fashion, by collecting controlled, labeled touch data for training. This approach has led to promising results (sub-millimeter localization accuracy with few wires and simple manufacturing), but also highlighted some of the limitations and potential pitfalls of purely data-driven sensors. This talk will present our main results and the lessons learned.
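To make the collect-labeled-touches-then-regress pipeline concrete, here is a minimal toy sketch under assumed signal models; the simulated terminal response and the plain least-squares regressor are illustrative stand-ins for the real sensor and whatever learned model the group actually uses.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor: 8 optical terminals at fixed spots under the skin.
terminals = rng.uniform(0, 20, size=(8, 2))   # positions in mm

def read_sensor(contact_xy):
    """Simulated raw signals: each terminal's light level varies with
    distance to the contact, with deliberate cross-talk and noise."""
    d = np.linalg.norm(terminals - contact_xy, axis=1)
    return np.exp(-d / 5.0) + 0.01 * rng.standard_normal(8)

# Controlled, labeled touch data: probe known locations, record signals.
X = rng.uniform(0, 20, size=(2000, 2))          # probe locations (labels)
S = np.array([read_sensor(x) for x in X])       # raw signals (inputs)

# Purely learned signal -> location map (here: linear least squares).
S1 = np.hstack([S, np.ones((len(S), 1))])       # bias column
W = np.linalg.lstsq(S1, X, rcond=None)[0]

test = np.array([12.0, 7.0])
pred = np.append(read_sensor(test), 1.0) @ W
print("true", test, "predicted", pred)
```

A richer model (e.g., a small neural network) would be the natural replacement for the linear map when the signal-to-location relationship is strongly nonlinear.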

Prof. Huaping Liu

Title: Bridging the gap between tactile and visual modalities

Abstract:

Owing to their relevance in engineering domains and their implications for natural perception, tactile sensors have attracted increasing attention in cognitive systems and neurorobotics. The tactile modality provides rich information about the hardness, roughness, and texture of material surfaces, so tactile sensing plays an increasingly important role in cognitive object recognition, robotic grasping, and localization. Despite the significant achievements of tactile sensors, there still exists a great gap between the true tactile properties and the measured tactile characteristics. This tactile information exchange problem is especially significant in Internet shopping, which is riskier than traditional shopping because shoppers cannot physically examine the product and lack personal contact. In the clothing and textile category, touch is very relevant, as it plays a dual role in evaluating physical attributes of the product such as texture. Inspired by findings from cognitive psychology, this work takes a step toward answering the question: is there a compensatory mechanism for the inability to touch in an online context? When encoding properties of familiar objects, vision may be sufficient, because visual recognition of an object may rapidly trigger the retrieval of information about its properties stored in memory, eliminating the need for direct perceptual encoding through tactile exploration. In this work, we borrow the idea of cross-modal correspondence to partially solve this problem. We establish a cross-modal matching framework for the visual and tactile modalities. This problem poses a non-trivial challenge: there is no sample-to-sample pairing between the tactile and visual modalities. Our experimental results show that, using an appropriate training dataset consisting of weakly paired visual and tactile samples, we can establish an effective visual-tactile cross-modal matching method.
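As a hedged illustration of cross-modal matching with weakly paired data (only category labels, never individual samples, are shared across modalities), the toy sketch below aligns per-category feature means with a linear map. The data model and the mean-alignment method are assumptions for illustration, not the authors' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cat, latent, dv, dt = 6, 4, 16, 8
Av = rng.standard_normal((dv, latent))    # visual "rendering" of latent
At = rng.standard_normal((dt, latent))    # tactile "rendering" of latent
Z = rng.standard_normal((n_cat, latent))  # one latent vector per category

def sample(c, proj, dim):
    return proj @ Z[c] + 0.1 * rng.standard_normal(dim)

# Weakly paired training data: categories match, samples do not.
cats = rng.integers(0, n_cat, size=400)
V = np.array([sample(c, Av, dv) for c in cats])
T = np.array([sample(c, At, dt) for c in cats])

# Align category means: learn a linear map from tactile to visual space.
Vm = np.array([V[cats == c].mean(axis=0) for c in range(n_cat)])
Tm = np.array([T[cats == c].mean(axis=0) for c in range(n_cat)])
W = np.linalg.lstsq(Tm, Vm, rcond=None)[0]

# Match a new tactile reading to the nearest visual category mean.
true_cat = 3
t_query = sample(true_cat, At, dt)
dists = np.linalg.norm(Vm - t_query @ W, axis=1)
print("true:", true_cat, "matched:", int(dists.argmin()))
```

The key point the toy preserves is that alignment is learned only from category-level statistics, since no tactile sample has a designated visual partner.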

Prof. Oliver Kroemer

Title: Learning to Monitor and Adapt Manipulation Skills based on Tactile Events

Abstract:

Contact states are fundamental to manipulation tasks, as they determine which objects the robot's actions will directly affect. A change in the contact state, e.g., making, breaking, or slipping contacts, often corresponds to a subgoal of the task or an error, depending on the context. In this talk, I will discuss methods for learning to detect these types of contact events using tactile sensing. I will also explain how the robot can use this contact information to monitor and adapt its manipulation skills in order to perform tasks more robustly.
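The talk concerns learned event detectors; as a minimal hand-crafted stand-in, the hypothetical sketch below labels make, break, and slip events in a one-dimensional normal-force trace using fixed thresholds that a learned classifier over richer tactile features would replace.

```python
import numpy as np

def contact_events(force, contact_thresh=0.2, slip_thresh=0.5):
    """Label contact events in a 1-D normal-force trace.

    Returns (time index, event) pairs for making contact, breaking
    contact, and slip (crudely proxied here by a sudden force drop
    while contact persists). Thresholds are illustrative placeholders.
    """
    events = []
    in_contact = False
    for t in range(1, len(force)):
        if not in_contact and force[t] >= contact_thresh:
            in_contact = True
            events.append((t, "make"))
        elif in_contact and force[t] < contact_thresh:
            in_contact = False
            events.append((t, "break"))
        elif in_contact and force[t - 1] - force[t] > slip_thresh:
            events.append((t, "slip"))
    return events

# Toy trace: free motion, loading, a sudden slip, steady hold, release.
f = np.concatenate([np.zeros(5), np.linspace(0.3, 2.0, 10),
                    [1.0], np.full(5, 1.0), np.zeros(5)])
print(contact_events(f))   # [(5, 'make'), (15, 'slip'), (21, 'break')]
```

Depending on the task context, each detected event would then be interpreted as a subgoal reached (e.g., "make" during grasping) or an error to recover from (e.g., "slip" during transport).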

Prof. Kaspar Althoefer

Title: Integrating optics-based force and proximity sensors

Abstract:

With the ever-growing interest in creating intelligent manipulation devices capable of expertly interacting with their environments comes the necessity to create new sensing technologies that can accurately estimate relevant interaction parameters. Researchers face the challenge of creating sensor devices that satisfy the requirements for integration with the increasingly sophisticated robot hands and grippers being developed and, at the same time, allow a truthful haptic perception of the objects being handled. Our research focusses on creating miniature sensors suitable for embedding in the fingers and palms of robot hands to measure interaction forces, tactile information, and proximity signals. Our sensor concept is based on a low-cost optical method whereby a mechanical structure that deforms under applied forces modulates the intensity of a light beam, which in turn is interrogated by an opto-electronic system. This optics-based approach allows for the creation of sensors that are free of electrical currents at the point of sensing and, as such, are particularly suited for safe physical interaction with humans. This talk will summarize the design and integration challenges of our sensors. The talk will also describe recent extensions of the main sensing concept and the successful realisation of multi-axis force sensors and tactile array sensors integrated with robot hands and surgical instruments.
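To illustrate the intensity-modulation principle, here is a hypothetical calibration sketch: sweep known forces, record photodiode intensity, and fit an inverse polynomial. The response model and constants are invented for illustration; real sensors of this kind are characterised empirically against a reference force sensor.

```python
import numpy as np

rng = np.random.default_rng(3)

def photodiode_reading(force_n):
    """Toy monotonic intensity response: more deflection, less light.
    Purely illustrative; a real sensor's response is measured, not assumed."""
    return 1.0 / (1.0 + 0.8 * force_n) + 0.005 * rng.standard_normal()

# Calibration: apply known forces, fit intensity -> force polynomial.
forces = np.linspace(0.0, 5.0, 50)                 # reference forces (N)
intensities = np.array([photodiode_reading(f) for f in forces])
coeffs = np.polyfit(intensities, forces, deg=4)    # inverse model

# Runtime: recover force from a raw intensity sample.
raw = photodiode_reading(2.0)
print(f"estimated force: {np.polyval(coeffs, raw):.2f} N (true 2.0 N)")
```

Because the sensing element is purely optical, no electrical current flows at the contact point, which is the property the abstract highlights for safe physical human interaction.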

Dr. Lorenzo Natale

Title: Tactile perception and control on the iCub robot

Abstract:

Robots can actively interact with the environment to learn about objects and their properties, using their sensory system. To extract structured information useful for learning, however, the robot needs to be endowed with appropriate exploratory behaviors. In this talk I will review our recent work on tactile control and tactile perception on the iCub humanoid robot. I will show experiments in which the robot exploits tactile feedback to grasp and manipulate objects and to learn models for recognition. Finally, I will describe recent work in which vision and touch are integrated for tracking objects during manipulation.

Prof. Edward Adelson

Title: High Resolution 3D Touch with Soft Fingers

Abstract:

For robots to make good use of their hands, they will need fingers that are soft, sensitive, and accurate. GelSight fingers are as soft as human fingers, and have superhuman spatial resolution. Being camera-based, they don’t require custom sensory transduction (one of the main problems that has stymied tactile sensing). In spite of being soft, they provide accurate information about the shape, texture, and pose of grasped objects. They can provide full 3D information about the contact surface, including its geometry and motion in both the normal and tangential directions. They can be used to infer normal force and shear force, and can be used to detect slip. We have used them in combination with deep learning to estimate material properties like hardness, and to classify fabrics by touch.
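As a hedged sketch of how such signals might be turned into forces (not GelSight's actual algorithm), the toy code below approximates normal force from integrated indentation depth and shear force from mean tangential marker displacement; the stiffness constants are made-up placeholders for a real per-sensor calibration.

```python
import numpy as np

K_NORMAL = 2.0e-3  # N per unit of integrated depth (hypothetical)
K_SHEAR = 0.5      # N per pixel of mean marker motion (hypothetical)

def estimate_forces(depth_map, markers_prev, markers_curr):
    """Crude force estimates from camera-based elastomer measurements.

    depth_map: 2D indentation depths recovered from the gel image.
    markers_*: (N, 2) tracked surface-marker positions before/after.
    """
    normal = K_NORMAL * float(np.clip(depth_map, 0, None).sum())
    shear_vec = K_SHEAR * (markers_curr - markers_prev).mean(axis=0)
    return normal, shear_vec

depth = np.zeros((32, 32))
depth[10:20, 10:20] = 0.3   # synthetic contact patch
p0 = np.random.default_rng(4).uniform(0, 32, (30, 2))
print(estimate_forces(depth, p0, p0 + [0.8, -0.2]))
```

Slip detection then falls out of the same marker field: tangential marker motion that is inconsistent across the contact patch indicates the object sliding against the gel.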

Dr. Mohsen Kaboli

Title: New Methods for Active Tactile Object Perception and Learning with Artificial Robotic Skin

Abstract:

In this talk I will cover new methods I developed to tackle key challenges in active tactile object perception and learning in robotics. These methods include novel active pre-touch and touch-based exploration strategies for unknown workspaces, as well as robust tactile feature descriptors for perceiving the textural properties of objects. I will present the first tactile-based approach for exploring and determining the center of mass of rigid objects. Moreover, I will propose a novel probabilistic active touch learning method to efficiently learn about objects, and a new active tactile object discrimination strategy to distinguish objects by their physical properties. For the first time in the tactile learning domain, this work introduces tactile transfer learning techniques that enable robotic systems to re-use their past tactile experience (prior tactile knowledge) to learn about new objects from a small number of training samples. Furthermore, it introduces a novel tactile-based framework that enables robotic systems to safely manipulate deformable objects with a dynamic center of mass. I will also describe a novel approach for identifying the touch modality during tactile human-robot communication.
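As a toy illustration of the tactile transfer learning idea, the sketch below learns a per-feature discriminability weighting from well-sampled prior objects and reuses it as a metric to classify new objects from only a few samples. The data model and the weighted nearest-class-mean method are illustrative assumptions, not the methods presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

def make_objects(n_obj, n_samples, d=12):
    """Toy tactile features: each object is a Gaussian in feature space;
    only the first four dimensions are actually discriminative."""
    means = np.zeros((n_obj, d))
    means[:, :4] = rng.uniform(-3, 3, (n_obj, 4))
    X = np.repeat(means, n_samples, axis=0) \
        + rng.standard_normal((n_obj * n_samples, d))
    y = np.repeat(np.arange(n_obj), n_samples)
    return X, y

# Prior tactile knowledge: many samples of previously explored objects.
Xp, yp = make_objects(n_obj=10, n_samples=50)

# Transfer step: estimate per-feature discriminability (between-class
# variance over within-class variance) and reuse it as a metric.
cm = np.array([Xp[yp == c].mean(axis=0) for c in range(10)])
within = np.concatenate([Xp[yp == c] - cm[c] for c in range(10)])
w = cm.var(axis=0) / (within.var(axis=0) + 1e-9)

# New objects with very few samples: weighted nearest-class-mean.
Xn, yn = make_objects(n_obj=4, n_samples=4)
train = np.ones(len(yn), bool)
train[::4] = False                     # hold out one sample per object
protos = np.array([Xn[(yn == c) & train].mean(axis=0) for c in range(4)])
for q in np.where(~train)[0]:
    d2 = (((protos - Xn[q]) ** 2) * w).sum(axis=1)
    print("true:", yn[q], "predicted:", int(d2.argmin()))
```

The point the toy preserves is that knowledge extracted from well-explored objects (here, which features matter) reduces the number of touches needed to learn about new ones.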