Object recognition, for one, is an area that, if improved, would allow robots to behave in a more human-like way. Translating what a camera sees into an understanding of what an object actually is remains a perpetual problem for robots, especially when they are introduced to new objects quickly.
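To make the "new objects" problem concrete, here is a toy sketch (plain Python, with made-up feature vectors, not any real perception pipeline): a nearest-neighbour labeller can only answer with labels it has already seen, so anything genuinely novel gets forced into a known category.

```python
import math

# Toy "perception" database: hand-made 2-D feature vectors for known objects.
# (Hypothetical numbers, purely for illustration.)
known = {
    "cup":   (0.9, 0.1),
    "ball":  (0.1, 0.9),
    "towel": (0.5, 0.5),
}

def classify(features):
    """Label an observation with the nearest known object's label.

    A closed-set classifier like this has no way to say "I don't know",
    which is exactly why quickly introduced new objects are hard.
    """
    return min(known, key=lambda name: math.dist(known[name], features))

print(classify((0.85, 0.15)))  # near the "cup" vector
print(classify((0.0, 0.0)))    # a novel object, still forced into a known label
```

Real systems are vastly more sophisticated, but the closed-set limitation is the same in spirit: without explicit open-set handling, the robot has no notion of "something I've never seen before".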
I don’t think we have developed much in terms of robots feeling touch either. There are some pressure sensors, of course, but there is certainly more potential in that field. Bringing vision and touch together is a problem too: we have a robot designed specifically to fold towels and nothing else, because it’s such a hard problem! https://www.youtube.com/watch?v=gy5g33S0Gzo
There is also potential for robots to outdo people at certain sensory tasks. For example, with powerful enough sensors, a robot might be able to tell you a lot about the person in the next room, even though your own senses of vision, hearing, &c. would give you nothing to go on.
So, plenty of potential, provided robots can process sensory input fast enough and we program them with fluid motions 🙂 I think pair dancing would be a good test of both sensory input and fluidity of motion. You can check out dancing NAO robots here, but they have a ways to go before being as graceful as humans: https://www.youtube.com/watch?v=4t1NWH6G1f0