
Added alternative HRI task to Stage 2: Help Me Find. #924

Open

wants to merge 2 commits into master

Conversation

SparkRibeiro21

Description

This is a rough first attempt at the "Help Me Find" task, which consists of helping a visually impaired person find, navigate to, and pick up an object that this person is trying to collect. The goal is to force robots to adapt to a person who has limited information regarding the environment, which means the robot must develop a good perception of the person. The volunteer should wear a blindfold to make this scenario more realistic and to force the person to trust the information received from the robot. In addition, the robot must be able to describe objects not present in the competition dataset and to understand object descriptions.

Other comments

This is just a rough description... the points and even the task description still need a lot of thinking and adjusting, so please provide feedback...

Notes

This PR follows the last TC meeting, where this proposal received a lot of positive feedback. It is also intended to inform the next EC meeting.

@hawkina
Contributor

hawkina commented Dec 1, 2024

The robot is expected to describe all the objects it can see on, e.g., the table? Do we ensure that there are not too many objects? Otherwise the robot might spend too much time describing things.

One cool solution would of course be to list all objects it sees, and the user can then say which should be described.
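
A minimal sketch of that interaction, assuming a plain text interface; `detect_objects`, `speak`, and `hear` are invented placeholders, not an existing robot API:

```python
# A minimal sketch of the "list first, describe on request" flow.
# detect_objects, speak, and hear are hypothetical stand-ins for the
# robot's real perception and speech stacks.

def detect_objects() -> dict[str, str]:
    # Stub: map each visible object to a short feature description.
    return {
        "apple": "a small round red fruit",
        "mug": "a white ceramic cup with a handle",
        "cereal box": "a tall rectangular cardboard box",
    }

def speak(text: str) -> None:
    print(f"[robot] {text}")

def hear() -> str:
    return input("[user] ").strip().lower()

def list_then_describe() -> None:
    objects = detect_objects()
    speak("I can see: " + ", ".join(objects))
    choice = hear()  # the user names the one object to describe
    if choice in objects:
        speak(objects[choice])
    else:
        speak("I did not find that object; please pick one from the list.")

if __name__ == "__main__":
    list_then_describe()
```

In the real task, the stubbed pieces would be backed by the robot's object recognition and speech pipeline; the point here is only the turn order, which keeps the description time bounded to the objects the user actually asks about.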

@ARTenshi
Collaborator

ARTenshi commented Dec 2, 2024

I still don't like the story; visually impaired people have been able to move without external help, especially in their own houses. I might change it so that they are guests: the robot takes them from the front entrance to the couch and then describes the drinks and snacks on the closest table to offer some.

In any case, regarding object description, I think there is very little HRI; the robot might just describe the objects it sees using a dictionary -- the base case would be the robot taking any object and giving it to the person, without any other interaction. Gesture recognition and visual or physical interaction are desirable -- human-robot collaboration would be a plus.

@LeroyR
Member

LeroyR commented Dec 2, 2024

> I still don't like the story; visually impaired people have been able to move without external help, especially in their own houses. I might change it so that they are guests: the robot takes them from the front entrance to the couch and then describes the drinks and snacks on the closest table to offer some.

Currently it talks about a visually impaired person being at the house for the first time. I think we should change the story to simply be about someone wearing a blindfold for the purpose of whatever happens in the task.

> In any case, regarding object description, I think there is very little HRI; the robot might just describe the objects it sees using a dictionary -- the base case would be the robot taking any object and giving it to the person, without any other interaction. Gesture recognition and visual or physical interaction are desirable -- human-robot collaboration would be a plus.

If the robot can guide me, blindfolded, to take the correct item, it has to either instruct me carefully (where to place my hand) or drop it into my extended open hand; both sound like a good solution.

For the dialogue: I still have the open question of what exactly anyone considers good HRI (that we can somehow measure).

In what situations do we have a real dialogue between robot and human? The only thing that comes to MY mind is somehow making a Taboo-like game out of it, to force robot and human to ask and answer multiple questions to get to the desired item. For the objects, we then give a description of the desired features to the blindfolded human, so that saying "there is an apple" is not enough. 😅
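
A rough sketch of how such a Taboo-like loop could work; all names here (`TARGET`, `robot_clue`, `robot_answer`) are invented for illustration, not part of any rulebook:

```python
# A rough sketch of the Taboo-like exchange: the robot describes the
# target item by its features without ever naming it, and the blindfolded
# person asks questions until they identify it. TARGET, robot_clue, and
# robot_answer are hypothetical names used only for this sketch.

TARGET = {
    "name": "apple",
    "features": {"colour": "red", "shape": "round", "category": "fruit"},
}

def robot_clue() -> str:
    # "There is an apple" is not enough: only features are allowed.
    f = TARGET["features"]
    return f"The item is a {f['colour']}, {f['shape']} {f['category']}."

def robot_answer(question: str) -> str:
    # Answer feature questions with yes/no; confirm only an explicit guess.
    q = question.lower()
    if TARGET["name"] in q:
        return "Yes, that is the item!"
    if any(value in q for value in TARGET["features"].values()):
        return "Yes."
    return "No."

if __name__ == "__main__":
    print("[robot]", robot_clue())
    while True:
        answer = robot_answer(input("[user] "))
        print("[robot]", answer)
        if answer.startswith("Yes, that"):
            break
```

The forced multi-turn exchange would give referees something concrete to count (questions asked, features correctly conveyed) rather than having to judge "good HRI" by feel.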
