
[FEATURE] Documentation for Build #22

Open
nikunjsanghai opened this issue Nov 1, 2024 · 4 comments

Comments

@nikunjsanghai

nikunjsanghai commented Nov 1, 2024

Is your feature request related to a problem? Please describe.
No.

Describe the solution you'd like
From the existing documentation, it is not entirely clear how to operationalize the entire motion capture system. From our understanding, there is a central script, `mocaprasp.py`, from which one would call the following functions in order:

1. Camera Extrinsics Calibration
2. Ground Plane Estimation
3. Standard Capture Routine

Is this ordering correct?

Describe alternatives you've considered
NA

Additional context
NA

@debOliveira
Owner

Yes, the ordering is correct. The instructions for running the calibration and capture are in the README:

## ⚔️ Usage
1) Run a capture routine in the server (`mocaprasp cec`, `mocaprasp gpe` or `mocaprasp scr`)
2) `source run.sh`
3) Press <kbd>Ctrl+C</kbd> when capture is finished to avoid waiting for the timeout
> You can change the timeout in line 85 of `watch.py` (default: 300 seconds)
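
For concreteness, here is a sketch of one full pass through the pipeline in this order. The README only says to start each routine on the server and then `source run.sh`; the two-shell layout and the per-routine repetition below are illustrative, not prescriptive:

```sh
# Routine 1: camera extrinsics calibration
mocaprasp cec     # shell A (server): start the routine
source run.sh     # shell B: start the capture
# Press Ctrl+C in shell A when the capture finishes to skip the timeout

# Routine 2: ground plane estimation, same two steps
mocaprasp gpe
source run.sh

# Routine 3: standard capture routine, same two steps
mocaprasp scr
source run.sh
```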

@loolirer can give you more details if something remains unclear.

@debOliveira
Owner

@nikunjsanghai is your issue solved?

@adthoms

adthoms commented Nov 26, 2024

@debOliveira thank you for the support. Both @nikunjsanghai and I are attempting to recreate the results in your conference paper, and so far things are going smoothly. We are waiting for our IR LEDs to arrive this week, and we will let you know if any issues arise. If you are curious, we are putting together the following experimental rig to evaluate a SLAM algorithm I have developed.
[Figure: the experimental rig, with three NoIR Camera Module 2 cameras mounted on a 3D-printed plate.]
As shown, we are using three NoIR Camera Module 2 cameras fixed to a 3D-printed plate, and we aim to track a drone ($F_R$) with respect to the camera cluster ($F_C$) to (ultimately) resolve frame $F_J$ in the world coordinate frame ($F_W$).
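
If $F_J$ is rigidly mounted on the drone, the chain would presumably compose as

$$T^{W}_{J} = T^{W}_{C} \, T^{C}_{R} \, T^{R}_{J},$$

where $T^{A}_{B}$ denotes the pose of frame $B$ expressed in frame $A$: the mocap system supplies $T^{C}_{R}$, while $T^{W}_{C}$ and $T^{R}_{J}$ come from the cluster's localization and the drone's mounting calibration. (The notation and the rigid-mount assumption are ours, for illustration.)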

@debOliveira
Owner

Hey @adthoms,

That is excellent work! Please keep me posted when the paper comes out; we may apply it. Contact us if issues arise or if you want to brainstorm some ideas.

@loolirer developed a digital twin for our arena to facilitate prototyping. Maybe it is useful for some of your visualizations.
