I was wondering if you have any experience using the output of your tracker with the pytorch3d renderer.
If I understood correctly, the model-view matrix gives us the conversion from object/model coordinates to camera coordinates, whereas the projection matrix can be used to project the 3D points to 2D. However, I was following a simple example and I can't find the correct way to pass R and T to the renderer to visualize the mesh with the predicted head pose (it shows only a white canvas).
I tried passing R and T directly from the model-view matrix, combining it with the projection matrix, and even applying the transformations to the mesh directly while passing R = identity and T = [0, 0, 0].
Any suggestions?
Thank you very much in advance.
Thanks a lot! My apologies for the very late reply; things have been extremely busy, unfortunately.
Your general understanding of the matrices is right. The eos renderer follows the OpenGL conventions, so what you get should be OpenGL-compliant model-view and projection matrices. I don't have any experience with pytorch3d yet, but it probably expects those matrices in a slightly different format. Unfortunately, mismatches between camera coordinate-system conventions are quite common and most annoying (I wish our field could get rid of those problems!), so I would recommend reading more about the OpenGL and PyTorch3D conventions first and then doing some debugging.
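For what it's worth, here is a minimal sketch of the kind of conversion that is usually needed. It assumes the standard conventions only: OpenGL uses column vectors with the camera looking down -Z (+X right, +Y up), while PyTorch3D cameras use row vectors (X_cam = X_world @ R + T) with the camera looking down +Z (+X left, +Y up). The function name and the exact handling are my own illustration, not something from eos or PyTorch3D:

```python
import numpy as np

def opengl_modelview_to_pytorch3d(modelview):
    """Convert a 4x4 OpenGL model-view matrix (column-vector convention,
    camera looking down -Z, +X right, +Y up) into an (R, T) pair in the
    PyTorch3D convention (row-vector convention, camera looking down +Z,
    +X left, +Y up). Sketch based on the documented conventions only."""
    R_gl = modelview[:3, :3]  # world -> GL-camera rotation
    t_gl = modelview[:3, 3]   # world -> GL-camera translation
    # The two camera frames differ by a flip of the x and z axes.
    flip = np.diag([-1.0, 1.0, -1.0])
    # Transpose because PyTorch3D multiplies row vectors from the left.
    R_p3d = (flip @ R_gl).T
    T_p3d = flip @ t_gl
    return R_p3d, T_p3d

# Example: a camera 5 units behind the origin along +Z (GL convention).
mv = np.eye(4)
mv[2, 3] = -5.0
R, T = opengl_modelview_to_pytorch3d(mv)
# A world point at the origin ends up at z = +5 (in front of the camera
# in PyTorch3D, where +Z points into the screen).
print(np.zeros(3) @ R + T)  # [0. 0. 5.]
```

You would then pass `R` and `T` (batched, e.g. `R[None]` as torch tensors) to a PyTorch3D camera. Note this only handles the model-view part; the field of view / principal point encoded in the projection matrix still has to be mapped to the camera's intrinsic parameters separately, which is often where the white canvas comes from.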
It would be quite interesting to get the eos fitting output to work with pytorch3d, so I'd be quite interested in that. Have you made any progress on this since January, or what did you end up doing?