Basic python implementation to test accuracy of model and landmarks #35

Open
Nishad-Harsulkar opened this issue Oct 18, 2024 · 0 comments

Comments

@Nishad-Harsulkar

I tried writing some basic code (after pip install shapenet), but couldn't reach a conclusive result. I also tried using the pretrained weights on grayscale face images, but that gave "RuntimeError: Legacy model format is not supported on mobile". Can you provide suggestions on how the code should be implemented, or an actual implementation of the model? Below is my code for grayscale images.
import torch
import torchvision.transforms as transforms
from PIL import Image
import matplotlib.pyplot as plt

model_path = "path/to/model"  # path to the pretrained model file
image_path = "path/to/image"  # path to the grayscale face image
model = torch.jit.load(model_path)
model.eval()

# Define the image transformations
transform = transforms.Compose([
    transforms.Resize((224, 224)),        # resize the image to 224x224
    transforms.ToTensor(),                # convert the image to a tensor
    transforms.Normalize((0.5,), (0.5,))  # normalize the image (mean=0.5, std=0.5)
])

# Load and transform the grayscale image
image = Image.open(image_path).convert('L')
input_tensor = transform(image).unsqueeze(0)

# Run inference without tracking gradients
with torch.no_grad():
    output = model(input_tensor)

landmarks = output.squeeze().cpu().numpy()  # assuming shape (num_landmarks, 2) with normalized coordinates

# Visualize the landmarks on the image

plt.imshow(image, cmap='gray')
plt.scatter(landmarks[:, 0] * image.width, landmarks[:, 1] * image.height, s=10, c='red', marker='o')
plt.show()
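
Regarding the "Legacy model format is not supported on mobile" error: torch.jit.load only accepts TorchScript archives, and this message often shows up when the file is something else, for example a plain state_dict pickled with torch.save. Below is a minimal sketch of how one could inspect the checkpoint and load it the other way, assuming nothing about shapenet's own API (build_model() is a placeholder for however the network is actually constructed):

import torch

checkpoint_path = "path/to/pretrained_weights"  # placeholder path

# torch.load handles plain pickled objects and state_dicts,
# whereas torch.jit.load only accepts TorchScript archives.
checkpoint = torch.load(checkpoint_path, map_location="cpu")

if isinstance(checkpoint, dict):
    # Probably a state_dict (or a dict wrapping one) -- inspect the keys first.
    print(type(checkpoint), list(checkpoint.keys())[:5])
    # model = build_model()               # placeholder: construct the network architecture
    # model.load_state_dict(checkpoint)   # or checkpoint["state_dict"], depending on the file
else:
    # The file was a full nn.Module pickled with torch.save
    model = checkpoint
    model.eval()

If the checkpoint really is a TorchScript archive, re-exporting it with torch.jit.save under a recent PyTorch version (on a setup that can still load it) may also resolve the legacy-format error.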
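
On the "test accuracy" part of the title: once predicted landmarks are available, a common per-image metric is the mean point-to-point Euclidean error, optionally normalized by a reference distance such as the outer-eye-corner distance (NME). Here is a small NumPy sketch, assuming predictions and ground truth are both (num_landmarks, 2) arrays in pixel coordinates and that a 68-point annotation scheme is used for the normalization indices (adjust as needed):

import numpy as np

def landmark_error(pred, gt, norm_indices=(36, 45)):
    """Mean point-to-point error and NME between predicted and ground-truth landmarks."""
    pred = np.asarray(pred, dtype=np.float64)  # (N, 2) predicted landmarks, pixel coords
    gt = np.asarray(gt, dtype=np.float64)      # (N, 2) ground-truth landmarks, pixel coords

    per_point = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per landmark
    mean_error = per_point.mean()

    # Normalize by a reference distance (here: outer eye corners in a 68-point
    # layout) so errors are comparable across image sizes.
    i, j = norm_indices
    ref = np.linalg.norm(gt[i] - gt[j])
    nme = mean_error / ref if ref > 0 else float("nan")
    return mean_error, nme

With the code above, the predicted landmarks would first be scaled back to pixels, e.g. pred_px = landmarks * [image.width, image.height], before comparing them to the annotations.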
