I tried writing some basic code [after pip install shapenet], but couldn't come to a conclusive end. I also tried using the pretrained weights on grayscale face images, but it gave "RuntimeError: Legacy model format is not supported on mobile". Can you provide suggestions on how the code can be implemented, or an actual implementation of the model? Below is my code for grayscale:
import torch
import torchvision.transforms as transforms
from PIL import Image
import matplotlib.pyplot as plt
model_path = "path/to/model"  # placeholder: path to the pretrained model
image_path = "path/to/image"  # placeholder: path to the input image
model = torch.jit.load(model_path)
model.eval()
# Define the image transformations
transform = transforms.Compose([
    transforms.Resize((224, 224)),        # Resize the image to 224x224
    transforms.ToTensor(),                # Convert image to tensor
    transforms.Normalize((0.5,), (0.5,))  # Normalize the image (mean=0.5, std=0.5)
])
# Load and transform the grayscale image
image = Image.open(image_path).convert('L')
input_tensor = transform(image).unsqueeze(0)

with torch.no_grad():
    output = model(input_tensor)
    landmarks = output.squeeze().cpu().numpy()

# Visualize the landmarks on the image
plt.imshow(image, cmap='gray')
plt.scatter(landmarks[:, 0] * image.width, landmarks[:, 1] * image.height, s=10, c='red', marker='o')
plt.show()
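One workaround I am considering for the legacy-format error, in case anyone can confirm it: if the pretrained weights are actually a regular PyTorch checkpoint rather than a TorchScript archive, they could be loaded with torch.load and fed into the network class from the shapenet package instead of going through torch.jit.load. The sketch below is only an outline under that assumption; the checkpoint layout and the way the shapenet network is constructed are guesses that would need to be adapted to the installed package.

import torch

model_path = "path/to/model"  # placeholder, same as above

# Try reading the file as an ordinary checkpoint instead of a TorchScript archive.
checkpoint = torch.load(model_path, map_location="cpu")

if isinstance(checkpoint, dict):
    # Looks like a state_dict (possibly wrapped in a dict); the shapenet network
    # class and its constructor arguments are not shown here, so they would have
    # to be filled in from the installed package before loading the weights.
    state_dict = checkpoint.get("state_dict", checkpoint)
    # net = <construct the shapenet network here>
    # net.load_state_dict(state_dict)
    # net.eval()
else:
    # torch.load can also return a fully pickled nn.Module.
    net = checkpoint
    net.eval()

If torch.load also fails, another possibility might be to re-save the archive (torch.jit.save) with the PyTorch version that originally created it and then load the re-saved file with the current version, but I have not verified this.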