Multidimensional classification #446
SeriousJ55 started this conversation in Ideas
I just finished chapter 6, "Fine-tuning for classification," and I'm wondering if there's a way to train a multidimensional classifier.
For instance, I would want to classify emails by the classes "spam" and "not spam", but also by unrelated classes like "urgent" and "not urgent".
Of course, it can be done by fine-tuning two separate GPTs. But do you know if there's a way to do it with only one GPT?
-
Hi there, that's a good question. As you suggested, this would typically be a hierarchical classification approach where you'd chain two models (a minimal sketch of this is at the end of this reply). However, you could also approach it as a multi-class problem: for this, you'd just add one more class label. Or you could frame it as a multi-label classification task.
The difference between multi-class and multi-label classification is that in multi-class classification only one label is selected, whereas in multi-label classification multiple labels can be selected at once. Implementation-wise, multi-label classification uses a logistic sigmoid function in the output layer. Here's a simple example of each.

Standard multi-class classification:

```python
import torch
import torch.nn as nn

y = torch.tensor([2, 0, ...])  # one class index per example ("..." stands for further labels)

class MulticlassExample(nn.Module):
    def __init__(self, input_size, num_classes):
        super().__init__()
        self.output_layer = nn.Linear(input_size, num_classes)

    def forward(self, x):
        # return raw logits; the softmax is applied inside the loss
        return self.output_layer(x)

# computing the loss
criterion = nn.CrossEntropyLoss()
outputs = model(x)  # model = MulticlassExample(...), x = input batch
loss = criterion(outputs, y)
```

Multi-label classification:

```python
import torch.nn.functional as F

y = torch.tensor([[1, 0, 1], [0, 1, 1]], dtype=torch.float32)  # binary labels

class MultilabelExample(nn.Module):
    def __init__(self, input_size, num_classes):
        super().__init__()
        self.output_layer = nn.Linear(input_size, num_classes)

    def forward(self, x):
        # sigmoid gives each class its own independent probability
        return torch.sigmoid(self.output_layer(x))

# computing the loss
outputs = model(x)  # model = MultilabelExample(...), x = input batch
loss = F.binary_cross_entropy(outputs, y)
```
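One practical note on the multi-label example above: instead of applying `torch.sigmoid` in `forward` and using `F.binary_cross_entropy`, it's usually better to return the raw logits and use `nn.BCEWithLogitsLoss` (or `F.binary_cross_entropy_with_logits`), which combines the sigmoid and the loss in a more numerically stable way. For your email example, each email would then get a target vector like `[spam, urgent]`, so a single model predicts both properties at once.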
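And to illustrate the chained (hierarchical) approach mentioned at the beginning of this reply, here is a minimal sketch. It assumes two already fine-tuned classifiers with the hypothetical names `spam_model` and `urgency_model`, each producing two output logits, and it only routes non-spam emails to the urgency classifier; how exactly you chain the two stages is up to you.

```python
import torch

def classify_email(x, spam_model, urgency_model):
    # Stage 1: spam vs. not spam (assumes output index 1 means "spam")
    is_spam = torch.argmax(spam_model(x), dim=-1).bool()

    # Stage 2: run the urgency classifier only on the non-spam emails
    # (assumes output index 1 means "urgent")
    is_urgent = torch.zeros_like(is_spam)
    not_spam_idx = (~is_spam).nonzero(as_tuple=True)[0]
    if len(not_spam_idx) > 0:
        is_urgent[not_spam_idx] = torch.argmax(
            urgency_model(x[not_spam_idx]), dim=-1
        ).bool()
    return is_spam, is_urgent
```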