Hello! According to Section 3 of the NAM paper, feature nets in NAMs are selected from: (1) DNNs containing 3 hidden layers with 64, 64, and 32 units and ReLU activation, and (2) single-hidden-layer NNs with 1024 ExU units and ReLU-1 activation.
However, the feature nets implemented in FeatureNN use either an ExU layer or a LinReLU layer, followed by additional LinReLU layers and topped off with a standard Linear layer. May I ask:
What was the basis for this feature net architecture?
What was the basis for the LinReLU layer? I understand that the LinReLU layer is similar to the ExU layer described in the paper, just without the exponential, but where did it come from?
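For reference, here is my current reading of the two unit types, sketched from the paper's description. This is an assumption on my part, not the repository's actual FeatureNN code; initialization, clipping constants, and layer wiring may differ:

```python
import numpy as np

def relu_n(x, n=1.0):
    # ReLU-n: ReLU capped at n (the paper pairs ExU units with ReLU-1).
    return np.clip(x, 0.0, n)

def exu(x, w, b):
    # ExU unit from the paper: h(x) = f((x - b) * exp(w)).
    # The exponentiated weight lets the unit fit very sharp functions
    # while the raw parameter w stays small.
    return relu_n((x - b) * np.exp(w))

def lin_relu(x, w, b):
    # My assumed LinReLU: the same shifted-input form but with a plain
    # (non-exponentiated) weight, followed by an ordinary ReLU.
    return np.maximum((x - b) * w, 0.0)

x = np.array([0.0, 0.5, 1.0])
print(exu(x, w=1.0, b=0.5))
print(lin_relu(x, w=1.0, b=0.5))
```

If that reading is right, LinReLU is just a standard linear unit with a learned input shift, which is what prompts my question about where it originated.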
I do apologize if the answers are already in the paper and I just overlooked them while reading it!