Basis for FeatureNN and LinReLU #6

Open
patrick-john-ramos opened this issue Feb 1, 2022 · 0 comments

Comments

@patrick-john-ramos

Hello! According to Section 3 of the NAM paper,

Feature nets in NAMs are selected amongst (1) DNNs containing 3 hidden layers with 64, 64 and 32 units and ReLU activation, and (2) single hidden layer NNs with 1024 ExU units and ReLU-1 activation
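
For reference, this is how I read those two configurations (just my own summary of the quoted layer sizes, not code from this repository):

```python
# My reading of the two feature-net variants named in Section 3 of the NAM paper.
# The keys "standard" and "exu" are just my own labels for the quoted configurations.
paper_feature_net_configs = {
    "standard": {"hidden_units": [64, 64, 32], "unit_type": "Linear", "activation": "ReLU"},
    "exu": {"hidden_units": [1024], "unit_type": "ExU", "activation": "ReLU-1"},
}
```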

However, the feature nets described in FeatureNN use either an ExU layer or a LinReLU layer, followed by more LinReLU layers and topped off with a standard Linear layer. May I ask:

  1. What was the basis for this feature net architecture?

  2. What was the basis for the LinReLU layer? I understand that it is similar to the ExU layer described in the paper, just without the exponential, but where did this idea come from? (See the sketch after this list for how I read the two unit types.)
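
To make question 2 concrete, here is roughly how I understand the two unit types, with ExU following the paper's h(x) = f(exp(w) · (x − b)) and LinReLU using a plain weight instead of exp(w). This is my own PyTorch sketch (module names and initializations are just my guesses), not the code from this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExU(nn.Module):
    """ExU unit as described in the NAM paper: h(x) = f(exp(w) * (x - b))."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        self.bias = nn.Parameter(torch.empty(in_features))
        nn.init.normal_(self.weight, mean=4.0, std=0.5)  # paper suggests weights around N(4, 0.5)
        nn.init.normal_(self.bias, std=0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = (x - self.bias) @ torch.exp(self.weight)
        return torch.clamp(out, 0.0, 1.0)  # ReLU-1: ReLU capped at 1


class LinReLU(nn.Module):
    """My reading of LinReLU: same centering, but a plain (non-exponentiated) weight."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(in_features, out_features))
        self.bias = nn.Parameter(torch.empty(in_features))
        nn.init.xavier_uniform_(self.weight)
        nn.init.normal_(self.bias, std=0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.relu((x - self.bias) @ self.weight)
```

For example, `ExU(1, 1024)` applied to a `(batch, 1)` feature input would give a `(batch, 1024)` hidden activation. If that reading of LinReLU is off, please correct me!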

I do apologize if the answers are already in the paper and I just overlooked them while reading it!
