Question: simple example poor performance, what am I doing wrong? #163
Comments
At a glance, library usage seems good to me! Perhaps one way to figure this out is to establish a baseline using some other method (kernel, neural network, etc.), to figure out what loss values are expected.
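The baseline idea above can be sketched as plain kernel ridge regression. This is a hypothetical stand-in, not the neural_tangents API: the RBF kernel, the synthetic `sin` target, and all names here are assumptions made for illustration, chosen only to show what loss scale a simple kernel method reaches on an easy regression task.

```python
# Minimal kernel-ridge baseline on synthetic 1-D data (hypothetical setup).
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix.
    d2 = np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=(200, 1))
y_train = np.sin(x_train)                 # simple synthetic target
x_test = rng.uniform(-3, 3, size=(50, 1))
y_test = np.sin(x_test)

reg = 1e-6                                # ridge term, analogous in spirit to diag_reg
k_tt = rbf_kernel(x_train, x_train)
k_st = rbf_kernel(x_test, x_train)
alpha = np.linalg.solve(k_tt + reg * np.eye(len(x_train)), y_train)
y_pred = k_st @ alpha

mse = float(np.mean((y_pred - y_test) ** 2))
print(f"baseline test MSE: {mse:.2e}")
```

If the NTK predictions show a loss orders of magnitude above such a baseline on the same data, that points at a setup problem rather than an inherent limit of the method.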
I get memory errors if I try 100,000 points in my dataset, even with the batch trick:

```python
kernel_fn = nt.batch(kernel_fn,
                     device_count=0,
                     batch_size=1_000)
```
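The idea behind batching can be illustrated with a pure-NumPy stand-in: assemble the kernel matrix one row block at a time so that peak memory per step scales with `batch_size * n` rather than with the full intermediate computation. Note this is a sketch of the concept only; `batched_kernel` and the toy linear kernel below are assumptions for illustration, not the actual `nt.batch` implementation, and the full `n x n` result must still fit in memory at the end.

```python
import numpy as np

def batched_kernel(kernel_fn, x1, x2, batch_size=1_000):
    """Assemble kernel_fn(x1, x2) one row block at a time."""
    blocks = []
    for start in range(0, len(x1), batch_size):
        blocks.append(kernel_fn(x1[start:start + batch_size], x2))
    return np.concatenate(blocks, axis=0)

# Toy linear kernel standing in for an NTK kernel_fn.
linear_kernel = lambda a, b: a @ b.T

x = np.random.default_rng(1).normal(size=(2_500, 4))
full = linear_kernel(x, x)
batched = batched_kernel(linear_kernel, x, x, batch_size=1_000)
assert np.allclose(full, batched)  # same result, lower peak memory per step
```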
Note that in your example you are doing inference with an infinitely-wide neural network. Re the training set, I think it's constructed correctly; I'm just not sure how to reason about the generalization we should expect from it (per your plot, it seems to be at least OK-ish?...). And yes, 100K points is too much for most GPUs.
Dear team, great package, I'm very excited to use it.
However, I tried a simple case and failed miserably to get decent performance.
I generate a multi-dimensional dataset with a relatively simple feature, and I followed your examples.
Visual inspection shows terrible predictions, and loss values are large:
I varied the network in many ways and fiddled with learning_rate and diag_reg, but hardly anything changed. I'm sure I am doing something wrong, but I cannot see what it is. Any obvious mistake?
Thanks for your help.
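One thing worth checking when fiddling with diag_reg is how strongly it shapes the linear solve underlying kernel inference. The sketch below is a hedged NumPy stand-in (an RBF kernel with artificially near-duplicate inputs, not the neural_tangents API): too large a regularizer shrinks predictions toward zero and inflates the training loss, while a moderate one keeps the solve stable and the fit tight.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * x)

# Near-duplicate inputs make the kernel matrix nearly singular.
x = np.concatenate([x, x + 1e-9])
y = np.concatenate([y, y])

k = np.exp(-0.5 * (x - x.T) ** 2)  # RBF kernel matrix, shape (200, 200)

def train_mse(diag_reg):
    # Regularized solve, analogous in spirit to diag_reg in kernel inference.
    alpha = np.linalg.solve(k + diag_reg * np.eye(len(x)), y)
    return float(np.mean((k @ alpha - y) ** 2))

for diag_reg in (1e-4, 1e2):
    print(f"diag_reg={diag_reg:.0e}  train MSE={train_mse(diag_reg):.2e}")
```

If even small diag_reg values leave the loss large, the problem likely lies elsewhere (e.g. data scaling or the readout layer) rather than in the regularizer.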