Slowness when training multiple Convolutional Networks within a for loop #20183
Unanswered · muriloasouza asked this question in Q&A
I am training several Keras models inside a for loop. Check the following code:
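Roughly, the setup is a minimal loop like the sketch below; the data, shapes, layer sizes, and run count are hypothetical placeholders rather than my real values:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical stand-in data; the real shapes come from my dataset.
x_train = np.random.rand(1000, 64, 1).astype("float32")
y_train = np.random.rand(1000, 1).astype("float32")

def build_conv1d():
    # The Conv1D variant whose epochs slow down across runs.
    return keras.Sequential([
        keras.Input(shape=(64, 1)),
        layers.Conv1D(32, kernel_size=3, activation="relu"),
        layers.Flatten(),
        layers.Dense(1),
    ])

def build_mlp():
    # The MLP variant, whose epoch time stays constant.
    return keras.Sequential([
        keras.Input(shape=(64, 1)),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),
    ])

for run in range(20):
    model = build_conv1d()  # swap in build_mlp() for the MLP case
    model.compile(optimizer="adam", loss="mse")
    # First runs: ~280 ms per epoch. Later runs: ~3 s per epoch.
    model.fit(x_train, y_train, epochs=5, verbose=1)
```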
When training starts, each epoch takes around 280 ms. But as the runs go on, each epoch ends up taking around 3 s. I have tried to solve this with `clear_session()`, but nothing changed. I also tried deleting the model after `.fit()` finishes and calling `gc.collect()`, but neither worked. This increase in training time only happens if I use the `Conv1D` network. If I use the MLP, the training time is constant at around 90 ms per epoch and does not increase. What is happening here? What is causing this increase in training time for the `Conv1D`? Any ideas how to fix this? Here are some examples:
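The cleanup I tried between runs looks roughly like this (again a hypothetical sketch, reusing `build_conv1d`, `x_train`, and `y_train` from the snippet above); none of it stopped the per-epoch time from growing:

```python
import gc
from tensorflow.keras import backend

for run in range(20):
    model = build_conv1d()
    model.compile(optimizer="adam", loss="mse")
    model.fit(x_train, y_train, epochs=5, verbose=1)

    # Attempts to reset state between runs; none of them helped:
    del model                  # drop the Python reference to the model
    backend.clear_session()    # reset Keras' global graph/session state
    gc.collect()               # force a garbage-collection pass
```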