Add TD3 and SAC support for multiple envs #481
base: master
Conversation
cleanrl/td3_continuous_action.py
```diff
@@ -47,6 +47,8 @@ class Args:
     """total timesteps of the experiments"""
     learning_rate: float = 3e-4
     """the learning rate of the optimizer"""
+    num_envs: int = 2
```
I would default this to 1
Sure. Probably also worth discussing how to handle total_timesteps with multiple environments.
@pseudo-rnd-thoughts It seems like in sb3 they do it like this:
Let's say total_timesteps is 100_000.
Then they actually run 100_000 * num_envs steps, because for each timestep, num_envs steps are executed.
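To make the two possible conventions concrete, a small arithmetic sketch (variable names are illustrative; neither convention is claimed to be what this PR will adopt):

```python
total_timesteps = 100_000
num_envs = 2

# Reading described above: the loop runs total_timesteps iterations and
# each iteration steps every env once, so more transitions are collected:
transitions_if_loop_count = total_timesteps * num_envs

# Alternative reading: total_timesteps is a budget of env transitions,
# so the loop runs fewer iterations:
iterations_if_budget = total_timesteps // num_envs

print(transitions_if_loop_count)  # 200000
print(iterations_if_budget)       # 50000
```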
Description
TD3 and SAC currently don't support running multiple environments. It's easy to add by introducing a num_envs param and passing it to the env creation and replay buffer initialization.
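On the replay-buffer side, here is a minimal toy buffer (not CleanRL's actual class, which comes from stable-baselines3) sketching why the buffer must know about num_envs: each add() call now carries one transition per environment, so capacity is consumed num_envs entries at a time:

```python
import numpy as np

class VecReplayBuffer:
    # Illustrative only; real code would also store actions, rewards, dones.
    def __init__(self, capacity, obs_dim, num_envs):
        self.obs = np.zeros((capacity, obs_dim), dtype=np.float32)
        self.capacity = capacity
        self.num_envs = num_envs
        self.pos = 0  # total transitions written so far

    def add(self, obs_batch):
        # obs_batch has shape (num_envs, obs_dim): one row per parallel env.
        for o in obs_batch:
            self.obs[self.pos % self.capacity] = o
            self.pos += 1

buf = VecReplayBuffer(capacity=10, obs_dim=3, num_envs=2)
buf.add(np.ones((2, 3)))  # one vectorized step fills 2 slots at once
print(buf.pos)  # 2
```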
Types of changes

Checklist:

- `pre-commit run --all-files` passes (required).
- I have updated the documentation and previewed the changes via `mkdocs serve`.

If you need to run benchmark experiments for a performance-impacting change:

- I have used the benchmark utility to submit the tracked experiments, optionally with `--capture_video`.
- I have performed RLops with `python -m openrlbenchmark.rlops`.
- I have added the learning curves generated by the `python -m openrlbenchmark.rlops` utility to the documentation.
- I have added the RLops report, created by `python -m openrlbenchmark.rlops ....your_args... --report`, to the documentation.