This contains a RuneScape Private Server (RSPS) specifically modified to facilitate training reinforcement learning agents. It builds upon Elvarg RSPS.
Key modifications in this project can be reviewed in the rl package. For a comprehensive view of all changes, consider running a diff between this and the upstream repository.
- Development of a plugin and event system.
- Creation of a reinforcement learning plugin.
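The plugin and event system can be pictured as a small publish/subscribe bus. The sketch below is illustrative only: the names (`Event`, `EventBus`, `FoodEatenEvent`) are assumptions for this example, not the actual classes in the rl package.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Hypothetical marker for game events (the real event types live in the rl package).
interface Event {}

// Example event: a player consumed food (name assumed for illustration).
record FoodEatenEvent(String player) implements Event {}

// Minimal event bus: plugins subscribe handlers keyed by event class.
class EventBus {
    private final Map<Class<?>, List<Consumer<Event>>> handlers = new HashMap<>();

    <E extends Event> void subscribe(Class<E> type, Consumer<E> handler) {
        handlers.computeIfAbsent(type, k -> new ArrayList<>())
                .add(e -> handler.accept(type.cast(e)));
    }

    void publish(Event event) {
        handlers.getOrDefault(event.getClass(), List.of())
                .forEach(h -> h.accept(event));
    }
}

public class EventBusDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.subscribe(FoodEatenEvent.class, e -> System.out.println(e.player() + " ate food"));
        bus.publish(new FoodEatenEvent("Bot1"));
    }
}
```

A bus like this lets the reinforcement learning plugin observe game events without the core server code depending on it.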
To synchronize with the upstream repository, use:

```
git subtree pull --prefix=simulation-rsps https://github.com/RSPSApp/elvarg-rsps.git master
```
Note: This repository has diverged from the upstream due to incompatible changes in the upstream (like their own plugin system).
This contains the simulated game environment. By default, the reinforcement learning plugin will run, launching the remote environment server required for training/evaluation (see pvp-ml for connection details).
It requires running with Java 17.
- Navigate to the ElvargServer directory.
- Launch the server using gradle:

```
./gradlew run
```

Note: pvp-ml will install Java 17.
Connect to the server with an RSPS client by cloning the upstream repository and launching the built-in client. This is useful for testing and observing training.
It requires running with Java 11.
- Clone the upstream repository:

```
git clone https://github.com/RSPSApp/elvarg-rsps
```

- Change to the ElvargClient directory:

```
cd elvarg-rsps/ElvargClient
```

- Start the client with gradle:

```
./gradlew run
```

- Log in!
This extends the game logic with a custom reinforcement learning plugin (`ReinforcementLearningPlugin`). (Note: the plugin and event systems themselves were also added as part of this work to keep the code cleaner.)
- Launches a socket server (RemoteEnvironmentServer) for API interactions. This exposes routes for logging in and out, and for step and reset requests (like the gym interface).
- Enables control over training agents via the API (RemoteEnvironmentPlayerBot).
- Supports independent ML-driven agents (AgentBotLoader) and player-controlled agents (EnableAgentCommand).
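The step/reset cycle behind those routes follows the familiar gym pattern. The sketch below is self-contained and illustrative only: the actual wire protocol and observation/action shapes are defined by `RemoteEnvironmentServer` and pvp-ml, and the names here (`RemoteEnv`, `StepResult`, `StubFightEnv`) are assumptions for this example.

```java
// Hypothetical gym-style contract mirroring the step/reset routes the server exposes.
interface RemoteEnv {
    double[] reset();             // start an episode, return the initial observation
    StepResult step(int action);  // apply one action and advance the simulation
}

record StepResult(double[] obs, double reward, boolean done) {}

// Stand-in environment: the episode ends after three steps.
class StubFightEnv implements RemoteEnv {
    private int ticks;
    public double[] reset() { ticks = 0; return new double[] {0}; }
    public StepResult step(int action) {
        ticks++;
        return new StepResult(new double[] {ticks}, 1.0, ticks >= 3);
    }
}

public class EnvLoopDemo {
    public static void main(String[] args) {
        RemoteEnv env = new StubFightEnv();
        env.reset();
        double total = 0;
        StepResult result;
        do {
            result = env.step(0);  // a real agent would pick actions from its policy
            total += result.reward();
        } while (!result.done());
        System.out.println("episode reward: " + total);
    }
}
```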
Adding a new environment requires a few simple steps, assuming the environment contract has already been created.
- Implement an `AgentEnvironment` class for the new environment.
- Implement an `EnvironmentDescriptor` class for the new environment, along with associated classes (such as `EnvironmentParams`).
- Add a new type to the `EnvironmentRegistry` enum. This type can now be used for training.
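The steps above can be sketched with a registry enum that maps each environment type to a factory. This is a simplified, hypothetical shape: the real `AgentEnvironment` and `EnvironmentRegistry` signatures live in the rl package, and `DuelArenaEnvironment` is an invented example.

```java
import java.util.function.Supplier;

// Hypothetical, simplified version of the real interface (actual signatures differ).
interface AgentEnvironment {
    String name();
}

// Illustrative new environment implementation.
class DuelArenaEnvironment implements AgentEnvironment {
    public String name() { return "duel-arena"; }
}

// A registry enum mapping each environment type to a factory, loosely
// mirroring how EnvironmentRegistry exposes environments for training.
enum EnvironmentRegistry {
    DUEL_ARENA(DuelArenaEnvironment::new);

    private final Supplier<AgentEnvironment> factory;
    EnvironmentRegistry(Supplier<AgentEnvironment> factory) { this.factory = factory; }
    AgentEnvironment create() { return factory.get(); }
}

public class RegistryDemo {
    public static void main(String[] args) {
        AgentEnvironment env = EnvironmentRegistry.DUEL_ARENA.create();
        System.out.println("registered: " + env.name());
    }
}
```

The enum pattern keeps every available environment discoverable in one place, which is what lets a training run select an environment by type.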
The aim is to replicate OSRS combat as accurately as possible. Achieving this required extensive modifications to the original RSPS, especially to combat mechanics (e.g. food consumption and attack delays). Precision in simulating these details is crucial to ensure that the agent's learned policies are applicable to the live game.
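To make the tick-based mechanics concrete, here is a minimal sketch of an attack cooldown. OSRS combat runs on 600 ms game ticks and weapons attack on a per-tick speed; the specific delay values below are assumptions for illustration, not the server's real numbers.

```java
// Illustrative only: attack delays and food delays measured in 600 ms game ticks.
public class AttackTimerDemo {
    static class AttackTimer {
        private int cooldown = 0;                     // ticks until the next attack is allowed

        void tick() { if (cooldown > 0) cooldown--; } // called once per game tick

        boolean tryAttack(int weaponSpeedTicks) {
            if (cooldown > 0) return false;           // still on cooldown
            cooldown = weaponSpeedTicks;              // weapon-specific attack speed
            return true;
        }

        void eatFood() { cooldown += 3; }             // eating delays the next attack (value illustrative)
    }

    public static void main(String[] args) {
        AttackTimer timer = new AttackTimer();
        System.out.println(timer.tryAttack(4));   // first attack lands
        System.out.println(timer.tryAttack(4));   // blocked: still on cooldown
        for (int i = 0; i < 4; i++) timer.tick();
        System.out.println(timer.tryAttack(4));   // cooldown expired, attack lands
    }
}
```

Getting details like these exactly right in the simulation is what keeps learned policies transferable to the live game.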
Future work could involve generalizing the plugin logic for broader applicability across various RSPS frameworks. This would facilitate easier adaptation and training on different servers if needed.