drl_tennis | Deep Reinforcement Learning agent using MADDPG
kandi X-RAY | drl_tennis Summary
drl_tennis is a Python library implementing a deep reinforcement learning agent. It has no reported bugs and no reported vulnerabilities, carries a Strong Copyleft license, and has low support. You can download it from GitHub.
For this project, I have trained an agent to solve a continuous control problem: play tennis against another agent (sharing the same policy) to maximize the number of times the agents hit the ball over the net. In this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets the ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play. The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Each agent receives its own local observation. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping. The task is episodic, and in order to solve the environment, the agents must achieve an average score of +0.5 over 100 consecutive episodes, after taking the maximum over both agents.
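The scoring rule above (per-episode score = maximum over the two agents' summed rewards; solved when the 100-episode average of those scores reaches +0.5) can be sketched as follows. This is an illustrative helper, not code from the repository; the function names are my own.

```python
import numpy as np

def episode_score(agent_rewards):
    """Score of one episode: sum each agent's rewards over the episode,
    then take the maximum of the per-agent totals."""
    totals = np.sum(agent_rewards, axis=1)  # undiscounted return per agent
    return float(np.max(totals))

def is_solved(episode_scores, window=100, target=0.5):
    """The environment counts as solved once the average of the most
    recent `window` episode scores reaches `target`."""
    if len(episode_scores) < window:
        return False
    return float(np.mean(episode_scores[-window:])) >= target
```

For example, an episode in which one agent collects rewards [0.1, 0.1] and the other [-0.01, 0.0] scores 0.2, since the maximum of the two totals is taken.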
Support
drl_tennis has a low active ecosystem.
It has 0 stars and 0 forks. There is 1 watcher for this library.
It had no major release in the last 6 months.
drl_tennis has no issues reported. There are 12 open pull requests and 0 closed pull requests.
It has a neutral sentiment in the developer community.
The latest version of drl_tennis is current.
Quality
drl_tennis has no bugs reported.
Security
drl_tennis has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
License
drl_tennis is licensed under the GPL-3.0 License. This license is Strong Copyleft.
Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.
Reuse
drl_tennis releases are not available. You will need to build from source code and install.
Installation instructions, examples and code snippets are available.
drl_tennis Key Features
No Key Features are available at this moment for drl_tennis.
drl_tennis Examples and Code Snippets
No Code Snippets are available at this moment for drl_tennis.
Community Discussions
No Community Discussions are available at this moment for drl_tennis. Refer to the Stack Overflow page for discussions.
Vulnerabilities
No vulnerabilities reported
Install drl_tennis
I have used Linux. You can download the version for your OS, but remember to point to your Tennis environment folder. Due to issues with conda, environment.yml alone is not sufficient: a second file (requirements.txt) is also provided and should be taken into account. Install it with pip: pip install -r requirements.txt.
Download the environment from one of the links below. You need only select the environment that matches your operating system:
Linux: click here
Mac OSX: click here
Windows (32-bit): click here
Windows (64-bit): click here
(For Windows users) Check out this link if you need help determining whether your computer is running a 32-bit or 64-bit version of the Windows operating system.
(For AWS) If you'd like to train the agent on AWS (and have not enabled a virtual screen), then please use this link to obtain the environment.
Place the file in the repository folder, and unzip (or decompress) it.
Create a virtual environment with anaconda and install packages: conda env create -f environment.yml.
Activate the virtual environment: source activate <name of the env>.
Install more packages:
Launch jupyter notebook: jupyter notebook Navigation.ipynb
Execute cells: run just the first cell (for imports).
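The steps above can be collected into a short shell session. This is a sketch, assuming the repository root contains the environment.yml and requirements.txt mentioned in the instructions; substitute the environment name actually defined in environment.yml for the placeholder.

```shell
# Create the conda environment from the provided file
conda env create -f environment.yml

# Activate it (use the name defined in environment.yml)
source activate <name-of-the-env>

# Install the extra packages that conda could not resolve
pip install -r requirements.txt

# Launch the notebook and run the first cell for imports
jupyter notebook Navigation.ipynb
```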
Support
For any new features, suggestions and bugs create an issue on GitHub.
If you have any questions, check existing questions and ask on the Stack Overflow community page.