-Review--Deterministic-Policy-Gradient-Algorithm | [Review and implementation code] | Reinforcement Learning library

by 170928 | Python | Version: Current | License: No License

kandi X-RAY | -Review--Deterministic-Policy-Gradient-Algorithm Summary

-Review--Deterministic-Policy-Gradient-Algorithm is a Python library typically used in Artificial Intelligence, Reinforcement Learning, Deep Learning, and PyTorch applications. -Review--Deterministic-Policy-Gradient-Algorithm has no bugs and no vulnerabilities, but it has low support. However, its build file is not available. You can download it from GitHub.

The policy gradient algorithm is widely used for continuous action spaces. A policy gradient algorithm approximates the policy with a neural network and trains that network through an objective function. The basic idea of policy gradient is as follows: the policy is a probability distribution over actions in each state, expressed in terms of the policy parameters. In the classical policy gradient algorithm, data is sampled according to this policy and the network weights are updated in the direction that maximizes the accumulated reward. This paper, however, considers the "deterministic policy gradient". The basic approach of updating the network weights in the direction that maximizes the accumulated reward stays the same. The proposed algorithm is model-free, and the deterministic policy gradient can be expressed through the gradient of the action-value function. The paper shows that the deterministic policy gradient is the limiting case of the stochastic policy gradient as the policy variance goes to zero.

There is an important difference between the stochastic and the deterministic policy gradient. Since the stochastic policy gradient is a difficult concept, I studied material from two sites and attached it here; you can find the details at the sites listed in the references below.

In stochastic policy gradient, actions are drawn from a distribution parameterized by your policy. For example, your robot's motor torque might be drawn from a Normal distribution with mean μ and deviation σ, where your policy predicts μ and σ. When you draw from this distribution and evaluate your policy, you can move your mean closer to samples that led to higher reward and farther from samples that led to lower reward, and reduce your deviation as you become more confident. When you reduce the variance to 0, you get a deterministic policy. In deterministic policy gradient, we directly take the gradient of μ. In the stochastic case, the policy gradient integrates over both state and action spaces, whereas in the deterministic case it only integrates over the state space. As a result, computing the stochastic policy gradient may require more samples, especially if the action space has many dimensions.

The advantages and disadvantages of policy gradient are as follows. Compared to earlier methods it converges better, and it is effective when there are many possible actions (a high-dimensional action space) or when the actions themselves are continuous; in other words, it is well suited to real robot control. In addition, the earlier methods necessarily converge to a single optimal action, whereas policy gradient can learn a stochastic policy (rock-paper-scissors, for example). On the other hand, it can get stuck in a local optimum, and evaluating a policy is inefficient and has high variance.

In value-based RL, the policy is computed from the value function, so even a small change in the value function can change the policy drastically, for example from going left to going right. Such behavior adds instability to the convergence of the whole algorithm. If the policy itself is parameterized as a function, however, it changes only gradually as the value function changes during learning, so convergence is stable and smooth. As mentioned above, a stochastic policy can sometimes be the optimal policy: in rock-paper-scissors, playing rock, paper, and scissors each with probability 1/3 is optimal. Likewise, in a partially observed MDP (where the state can only be observed through features), a stochastic policy can be the optimal policy.

Policy gradient defines an objective function, and there are three common choices: the start-state value, the average value, and the average reward per time step. In games the agent usually starts from the same state, so the value function of that start state is what reinforcement learning tries to maximize. The second choice is rarely used, and the third uses the expectation of the per-time-step reward weighted by the fraction of time spent in each state (the stationary distribution). The goal of policy gradient is to find the policy parameter vector theta that maximizes this objective function. How do we find it? By gradient ascent on the objective (often written as gradient descent on its negative), which is why the method is called policy gradient.

Because learning requires exploring the full state and action spaces, this paper uses an off-policy learning algorithm. The basic idea is to select actions according to a stochastic behavior policy while learning a deterministic target policy. Accordingly, a differentiable function approximator (e.g., a DNN) is used to estimate the action-value function, and the deterministic policy gradient algorithm is derived through an off-policy actor-critic algorithm.
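To make the contrast concrete, the following is a minimal sketch (not the repository's code) of the two gradient estimators, assuming PyTorch; the linear policy, the toy reward, and the stand-in critic are illustrative assumptions only.

import torch

state = torch.randn(8)                          # toy state feature vector

# Stochastic policy gradient (REINFORCE-style score function)
theta = torch.randn(8, 2, requires_grad=True)   # linear policy head predicting mean and log-std
mu, log_std = state @ theta
dist = torch.distributions.Normal(mu, log_std.exp())
action = dist.sample()                          # a ~ pi(.|s); sampling happens outside the graph
reward = -(action - 1.0) ** 2                   # toy reward, treated as a constant
surrogate = -dist.log_prob(action) * reward.detach()
surrogate.backward()                            # gradient flows through log pi(a|s), not through the sample

# Deterministic policy gradient (DPG-style)
w = torch.randn(8, requires_grad=True)          # deterministic actor a = mu_w(s)
a = state @ w
q_value = -(a - 1.0) ** 2                       # stand-in for a differentiable critic Q(s, a)
(-q_value).backward()                           # chain rule dQ/da * da/dw; no integral over actions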

            kandi-support Support

              -Review--Deterministic-Policy-Gradient-Algorithm has a low active ecosystem.
              It has 4 star(s) with 0 fork(s). There are no watchers for this library.
              It had no major release in the last 6 months.
              There are 17 open issues and 0 have been closed. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of -Review--Deterministic-Policy-Gradient-Algorithm is current.

            kandi-Quality Quality

              -Review--Deterministic-Policy-Gradient-Algorithm has no bugs reported.

            kandi-Security Security

              -Review--Deterministic-Policy-Gradient-Algorithm has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              -Review--Deterministic-Policy-Gradient-Algorithm does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              -Review--Deterministic-Policy-Gradient-Algorithm releases are not available. You will need to build from source code and install.
-Review--Deterministic-Policy-Gradient-Algorithm has no build file. You will need to create the build yourself to build the component from source.

            Top functions reviewed by kandi - BETA

            kandi has reviewed -Review--Deterministic-Policy-Gradient-Algorithm and discovered the below as its top functions. This is intended to give you an instant insight into -Review--Deterministic-Policy-Gradient-Algorithm implemented functionality, and help decide if they suit your requirements.
            • Move the rectangle
            • Move rewards to the reward
            • Moves the const
            • Moves the canvas
            • Returns the state of the agent
            • Checks if the reward for a given reward
            • Run the optimizer
            • Compute discounted rewards
            • Render the widget
            • Convert coords to coordinates
            • Resets the game
            • Reset rewards
            • Set reward
            • Builds the model
            • Create a bias variable
            • Create a weight variable
            • Update the optimizer
            • Append a sample
            • Get a single action

            -Review--Deterministic-Policy-Gradient-Algorithm Key Features

            No Key Features are available at this moment for -Review--Deterministic-Policy-Gradient-Algorithm.

            -Review--Deterministic-Policy-Gradient-Algorithm Examples and Code Snippets

            No Code Snippets are available at this moment for -Review--Deterministic-Policy-Gradient-Algorithm.

            Community Discussions

            QUESTION

            Keras: AttributeError: 'Adam' object has no attribute '_name'
            Asked 2022-Apr-16 at 15:05

I want to compile my DQN Agent but I get the error: AttributeError: 'Adam' object has no attribute '_name'.

            ...

            ANSWER

            Answered 2022-Apr-16 at 15:05

Your error comes from importing Adam with from keras.optimizer_v1 import Adam. You can solve the problem by using tf.keras.optimizers.Adam from TensorFlow >= 2, as below:

(The lr argument is deprecated; it's better to use learning_rate instead.)
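A minimal sketch of the suggested fix, assuming TensorFlow 2.x is installed; the toy model here is a placeholder, not the asker's DQN.

import tensorflow as tf

# Placeholder network; the point is only how the optimizer is created and passed in.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)  # learning_rate, not the deprecated lr
model.compile(optimizer=optimizer, loss="mse")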

            Source https://stackoverflow.com/questions/71894769

            QUESTION

            What are vectorized environments in reinforcement learning?
            Asked 2022-Mar-25 at 10:37

I'm having a hard time wrapping my head around what vectorized environments are and when they should be used. If you can provide an example of a use case, that would be great.

            Documentation of vectorized environments in SB3: https://stable-baselines3.readthedocs.io/en/master/guide/vec_envs.html

            ...

            ANSWER

            Answered 2022-Mar-25 at 10:37

Vectorized environments are a method for stacking multiple independent environments into a single environment. Instead of executing and training an agent on one environment per step, it allows the agent to be trained on multiple environments per step.

Usually you also want these environments to have different seeds, in order to gain more diverse experience. This is very useful for speeding up training.

I think they are called "vectorized" because at each training step the agent observes multiple states inserted in a vector, outputs multiple actions (one for each environment) which are inserted in a vector, and receives multiple rewards; hence the term "vectorized".
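For illustration, a short Stable-Baselines3 sketch (assuming stable-baselines3 and a Gym environment such as CartPole-v1 are installed); the algorithm and timestep count are arbitrary choices.

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Four CartPole copies stepped in lockstep; reset()/step() return batches of 4
# observations, rewards, and done flags.
vec_env = make_vec_env("CartPole-v1", n_envs=4, seed=0)

model = PPO("MlpPolicy", vec_env, verbose=0)
model.learn(total_timesteps=10_000)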

            Source https://stackoverflow.com/questions/71549439

            QUESTION

            How does a gradient backpropagates through random samples?
            Asked 2022-Mar-25 at 03:06

I'm learning about policy gradients and I'm having a hard time understanding how the gradient passes through a random operation. From here: It is not possible to directly backpropagate through random samples. However, there are two main methods for creating surrogate functions that can be backpropagated through.

            They have an example of the score function:

            ...

            ANSWER

            Answered 2021-Nov-30 at 05:48

            It is indeed true that sampling is not a differentiable operation per se. However, there exist two (broad) ways to mitigate this - [1] The REINFORCE way and [2] The reparameterization way. Since your example is related to [1], I will stick my answer to REINFORCE.

What REINFORCE does is entirely remove the sampling operation from the computation graph; the sampling still happens, but outside the graph. So, your statement

.. how the gradient passes through a random operation ..

isn't quite right: the gradient does not pass through any random operation. Let's look at your example.
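As a compact illustration of the score-function surrogate described above (toy numbers, not the question's code), assuming PyTorch:

import torch
from torch.distributions import Normal

mu = torch.tensor(0.0, requires_grad=True)   # learnable mean of the action distribution
dist = Normal(mu, 1.0)
action = dist.sample()                        # the sample itself carries no gradient
reward = -(action - 2.0).pow(2).detach()      # environment feedback, a constant w.r.t. mu

surrogate = -dist.log_prob(action) * reward   # REINFORCE surrogate loss
surrogate.backward()                          # gradient flows through log_prob only
print(mu.grad)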

            Source https://stackoverflow.com/questions/70163823

            QUESTION

            Relationship of Horizon and Discount factor in Reinforcement Learning
            Asked 2022-Mar-13 at 17:50

What is the connection between the discount factor gamma and the horizon in RL?

What I have learned so far is that the horizon is the agent's time to live. Intuitively, an agent with a finite horizon will choose actions differently than if it has to live forever. In the latter case, the agent will try to maximize all the expected rewards it may get far in the future.

But the idea of the discount factor seems to be the same. Do values of gamma near zero make the horizon finite?

            ...

            ANSWER

            Answered 2022-Mar-13 at 17:50

            Horizon refers to how many steps into the future the agent cares about the reward it can receive, which is a little different from the agent's time to live. In general, you could potentially define any arbitrary horizon you want as the objective. You could define a 10 step horizon, in which the agent makes a decision that will enable it to maximize the reward it will receive in the next 10 time steps. Or we could choose a 100, or 1000, or n step horizon!

Usually, the n-step horizon is defined using n = 1 / (1 - gamma). Therefore, a 10-step horizon is achieved with gamma = 0.9, while a 100-step horizon is achieved with gamma = 0.99.

Therefore, any value of gamma less than 1 implies that the horizon is finite.
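A quick check of that rule of thumb in Python:

# Effective horizon n ≈ 1 / (1 - gamma): roughly how many future steps carry non-negligible weight.
for gamma in (0.9, 0.99, 0.999):
    print(f"gamma = {gamma}: effective horizon ≈ {1.0 / (1.0 - gamma):.0f} steps")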

            Source https://stackoverflow.com/questions/71459191

            QUESTION

            OpenAI-Gym and Keras-RL: DQN expects a model that has one dimension for each action
            Asked 2022-Mar-02 at 10:55

            I am trying to set a Deep-Q-Learning agent with a custom environment in OpenAI Gym. I have 4 continuous state variables with individual limits and 3 integer action variables with individual limits.

            Here is the code:

            ...

            ANSWER

            Answered 2021-Dec-23 at 11:19

As we talked about in the comments, it seems that the Keras-rl library is no longer supported (the last update in the repository was in 2019), so it's possible that everything is inside Keras now. I took a look at the Keras documentation and there are no high-level functions to build a reinforcement learning model, but it is possible to do this with lower-level functions.

            • Here is an example of how to use Deep Q-Learning with Keras: link

Another solution may be to downgrade to TensorFlow 1.x, as the compatibility problem seems to come from changes in version 2.0. I didn't test it, but Keras-rl with TensorFlow 1.x may work.

There is also a branch of Keras-rl that supports TensorFlow 2.0; the repository is archived, but there is a chance it will work for you.

            Source https://stackoverflow.com/questions/70261352

            QUESTION

            gym package not identifying ten-armed-bandits-v0 env
            Asked 2022-Feb-08 at 08:01

            Environment:

            • Python: 3.9
            • OS: Windows 10

When I try to create the ten-armed bandits environment using the following code, an error is thrown and I'm not sure of the reason.

            ...

            ANSWER

            Answered 2022-Feb-08 at 08:01

It could be a problem with your Python version: the k-armed-bandits library was made 4 years ago, when Python 3.9 didn't exist. Besides this, the configuration files in the repo indicate that the Python version is 2.7 (not 3.9).

If you create an environment with Python 2.7 and follow the setup instructions, it works correctly on Windows:

            Source https://stackoverflow.com/questions/70858340

            QUESTION

            ValueError: Input 0 of layer "max_pooling2d" is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: (None, 3, 51, 39, 32)
            Asked 2022-Feb-01 at 07:31

I have two different problems occurring at the same time.

I am having a dimensionality problem with MaxPooling2D and the same dimensionality problem with the DQNAgent.

The thing is, I can fix them separately but not at the same time.

            First Problem

            I am trying to build a CNN network with several layers. After I build my model, when I try to run it, it gives me an error.

            ...

            ANSWER

            Answered 2022-Feb-01 at 07:31

The issue is with input_shape: use input_shape=input_shape[1:].

            Working sample code
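Since the original sample code is not reproduced above, here is a hedged sketch of the idea with placeholder layer sizes (not the asker's network): the model's input_shape must drop the leading window axis, hence input_shape[1:].

import tensorflow as tf

obs_shape = (3, 51, 39, 1)   # (window_length, height, width, channels) as keras-rl stacks them

model = tf.keras.Sequential([
    # Passing obs_shape[1:] keeps the input at ndim=4 (batch, H, W, C), which MaxPooling2D expects.
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=obs_shape[1:]),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3),
])
model.summary()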

            Source https://stackoverflow.com/questions/70808035

            QUESTION

            Stablebaselines3 logging reward with custom gym
            Asked 2021-Dec-25 at 01:10

I have this custom callback to log the reward in my custom vectorized environment, but the reward always appears in the console as [0] and is not logged to TensorBoard at all.

            ...

            ANSWER

            Answered 2021-Dec-25 at 01:10

You need to add [0] as an index:

where you wrote self.logger.record('reward', self.training_env.get_attr('total_reward')), you just need to index the result with self.logger.record('reward', self.training_env.get_attr('total_reward')[0]).
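A hedged sketch of such a callback, assuming the custom environment exposes a total_reward attribute as in the question:

from stable_baselines3.common.callbacks import BaseCallback

class RewardLoggingCallback(BaseCallback):
    def _on_step(self) -> bool:
        # get_attr returns one value per sub-environment; [0] picks the first one.
        total_reward = self.training_env.get_attr("total_reward")[0]
        self.logger.record("reward", total_reward)
        return True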

            Source https://stackoverflow.com/questions/70468394

            QUESTION

            What is the purpose of [np.arange(0, self.batch_size), action] after the neural network?
            Asked 2021-Dec-23 at 11:07

            I followed a PyTorch tutorial to learn reinforcement learning(TRAIN A MARIO-PLAYING RL AGENT) but I am confused about the following code:

            ...

            ANSWER

            Answered 2021-Dec-23 at 11:07

            Essentially, what happens here is that the output of the net is being sliced to get the desired part of the Q table.

The (somewhat confusing) index [np.arange(0, self.batch_size), action] indexes each axis. For the axis with index 1, we pick the item indicated by action; for the axis with index 0, we pick all items between 0 and self.batch_size.

            If self.batch_size is the same as the length of dimension 0 of this array, then this slice can be simplified to [:, action] which is probably more familiar to most users.
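A small numeric illustration of that indexing pattern (toy values, not the tutorial's output):

import numpy as np

batch_size = 3
q_values = np.array([[1.0, 2.0],
                     [3.0, 4.0],
                     [5.0, 6.0]])        # shape (batch_size, n_actions)
action = np.array([1, 0, 1])             # action taken in each transition

# Picks q_values[0, 1], q_values[1, 0], q_values[2, 1]
print(q_values[np.arange(0, batch_size), action])   # -> [2. 3. 6.]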

            Source https://stackoverflow.com/questions/70458347

            QUESTION

            DQN predicts same action value for every state (cart pole)
            Asked 2021-Dec-22 at 15:55

            I'm trying to implement a DQN. As a warm up I want to solve CartPole-v0 with a MLP consisting of two hidden layers along with input and output layers. The input is a 4 element array [cart position, cart velocity, pole angle, pole angular velocity] and output is an action value for each action (left or right). I am not exactly implementing a DQN from the "Playing Atari with DRL" paper (no frame stacking for inputs etc). I also made a few non standard choices like putting done and the target network prediction of action value in the experience replay, but those choices shouldn't affect learning.

In any case I'm having a lot of trouble getting the thing to work. No matter how long I train the agent, it keeps predicting a higher value for one action over another, for example Q(s, Right) > Q(s, Left) for all states s. Below are my learning code, my network definition, and some results I get from training.

            ...

            ANSWER

            Answered 2021-Dec-19 at 16:09

There was nothing wrong with the network definition. It turns out the learning rate was too high, and reducing it to 0.00025 (as in the original Nature paper introducing the DQN) led to an agent that can solve CartPole-v0.

            That said, the learning algorithm was incorrect. In particular I was using the wrong target action-value predictions. Note the algorithm laid out above does not use the most recent version of the target network to make predictions. This leads to poor results as training progresses because the agent is learning based on stale target data. The way to fix this is to just put (s, a, r, s', done) into the replay memory and then make target predictions using the most up to date version of the target network when sampling a mini batch. See the code below for an updated learning loop.
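Since the answer's updated loop is not reproduced above, here is a hedged PyTorch-style sketch of the corrected target computation; policy_net, target_net, replay, and optimizer are assumed names, not the asker's code.

import torch
import torch.nn.functional as F

def learn_step(policy_net, target_net, replay, optimizer, gamma=0.99, batch_size=32):
    # Store raw transitions (s, a, r, s', done) and compute targets at sample time,
    # so they always use the current target network rather than stale values.
    s, a, r, s_next, done = replay.sample(batch_size)
    q_sa = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * q_next * (1.0 - done)
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()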

            Source https://stackoverflow.com/questions/70382999

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install -Review--Deterministic-Policy-Gradient-Algorithm

            You can download it from GitHub.
            You can use -Review--Deterministic-Policy-Gradient-Algorithm like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/170928/-Review--Deterministic-Policy-Gradient-Algorithm.git

          • CLI

            gh repo clone 170928/-Review--Deterministic-Policy-Gradient-Algorithm

          • sshUrl

            git@github.com:170928/-Review--Deterministic-Policy-Gradient-Algorithm.git
