Explore all Reinforcement Learning open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Reinforcement Learning

gym: 0.23.1

AirSim: v1.7.0 - Linux

ml-agents: ML-Agents Release 19

pwnagotchi

Practical_RL: Spring 2020


Top Authors in Reinforcement Learning

Rank    Libraries      Stars
1       27 Libraries   10258
2       21 Libraries   43203
3       13 Libraries   14902
4       12 Libraries   14357
5       12 Libraries   1103
6       9 Libraries    752
7       9 Libraries    1942
8       8 Libraries    1445
9       7 Libraries    2478
10      7 Libraries    135

Trending Kits in Reinforcement Learning

No Trending Kits are available at this moment for Reinforcement Learning

Trending Discussions on Reinforcement Learning

    tensorboard not showing results using ray rllib
    Why does my model not learn? Very high loss
    Action masking for continuous action space in reinforcement learning
    Using BatchedPyEnvironment in tf_agents
    Keras GradientType: Calculating gradients with respect to the output node
    RuntimeError: Found dtype Double but expected Float - PyTorch
    What is the purpose of [np.arange(0, self.batch_size), action] after the neural network?
    Weird-looking curve in DRL
    keras-rl model with multiple outputs
    no method matching logpdf when sampling from uniform distribution

QUESTION

tensorboard not showing results using ray rllib

Asked 2022-Mar-28 at 09:14

I am training a reinforcement learning model on Google Colab using Tune and RLlib. At first I was able to show the training results using TensorBoard, but it is no longer working and I can't seem to find where the problem comes from; I didn't change anything, so I feel a bit lost here.

What it shows (the directory is the right one) :

My current directory :

The training phase:

import ray
from ray import tune

ray.init(ignore_reinit_error=True)

tune.run("PPO",
         config={"env": CustomEnv2,
                 # "evaluation_interval": 2,
                 # "evaluation_num_episodes": 2,
                 "num_workers": 1},
         num_samples=1,
         # checkpoint_at_end=True,
         stop={"training_iteration": 10},
         local_dir='./test1')

Plotting results:

import ray
from ray import tune

ray.init(ignore_reinit_error=True)

tune.run("PPO",
         config={"env": CustomEnv2,
                 # "evaluation_interval": 2,
                 # "evaluation_num_episodes": 2,
                 "num_workers": 1},
         num_samples=1,
         # checkpoint_at_end=True,
         stop={"training_iteration": 10},
         local_dir='./test1')

# %load_ext must run before the %tensorboard magic
%load_ext tensorboard
%tensorboard --logdir='/content/test1/PPO/PPO_CustomEnv2_024da_00000_0_2022-03-23_09-02-47'
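Since Tune appends a timestamp to each trial directory, hard-coding the path in --logdir breaks on every new run. A small hypothetical helper (not part of the original post; the directory layout it assumes is the `local_dir/experiment/trial` structure Tune used above) can locate the most recent trial instead:

```python
import glob
import os

# Hypothetical helper: find the most recently modified trial directory
# that Tune created under local_dir, so the timestamped name
# (e.g. PPO_CustomEnv2_..._2022-03-23_09-02-47) need not be copied by hand.
def latest_trial_dir(local_dir="./test1", experiment="PPO"):
    pattern = os.path.join(local_dir, experiment, experiment + "_*")
    trials = [t for t in glob.glob(pattern) if os.path.isdir(t)]
    return max(trials, key=os.path.getmtime) if trials else None
```

The returned path can then be passed to the %tensorboard magic instead of a hard-coded string.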

ANSWER

Answered 2022-Mar-25 at 02:06

You are using RLlib, right? I actually don't see a TensorBoard event file (i.e. events.out.tfevents.xxx.xxx) in your path. Maybe you should check whether that file exists first.
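The check the answer suggests can be sketched as a recursive search under the Tune results directory (the `./test1` path is the local_dir from the question; the function name is illustrative). If the list comes back empty, TensorBoard has nothing to display for that --logdir:

```python
import glob
import os

# Sketch: search recursively for TensorBoard event files under the
# Tune results directory. An empty result explains a blank TensorBoard.
def find_event_files(logdir="./test1"):
    pattern = os.path.join(logdir, "**", "events.out.tfevents.*")
    return sorted(glob.glob(pattern, recursive=True))
```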

Source https://stackoverflow.com/questions/71584763