Commit 08a65d2 (verified) by alidenewade
Parent(s): efaf8a7

Upload folder using huggingface_hub
.summary/0/events.out.tfevents.1730984134.ali ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63b94cdef6e415ddbb0526608b0aeec84bab1ed76f502a53dc2253bb836dc4ad
+ size 3066
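The three added lines are a Git LFS pointer: the actual artifact lives in LFS storage, and the checked-in file records only the spec version, the object's sha256 oid, and its byte size. A minimal sketch of how a downloaded payload could be checked against such a pointer (the helper names here are illustrative, not part of git-lfs):

```python
import hashlib
import re

def parse_lfs_pointer(text: str):
    # Pull the sha256 oid and byte size out of a pointer file body.
    oid = re.search(r"oid sha256:([0-9a-f]{64})", text).group(1)
    size = int(re.search(r"size (\d+)", text).group(1))
    return oid, size

def verify_lfs_object(pointer_text: str, payload: bytes) -> bool:
    # The payload matches the pointer iff both the length and the hash agree.
    oid, size = parse_lfs_pointer(pointer_text)
    return len(payload) == size and hashlib.sha256(payload).hexdigest() == oid
```

Checking both fields is deliberate: the size comparison rejects truncated downloads cheaply before hashing.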
README.md CHANGED
@@ -15,7 +15,7 @@ model-index:
   type: doom_health_gathering_supreme
   metrics:
   - type: mean_reward
- value: 4.00 +/- 0.58
+ value: 4.09 +/- 0.28
  name: mean_reward
  verified: false
  ---
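The model-card metric changes from 4.00 +/- 0.58 to 4.09 +/- 0.28: the mean episode reward over the evaluation run, with its spread. Assuming the second number is a standard deviation over evaluation episodes, the string could be produced like this (`summarize_rewards` is a hypothetical helper, not part of the evaluation script):

```python
import statistics

def summarize_rewards(rewards: list[float]) -> str:
    # Mean episode reward +/- population standard deviation,
    # formatted like the model-card metric string.
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return f"{mean:.2f} +/- {std:.2f}"
```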
checkpoint_p0/checkpoint_000001957_8015872.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:297499a87cb551ed79e0a23e15ebfe6c9cce1d619daba0f8cf9ee95af2bd0a36
+ size 34929669
replay.mp4 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e97dd79cc84566043bc7f37148d9b9076a10c3e97a1ca07a0c143811c8fd34b0
- size 5723967
+ oid sha256:4dadc597826f82f8cc03bfea38fe89b15e84fe47b1a75b3a86e6c815b8edfa9f
+ size 6270807
sf_log.txt CHANGED
@@ -5888,3 +5888,569 @@ main_loop: 558.4363
  [2024-11-07 14:52:32,278][04584] Avg episode reward: 4.498, avg true_objective: 3.998
  [2024-11-07 14:52:32,283][04584] Num frames 4000...
  [2024-11-07 14:52:40,832][04584] Replay video saved to /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/replay.mp4!
+ [2024-11-07 14:52:50,207][04584] The model has been pushed to https://huggingface.co/alidenewade/rl_course_vizdoom_health_gathering_supreme
+ [2024-11-07 14:55:34,046][04584] Environment doom_basic already registered, overwriting...
+ [2024-11-07 14:55:34,050][04584] Environment doom_two_colors_easy already registered, overwriting...
+ [2024-11-07 14:55:34,053][04584] Environment doom_two_colors_hard already registered, overwriting...
+ [2024-11-07 14:55:34,054][04584] Environment doom_dm already registered, overwriting...
+ [2024-11-07 14:55:34,055][04584] Environment doom_dwango5 already registered, overwriting...
+ [2024-11-07 14:55:34,057][04584] Environment doom_my_way_home_flat_actions already registered, overwriting...
+ [2024-11-07 14:55:34,058][04584] Environment doom_defend_the_center_flat_actions already registered, overwriting...
+ [2024-11-07 14:55:34,059][04584] Environment doom_my_way_home already registered, overwriting...
+ [2024-11-07 14:55:34,060][04584] Environment doom_deadly_corridor already registered, overwriting...
+ [2024-11-07 14:55:34,063][04584] Environment doom_defend_the_center already registered, overwriting...
+ [2024-11-07 14:55:34,065][04584] Environment doom_defend_the_line already registered, overwriting...
+ [2024-11-07 14:55:34,067][04584] Environment doom_health_gathering already registered, overwriting...
+ [2024-11-07 14:55:34,069][04584] Environment doom_health_gathering_supreme already registered, overwriting...
+ [2024-11-07 14:55:34,070][04584] Environment doom_battle already registered, overwriting...
+ [2024-11-07 14:55:34,072][04584] Environment doom_battle2 already registered, overwriting...
+ [2024-11-07 14:55:34,073][04584] Environment doom_duel_bots already registered, overwriting...
+ [2024-11-07 14:55:34,075][04584] Environment doom_deathmatch_bots already registered, overwriting...
+ [2024-11-07 14:55:34,075][04584] Environment doom_duel already registered, overwriting...
+ [2024-11-07 14:55:34,078][04584] Environment doom_deathmatch_full already registered, overwriting...
+ [2024-11-07 14:55:34,079][04584] Environment doom_benchmark already registered, overwriting...
+ [2024-11-07 14:55:34,081][04584] register_encoder_factory: <function make_vizdoom_encoder at 0x7f3d19f46950>
+ [2024-11-07 14:55:34,099][04584] Loading existing experiment configuration from /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/config.json
+ [2024-11-07 14:55:34,107][04584] Experiment dir /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment already exists!
+ [2024-11-07 14:55:34,109][04584] Resuming existing experiment from /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment...
+ [2024-11-07 14:55:34,110][04584] Weights and Biases integration disabled
+ [2024-11-07 14:55:34,116][04584] Environment var CUDA_VISIBLE_DEVICES is 0
+
+ [2024-11-07 14:55:36,623][04584] Starting experiment with the following configuration:
+ help=False
+ algo=APPO
+ env=doom_health_gathering_supreme
+ experiment=default_experiment
+ train_dir=/root/hfRL/ml/LunarLander-v2/train_dir
+ restart_behavior=resume
+ device=gpu
+ seed=None
+ num_policies=1
+ async_rl=True
+ serial_mode=False
+ batched_sampling=False
+ num_batches_to_accumulate=2
+ worker_num_splits=2
+ policy_workers_per_policy=1
+ max_policy_lag=1000
+ num_workers=8
+ num_envs_per_worker=4
+ batch_size=1024
+ num_batches_per_epoch=1
+ num_epochs=1
+ rollout=32
+ recurrence=32
+ shuffle_minibatches=False
+ gamma=0.99
+ reward_scale=1.0
+ reward_clip=1000.0
+ value_bootstrap=False
+ normalize_returns=True
+ exploration_loss_coeff=0.001
+ value_loss_coeff=0.5
+ kl_loss_coeff=0.0
+ exploration_loss=symmetric_kl
+ gae_lambda=0.95
+ ppo_clip_ratio=0.1
+ ppo_clip_value=0.2
+ with_vtrace=False
+ vtrace_rho=1.0
+ vtrace_c=1.0
+ optimizer=adam
+ adam_eps=1e-06
+ adam_beta1=0.9
+ adam_beta2=0.999
+ max_grad_norm=4.0
+ learning_rate=0.0001
+ lr_schedule=constant
+ lr_schedule_kl_threshold=0.008
+ lr_adaptive_min=1e-06
+ lr_adaptive_max=0.01
+ obs_subtract_mean=0.0
+ obs_scale=255.0
+ normalize_input=True
+ normalize_input_keys=None
+ decorrelate_experience_max_seconds=0
+ decorrelate_envs_on_one_worker=True
+ actor_worker_gpus=[]
+ set_workers_cpu_affinity=True
+ force_envs_single_thread=False
+ default_niceness=0
+ log_to_file=True
+ experiment_summaries_interval=10
+ flush_summaries_interval=30
+ stats_avg=100
+ summaries_use_frameskip=True
+ heartbeat_interval=20
+ heartbeat_reporting_interval=600
+ train_for_env_steps=8000000
+ train_for_seconds=10000000000
+ save_every_sec=120
+ keep_checkpoints=2
+ load_checkpoint_kind=latest
+ save_milestones_sec=-1
+ save_best_every_sec=5
+ save_best_metric=reward
+ save_best_after=100000
+ benchmark=False
+ encoder_mlp_layers=[512, 512]
+ encoder_conv_architecture=convnet_simple
+ encoder_conv_mlp_layers=[512]
+ use_rnn=True
+ rnn_size=512
+ rnn_type=gru
+ rnn_num_layers=1
+ decoder_mlp_layers=[]
+ nonlinearity=elu
+ policy_initialization=orthogonal
+ policy_init_gain=1.0
+ actor_critic_share_weights=True
+ adaptive_stddev=True
+ continuous_tanh_scale=0.0
+ initial_stddev=1.0
+ use_env_info_cache=False
+ env_gpu_actions=False
+ env_gpu_observations=True
+ env_frameskip=4
+ env_framestack=1
+ pixel_format=CHW
+ use_record_episode_statistics=False
+ with_wandb=False
+ wandb_user=None
+ wandb_project=sample_factory
+ wandb_group=None
+ wandb_job_type=SF
+ wandb_tags=[]
+ with_pbt=False
+ pbt_mix_policies_in_one_env=True
+ pbt_period_env_steps=5000000
+ pbt_start_mutation=20000000
+ pbt_replace_fraction=0.3
+ pbt_mutation_rate=0.15
+ pbt_replace_reward_gap=0.1
+ pbt_replace_reward_gap_absolute=1e-06
+ pbt_optimize_gamma=False
+ pbt_target_objective=true_objective
+ pbt_perturb_min=1.1
+ pbt_perturb_max=1.5
+ num_agents=-1
+ num_humans=0
+ num_bots=-1
+ start_bot_difficulty=None
+ timelimit=None
+ res_w=128
+ res_h=72
+ wide_aspect_ratio=False
+ eval_env_frameskip=1
+ fps=35
+ command_line=--env=doom_health_gathering_supreme --num_workers=8 --num_envs_per_worker=4 --train_for_env_steps=4000000
+ cli_args={'env': 'doom_health_gathering_supreme', 'num_workers': 8, 'num_envs_per_worker': 4, 'train_for_env_steps': 4000000}
+ git_hash=unknown
+ git_repo_name=not a git repository
+ [2024-11-07 14:55:36,625][04584] Saving configuration to /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/config.json...
+ [2024-11-07 14:55:36,628][04584] Rollout worker 0 uses device cpu
+ [2024-11-07 14:55:36,629][04584] Rollout worker 1 uses device cpu
+ [2024-11-07 14:55:36,631][04584] Rollout worker 2 uses device cpu
+ [2024-11-07 14:55:36,633][04584] Rollout worker 3 uses device cpu
+ [2024-11-07 14:55:36,635][04584] Rollout worker 4 uses device cpu
+ [2024-11-07 14:55:36,637][04584] Rollout worker 5 uses device cpu
+ [2024-11-07 14:55:36,639][04584] Rollout worker 6 uses device cpu
+ [2024-11-07 14:55:36,641][04584] Rollout worker 7 uses device cpu
+ [2024-11-07 14:55:36,708][04584] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-11-07 14:55:36,710][04584] InferenceWorker_p0-w0: min num requests: 2
+ [2024-11-07 14:55:36,746][04584] Starting all processes...
+ [2024-11-07 14:55:36,748][04584] Starting process learner_proc0
+ [2024-11-07 14:55:36,796][04584] Starting all processes...
+ [2024-11-07 14:55:36,802][04584] Starting process inference_proc0-0
+ [2024-11-07 14:55:36,803][04584] Starting process rollout_proc0
+ [2024-11-07 14:55:36,803][04584] Starting process rollout_proc1
+ [2024-11-07 14:55:36,803][04584] Starting process rollout_proc2
+ [2024-11-07 14:55:36,804][04584] Starting process rollout_proc3
+ [2024-11-07 14:55:36,805][04584] Starting process rollout_proc4
+ [2024-11-07 14:55:36,806][04584] Starting process rollout_proc5
+ [2024-11-07 14:55:36,807][04584] Starting process rollout_proc6
+ [2024-11-07 14:55:36,808][04584] Starting process rollout_proc7
+ [2024-11-07 14:55:43,371][07866] Worker 0 uses CPU cores [0]
+ [2024-11-07 14:55:44,070][07873] Worker 1 uses CPU cores [1]
+ [2024-11-07 14:55:44,330][07871] Worker 3 uses CPU cores [3]
+ [2024-11-07 14:55:44,346][07852] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-11-07 14:55:44,347][07852] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
+ [2024-11-07 14:55:44,350][07865] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-11-07 14:55:44,351][07865] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
+ [2024-11-07 14:55:44,383][07852] Num visible devices: 1
+ [2024-11-07 14:55:44,418][07865] Num visible devices: 1
+ [2024-11-07 14:55:44,418][07852] Starting seed is not provided
+ [2024-11-07 14:55:44,418][07852] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-11-07 14:55:44,418][07852] Initializing actor-critic model on device cuda:0
+ [2024-11-07 14:55:44,419][07852] RunningMeanStd input shape: (3, 72, 128)
+ [2024-11-07 14:55:44,422][07852] RunningMeanStd input shape: (1,)
+ [2024-11-07 14:55:44,488][07852] ConvEncoder: input_channels=3
+ [2024-11-07 14:55:44,600][07874] Worker 5 uses CPU cores [5]
+ [2024-11-07 14:55:44,619][07885] Worker 7 uses CPU cores [0, 1, 2, 3, 4, 5, 6]
+ [2024-11-07 14:55:44,664][07884] Worker 6 uses CPU cores [6]
+ [2024-11-07 14:55:44,740][07870] Worker 2 uses CPU cores [2]
+ [2024-11-07 14:55:44,747][07852] Conv encoder output size: 512
+ [2024-11-07 14:55:44,747][07852] Policy head output size: 512
+ [2024-11-07 14:55:44,767][07852] Created Actor Critic model with architecture:
+ [2024-11-07 14:55:44,768][07852] ActorCriticSharedWeights(
+ (obs_normalizer): ObservationNormalizer(
+ (running_mean_std): RunningMeanStdDictInPlace(
+ (running_mean_std): ModuleDict(
+ (obs): RunningMeanStdInPlace()
+ )
+ )
+ )
+ (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
+ (encoder): VizdoomEncoder(
+ (basic_encoder): ConvEncoder(
+ (enc): RecursiveScriptModule(
+ original_name=ConvEncoderImpl
+ (conv_head): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Conv2d)
+ (1): RecursiveScriptModule(original_name=ELU)
+ (2): RecursiveScriptModule(original_name=Conv2d)
+ (3): RecursiveScriptModule(original_name=ELU)
+ (4): RecursiveScriptModule(original_name=Conv2d)
+ (5): RecursiveScriptModule(original_name=ELU)
+ )
+ (mlp_layers): RecursiveScriptModule(
+ original_name=Sequential
+ (0): RecursiveScriptModule(original_name=Linear)
+ (1): RecursiveScriptModule(original_name=ELU)
+ )
+ )
+ )
+ )
+ (core): ModelCoreRNN(
+ (core): GRU(512, 512)
+ )
+ (decoder): MlpDecoder(
+ (mlp): Identity()
+ )
+ (critic_linear): Linear(in_features=512, out_features=1, bias=True)
+ (action_parameterization): ActionParameterizationDefault(
+ (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
+ )
+ )
+ [2024-11-07 14:55:45,005][07852] Using optimizer <class 'torch.optim.adam.Adam'>
+ [2024-11-07 14:55:45,120][07872] Worker 4 uses CPU cores [4]
+ [2024-11-07 14:55:46,269][07852] Loading state from checkpoint /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/checkpoint_p0/checkpoint_000001955_8007680.pth...
+ [2024-11-07 14:55:46,317][07852] Loading model from checkpoint
+ [2024-11-07 14:55:46,320][07852] Loaded experiment state at self.train_step=1955, self.env_steps=8007680
+ [2024-11-07 14:55:46,320][07852] Initialized policy 0 weights for model version 1955
+ [2024-11-07 14:55:46,327][07852] LearnerWorker_p0 finished initialization!
+ [2024-11-07 14:55:46,327][07852] Using GPUs [0] for process 0 (actually maps to GPUs [0])
+ [2024-11-07 14:55:46,561][07865] RunningMeanStd input shape: (3, 72, 128)
+ [2024-11-07 14:55:46,563][07865] RunningMeanStd input shape: (1,)
+ [2024-11-07 14:55:46,580][07865] ConvEncoder: input_channels=3
+ [2024-11-07 14:55:46,746][07865] Conv encoder output size: 512
+ [2024-11-07 14:55:46,747][07865] Policy head output size: 512
+ [2024-11-07 14:55:46,824][04584] Inference worker 0-0 is ready!
+ [2024-11-07 14:55:46,826][04584] All inference workers are ready! Signal rollout workers to start!
+ [2024-11-07 14:55:47,036][07866] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,038][07872] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,069][07874] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,090][07871] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,104][07873] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,104][07870] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,247][07885] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:47,261][07884] Doom resolution: 160x120, resize resolution: (128, 72)
+ [2024-11-07 14:55:48,282][07871] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:48,298][07872] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:48,386][07885] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:48,436][07866] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:48,449][07884] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:48,975][07871] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:48,987][07870] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:49,060][07872] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:49,080][07885] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:49,117][04584] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 8007680. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-11-07 14:55:49,212][07884] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:49,240][07866] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:49,253][07874] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:49,700][07870] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:49,826][07872] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:49,939][07871] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:50,064][07885] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:50,088][07874] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:50,292][07870] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:50,415][07873] Decorrelating experience for 0 frames...
+ [2024-11-07 14:55:50,424][07866] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:50,533][07872] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:50,673][07884] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:50,733][07871] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:50,841][07885] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:50,955][07870] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:50,987][07873] Decorrelating experience for 32 frames...
+ [2024-11-07 14:55:51,133][07866] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:51,244][07874] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:51,659][07873] Decorrelating experience for 64 frames...
+ [2024-11-07 14:55:51,785][07874] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:51,817][07884] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:52,517][07873] Decorrelating experience for 96 frames...
+ [2024-11-07 14:55:54,118][04584] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 8007680. Throughput: 0: 231.1. Samples: 1156. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-11-07 14:55:54,125][07852] Signal inference workers to stop experience collection...
+ [2024-11-07 14:55:54,125][04584] Avg episode reward: [(0, '2.241')]
+ [2024-11-07 14:55:54,133][07865] InferenceWorker_p0-w0: stopping experience collection
+ [2024-11-07 14:55:56,697][04584] Heartbeat connected on Batcher_0
+ [2024-11-07 14:55:56,708][04584] Heartbeat connected on InferenceWorker_p0-w0
+ [2024-11-07 14:55:56,717][04584] Heartbeat connected on RolloutWorker_w0
+ [2024-11-07 14:55:56,721][04584] Heartbeat connected on RolloutWorker_w1
+ [2024-11-07 14:55:56,726][04584] Heartbeat connected on RolloutWorker_w2
+ [2024-11-07 14:55:56,738][04584] Heartbeat connected on RolloutWorker_w5
+ [2024-11-07 14:55:56,742][04584] Heartbeat connected on RolloutWorker_w3
+ [2024-11-07 14:55:56,744][04584] Heartbeat connected on RolloutWorker_w6
+ [2024-11-07 14:55:56,746][04584] Heartbeat connected on RolloutWorker_w4
+ [2024-11-07 14:55:56,749][04584] Heartbeat connected on RolloutWorker_w7
+ [2024-11-07 14:55:59,116][04584] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 8007680. Throughput: 0: 272.4. Samples: 2724. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-11-07 14:55:59,117][04584] Avg episode reward: [(0, '2.386')]
+ [2024-11-07 14:56:04,116][04584] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 8007680. Throughput: 0: 181.6. Samples: 2724. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
+ [2024-11-07 14:56:04,121][04584] Avg episode reward: [(0, '2.386')]
+ [2024-11-07 14:56:06,231][07852] Signal inference workers to resume experience collection...
+ [2024-11-07 14:56:06,233][07865] InferenceWorker_p0-w0: resuming experience collection
+ [2024-11-07 14:56:06,236][07852] Stopping Batcher_0...
+ [2024-11-07 14:56:06,237][07852] Loop batcher_evt_loop terminating...
+ [2024-11-07 14:56:06,247][04584] Component Batcher_0 stopped!
+ [2024-11-07 14:56:06,361][04584] Component RolloutWorker_w0 stopped!
+ [2024-11-07 14:56:06,362][07866] Stopping RolloutWorker_w0...
+ [2024-11-07 14:56:06,365][07866] Loop rollout_proc0_evt_loop terminating...
+ [2024-11-07 14:56:06,366][07874] Stopping RolloutWorker_w5...
+ [2024-11-07 14:56:06,367][07874] Loop rollout_proc5_evt_loop terminating...
+ [2024-11-07 14:56:06,366][04584] Component RolloutWorker_w5 stopped!
+ [2024-11-07 14:56:06,377][07865] Weights refcount: 2 0
+ [2024-11-07 14:56:06,422][07865] Stopping InferenceWorker_p0-w0...
+ [2024-11-07 14:56:06,423][07865] Loop inference_proc0-0_evt_loop terminating...
+ [2024-11-07 14:56:06,422][04584] Component InferenceWorker_p0-w0 stopped!
+ [2024-11-07 14:56:06,461][07884] Stopping RolloutWorker_w6...
+ [2024-11-07 14:56:06,462][07884] Loop rollout_proc6_evt_loop terminating...
+ [2024-11-07 14:56:06,461][04584] Component RolloutWorker_w6 stopped!
+ [2024-11-07 14:56:06,467][07871] Stopping RolloutWorker_w3...
+ [2024-11-07 14:56:06,468][07871] Loop rollout_proc3_evt_loop terminating...
+ [2024-11-07 14:56:06,468][04584] Component RolloutWorker_w3 stopped!
+ [2024-11-07 14:56:06,479][07872] Stopping RolloutWorker_w4...
+ [2024-11-07 14:56:06,479][07872] Loop rollout_proc4_evt_loop terminating...
+ [2024-11-07 14:56:06,479][04584] Component RolloutWorker_w4 stopped!
+ [2024-11-07 14:56:06,516][07885] Stopping RolloutWorker_w7...
+ [2024-11-07 14:56:06,515][04584] Component RolloutWorker_w7 stopped!
+ [2024-11-07 14:56:06,517][07885] Loop rollout_proc7_evt_loop terminating...
+ [2024-11-07 14:56:06,608][07870] Stopping RolloutWorker_w2...
+ [2024-11-07 14:56:06,609][07870] Loop rollout_proc2_evt_loop terminating...
+ [2024-11-07 14:56:06,610][04584] Component RolloutWorker_w2 stopped!
+ [2024-11-07 14:56:06,634][07873] Stopping RolloutWorker_w1...
+ [2024-11-07 14:56:06,635][04584] Component RolloutWorker_w1 stopped!
+ [2024-11-07 14:56:06,640][07873] Loop rollout_proc1_evt_loop terminating...
+ [2024-11-07 14:56:07,253][07852] Saving /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/checkpoint_p0/checkpoint_000001957_8015872.pth...
+ [2024-11-07 14:56:07,252][04584] Heartbeat connected on LearnerWorker_p0
+ [2024-11-07 14:56:07,807][07852] Removing /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/checkpoint_p0/checkpoint_000001806_7397376.pth
+ [2024-11-07 14:56:07,821][07852] Saving /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/checkpoint_p0/checkpoint_000001957_8015872.pth...
+ [2024-11-07 14:56:08,058][07852] Stopping LearnerWorker_p0...
+ [2024-11-07 14:56:08,058][07852] Loop learner_proc0_evt_loop terminating...
+ [2024-11-07 14:56:08,073][04584] Component LearnerWorker_p0 stopped!
+ [2024-11-07 14:56:08,075][04584] Waiting for process learner_proc0 to stop...
+ [2024-11-07 14:56:09,968][04584] Waiting for process inference_proc0-0 to join...
+ [2024-11-07 14:56:09,970][04584] Waiting for process rollout_proc0 to join...
+ [2024-11-07 14:56:09,971][04584] Waiting for process rollout_proc1 to join...
+ [2024-11-07 14:56:09,973][04584] Waiting for process rollout_proc2 to join...
+ [2024-11-07 14:56:09,975][04584] Waiting for process rollout_proc3 to join...
+ [2024-11-07 14:56:09,978][04584] Waiting for process rollout_proc4 to join...
+ [2024-11-07 14:56:09,980][04584] Waiting for process rollout_proc5 to join...
+ [2024-11-07 14:56:09,982][04584] Waiting for process rollout_proc6 to join...
+ [2024-11-07 14:56:09,985][04584] Waiting for process rollout_proc7 to join...
+ [2024-11-07 14:56:09,988][04584] Batcher 0 profile tree view:
+ batching: 0.0548, releasing_batches: 0.0018
+ [2024-11-07 14:56:09,990][04584] InferenceWorker_p0-w0 profile tree view:
+ update_model: 0.0160
+ wait_policy: 0.0005
+ wait_policy_total: 3.7567
+ one_step: 0.0108
+ handle_policy_step: 3.3550
+ deserialize: 0.0684, stack: 0.0110, obs_to_device_normalize: 0.7746, forward: 2.0215, send_messages: 0.1367
+ prepare_outputs: 0.2665
+ to_cpu: 0.1882
+ [2024-11-07 14:56:09,992][04584] Learner 0 profile tree view:
+ misc: 0.0001, prepare_batch: 2.3351
+ train: 11.7240
+ epoch_init: 0.0001, minibatch_init: 0.0000, losses_postprocess: 0.0015, kl_divergence: 0.4367, after_optimizer: 0.9795
+ calculate_losses: 2.5855
+ losses_init: 0.0000, forward_head: 0.4665, bptt_initial: 1.1746, tail: 0.3188, advantages_returns: 0.0022, losses: 0.4083
+ bptt: 0.2146
+ bptt_forward_core: 0.2144
+ update: 7.7171
+ clip: 0.6435
+ [2024-11-07 14:56:09,993][04584] RolloutWorker_w0 profile tree view:
+ wait_for_trajectories: 0.0039, enqueue_policy_requests: 0.0693, env_step: 0.7017, overhead: 0.0387, complete_rollouts: 0.0014
+ save_policy_outputs: 0.0719
+ split_output_tensors: 0.0275
+ [2024-11-07 14:56:09,996][04584] RolloutWorker_w7 profile tree view:
+ wait_for_trajectories: 0.0008, enqueue_policy_requests: 0.0552, env_step: 1.1240, overhead: 0.0403, complete_rollouts: 0.0010
+ save_policy_outputs: 0.0683
+ split_output_tensors: 0.0209
+ [2024-11-07 14:56:10,000][04584] Loop Runner_EvtLoop terminating...
+ [2024-11-07 14:56:10,003][04584] Runner profile tree view:
+ main_loop: 33.2571
+ [2024-11-07 14:56:10,006][04584] Collected {0: 8015872}, FPS: 246.3
+ [2024-11-07 14:56:10,173][04584] Loading existing experiment configuration from /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/config.json
+ [2024-11-07 14:56:10,175][04584] Overriding arg 'num_workers' with value 4 passed from command line
+ [2024-11-07 14:56:10,177][04584] Adding new argument 'no_render'=True that is not in the saved config file!
+ [2024-11-07 14:56:10,179][04584] Adding new argument 'save_video'=True that is not in the saved config file!
+ [2024-11-07 14:56:10,182][04584] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
+ [2024-11-07 14:56:10,184][04584] Adding new argument 'video_name'=None that is not in the saved config file!
+ [2024-11-07 14:56:10,185][04584] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
+ [2024-11-07 14:56:10,187][04584] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
+ [2024-11-07 14:56:10,189][04584] Adding new argument 'push_to_hub'=False that is not in the saved config file!
+ [2024-11-07 14:56:10,191][04584] Adding new argument 'hf_repository'=None that is not in the saved config file!
+ [2024-11-07 14:56:10,193][04584] Adding new argument 'policy_index'=0 that is not in the saved config file!
+ [2024-11-07 14:56:10,194][04584] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
+ [2024-11-07 14:56:10,195][04584] Adding new argument 'train_script'=None that is not in the saved config file!
+ [2024-11-07 14:56:10,197][04584] Adding new argument 'enjoy_script'=None that is not in the saved config file!
+ [2024-11-07 14:56:10,202][04584] Using frameskip 1 and render_action_repeat=4 for evaluation
+ [2024-11-07 14:56:10,254][04584] RunningMeanStd input shape: (3, 72, 128)
+ [2024-11-07 14:56:10,256][04584] RunningMeanStd input shape: (1,)
+ [2024-11-07 14:56:10,293][04584] ConvEncoder: input_channels=3
+ [2024-11-07 14:56:10,355][04584] Conv encoder output size: 512
+ [2024-11-07 14:56:10,357][04584] Policy head output size: 512
+ [2024-11-07 14:56:10,406][04584] Loading state from checkpoint /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/checkpoint_p0/checkpoint_000001957_8015872.pth...
+ [2024-11-07 14:56:13,878][04584] Num frames 100...
+ [2024-11-07 14:56:14,106][04584] Num frames 200...
+ [2024-11-07 14:56:14,341][04584] Num frames 300...
+ [2024-11-07 14:56:14,572][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+ [2024-11-07 14:56:14,574][04584] Avg episode reward: 3.840, avg true_objective: 3.840
+ [2024-11-07 14:56:14,609][04584] Num frames 400...
+ [2024-11-07 14:56:14,867][04584] Num frames 500...
+ [2024-11-07 14:56:15,349][04584] Num frames 600...
+ [2024-11-07 14:56:16,008][04584] Num frames 700...
+ [2024-11-07 14:56:16,669][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+ [2024-11-07 14:56:16,683][04584] Avg episode reward: 3.840, avg true_objective: 3.840
+ [2024-11-07 14:56:16,990][04584] Num frames 800...
+ [2024-11-07 14:56:17,996][04584] Num frames 900...
+ [2024-11-07 14:56:18,675][04584] Num frames 1000...
+ [2024-11-07 14:56:19,397][04584] Num frames 1100...
+ [2024-11-07 14:56:19,814][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+ [2024-11-07 14:56:19,818][04584] Avg episode reward: 3.840, avg true_objective: 3.840
+ [2024-11-07 14:56:20,211][04584] Num frames 1200...
+ [2024-11-07 14:56:20,758][04584] Num frames 1300...
+ [2024-11-07 14:56:21,379][04584] Num frames 1400...
+ [2024-11-07 14:56:21,844][04584] Num frames 1500...
+ [2024-11-07 14:56:22,003][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+ [2024-11-07 14:56:22,005][04584] Avg episode reward: 3.840, avg true_objective: 3.840
+ [2024-11-07 14:56:22,207][04584] Num frames 1600...
+ [2024-11-07 14:56:22,423][04584] Num frames 1700...
+ [2024-11-07 14:56:22,654][04584] Num frames 1800...
+ [2024-11-07 14:56:22,864][04584] Num frames 1900...
+ [2024-11-07 14:56:22,961][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+ [2024-11-07 14:56:22,962][04584] Avg episode reward: 3.840, avg true_objective: 3.840
+ [2024-11-07 14:56:23,201][04584] Num frames 2000...
+ [2024-11-07 14:56:23,415][04584] Num frames 2100...
+ [2024-11-07 14:56:23,717][04584] Num frames 2200...
+ [2024-11-07 14:56:23,940][04584] Num frames 2300...
+ [2024-11-07 14:56:24,003][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
+ [2024-11-07 14:56:24,005][04584] Avg episode reward: 3.840, avg true_objective: 3.840
+ [2024-11-07 14:56:24,222][04584] Num frames 2400...
+ [2024-11-07 14:56:24,428][04584] Num frames 2500...
+ [2024-11-07 14:56:24,677][04584] Num frames 2600...
+ [2024-11-07 14:56:24,929][04584] Num frames 2700...
+ [2024-11-07 14:56:25,159][04584] Num frames 2800...
+ [2024-11-07 14:56:25,249][04584] Avg episode rewards: #0: 4.309, true rewards: #0: 4.023
+ [2024-11-07 14:56:25,250][04584] Avg episode reward: 4.309, avg true_objective: 4.023
+ [2024-11-07 14:56:25,442][04584] Num frames 2900...
+ [2024-11-07 14:56:25,633][04584] Num frames 3000...
+ [2024-11-07 14:56:25,835][04584] Num frames 3100...
+ [2024-11-07 14:56:26,055][04584] Num frames 3200...
+ [2024-11-07 14:56:26,248][04584] Avg episode rewards: #0: 4.455, true rewards: #0: 4.080
+ [2024-11-07 14:56:26,251][04584] Avg episode reward: 4.455, avg true_objective: 4.080
+ [2024-11-07 14:56:26,359][04584] Num frames 3300...
+ [2024-11-07 14:56:26,561][04584] Num frames 3400...
+ [2024-11-07 14:56:26,732][04584] Num frames 3500...
+ [2024-11-07 14:56:26,889][04584] Num frames 3600...
+ [2024-11-07 14:56:27,027][04584] Avg episode rewards: #0: 4.387, true rewards: #0: 4.053
+ [2024-11-07 14:56:27,030][04584] Avg episode reward: 4.387, avg true_objective: 4.053
+ [2024-11-07 14:56:27,125][04584] Num frames 3700...
+ [2024-11-07 14:56:27,303][04584] Num frames 3800...
+ [2024-11-07 14:56:27,472][04584] Num frames 3900...
6371
+ [2024-11-07 14:56:27,634][04584] Num frames 4000...
6372
+ [2024-11-07 14:56:27,752][04584] Avg episode rewards: #0: 4.332, true rewards: #0: 4.032
6373
+ [2024-11-07 14:56:27,754][04584] Avg episode reward: 4.332, avg true_objective: 4.032
6374
+ [2024-11-07 14:56:40,409][04584] Replay video saved to /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/replay.mp4!
6375
+ [2024-11-07 14:56:40,960][04584] Loading existing experiment configuration from /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/config.json
6376
+ [2024-11-07 14:56:40,962][04584] Overriding arg 'num_workers' with value 4 passed from command line
6377
+ [2024-11-07 14:56:40,964][04584] Adding new argument 'no_render'=True that is not in the saved config file!
6378
+ [2024-11-07 14:56:40,966][04584] Adding new argument 'save_video'=True that is not in the saved config file!
6379
+ [2024-11-07 14:56:40,973][04584] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
6380
+ [2024-11-07 14:56:40,976][04584] Adding new argument 'video_name'=None that is not in the saved config file!
6381
+ [2024-11-07 14:56:40,979][04584] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
6382
+ [2024-11-07 14:56:40,988][04584] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
6383
+ [2024-11-07 14:56:40,990][04584] Adding new argument 'push_to_hub'=True that is not in the saved config file!
6384
+ [2024-11-07 14:56:40,993][04584] Adding new argument 'hf_repository'='alidenewade/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
6385
+ [2024-11-07 14:56:40,995][04584] Adding new argument 'policy_index'=0 that is not in the saved config file!
6386
+ [2024-11-07 14:56:40,996][04584] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
6387
+ [2024-11-07 14:56:41,005][04584] Adding new argument 'train_script'=None that is not in the saved config file!
6388
+ [2024-11-07 14:56:41,007][04584] Adding new argument 'enjoy_script'=None that is not in the saved config file!
6389
+ [2024-11-07 14:56:41,009][04584] Using frameskip 1 and render_action_repeat=4 for evaluation
6390
+ [2024-11-07 14:56:41,091][04584] RunningMeanStd input shape: (3, 72, 128)
6391
+ [2024-11-07 14:56:41,095][04584] RunningMeanStd input shape: (1,)
6392
+ [2024-11-07 14:56:41,124][04584] ConvEncoder: input_channels=3
6393
+ [2024-11-07 14:56:41,187][04584] Conv encoder output size: 512
6394
+ [2024-11-07 14:56:41,188][04584] Policy head output size: 512
6395
+ [2024-11-07 14:56:41,211][04584] Loading state from checkpoint /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/checkpoint_p0/checkpoint_000001957_8015872.pth...
6396
+ [2024-11-07 14:56:41,745][04584] Num frames 100...
6397
+ [2024-11-07 14:56:41,957][04584] Num frames 200...
6398
+ [2024-11-07 14:56:42,160][04584] Num frames 300...
6399
+ [2024-11-07 14:56:42,410][04584] Avg episode rewards: #0: 3.840, true rewards: #0: 3.840
6400
+ [2024-11-07 14:56:42,414][04584] Avg episode reward: 3.840, avg true_objective: 3.840
6401
+ [2024-11-07 14:56:42,475][04584] Num frames 400...
6402
+ [2024-11-07 14:56:42,737][04584] Num frames 500...
6403
+ [2024-11-07 14:56:43,040][04584] Num frames 600...
6404
+ [2024-11-07 14:56:43,312][04584] Num frames 700...
6405
+ [2024-11-07 14:56:43,592][04584] Num frames 800...
6406
+ [2024-11-07 14:56:43,713][04584] Avg episode rewards: #0: 4.660, true rewards: #0: 4.160
6407
+ [2024-11-07 14:56:43,714][04584] Avg episode reward: 4.660, avg true_objective: 4.160
6408
+ [2024-11-07 14:56:43,872][04584] Num frames 900...
6409
+ [2024-11-07 14:56:44,191][04584] Num frames 1000...
6410
+ [2024-11-07 14:56:44,433][04584] Num frames 1100...
6411
+ [2024-11-07 14:56:44,628][04584] Num frames 1200...
6412
+ [2024-11-07 14:56:44,766][04584] Avg episode rewards: #0: 4.827, true rewards: #0: 4.160
6413
+ [2024-11-07 14:56:44,772][04584] Avg episode reward: 4.827, avg true_objective: 4.160
6414
+ [2024-11-07 14:56:44,879][04584] Num frames 1300...
6415
+ [2024-11-07 14:56:45,064][04584] Num frames 1400...
6416
+ [2024-11-07 14:56:45,236][04584] Num frames 1500...
6417
+ [2024-11-07 14:56:45,421][04584] Num frames 1600...
6418
+ [2024-11-07 14:56:47,592][04584] Avg episode rewards: #0: 4.580, true rewards: #0: 4.080
6419
+ [2024-11-07 14:56:47,594][04584] Avg episode reward: 4.580, avg true_objective: 4.080
6420
+ [2024-11-07 14:56:47,724][04584] Num frames 1700...
6421
+ [2024-11-07 14:56:47,905][04584] Num frames 1800...
6422
+ [2024-11-07 14:56:48,109][04584] Num frames 1900...
6423
+ [2024-11-07 14:56:48,295][04584] Num frames 2000...
6424
+ [2024-11-07 14:56:48,425][04584] Avg episode rewards: #0: 4.680, true rewards: #0: 4.080
6425
+ [2024-11-07 14:56:48,428][04584] Avg episode reward: 4.680, avg true_objective: 4.080
6426
+ [2024-11-07 14:56:48,562][04584] Num frames 2100...
6427
+ [2024-11-07 14:56:48,738][04584] Num frames 2200...
6428
+ [2024-11-07 14:56:48,913][04584] Num frames 2300...
6429
+ [2024-11-07 14:56:49,095][04584] Num frames 2400...
6430
+ [2024-11-07 14:56:49,191][04584] Avg episode rewards: #0: 4.540, true rewards: #0: 4.040
6431
+ [2024-11-07 14:56:49,193][04584] Avg episode reward: 4.540, avg true_objective: 4.040
6432
+ [2024-11-07 14:56:49,333][04584] Num frames 2500...
6433
+ [2024-11-07 14:56:49,501][04584] Num frames 2600...
6434
+ [2024-11-07 14:56:49,658][04584] Num frames 2700...
6435
+ [2024-11-07 14:56:49,848][04584] Num frames 2800...
6436
+ [2024-11-07 14:56:49,918][04584] Avg episode rewards: #0: 4.440, true rewards: #0: 4.011
6437
+ [2024-11-07 14:56:49,921][04584] Avg episode reward: 4.440, avg true_objective: 4.011
6438
+ [2024-11-07 14:56:50,094][04584] Num frames 2900...
6439
+ [2024-11-07 14:56:50,431][04584] Num frames 3000...
6440
+ [2024-11-07 14:56:50,666][04584] Num frames 3100...
6441
+ [2024-11-07 14:56:50,917][04584] Avg episode rewards: #0: 4.365, true rewards: #0: 3.990
6442
+ [2024-11-07 14:56:50,920][04584] Avg episode reward: 4.365, avg true_objective: 3.990
6443
+ [2024-11-07 14:56:50,945][04584] Num frames 3200...
6444
+ [2024-11-07 14:56:51,105][04584] Num frames 3300...
6445
+ [2024-11-07 14:56:51,283][04584] Num frames 3400...
6446
+ [2024-11-07 14:56:51,493][04584] Num frames 3500...
6447
+ [2024-11-07 14:56:51,666][04584] Num frames 3600...
6448
+ [2024-11-07 14:56:51,793][04584] Avg episode rewards: #0: 4.489, true rewards: #0: 4.044
6449
+ [2024-11-07 14:56:51,795][04584] Avg episode reward: 4.489, avg true_objective: 4.044
6450
+ [2024-11-07 14:56:51,923][04584] Num frames 3700...
6451
+ [2024-11-07 14:56:52,126][04584] Num frames 3800...
6452
+ [2024-11-07 14:56:52,324][04584] Num frames 3900...
6453
+ [2024-11-07 14:56:52,514][04584] Num frames 4000...
6454
+ [2024-11-07 14:56:52,732][04584] Avg episode rewards: #0: 4.588, true rewards: #0: 4.088
6455
+ [2024-11-07 14:56:52,736][04584] Avg episode reward: 4.588, avg true_objective: 4.088
6456
+ [2024-11-07 14:57:02,444][04584] Replay video saved to /root/hfRL/ml/LunarLander-v2/train_dir/default_experiment/replay.mp4!