Issues
Issue importing keras-rl on tensorflow-macos
#380 opened by Sebewe - 0
ValueError: Could not interpret optimizer identifier: <keras.src.optimizers.adam.Adam object at 0x79d9071160e0>
#396 opened by YikunHan42 - 0
Frame Skipping in DQN
#395 opened by sandra-sys - 0
gym.Env.reset() no longer returns observation of type np.array but a tuple of (observation, info)
#392 opened by AldairCB - 7
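The API change behind this issue (newer gym versions return a `(observation, info)` tuple from `reset()`, while keras-rl expects a bare observation) can be bridged with a small wrapper. A minimal sketch, with illustrative class and env names not taken from either library:

```python
# Sketch of a compatibility wrapper for the newer gym reset() API,
# where reset() returns (observation, info) instead of the bare
# observation that keras-rl expects. Names here are illustrative.

class OldResetAPI:
    """Wraps an env so reset() returns only the observation."""
    def __init__(self, env):
        self.env = env

    def reset(self, **kwargs):
        result = self.env.reset(**kwargs)
        # New-style envs return (obs, info); old-style return obs alone.
        if isinstance(result, tuple) and len(result) == 2:
            obs, _info = result
            return obs
        return result

    def __getattr__(self, name):
        # Delegate everything else (step, render, spaces, ...) untouched.
        return getattr(self.env, name)

# Stand-in for a new-style env, for demonstration only.
class FakeEnv:
    def reset(self):
        return [0.0, 0.0, 0.0, 0.0], {}

wrapped = OldResetAPI(FakeEnv())
print(wrapped.reset())  # [0.0, 0.0, 0.0, 0.0]
```

Note that newer gym versions also changed `step()` to return five values; a complete shim would need to cover that as well.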
Cannot import CallbackList
#362 opened by FitMachineLearning - 12
Please help me, I have a problem with DQNAgent.
#371 opened by hemsatrakol - 2
Value error when running DQN.fit
#388 opened by GravermanDev - 16
len is not well defined for symbolic tensors *AND* using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution
#348 opened by EduardDurech - 1
how to implement a custom environment?
#382 opened by harleyxu-xhl - 1
[Question] Custom Environment
#375 opened by abeerM - 3
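For the custom-environment questions above: keras-rl only touches the Gym-style surface of an env (`reset`, `step`, and the space attributes). A dependency-free sketch with illustrative stub spaces (in real code you would subclass `gym.Env` and use `gym.spaces.Discrete` / `gym.spaces.Box`; the toy task here is made up):

```python
# Minimal custom-environment sketch in the Gym style keras-rl expects.
# The stub space classes stand in for gym.spaces so the sketch stays
# dependency-free; names and the toy task are illustrative.
import random

class DiscreteStub:
    def __init__(self, n):
        self.n = n
    def sample(self):
        return random.randrange(self.n)

class BoxStub:
    def __init__(self, shape):
        self.shape = shape

class CounterEnv:
    """Toy env: move a counter toward a target; reward 1 on reaching it."""
    def __init__(self, target=3):
        self.target = target
        self.action_space = DiscreteStub(2)      # 0 = decrement, 1 = increment
        self.observation_space = BoxStub((1,))   # scalar counter value
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state += 1 if action == 1 else -1
        done = self.state == self.target
        reward = 1.0 if done else 0.0
        return self.state, reward, done, {}

# Simulated rollout standing in for what an agent's fit() loop would do.
env = CounterEnv()
obs = env.reset()
for _ in range(3):
    obs, reward, done, info = env.step(1)
print(obs, reward, done)  # 3 1.0 True
```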
Training performance is quite slow
#379 opened by Cpt-Falcon - 0
Multiple Actions in DQN (binary action vector)
#387 opened by 2019hc04089 - 1
C:\Python\Python37\lib\site-packages\keras_rl-0.4.2-py3.7.egg\rl\agents\dqn.py in __init__(self, model, policy, test_policy, enable_double_dqn, enable_dueling_network, dueling_type, *args, **kwargs)
#374 opened by shravansuthar210 - 2
Module 'keras.utils.generic_utils' has no attribute 'populate_dict_with_module_objects'
#377 opened by rhalaly - 1
Numpy data wrangling in rl/callbacks.py
#373 opened by jan-gebauer - 6
Examples don't work anymore
#366 opened by iirekm - 1
I am trying to visualize the results of an example I ran (sarsa.cartpole). The example ran fine and I got the training results, but I can't visualize them. Following the given instructions (from rl.callbacks import WandbLogger) raises an error [cannot import name 'WandbLogger'], even though the wandb package and all the dependencies are installed. If this does not work, how else can I visualize the results and store the generated data (episode, reward, action, etc.)?
#369 opened by malsaidi93 - 1
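Until `WandbLogger` imports cleanly, episode data can be captured by hand. A minimal sketch in the shape of keras-rl's callback hooks (the `on_episode_end(episode, logs)` signature and the `'episode_reward'` key mirror `rl/callbacks.py` in 0.4.x, but verify against the installed version; `rl.callbacks.FileLogger` and the History object returned by `fit()` are built-in alternatives):

```python
# Minimal sketch of an episode logger in the style of keras-rl's
# callback interface. In keras-rl you would pass it via
# dqn.fit(env, callbacks=[logger], ...); the class name is illustrative.
import json

class EpisodeRewardLogger:
    """Collects per-episode reward so it can be plotted or saved later."""
    def __init__(self):
        self.rewards = []  # one entry per finished episode

    def on_episode_end(self, episode, logs):
        # keras-rl populates logs with keys like 'episode_reward'.
        self.rewards.append(logs.get("episode_reward"))

    def save(self, path):
        with open(path, "w") as f:
            json.dump({"episode_reward": self.rewards}, f)

# Simulated training loop standing in for agent.fit(...).
logger = EpisodeRewardLogger()
for ep, reward in enumerate([12.0, 15.5, 20.0]):
    logger.on_episode_end(ep, {"episode_reward": reward})

print(logger.rewards)  # [12.0, 15.5, 20.0]
```

The collected list can then be plotted with matplotlib or written out with `save()` for later analysis.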
ValueError: Critic "{}" does not have enough inputs when running ddpg_pendulum.py
#364 opened by HankerSia - 1
Dimension does not match for tuple space
#367 opened by jxiw - 1
DDPG worked well but not CDQN or NAF!
#372 opened by B-Yassine - 2
DQL
#363 opened by johan606303 - 10
Is this project dead?
#354 opened by khayamgondal - 1
Using Keras RL in production
#355 opened by zolekode - 7
EXAMPLE : Loading Models After fully trained
#350 opened by STRATZ-Ken - 1
When can A3C and PPO be implemented?
#352 opened by GIS-PuppetMaster - 4
Keras RL for another external Environment
#356 opened by Jekso - 1
I would like to use my Flow-Project script as the env
#347 opened by gioiav - 3
AttributeError: 'Adam' object has no attribute '_name'
#345 opened by palbha - 2
Will dqn.fit reset the model weights?
#337 opened by yhzhang1 - 1
Where is the environment specified in DQNAgent
#343 opened by kdawar1 - 1
Processor's process_reward(self, reward) and process_step(self, observation, reward, done, info) won't return the value of reward for Atari and Retro.
#349 opened by toksis - 1
TypeError with WhiteningNormalizerProcessor
#338 opened by stefanbschneider - 2
/keras-rl/examples/dqn_atari.py" AttributeError: module 'keras.backend' has no attribute 'image_dim_ordering'
#336 opened by loveyandex - 1
/keras-rl/examples/dqn_atari.py" AttributeError: module 'keras.backend' has no attribute 'image_dim_ordering'
#335 opened by loveyandex - 1
How to distinguish whether a decision is greedy or not
#333 opened by vxgu86 - 1
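keras-rl's EpsGreedyQPolicy does not expose whether the sampled action was the greedy one, but the check is easy to reimplement. A minimal sketch (the function and its returned flag are illustrative, not part of keras-rl):

```python
# Sketch of epsilon-greedy selection that also reports whether the
# chosen action coincides with the greedy (argmax-Q) action, in the
# spirit of keras-rl's EpsGreedyQPolicy.
import random

def select_action(q_values, eps, rng=random):
    """Return (action, was_greedy) for an epsilon-greedy choice."""
    greedy = max(range(len(q_values)), key=lambda a: q_values[a])
    if rng.random() < eps:
        # Exploration branch: a random action may still equal the
        # greedy one by chance; the flag reflects the actual action.
        action = rng.randrange(len(q_values))
    else:
        action = greedy
    return action, action == greedy

action, was_greedy = select_action([0.1, 0.9, 0.3], eps=0.0)
print(action, was_greedy)  # 1 True
```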
docs are sad
#332 opened by bionicles - 1