Segmentation fault
(leapsim) sisyphus@sisyphus-Legion-R9000P-ARX8:~/LEAP_Hand_Sim/leapsim$ python3 train.py wandb_activate=false num_envs=1 headless=false test=true task=LeapHandRot checkpoint=runs/pretrained/nn/LeapHand.pth
Importing module 'gym_38' (/home/sisyphus/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/sisyphus/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
train.py:34: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name="config", config_path="./cfg")
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'config': Defaults list is missing `_self_`. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information
warnings.warn(msg, UserWarning)
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/defaults_list.py:415: UserWarning: In config: Invalid overriding of hydra/job_logging:
Default list overrides requires 'override' keyword.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/defaults_list_override for more information.
deprecation_warning(msg)
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
PyTorch version 2.0.0+cu118
Device count 1
/home/sisyphus/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/sisyphus/.cache/torch_extensions/py38_cu118 as PyTorch extensions root...
Emitting ninja build file /home/sisyphus/.cache/torch_extensions/py38_cu118/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
task:
name: LeapHandRot
physics_engine: physx
on_evaluation: False
env:
numEnvs: 1
numObservations: 102
numActions: 16
envSpacing: 0.25
phase_period: 2
exec_lag: 1
episodeLength: 400
enableDebugVis: False
aggregateMode: 1
controller:
torque_control: False
controlFrequencyInv: 6
pgain: 3
dgain: 0.1
genGrasps: False
clipObservations: 5.0
clipActions: 1.0
reset_height_threshold: 0.4
grasp_cache_name: leap_hand_in_palm_cube
grasp_cache_len: 1024
forceScale: 10.0
randomForceProbScalar: 0.25
forceDecay: 0.9
forceDecayInterval: 0.08
reward:
angvelClipMin: -0.25
angvelClipMax: 0.25
rotateRewardScale: 0.0
objLinvelPenaltyScale: -0.3
poseDiffPenaltyScale: -0.1
torquePenaltyScale: -0.1
workPenaltyScale: -1.0
additional_rewards:
rotate_finite_diff: 1.25
object_fallen: -10
override_object_init_z: 0.57
override_object_init_x: -0.03
override_object_init_y: 0.04
canonical_pose: [0.1458, -1.047, 1.7247, -0.2187, 1.2985, 1.6139, 0.9757, 1.0728, -0.1054, 0.029, 1.6039, 0.1227, -0.1084, 0.9652, 1.7317, 0.1071]
num_contact_fingers: 0
baseObjScale: 0.8
randomization:
randomizeMass: True
randomizeMassLower: 0.01
randomizeMassUpper: 0.25
randomizeCOM: True
randomizeCOMLower: -0.01
randomizeCOMUpper: 0.01
randomizeFriction: True
randomizeFrictionLower: 0.3
randomizeFrictionUpper: 3.0
randomizeScale: True
scaleListInit: True
randomizeScaleList: [0.95, 0.9, 1.0, 1.05, 1.1]
randomizeScaleLower: 0.75
randomizeScaleUpper: 0.8
randomizePDGains: True
randomizePGainLower: 2.9
randomizePGainUpper: 3.1
randomizeDGainLower: 0.09
randomizeDGainUpper: 0.11
privInfo:
enableObjPos: True
enableObjScale: True
enableObjMass: True
enableObjCOM: True
enableObjFriction: True
object:
type: cube
sampleProb: [1.0]
sim_to_real_indices: [1, 0, 2, 3, 9, 8, 10, 11, 13, 12, 14, 15, 4, 5, 6, 7]
real_to_sim_indices: [1, 0, 2, 3, 12, 13, 14, 15, 5, 4, 6, 7, 9, 8, 10, 11]
asset:
handAsset: assets/leap_hand/robot.urdf
enableCameraSensors: False
sim:
dt: 0.0083333
substeps: 1
up_axis: z
use_gpu_pipeline: True
gravity: [0.0, 0.0, -9.81]
physx:
num_threads: 4
solver_type: 1
use_gpu: True
num_position_iterations: 8
num_velocity_iterations: 0
max_gpu_contact_pairs: 8388608
num_subscenes: 4
contact_offset: 0.002
rest_offset: 0.0
bounce_threshold_velocity: 0.2
max_depenetration_velocity: 1000.0
default_buffer_size_multiplier: 5.0
contact_collection: 2
train:
params:
seed: 42
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [512, 256, 128]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
rnn:
name: gru
units: 256
layers: 1
before_mlp: True
concat_input: True
layer_norm: True
load_checkpoint: True
load_path: /home/sisyphus/LEAP_Hand_Sim/leapsim/runs/pretrained/nn/LeapHand.pth
config:
name: LeapHand
full_experiment_name: LeapHand
env_name: rlgpu
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: True
normalize_value: True
value_bootstrap: True
num_actors: 1
reward_shaper:
scale_value: 0.01
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 0.005
lr_schedule: adaptive
schedule_type: standard
kl_threshold: 0.02
score_to_win: 100000
max_epochs: 5000
save_best_after: 100
save_frequency: 200
print_stats: True
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 32
minibatch_size: 32768
mini_epochs: 5
critic_coef: 4
clip_value: True
seq_len: 4
bounds_loss_coef: 0.0001
player:
deterministic: True
games_num: 100000
print_stats: True
task_name: LeapHandRot
experiment:
num_envs: 1
seed: 42
torch_deterministic: False
max_iterations:
physics_engine: physx
pipeline: gpu
sim_device: cuda:0
rl_device: cuda:0
graphics_device_id: 0
num_threads: 4
solver_type: 1
num_subscenes: 4
test: True
checkpoint: /home/sisyphus/LEAP_Hand_Sim/leapsim/runs/pretrained/nn/LeapHand.pth
multi_gpu: False
default_run_name: LeapHand
log_to_sheet: True
sheet_name: leap-hand-manip
creds_path: ~/creds.json
wandb_activate: False
wandb_group:
wandb_entity: wandb_username
wandb_project: leap_sim
capture_video: False
capture_video_freq: 183
capture_video_len: 100
force_render: True
headless: False
Setting seed: 42
self.seed = 42
Started to play
---- Primitive List ----
['cube']
---- Object List ----
['cube']
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/gym/spaces/box.py:127: UserWarning: WARN: Box bound precision lowered by casting to float32
logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
[Warning] [carb.gym.plugin] useGpu is set, forcing single scene (0 subscenes)
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
Using VHACD cache directory '/home/sisyphus/.isaacgym/vhacd'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/dip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/fingertip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/mcp_joint.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/palm_lower.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/pip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_dip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_fingertip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_pip.stl'
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/pip.stl': 1 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/fingertip.stl': 5 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_fingertip.stl': 10 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/mcp_joint.stl': 12 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/dip.stl': 9 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_dip.stl': 12 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_pip.stl': 43 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/palm_lower.stl': 34 hulls
{'observation_space': Box(-inf, inf, (102,), float32), 'action_space': Box(-1.0, 1.0, (16,), float32), 'agents': 1, 'value_size': 1}
build mlp: 256
RunningMeanStd: (1,)
RunningMeanStd: (102,)
=> loading checkpoint '/home/sisyphus/LEAP_Hand_Sim/leapsim/runs/pretrained/nn/LeapHand.pth'
Unhandled descriptor set 449
Segmentation fault (core dumped)
(leapsim) sisyphus@sisyphus-Legion-R9000P-ARX8:~/LEAP_Hand_Sim/leapsim$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 126751
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 102400
cpu time (seconds, -t) unlimited
max user processes (-u) 126751
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
After running the command python3 train.py wandb_activate=false num_envs=1 headless=false test=true task=LeapHandRot checkpoint=runs/pretrained/nn/LeapHand.pth, a window briefly pops up showing a blurry robotic hand. After a few seconds, however, the program crashes and produces the output above. What could be the problem? My computer has 32 GB of RAM, an RTX 4060 laptop GPU, and an AMD Ryzen 7745HX CPU.
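A minimal debugging sketch for a crash like this (assuming gdb is installed; the ulimit -a output above shows core file size 0, so no core file is written by default): run the same command under gdb and grab the native backtrace when it faults.

$ ulimit -c unlimited        # optional: allow core files in this shell
$ gdb --args python3 train.py wandb_activate=false num_envs=1 headless=false test=true task=LeapHandRot checkpoint=runs/pretrained/nn/LeapHand.pth
(gdb) run                    # reproduces the crash inside gdb
(gdb) bt                     # prints the native backtrace at the segfault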
Are you able to run in headless mode?
python3 train.py wandb_activate=false num_envs=1 headless=false test=true task=LeapHandRot checkpoint=runs/pretrained/nn/LeapHand.pth headless=true
Yes, I can run it in headless mode. Why is that? I suspect something might be wrong with my NVIDIA driver.
(leapsim) sisyphus@sisyphus-Legion-R9000P-ARX8:~/LEAP_Hand_Sim/leapsim$ python3 train.py wandb_activate=false num_envs=1 headless=false test=true task=LeapHandRot checkpoint=runs/pretrained/nn/LeapHand.pth headless=true
Importing module 'gym_38' (/home/sisyphus/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_38.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/sisyphus/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
train.py:34: UserWarning:
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name="config", config_path="./cfg")
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/defaults_list.py:251: UserWarning: In 'config': Defaults list is missing `_self_`. See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/default_composition_order for more information
warnings.warn(msg, UserWarning)
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/defaults_list.py:415: UserWarning: In config: Invalid overriding of hydra/job_logging:
Default list overrides requires 'override' keyword.
See https://hydra.cc/docs/1.2/upgrades/1.0_to_1.1/defaults_list_override for more information.
deprecation_warning(msg)
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
PyTorch version 2.0.0+cu118
Device count 1
/home/sisyphus/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/sisyphus/.cache/torch_extensions/py38_cu118 as PyTorch extensions root...
Emitting ninja build file /home/sisyphus/.cache/torch_extensions/py38_cu118/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
task:
name: LeapHandRot
physics_engine: physx
on_evaluation: False
env:
numEnvs: 1
numObservations: 102
numActions: 16
envSpacing: 0.25
phase_period: 2
exec_lag: 1
episodeLength: 400
enableDebugVis: False
aggregateMode: 1
controller:
torque_control: False
controlFrequencyInv: 6
pgain: 3
dgain: 0.1
genGrasps: False
clipObservations: 5.0
clipActions: 1.0
reset_height_threshold: 0.4
grasp_cache_name: leap_hand_in_palm_cube
grasp_cache_len: 1024
forceScale: 10.0
randomForceProbScalar: 0.25
forceDecay: 0.9
forceDecayInterval: 0.08
reward:
angvelClipMin: -0.25
angvelClipMax: 0.25
rotateRewardScale: 0.0
objLinvelPenaltyScale: -0.3
poseDiffPenaltyScale: -0.1
torquePenaltyScale: -0.1
workPenaltyScale: -1.0
additional_rewards:
rotate_finite_diff: 1.25
object_fallen: -10
override_object_init_z: 0.57
override_object_init_x: -0.03
override_object_init_y: 0.04
canonical_pose: [0.1458, -1.047, 1.7247, -0.2187, 1.2985, 1.6139, 0.9757, 1.0728, -0.1054, 0.029, 1.6039, 0.1227, -0.1084, 0.9652, 1.7317, 0.1071]
num_contact_fingers: 0
baseObjScale: 0.8
randomization:
randomizeMass: True
randomizeMassLower: 0.01
randomizeMassUpper: 0.25
randomizeCOM: True
randomizeCOMLower: -0.01
randomizeCOMUpper: 0.01
randomizeFriction: True
randomizeFrictionLower: 0.3
randomizeFrictionUpper: 3.0
randomizeScale: True
scaleListInit: True
randomizeScaleList: [0.95, 0.9, 1.0, 1.05, 1.1]
randomizeScaleLower: 0.75
randomizeScaleUpper: 0.8
randomizePDGains: True
randomizePGainLower: 2.9
randomizePGainUpper: 3.1
randomizeDGainLower: 0.09
randomizeDGainUpper: 0.11
privInfo:
enableObjPos: True
enableObjScale: True
enableObjMass: True
enableObjCOM: True
enableObjFriction: True
object:
type: cube
sampleProb: [1.0]
sim_to_real_indices: [1, 0, 2, 3, 9, 8, 10, 11, 13, 12, 14, 15, 4, 5, 6, 7]
real_to_sim_indices: [1, 0, 2, 3, 12, 13, 14, 15, 5, 4, 6, 7, 9, 8, 10, 11]
asset:
handAsset: assets/leap_hand/robot.urdf
enableCameraSensors: False
sim:
dt: 0.0083333
substeps: 1
up_axis: z
use_gpu_pipeline: True
gravity: [0.0, 0.0, -9.81]
physx:
num_threads: 4
solver_type: 1
use_gpu: True
num_position_iterations: 8
num_velocity_iterations: 0
max_gpu_contact_pairs: 8388608
num_subscenes: 4
contact_offset: 0.002
rest_offset: 0.0
bounce_threshold_velocity: 0.2
max_depenetration_velocity: 1000.0
default_buffer_size_multiplier: 5.0
contact_collection: 2
train:
params:
seed: 42
algo:
name: a2c_continuous
model:
name: continuous_a2c_logstd
network:
name: actor_critic
separate: False
space:
continuous:
mu_activation: None
sigma_activation: None
mu_init:
name: default
sigma_init:
name: const_initializer
val: 0
fixed_sigma: True
mlp:
units: [512, 256, 128]
activation: elu
d2rl: False
initializer:
name: default
regularizer:
name: None
rnn:
name: gru
units: 256
layers: 1
before_mlp: True
concat_input: True
layer_norm: True
load_checkpoint: True
load_path: /home/sisyphus/LEAP_Hand_Sim/leapsim/runs/pretrained/nn/LeapHand.pth
config:
name: LeapHand
full_experiment_name: LeapHand
env_name: rlgpu
multi_gpu: False
ppo: True
mixed_precision: False
normalize_input: True
normalize_value: True
value_bootstrap: True
num_actors: 1
reward_shaper:
scale_value: 0.01
normalize_advantage: True
gamma: 0.99
tau: 0.95
learning_rate: 0.005
lr_schedule: adaptive
schedule_type: standard
kl_threshold: 0.02
score_to_win: 100000
max_epochs: 5000
save_best_after: 100
save_frequency: 200
print_stats: True
grad_norm: 1.0
entropy_coef: 0.0
truncate_grads: True
e_clip: 0.2
horizon_length: 32
minibatch_size: 32768
mini_epochs: 5
critic_coef: 4
clip_value: True
seq_len: 4
bounds_loss_coef: 0.0001
player:
deterministic: True
games_num: 100000
print_stats: True
task_name: LeapHandRot
experiment:
num_envs: 1
seed: 42
torch_deterministic: False
max_iterations:
physics_engine: physx
pipeline: gpu
sim_device: cuda:0
rl_device: cuda:0
graphics_device_id: 0
num_threads: 4
solver_type: 1
num_subscenes: 4
test: True
checkpoint: /home/sisyphus/LEAP_Hand_Sim/leapsim/runs/pretrained/nn/LeapHand.pth
multi_gpu: False
default_run_name: LeapHand
log_to_sheet: True
sheet_name: leap-hand-manip
creds_path: ~/creds.json
wandb_activate: False
wandb_group:
wandb_entity: wandb_username
wandb_project: leap_sim
capture_video: False
capture_video_freq: 183
capture_video_len: 100
force_render: True
headless: True
Setting seed: 42
self.seed = 42
Started to play
---- Primitive List ----
['cube']
---- Object List ----
['cube']
/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/gym/spaces/box.py:127: UserWarning: WARN: Box bound precision lowered by casting to float32
logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
[Warning] [carb.gym.plugin] useGpu is set, forcing single scene (0 subscenes)
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
Using VHACD cache directory '/home/sisyphus/.isaacgym/vhacd'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/dip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/fingertip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/mcp_joint.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/palm_lower.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/pip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_dip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_fingertip.stl'
Started convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_pip.stl'
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/pip.stl': 1 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/fingertip.stl': 5 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_fingertip.stl': 10 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/mcp_joint.stl': 12 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/dip.stl': 9 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_dip.stl': 12 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/thumb_pip.stl': 43 hulls
Finished convex decomposition for mesh '/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/../../assets/leap_hand/palm_lower.stl': 34 hulls
{'observation_space': Box(-inf, inf, (102,), float32), 'action_space': Box(-1.0, 1.0, (16,), float32), 'agents': 1, 'value_size': 1}
build mlp: 256
RunningMeanStd: (1,)
RunningMeanStd: (102,)
=> loading checkpoint '/home/sisyphus/LEAP_Hand_Sim/leapsim/runs/pretrained/nn/LeapHand.pth'
reward: 81.20762634277344 steps: 399.0
reward: 81.89468383789062 steps: 399.0
reward: 83.9068832397461 steps: 399.0
reward: 86.25658416748047 steps: 399.0
reward: 89.69570922851562 steps: 399.0
reward: 85.71553039550781 steps: 399.0
reward: 88.1849594116211 steps: 399.0
reward: 84.27640533447266 steps: 399.0
^CTraceback (most recent call last):
File "train.py", line 175, in <module>
launch_rlg_hydra()
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/main.py", line 94, in decorated_main
_run_hydra(
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
_run_app(
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/utils.py", line 457, in _run_app
run_and_report(
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
return func()
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
lambda: hydra.run(
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 119, in run
ret = run_job(
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/hydra/core/utils.py", line 186, in run_job
ret.return_value = task_function(task_cfg)
File "train.py", line 164, in launch_rlg_hydra
runner.run({
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/rl_games/torch_runner.py", line 123, in run
self.run_play(args)
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/rl_games/torch_runner.py", line 108, in run_play
player.run()
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/rl_games/common/player.py", line 210, in run
obses, r, done, info = self.env_step(self.env, action)
File "/home/sisyphus/anaconda3/envs/leapsim/lib/python3.8/site-packages/rl_games/common/player.py", line 72, in env_step
obs, rewards, dones, infos = env.step(actions)
File "/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/base/vec_task.py", line 290, in step
self.update_low_level_control()
File "/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/leap_hand_rot.py", line 850, in update_low_level_control
self._refresh_gym()
File "/home/sisyphus/LEAP_Hand_Sim/leapsim/tasks/leap_hand_rot.py", line 873, in _refresh_gym
self.object_angvel = self.root_state_tensor[self.object_indices, 10:13]
KeyboardInterrupt
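A likely reason headless mode works (a hedged guess, not confirmed in this thread): with headless=true Isaac Gym never creates the Vulkan-based viewer, so a broken or mismatched graphics stack is never exercised. The headful log above contains "WARNING: lavapipe is not a conformant vulkan implementation, testing use only.", which suggests the viewer fell back to the software Vulkan driver instead of the NVIDIA one. A quick check, assuming vulkan-tools and the NVIDIA driver are installed (the ICD path below is the usual Ubuntu location and may differ on other distros):

$ vulkaninfo | grep -i devicename     # should list the RTX 4060, not llvmpipe/lavapipe
$ nvidia-smi                          # confirms the NVIDIA kernel driver is loaded
$ export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/nvidia_icd.json   # force the NVIDIA Vulkan ICD if only llvmpipe shows up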
I updated my NVIDIA driver version to 12.2 and the problem is solved. The cause was that my NVIDIA driver version didn't match my CUDA and cuDNN versions: when I installed CUDA, I chose not to install the driver bundled with the CUDA 11.8 package and instead installed a different version (12.2) independently. Now both my driver and CUDA are at 12.2, and the problem is gone. Thanks to anag004 for the advice, it was very helpful!
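A quick sanity check for this kind of driver/toolkit mismatch, assuming the leapsim conda environment is active: the CUDA version reported by the driver should be at least the CUDA runtime PyTorch was built against (cu118 here).

$ nvidia-smi | grep "CUDA Version"    # CUDA version supported by the installed driver (12.2 after the update)
$ python3 -c "import torch; print(torch.__version__, torch.version.cuda)"   # e.g. 2.0.0+cu118 -> needs a driver supporting CUDA >= 11.8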