sidhomj/DeepTCR

sequence_inference error

Albert-Shuai opened this issue · 6 comments

Hi:

I am trying to do sequence inference with a trained model, but the following error occurs:

I am not sure how I shall change my code to make it work. May I ask for your suggestions? Thanks!

tensorflow/core/common_runtime/colocation_graph.cc:1218] Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [
/job:localhost/replica:0/task:0/device:CPU:0].
See below for details of this colocation group:
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=-1 requested_device_name_='/device:GPU:0' assigned_device_name_='' resource_device_name_='/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
ResourceApplyAdam: CPU
ReadVariableOp: CPU
AssignVariableOp: CPU
VarIsInitializedOp: CPU
Const: CPU
VarHandleOp: CPU

Colocation members, user-requested devices, and framework assigned devices, if any:
dense_1/bias/Initializer/zeros (Const) /device:GPU:0
dense_1/bias (VarHandleOp) /device:GPU:0
dense_1/bias/IsInitialized/VarIsInitializedOp (VarIsInitializedOp) /device:GPU:0
dense_1/bias/Assign (AssignVariableOp) /device:GPU:0
dense_1/bias/Read/ReadVariableOp (ReadVariableOp) /device:GPU:0
dense_1/BiasAdd/ReadVariableOp (ReadVariableOp) /device:GPU:0
dense_1/bias/Adam/Initializer/zeros (Const) /device:GPU:0
dense_1/bias/Adam (VarHandleOp) /device:GPU:0
dense_1/bias/Adam/IsInitialized/VarIsInitializedOp (VarIsInitializedOp) /device:GPU:0
dense_1/bias/Adam/Assign (AssignVariableOp) /device:GPU:0
dense_1/bias/Adam/Read/ReadVariableOp (ReadVariableOp) /device:GPU:0
dense_1/bias/Adam_1/Initializer/zeros (Const) /device:GPU:0
dense_1/bias/Adam_1 (VarHandleOp) /device:GPU:0
dense_1/bias/Adam_1/IsInitialized/VarIsInitializedOp (VarIsInitializedOp) /device:GPU:0
dense_1/bias/Adam_1/Assign (AssignVariableOp) /device:GPU:0
dense_1/bias/Adam_1/Read/ReadVariableOp (ReadVariableOp) /device:GPU:0
Adam/update_dense_1/bias/ResourceApplyAdam (ResourceApplyAdam) /device:GPU:0
save/AssignVariableOp_35 (AssignVariableOp) /device:GPU:0
save/AssignVariableOp_36 (AssignVariableOp) /device:GPU:0
save/AssignVariableOp_37 (AssignVariableOp) /device:GPU:0

Hi. I tried it on different computers, but all of them reported the same error, so I am wondering what I can do about it.

FYI, I am using Sample_Inference under DeepTCR_WF

And a toy version of the code I am using looks like the following:

import numpy as np
seq = np.array(['CASSASS', 'CASSASSA', 'CASSASSS'])
sample = np.array(['s1', 's1', 's2'])
DTCR_WF.Sample_Inference(sample_labels=sample, beta_sequences=seq)

And it returns None instead of a matrix as described.

If you want to do sequence inference with the repertoire model, you should not provide a sample label. It should be left as None.


Thanks! But what if I want to predict the label of a sample? Should I use Sample_Inference()? If so, why does this method return None?

Thanks so much for the response, and sorry for the naive question. I thought it would return an array like Sequence_Inference does, but the predictions are stored inside the object instead.
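For anyone who hits the same confusion: the pattern described above, where an inference call returns None and writes its results to attributes on the model object, can be sketched with a toy class. This is only an illustration of the pattern; the class, method, and attribute names below are invented for the sketch and are not DeepTCR's actual API (consult the DeepTCR docs for the real attribute names).

```python
import numpy as np

class ToyModel:
    """Mimics the store-results-on-the-object pattern: the inference
    method returns None, and predictions live on an attribute.
    All names here are illustrative, not DeepTCR's real API."""

    def sample_inference(self, beta_sequences):
        # Pretend to predict one probability per input sequence.
        preds = np.linspace(0.1, 0.9, len(beta_sequences))
        self.inference_pred = preds  # results stored on the object...
        return None                  # ...so the call itself returns None

model = ToyModel()
result = model.sample_inference(np.array(['CASSASS', 'CASSASSA', 'CASSASSS']))
print(result)                # None, by design
print(model.inference_pred)  # the stored predictions
```

The takeaway is simply to read the predictions off the object after the call, rather than from the call's return value.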