BrikerMan/Kashgari

[Question] Memory usage of the process keeps growing as a labeling model runs more inference calls — is this a memory leak?

mejackomg opened this issue · 2 comments

kashgari 1.1.5, tf 1.15. My code:
import tensorflow as tf
import kashgari
from tensorflow.python.keras.backend import set_session, clear_session

Since I need to use several models, I define a global graph and session for each model.

Loading the checkpoint files:

graph = tf.Graph()
tf_global_config = tf.ConfigProto(inter_op_parallelism_threads=1, intra_op_parallelism_threads=1)
sess = tf.Session(config=tf_global_config, graph=graph)
with graph.as_default():
    set_session(sess)
    model = kashgari.utils.load_model("./rbt6_bigru_crf/", load_weights=False)
    model.tf_model.load_weights("./rbt6_bigru_crf/saved-model-53-0.398.hdf5")

Inference:

with graph.as_default():
    set_session(sess)
    model.predict(tokens)

A Stack Overflow answer suggests this happens because TF adds new ops to the graph during inference. Calling graph.finalize() after loading the model does indeed raise an error:
File "/Users/caijing/opt/anaconda3/envs/nlp_embedding/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py", line 2998, in _check_not_finalized
raise RuntimeError("Graph is finalized and cannot be modified.")
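One hedged way to confirm this is to count graph.get_operations() between predict calls. The helper below is only a diagnostic sketch: it assumes the graph, sess and model objects set up above, and sample_tokens is a hypothetical placeholder for one tokenized input.

# Diagnostic sketch: check whether repeated predict calls keep adding ops to
# the graph. Uses the `graph`, `sess`, and `model` objects created above;
# `sample_tokens` is a hypothetical tokenized input.
def count_ops_per_predict(graph, sess, model, sample_tokens, n_calls=5):
    counts = []
    with graph.as_default():
        set_session(sess)
        for _ in range(n_calls):
            model.predict(sample_tokens)
            counts.append(len(graph.get_operations()))
    return counts

If the returned counts keep increasing, new ops are being created on every call, which matches the graph.finalize() error and would explain the memory growth.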

Has anyone else run into this kind of memory leak?
My deployment environment cannot use TF Serving, so this is the only way I can do online inference.

This looks like a Keras problem: keras-team/keras#13118
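If it is the same leak as in that Keras issue, one workaround when TF Serving is not an option is to periodically tear down the session and graph and reload the model from disk, so that any ops accumulated during inference are released with the old graph. A rough sketch only, reusing the paths from the question; RELOAD_EVERY is a hypothetical threshold to tune against your memory budget.

# Workaround sketch: rebuild the graph/session every RELOAD_EVERY predictions
# so that ops accumulated during inference are released with the old graph.
RELOAD_EVERY = 10000  # hypothetical threshold

def build_model():
    graph = tf.Graph()
    config = tf.ConfigProto(inter_op_parallelism_threads=1, intra_op_parallelism_threads=1)
    sess = tf.Session(config=config, graph=graph)
    with graph.as_default():
        set_session(sess)
        model = kashgari.utils.load_model("./rbt6_bigru_crf/", load_weights=False)
        model.tf_model.load_weights("./rbt6_bigru_crf/saved-model-53-0.398.hdf5")
    return graph, sess, model

graph, sess, model = build_model()
calls = 0

def predict(tokens):
    global graph, sess, model, calls
    if calls and calls % RELOAD_EVERY == 0:
        sess.close()      # drop the old session and its graph
        clear_session()   # reset Keras' global state before rebuilding
        graph, sess, model = build_model()
    calls += 1
    with graph.as_default():
        set_session(sess)
        return model.predict(tokens)

Reloading is not free, so this only trades latency spikes for bounded memory; it does not fix the underlying op growth.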

stale commented

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.