`max_lifetime_session` in SessionPool doesn't work
atsu0127 opened this issue · 6 comments
- What versions are you using?
platform.platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.28
sys.maxsize > 2**32: True
platform.python_version: 3.10.1
cx_Oracle.version: 8.3.0
cx_Oracle.clientversion: (21, 4, 0, 0, 0)
- Describe the problem
I ran the script included below and checked gv$session with the following SQL.
ALTER SESSION SET NLS_DATE_FORMAT='YYYY-MM-DD HH24:MI:SS';
col USERNAME for a10
set lines 300
select INST_ID, SID, SERIAL#, username, status, logon_time from gv$session where username = '<username>' order by LOGON_TIME;
Since the first time.sleep waits longer than max_lifetime_session, I expected a new session to be created for the subsequent dbms_session.sleep, but the session that existed when the pool was created was used as is.
Also, we do not plan to use DRCP and have not set it up (conn_str is as follows):
<scan_name>@<scan_port>/<service_name>
- Include a runnable Python script that shows the problem.
- Create Table for test
CREATE TABLE TEST_TBL(id number, name varchar2(20), db varchar2(20), drink varchar2(20));
- Run the Python script below:
import cx_Oracle
import pprint
import time

db_config = ('<username>', '<password>', '<conn_str>')

pool = cx_Oracle.SessionPool(user=db_config[0], password=db_config[1],
                             dsn=db_config[2], min=2,
                             max=5, increment=1, max_lifetime_session=30)

def fetch(sql: str, bind: list[str | int] = []):
    with pool.acquire() as connection:
        with connection.cursor() as cursor:
            cursor.execute(sql, bind)
            rows = cursor.fetchall()
            return rows

def exec(sql: str, bind: list[str | int] = []):
    with cx_Oracle.connect(db_config[0], db_config[1], db_config[2]) as connection:
        with connection.cursor() as cursor:
            try:
                cursor.execute(sql, bind)
                result = connection.commit()
                return result
            except Exception as e:
                print("exec failed")
                return e

def dbms_sleep(seconds):
    with pool.acquire() as conn:
        cursor = conn.cursor()
        print("dbms sleep start:", seconds)
        cursor.callproc("dbms_session.sleep", (seconds,))
        print("dbms sleep end:", seconds)
sql_insert_all = """INSERT ALL
INTO test_tbl VALUES (1,'u1', 'oracle', 'sake')
INTO test_tbl VALUES (2,'u2', 'postgresql', 'beer')
INTO test_tbl VALUES (3,'u3', 'oracle', 'wine')
INTO test_tbl VALUES (4,'u4', 'postgresql', 'beer')
INTO test_tbl VALUES (5,'u5', 'mysql', 'wine')
select * from dual"""
sql_select_all = 'select * from test_tbl'
sql_delete='DELETE FROM test_tbl'
result = exec(sql_insert_all)
if result is not None:
    print("insert failed: ", result)
    exit(1)

print("initial data")
rows = fetch(sql_select_all)
pprint.pprint(rows)

print("start")
print("sleeping...")
# sleep longer than max_lifetime_session
time.sleep(70)
print("sleeping...end")
# this call is expected to use a new session
dbms_sleep(10)

print("sleeping...")
time.sleep(70)
print("sleeping...end")
dbms_sleep(10)

print("Delete Data")
result = exec(sql_delete)
if result is not None:
    print("delete failed: ", result)
    exit(1)

print("Done")
- The cx_Oracle behavior of this setting is inherited from the behavior of the OCI Session Pool attribute OCI_ATTR_SPOOL_MAX_LIFETIME_SESSION.
- The OCI Session Pool behavior may vary with the Oracle Client version. In recent versions of Oracle Client, more pool maintenance occurs in the background, but in earlier versions it may only have been triggered at acquire or connection close time. (I say this in case people test your scenario with 19c client libraries.)
- The 'Thin' mode of python-oracledb (the newest version with a new name of cx_Oracle) may have subtle differences since it doesn't use the OCI Session Pool. The 'Thick' mode does use the OCI Session Pool.
- Our cx_Oracle / python-oracledb doc isn't the greatest on what max_lifetime_session means. By coincidence we discussed pool shrinkage yesterday and I noted that the doc needs improvement.
- You could test inside Python by checking "SELECT SYS_CONTEXT('USERENV','SID') FROM DUAL" for connections acquired from the pool. Update: Instead of calling dbms_session.sleep(), just do a sleep in Python between closing and acquiring a connection. (A sketch combining these suggestions follows after this list.)
- Since pools are living & breathing, try with a pool max size of 1.
- look at tests/test_2400_pool.py for some behaviors
- You really don't want to be changing from the default of 0 (no timeout) unless you have a very good reason. Forcing session recreation impacts performance and scalability.
- What are you trying to achieve for your application? Why?
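Here is a minimal sketch of that check. It is only a sketch: it reuses the <username>/<password>/<conn_str> placeholders from the script above, and the pool sizes, sleep length, and max_lifetime_session value are illustrative, not recommendations.

import time
import cx_Oracle

# Minimal sketch of the suggested check; credentials and timings are placeholders.
pool = cx_Oracle.SessionPool(user="<username>", password="<password>",
                             dsn="<conn_str>", min=0, max=1, increment=1,
                             max_lifetime_session=30)

def show_sid(label):
    # Acquire a connection, print its SID, then release it back to the pool.
    with pool.acquire() as connection:
        with connection.cursor() as cursor:
            sid, = cursor.execute(
                "SELECT SYS_CONTEXT('USERENV','SID') FROM DUAL").fetchone()
            print(label, "SID:", sid)

show_sid("before sleep")
time.sleep(60)           # plain Python sleep, longer than max_lifetime_session
show_sid("after sleep")  # a different SID here means the session was replaced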
Thank you for confirming.
I set max=1 and reran the same procedure to check whether the SID changed. However, even after waiting longer than max_lifetime_session, the SID did not change.
I was planning to use max_lifetime_session in order to reduce the workload during planned maintenance by regularly renewing DB sessions. To handle this with retries instead, I am considering the following two options (a rough sketch of the first option follows this list):
- Set FAN events to true and add error-handling/retry logic.
- Configure TAC (Transparent Application Continuity) on the DB side.
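For the first option, a rough sketch along these lines might work. The pool parameters, retry count, and credential placeholders are illustrative only, and the database service must have FAN notifications configured.

import cx_Oracle

# Rough sketch of option 1: enable FAN events on the pool and retry once
# when an acquired session turns out to be dead. All values are placeholders.
pool = cx_Oracle.SessionPool(user="<username>", password="<password>",
                             dsn="<conn_str>", min=2, max=5, increment=1,
                             events=True)   # subscribe to FAN notifications

def fetch_with_retry(sql, bind=None, retries=1):
    for attempt in range(retries + 1):
        try:
            with pool.acquire() as connection:
                with connection.cursor() as cursor:
                    cursor.execute(sql, bind or [])
                    return cursor.fetchall()
        except cx_Oracle.DatabaseError:
            # After a FAN down event the pool drops dead sessions, so a
            # retry should acquire a fresh connection.
            if attempt == retries:
                raise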
I will close this issue. Thank you for your support.
I'm going to reopen this since I was also seeing something I wanted to check. Having it open will remind me to look at it.
This is what I was testing:
import os
import time

import oracledb

MLT = 20
SLEEP = 120

un = os.environ.get('PYTHON_USERNAME')
pw = os.environ.get('PYTHON_PASSWORD')
cs = os.environ.get('PYTHON_CONNECTSTRING')

oracledb.init_oracle_client()

sql = "select sid || ' ' || serial# from v$session where sid = sys_context('USERENV','SID')"

#------------------------------------------------------------------------------

def start_pool():
    pool_min = 0
    pool_max = 1
    pool_inc = 0
    pool = oracledb.create_pool(user=un, password=pw, dsn=cs,
                                min=pool_min,
                                max=pool_max,
                                increment=pool_inc,
                                max_lifetime_session=MLT,
                                session_callback=init_session)
    return pool

def init_session(connection, requestedTag_ignored):
    with connection.cursor() as cursor:
        s, = cursor.execute(sql).fetchone()
        print(f"In init_session: SID and SERIAL#: {s}")

#------------------------------------------------------------------------------

pool = start_pool()

with pool.acquire() as connection:
    s, = connection.cursor().execute(sql).fetchone()
    print(f"In block 1: SID and SERIAL# {s}")
    print(f"Sleeping {SLEEP} seconds with max_lifetime_session of {MLT}")
    connection.cursor().callproc("dbms_session.sleep", (MLT,))

time.sleep(2)

with pool.acquire() as connection:
    s, = connection.cursor().execute(sql).fetchone()
    print(f"In block 2: SID and SERIAL# {s}")

with pool.acquire() as connection:
    s, = connection.cursor().execute(sql).fetchone()
    print(f"In block 3: SID and SERIAL# {s}")
The output is like:
cjones@mdt:~/p$ python3 cp.py
In init_session: SID and SERIAL#: 73 14816
In block 1: SID and SERIAL# 73 14816
Sleeping 120 seconds with max_lifetime_session of 20
In block 2: SID and SERIAL# 73 14816
In init_session: SID and SERIAL#: 73 64501
In block 3: SID and SERIAL# 73 64501
cjones@mdt:~/p$
This shows the session was recreated. However, when I played with the timings, or set the pool min size to 1, I got different behavior. I suspect there are some heuristics in the implementation.
But fundamentally, trying to reduce the number of sessions to help failover is optimizing for arguably the wrong case. It would generally be better to keep sessions open so that normal app access is efficient. If you are worried about the number of sessions, then try DRCP (a rough sketch is below).
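For completeness, here is a minimal DRCP sketch using python-oracledb. The ":pooled" connect string, connection class name, and credentials are placeholders, and DRCP must already be started on the database.

import oracledb

# Minimal DRCP sketch; all names are placeholders and DRCP must already be
# running on the database (DBMS_CONNECTION_POOL.START_POOL).
pool = oracledb.create_pool(user="<username>", password="<password>",
                            dsn="<host>:<port>/<service_name>:pooled",
                            min=2, max=5, increment=1,
                            cclass="MYAPP",               # DRCP connection class
                            purity=oracledb.PURITY_SELF)  # session purity for DRCP reuse

with pool.acquire() as connection:
    with connection.cursor() as cursor:
        cursor.execute("select sys_context('USERENV','SID') from dual")
        print("DRCP session SID:", cursor.fetchone()[0])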
This issue has been automatically marked as inactive because it has not been updated recently. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not been updated for a month.