pacman82/odbc-api

Unexpected return value 'SqlReturn(-2)' for ODBC function 'SQLGetConnectAttr'

bbigras opened this issue · 11 comments

It seems to happen when I call conn.is_dead().

thread 'tokio-runtime-worker' panicked at /home/bbigras/.cargo/registry/src/index.crates.io-6f17d22bba15001f/odbc-api-3.0.1/src/handles/sql_result.rs:80:18:
Unexpected return value 'SqlReturn(-2)' for ODBC function 'SQLGetConnectAttr'

hfsql
odbc-api v3.0.1
libiodbc-3.52.16

-2 means "invalid handle". However, I am wondering whether connection pooling and detection of dead connections are simply not supported by iodbc at all. I will check whether this feature was introduced in ODBC 3.8. If so, I will hide it behind a compile-time feature, so you cannot call it by accident when using iodbc.
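Roughly, such a gate could look like the sketch below (the feature name odbc_version_3_80 is only a placeholder, not an existing odbc-api feature):

pub struct Connection;

impl Connection {
    /// Only compiled when built with `--features odbc_version_3_80`, so
    /// an application linked against iodbc cannot call it by accident.
    #[cfg(feature = "odbc_version_3_80")]
    pub fn is_dead(&self) -> bool {
        // Placeholder body; the real check queries the connection
        // attribute SQL_ATTR_CONNECTION_DEAD via SQLGetConnectAttr.
        false
    }
}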

It is ODBC 3.5, so it could be supported by iodbc. See: https://learn.microsoft.com/en-us/sql/odbc/reference/syntax/sqlsetconnectattr-function?view=sql-server-ver16

Of course, that does not imply it actually is supported, or that it would work with your driver. Are you using any unsafe code? The error indicates that the Connection has already been destroyed. Calling functions on an invalid Connection handle should be prevented by odbc-api wrapping the handles in a RAII type.
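To illustrate the pattern (only a sketch, not the actual odbc-api source): the wrapper owns the raw handle and frees it exactly once on drop, so safe Rust code can never reach a freed handle.

use std::ffi::c_void;

/// Stand-in for odbc-api's connection wrapper around a raw ODBC HDBC.
struct ConnectionHandle {
    raw: *mut c_void,
}

impl Drop for ConnectionHandle {
    fn drop(&mut self) {
        // This is where SQLFreeHandle(SQL_HANDLE_DBC, self.raw) would be
        // called. Afterwards the borrow checker rejects any further use
        // of the wrapper, so no safe code can touch the freed handle.
        let _ = self.raw;
    }
}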

Are you using any unsafe code?

Not in my app directly.

I'm using crates like axum and tokio that do have unsafe code according to cargo geiger.

The error indicates that the Connection has already been destroyed.

I'm using odbc-api with bb8.

If I use the following code (with a pool of one SQL connection), I can run a query about 10 times before is_dead() is called and the error occurs.

use std::sync::atomic::{AtomicUsize, Ordering};
static COUNT: AtomicUsize = AtomicUsize::new(0);

#[async_trait]
impl ManageConnection for Manager {
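    // (excerpt: connect and is_valid are elided; the full impl is shown below)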
    fn has_broken(&self, conn: &mut Self::Connection) -> bool {
        trace!("has_broken()");

        let old_count = COUNT.fetch_add(1, Ordering::SeqCst);
        dbg!(&old_count);
        if old_count > 10 {
            conn.is_dead().unwrap_or(true)
        } else {
            false
        }
    }
}

Where and how do you create and own the connections?

Like this:

use anyhow::{Context, Error};
use async_trait::async_trait;
use bb8::ManageConnection;
use lazy_static::lazy_static;
use odbc_api::buffers::{BufferDesc, ColumnarBuffer, NullableSlice};
use odbc_api::{Connection, ConnectionOptions, Cursor, Environment};
use tracing::{debug, instrument, trace};

use std::env;

use std::sync::atomic::{AtomicUsize, Ordering};
static COUNT: AtomicUsize = AtomicUsize::new(0);

lazy_static! {
    pub static ref ENV: Environment = Environment::new().unwrap();
}

pub struct Manager {}

#[async_trait]
impl ManageConnection for Manager {
    type Connection = Connection<'static>;
    type Error = Error;

    #[instrument(skip(self))]
    async fn connect(&self) -> Result<Self::Connection, Self::Error> {
        trace!("connect()");
        debug!("ODBC: connecting");

        let bd_driver = env::var("BD_DRIVER").context("BD_DRIVER not set")?;
        let bd_host = env::var("BD_HOST").context("BD_HOST not set")?;
        let bd_port = env::var("BD_PORT").context("BD_PORT not set")?;
        let bd_database = env::var("BD_DATABASE").context("BD_DATABASE not set")?;
        let bd_user = env::var("BD_USER").context("BD_USER not set")?;
        let bd_password = env::var("BD_PASSWORD").context("BD_PASSWORD not set")?;

        let conn_str = format!(
            "DRIVER={};Server Name={};Server Port={};Database={};UID={};PWD={}",
            bd_driver, bd_host, bd_port, bd_database, bd_user, bd_password
        );

        let conn = tokio::task::spawn_blocking(move || {
            ENV.connect_with_connection_string(&conn_str, ConnectionOptions::default())
        })
        .await??;

        debug!("ODBC: connected");

        Ok(conn)
    }

    #[instrument(level = "debug", skip_all, ret)]
    async fn is_valid(&self, conn: &mut Self::Connection) -> Result<(), Self::Error> {
        trace!("is_valid()");
        // debug!("ODBC: check if connection is still valid");

        // workaround since I can't use "SELECT 1"
        let cursor = conn
            .execute("SELECT ID FROM some_table WHERE ID = 2;", ())?
            .context("no result")?;

        let mut buffers = ColumnarBuffer::from_descs(1, [BufferDesc::I64 { nullable: true }]);

        let mut row_set_cursor = cursor.bind_buffer(&mut buffers)?;

        let batch = row_set_cursor.fetch()?.context("no batch")?;

        {
            let col1 = batch.column(0);
            let b: NullableSlice<i64> = col1.as_nullable_slice().unwrap();
            let (_values, _indicators) = b.raw_values();
            // ensure!(values, &[2]);
        }

        Ok(())
    }

    #[instrument(level = "debug", skip_all, ret)]
    fn has_broken(&self, conn: &mut Self::Connection) -> bool {
        trace!("has_broken()");

        let old_count = COUNT.fetch_add(1, Ordering::SeqCst);
        dbg!(&old_count);
        if old_count > 10 {
            conn.is_dead().unwrap_or(true)
        } else {
            false
        }
    }
}
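The pool itself is built from this Manager roughly like so (a sketch reusing the imports above; the pool size of 1 matches what I mentioned earlier):

async fn make_pool() -> Result<bb8::Pool<Manager>, Error> {
    let pool = bb8::Pool::builder().max_size(1).build(Manager {}).await?;
    Ok(pool)
}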

Hello @bbigras ,

odbc-api 4.0.1 is released. It features additional debug logging around allocating and de-allocating Connection handles.

My current working hypothesis is that the ODBC driver you are using confuses a lost connection with an invalid handle. Here are the four things that, in my opinion, could cause an invalid handle:

  1. ODBC Driver invalidates handle erroneously.
  2. ODBC Driver Manager (iodbc) invalidates handle erroneously.
  3. Application code (including the connection pool library you are using) has a "use after free" bug. In Rust this is only possible in unsafe code. Honestly, I doubt it. Yet I have no time to review third-party connection pool libraries.
  4. odbc-api frees or invalidates handles otherwise, without ensuring that their wrappers are destroyed, violating its invariant. I am currently assuming this is not the case, mostly because these are straightforward RAII wrappers. If this were the case, it would also be reproducible with Microsoft SQL Server or PostgreSQL, with unixODBC or on Windows.

3. and 4. are hard to get wrong. On the other hand, I could honestly see someone thinking, "well, the connection to the database is interrupted, I guess this means the handle is invalid". Unless I can see an example which I can reproduce in the test setup using PostgreSQL or Microsoft SQL Server, I claim odbc-api to be innocent in this. However, the new debug log output should enable you to verify for yourself, without inspecting too much code, whether or not the application actually frees the handles. Look out for SQLFreeHandle.
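The messages go out through the standard log facade, so any log implementation will show them; for example with the env_logger crate (an assumption about your setup, any subscriber works):

fn main() {
    // Filter with RUST_LOG=odbc_api=debug to see handle allocation and
    // de-allocation, including the SQLFreeHandle calls mentioned above.
    env_logger::init();

    // ... run the application ...
}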

Best, Markus

P.S. If you are using a third-party connection pool rather than the connection pooling native to the iODBC driver manager, I assume you found that the native pooling does not work? If so, why should is_dead work to begin with? Are there calls to is_dead which succeeded on freshly created, valid connections? Or does the method just always fail?

If you are using a third-party connection pool rather than the connection pooling native to the iODBC driver manager, I assume you found that the native pooling does not work?

I get "called Result::unwrap() on an Err value: FailedSettingConnectionPooling" when I call Environment::set_connection_pooling(AttrConnectionPooling::DriverAware).unwrap().

It seems you told me that it wasn't working in AscendingCreations/r2d2_odbc_api#2 (comment) and #148 (comment)
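For what it's worth, the panic itself can be avoided by handling the error instead of unwrapping; a sketch (if I read the odbc-api docs right, the call is unsafe because it must happen before the first Environment is allocated):

use odbc_api::{sys::AttrConnectionPooling, Environment};

fn main() {
    // Must run before the first Environment is allocated.
    let result =
        unsafe { Environment::set_connection_pooling(AttrConnectionPooling::DriverAware) };
    if let Err(error) = result {
        // With iodbc we end up here instead of panicking.
        eprintln!("connection pooling not supported: {error}");
    }
}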

If so, why should is_dead work to begin with?

I have no idea. I thought that I could call is_dead without using the connection pooling native to the iODBC driver manager.

Are there calls to is_dead which succeeded on freshly created valid connections? Or does the method just always fail?

It seems to always fail.

"use after free" bug

Any easy way to detect that? Maybe with gdb or the Rust sanitizers (e.g. AddressSanitizer).

It seems you need another way to check for dead connections. As I understand it, is_dead is part of the driver manager's native connection pooling. While iodbc claims to be ODBC 3.5, it just does not seem to implement that part of the standard. Since for your application you know the database you are working with, my personal approach would likely be to send a dummy request and check that it returns without error to see if the connection is still good.
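Something along these lines (a sketch; the query is a placeholder, use whatever cheap statement your database accepts):

use odbc_api::Connection;

fn connection_is_good(conn: &Connection<'_>) -> bool {
    // Any inexpensive statement works; since hfsql rejects `SELECT 1`,
    // query a known table instead.
    conn.execute("SELECT ID FROM some_table WHERE ID = 2;", ()).is_ok()
}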

The only alternative I can think of is to ask the iODBC maintainers to support connection pooling.

It seems you need another way to check for dead connections.

I think I can do without.

With bb8, I can also implement is_valid and use a SQL query to check if the connection is still working.

Something like is_dead would have been nice, since my shitty database server takes about a second for a simple SQL query and I can't seem to run queries like SELECT 1.

I guess this issue can be closed. Unless maybe you want to handle the error or disable is_dead when using iodbc.

Unless maybe you want to handle the error or disable is_dead when using iodbc.

I am thinking about it.