Python client for the Impala distributed query engine.
Fully supported:

- Lightweight, `pip`-installable package for connecting to Impala databases
- Fully DB API 2.0 (PEP 249)-compliant Python client (similar to sqlite or MySQL clients)
- Support for HiveServer2 and Beeswax; support for Kerberos
- Converter to pandas `DataFrame`, allowing easy integration into the Python data stack (including scikit-learn and matplotlib)
In various phases of maturity:

- SQLAlchemy connector; integration with Blaze (see the sketch after this list)
- `BigDataFrame` abstraction for performing pandas-style analytics on large datasets (similar to Spark's RDD abstraction); computation is pushed into the Impala engine.
- `scikit-learn`-flavored wrapper for MADlib-style prediction, allowing for large-scale, distributed machine learning (see the Impala port of MADlib)
- Compiling UDFs written in Python into low-level machine code for execution by Impala (powered by Numba/LLVM)
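
For example, a minimal sketch of querying through the SQLAlchemy connector might look like the following; it assumes the Impala dialect is registered under the `impala://` URL scheme, and the host, port, database, and table names are placeholders:

```python
# Sketch only: assumes impyla's SQLAlchemy dialect resolves the impala:// URL
# scheme; host, port, database, and table names are placeholders.
from sqlalchemy import create_engine

engine = create_engine('impala://my.host.com:21050/default')
with engine.connect() as conn:
    rows = conn.execute('SELECT count(*) FROM mytable').fetchall()
```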
Required for DB API connectivity:

- `python2.6` or `python2.7`
- `thrift>=0.8` (Python package only; no need for code-gen)
Required for UDFs:

- `numba` (which has a few requirements, like LLVM)
- `boost` (because `udf.h` depends on `boost/cstdint.hpp`)
Required for SQLAlchemy integration (and Blaze): `sqlalchemy`

Required for `BigDataFrame`: `pandas`

Required for automated shipping/registering of code/UDFs/BDFs/etc.: `pywebhdfs`
For manipulating results as pandas `DataFrame`s, we recommend installing `pandas` regardless.
Generally, we recommend installing all the libraries above; the UDF libraries are the most difficult to install and are not required if you will not use any Python UDFs. Interacting with Impala through an `ImpalaContext` simplifies shipping data and performs cleanup of temporary data/tables (see the sketch below).
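
As a rough illustration of that workflow, the sketch below creates and closes a context; both the import path and the keyword arguments here are assumptions, so check the `ImpalaContext` docstring in your installed version:

```python
# Hedged sketch: the import path and keyword arguments are assumptions for
# illustration only; consult the ImpalaContext docstring for the real signature.
from impala.context import ImpalaContext

ctx = ImpalaContext(host='my.impalad.com', port=21050,   # Impala daemon (HiveServer2)
                    nn_host='my.namenode.com')           # HDFS NameNode, for shipping data
# ... run queries / build BigDataFrames through ctx ...
ctx.close()  # drops any temporary data/tables created by the context
```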
This project is installed with `setuptools`.

Install the latest release (`0.8.1`) with `pip`:
```bash
pip install impyla
```
For the latest (dev) version, clone the repo:
```bash
git clone https://github.com/cloudera/impyla.git
cd impyla
make  # optional: only for Numba-compiled UDFs; requires LLVM/clang
python setup.py install
```
Impyla implements the Python DB API v2.0 (PEP 249) database interface (refer to it for API details):
```python
from impala.dbapi import connect
conn = connect(host='my.host.com', port=21050)
cursor = conn.cursor()
cursor.execute('SELECT * FROM mytable LIMIT 100')
print cursor.description  # prints the result set's schema
results = cursor.fetchall()
```
Note: the specified port number should be for the HiveServer2 service (defaults to 21050 in CM), not Beeswax (defaults to 21000), which is what the Impala shell uses.
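
If you do want to talk to the Beeswax service on its own port instead, the sketch below assumes `connect` takes a `protocol` keyword to choose between the two services; verify the parameter name against the `connect` signature in your installed version:

```python
# Assumption: a `protocol` keyword selects between 'hiveserver2' and 'beeswax'.
conn = connect(host='my.host.com', port=21000, protocol='beeswax')
```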
The `Cursor` object also supports the iterator interface, which is buffered (controlled by `cursor.arraysize`):
```python
cursor.execute('SELECT * FROM mytable LIMIT 100')
for row in cursor:
    process(row)
```
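
Because impyla follows PEP 249, `cursor.arraysize` also sets the default batch size for `fetchmany()`; for example:

```python
cursor.arraysize = 1000                            # rows fetched per round trip
cursor.execute('SELECT * FROM mytable LIMIT 100')
batch = cursor.fetchmany()                         # up to cursor.arraysize rows
```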
You can also get back a pandas `DataFrame` object:

```python
from impala.util import as_pandas
df = as_pandas(cursor)
# carry df through scikit-learn, for example
```