msiemens/tinydb

Cannot keep reference to table without keeping a reference to the database

michou opened this issue · 4 comments

This throws ValueError: I/O operation on closed file:

from tinydb import TinyDB, Query

db = TinyDB('./db.json').table('foo')
db.insert({'something': 'else'})
db.insert({'int': 13})
print(db.search(Query()['int'] == 13))
print(db.all())

while this works:

from tinydb import TinyDB, Query

db = TinyDB('./db.json')
table = db.table('foo')
table.insert({'something': 'else'})
table.insert({'int': 13})
print(table.search(Query()['int'] == 13))
print(table.all())

It took @alexghr and me a couple of hours to figure out that it had to do with the garbage collector deleting the db object (and thereby closing the database).
Would it make sense to have the documentation explicitly state that losing the reference to the database will mess up your tables?

I found it especially confusing because the documentation states that TinyDB "uses a table named _default as default table. All operations on the database object (like db.insert(...)) operate on this table." The way I read it, this meant operations on the database object were identical in behavior to operations on a table object, which is not the case.

> Would it make sense to have the documentation explicitly state that losing the reference to the database will mess up your tables?

Actually I count that as a bug. If nothing comes up, a bugfix release should be published by the end of the week.

Bug Background

The offending lines are tinydb/database.py:131-133:

def __del__(self):
    # Close the underlying storage when the TinyDB object is garbage collected
    if self._opened is True:
        self.close()
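The effect of this __del__ can be reproduced without TinyDB at all. The following is a minimal stand-in (FakeStorage, FakeDB, and FakeTable are hypothetical names, not TinyDB's actual classes) that mimics the structure: the table remembers only the storage, not the DB object that created it.

```python
class FakeStorage:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class FakeTable:
    def __init__(self, storage):
        # The table references only the storage, not its parent DB
        self._storage = storage


class FakeDB:
    def __init__(self, storage):
        self._storage = storage
        self._opened = True

    def table(self, name):
        return FakeTable(self._storage)

    def close(self):
        self._opened = False
        self._storage.close()

    def __del__(self):
        # Mirrors tinydb/database.py: close the storage on garbage collection
        if self._opened is True:
            self.close()


storage = FakeStorage()
table = FakeDB(storage).table('foo')
# On CPython, the FakeDB temporary's refcount hits zero right after the
# expression above, so __del__ runs and closes the storage immediately.
print(storage.closed)  # True on CPython
```

Any later operation on `table` that touches the storage would then fail, which is exactly the "I/O operation on closed file" symptom reported above.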

Here the storage is closed when the TinyDB object gets garbage collected. As you've pointed out, after running TinyDB('./db.json').table('foo') there are no more references to the TinyDB object, so the underlying storage gets closed and any future access fails.
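One way to fix this is to give each table a strong back-reference to the database that created it, so the DB object stays alive for as long as any table does. The sketch below uses hypothetical class names (Storage, DB, Table) and is not TinyDB's actual implementation, just an illustration of the idea:

```python
class Storage:
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class Table:
    def __init__(self, db, name):
        # Strong back-reference: the DB cannot be collected before the table
        self._db = db
        self._name = name


class DB:
    def __init__(self, storage):
        self._storage = storage
        self._opened = True

    def table(self, name):
        # Hand the table the DB itself, not just the storage
        return Table(self, name)

    def close(self):
        self._opened = False
        self._storage.close()

    def __del__(self):
        if self._opened is True:
            self.close()


storage = Storage()
table = DB(storage).table('foo')
print(storage.closed)  # False: the live table keeps the DB (and its storage) open

del table              # only now is the last reference to the DB gone
print(storage.closed)  # True on CPython: __del__ finally runs
```

With this layout, `TinyDB('./db.json').table('foo')` would behave the same as keeping an explicit `db` variable, since the table itself anchors the database in memory.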

TinyDB v3.1.3 which includes the fix is now released :)

Yay!