debug mechanism
ctzurcanu opened this issue · 20 comments
Trying to understand what is going on... I guess a lot of the PDO work happens on line 457:

if ($result[$hash]->execute($data) === true)

Could there be an HTTP parameter so we can see (in a debug mode) what exact query is intended to be executed? I am trying to hack into this to make it compatible with the CouchDB REST API, and I am not as good at coding as you are...
@ctzurcanu I'm not really sure a debug mechanism would be wise to implement. Besides potentially messing up the JSON output, it could also expose more information than is desirable.
Perhaps a log file would be a viable option. What do you think?
A log file is good enough, anything to get me started :)
I will document the next bugs I file here more thoroughly (I think I found at least one).
I'll see what I can do about a log file then.
If you found a bug, please do share! :-)
@ctzurcanu I decided I'm not going to implement a logging mechanism on ArrestDB after all, but if you want, you can replace the ArrestDB::Query() method on your end with the following code snippet:
public static function Query($query = null)
{
    static $db = null;
    static $result = array();

    try
    {
        if (isset($db, $query) === true)
        {
            // MySQL uses backticks instead of double quotes for identifiers
            if (strncasecmp($db->getAttribute(\PDO::ATTR_DRIVER_NAME), 'mysql', 5) === 0)
            {
                $query = strtr($query, '"', '`');
            }

            // cache prepared statements, keyed by a CRC32 hash of the query
            if (empty($result[$hash = crc32($query)]) === true)
            {
                $result[$hash] = $db->prepare($query);
            }

            $data = array_slice(func_get_args(), 1);

            // flatten nested argument arrays into a single list of bound values
            if (count($data, COUNT_RECURSIVE) > count($data))
            {
                $data = iterator_to_array(new \RecursiveIteratorIterator(new \RecursiveArrayIterator($data)), false);
            }

            // log the query with the bound values interpolated into the ? placeholders
            file_put_contents(__DIR__ . '/query.log', '[QUERY] ' . vsprintf(str_replace(array('%', '?'), array('%%', "'%s'"), $query), $data) . "\n", FILE_APPEND);

            if ($result[$hash]->execute($data) === true)
            {
                $sequence = null;

                // PostgreSQL needs the sequence name to fetch the last insert id
                if ((strncmp($db->getAttribute(\PDO::ATTR_DRIVER_NAME), 'pgsql', 5) === 0) && (sscanf($query, 'INSERT INTO %s', $sequence) > 0))
                {
                    $sequence = sprintf('%s_id_seq', trim($sequence, '"'));
                }

                // return a result appropriate to the type of statement
                switch (strstr($query, ' ', true))
                {
                    case 'INSERT':
                    case 'REPLACE':
                        return $db->lastInsertId($sequence);

                    case 'UPDATE':
                    case 'DELETE':
                        return $result[$hash]->rowCount();

                    case 'SELECT':
                    case 'EXPLAIN':
                    case 'PRAGMA':
                    case 'SHOW':
                        return $result[$hash]->fetchAll();
                }

                return true;
            }

            return false;
        }

        else if (isset($query) === true)
        {
            $options = array
            (
                \PDO::ATTR_CASE => \PDO::CASE_NATURAL,
                \PDO::ATTR_DEFAULT_FETCH_MODE => \PDO::FETCH_ASSOC,
                \PDO::ATTR_EMULATE_PREPARES => false,
                \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION,
                \PDO::ATTR_ORACLE_NULLS => \PDO::NULL_NATURAL,
                \PDO::ATTR_STRINGIFY_FETCHES => false,
            );

            // SQLite DSN, e.g. "sqlite://path/to/database.db"
            if (preg_match('~^sqlite://([[:print:]]++)$~i', $query, $dsn) > 0)
            {
                $options += array
                (
                    \PDO::ATTR_TIMEOUT => 3,
                );

                $db = new \PDO(sprintf('sqlite:%s', $dsn[1]), null, null, $options);

                $pragmas = array
                (
                    'automatic_index' => 'ON',
                    'cache_size' => '8192',
                    'foreign_keys' => 'ON',
                    'journal_size_limit' => '67110000',
                    'locking_mode' => 'NORMAL',
                    'page_size' => '4096',
                    'recursive_triggers' => 'ON',
                    'secure_delete' => 'ON',
                    'synchronous' => 'NORMAL',
                    'temp_store' => 'MEMORY',
                    'journal_mode' => 'WAL',
                    'wal_autocheckpoint' => '4096',
                );

                // on non-Windows systems, tune the page and cache sizes to the host
                if (strncasecmp(PHP_OS, 'WIN', 3) !== 0)
                {
                    $memory = 131072;

                    if (($page = intval(shell_exec('getconf PAGESIZE'))) > 0)
                    {
                        $pragmas['page_size'] = $page;
                    }

                    if (is_readable('/proc/meminfo') === true)
                    {
                        if (is_resource($handle = fopen('/proc/meminfo', 'rb')) === true)
                        {
                            while (($line = fgets($handle, 1024)) !== false)
                            {
                                if (sscanf($line, 'MemTotal: %d kB', $memory) == 1)
                                {
                                    $memory = round($memory / 131072) * 131072;
                                    break;
                                }
                            }

                            fclose($handle);
                        }
                    }

                    $pragmas['cache_size'] = intval($memory * 0.25 / ($pragmas['page_size'] / 1024));
                    $pragmas['wal_autocheckpoint'] = $pragmas['cache_size'] / 2;
                }

                foreach ($pragmas as $key => $value)
                {
                    $db->exec(sprintf('PRAGMA %s=%s;', $key, $value));
                }
            }

            // MySQL / PostgreSQL DSN, e.g. "mysql://user:pass@host:port/dbname"
            else if (preg_match('~^(mysql|pgsql)://(?:(.+?)(?::(.+?))?@)?([^/:@]++)(?::(\d++))?/(\w++)/?$~i', $query, $dsn) > 0)
            {
                if (strncasecmp($query, 'mysql', 5) === 0)
                {
                    $options += array
                    (
                        \PDO::ATTR_AUTOCOMMIT => true,
                        \PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES "utf8" COLLATE "utf8_general_ci", time_zone = "+00:00";',
                        \PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => true,
                    );
                }

                $db = new \PDO(sprintf('%s:host=%s;port=%s;dbname=%s', $dsn[1], $dsn[4], $dsn[5], $dsn[6]), $dsn[2], $dsn[3], $options);
            }
        }
    }

    catch (\Exception $e)
    {
        // log exceptions to the same file as the queries
        file_put_contents(__DIR__ . '/query.log', '[EXCEPTION] ' . $e->getMessage() . "\n", FILE_APPEND);
        return false;
    }

    return (isset($db) === true) ? $db : false;
}
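For reference, a usage sketch (the calling convention is inferred from the func_get_args() call above, so treat the details as assumptions rather than documentation):

// connect once by passing a DSN-style string (see the preg_match() branches)
ArrestDB::Query('mysql://user:password@localhost/dbname');

// then pass SQL with ? placeholders, followed by the values to bind;
// SELECTs return the fetched rows, UPDATE/DELETE the affected row count
$bar = 42;
$rows = ArrestDB::Query('SELECT * FROM "foo" WHERE "bar" = ?;', $bar);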
That method will log all errors and queries to ./query.log. I think the variable interpolation in the prepared statements works, but note that I haven't tested it.
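If you want to sanity-check that interpolation in isolation, this minimal sketch (with made-up values) shows what the logging line does:

$query = 'SELECT * FROM "foo" WHERE "bar" = ? LIMIT ? OFFSET ?;';
$data = array('baz', 10, 0);

// escape literal % signs first, then turn each ? placeholder into a quoted %s,
// so vsprintf() substitutes the bound values into the SQL text
echo vsprintf(str_replace(array('%', '?'), array('%%', "'%s'"), $query), $data);
// prints: SELECT * FROM "foo" WHERE "bar" = 'baz' LIMIT '10' OFFSET '0';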
Thank you for your effort.
However, in exactly the use case where I would have caught the bug, the script fails to print the query to the file :(
Namely, when doing a PUT to <path>/table/1 with field assignments, the log file contains:
[QUERY] SELECT * FROM `tree_cache2` LIMIT 10 OFFSET 0;
----------to this point all worked ok -----------
[QUERY]
[EXCEPTION] SQLSTATE[HY093]: Invalid parameter number
The error is:

Warning: vsprintf(): Too few arguments in /Applications/XAMPP/xamppfiles/htdocs/subject.ro/db2/index.php on line 494

Line 494 is your:

file_put_contents(__DIR__ . '/query.log', '[QUERY] ' . vsprintf(str_replace(array('%', '?'), array('%%', "'%s'"), $query), $data) . "\n", FILE_APPEND);
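In case it helps, here is my guess at what is happening (a reconstruction of my own, assuming the UPDATE builds one more ? placeholder than there are bound values):

$query = 'UPDATE "tree_cache2" SET "id" = ?, "lang" = ? WHERE "id" = ?;';
$data = array('1', 'ES'); // one value short of the three placeholders

// vsprintf() sees three %s conversions but only two arguments, so it emits the
// "Too few arguments" warning and returns false, leaving a bare "[QUERY] " in
// the log; execute($data) then fails with SQLSTATE[HY093]: Invalid parameter number
file_put_contents(__DIR__ . '/query.log', '[QUERY] ' . vsprintf(str_replace(array('%', '?'), array('%%', "'%s'"), $query), $data) . "\n", FILE_APPEND);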
@ctzurcanu It seems to work here:
[QUERY] SELECT * FROM "foo";
[QUERY] SELECT * FROM "foo" LIMIT 10 OFFSET 0;
[QUERY] SELECT * FROM "foo" WHERE "bar" = '42' LIMIT 10 OFFSET 0;
[QUERY] SELECT * FROM "foo" WHERE "bar" LIKE 'baz' LIMIT 10 OFFSET 0;
[QUERY] SELECT * FROM "foo" WHERE "bar" LIKE 'baz%' LIMIT 10 OFFSET 0;
What URL did you try to access? And have you changed other parts of the code?
As I was saying: it worked up to the point of the first PUT (which was an update). In other words, I said the selects worked: "----------to this point all worked ok -----------"
Ah, sorry, I missed that. Let me take a look.
And to answer "have you changed other parts of the code?":
Of course I did, but I try to keep my changes in areas unrelated to this issue, to make sure I do not take up your time where the problem is not in your original code. To the best of my ability.
I thought the error was related to SELECT statements, and those worked fine here, hence the question.
But I've found the bug, my bad. Fixing it now. ;-)
@ctzurcanu This is now fixed in 1bf6ee2. It was a bug introduced by the bulk insert commit.
The log error message and the other bug you mentioned should be gone now (hopefully).
Thanks for reporting this!
Is there any way to still have the logging ability in your code? It may be important for any future bugs :)
@ctzurcanu Of course, the ArrestDB::Query() method remains exactly the same.
Sorry... the class of bugs I discovered still persists.
Now the update does not throw any error, nor does it log anything.
It gives a response of:
{"error":{"code":204,"status":"No Content"}}
but php://input carries data for the fields; I can see it in the browser's debug tools.
@ctzurcanu Can you provide a complete test case with the client code as well, so I can try it on my end?
I tested POST and PUT endpoints with Postman and didn't have any problems with them.
This is something I feared. The client is quite large, and it will take a while to model my particulars on JSFiddle for this JavaScript call.
I will try to do it with Postman (which I did not know about, thank you!), but it may also take a while.
But if you can make sense of this:
curl 'http://localhost/subject.ro/db2/tree_cache2/1' -X PUT -H 'Cookie: PHPSESSID=eaff5d2c329b718e8f51c96f4b3f0ea3; laravel_payload=eyJpdiI6Ik5qTGY1ZEk1SlhSOW5JMGEwY05XU1NNOG1lM1dkcEM0YmJSd3FuTjRrTnc9IiwidmFsdWUiOiJmZkxKNFZtUFZXR0UxbmJ0UFh5dFhIOXJ1eEJTRkM4emR3Y2gxWVg3czZwT3RIODVINys5MHQ0dmc2c1pqRkQ2TW1GYmZlQ0tuSEJpalBhMnNtWUNNcm9YdmhrRTBUaWlSaUkrWmU3Rk5nSUZSMDV1WStVanQrUTdZc2JMTmlUS2FGcXhhR2FyRDd1TlpKaHdVMzIwZ2hTbEdhR0lWeWlLSkJqVnlSaDFub1lIdWtVQ1FCaGlcL1UzMUhYcnpYSzJzNmhOXC9pd2I5alJQWm5zdEhoZzRhOG5YVDhXUWE4YlNreFpaaHE1Q0ZDMVlsYzM1b3JSRzl0RkhMSDlNTkh6TG9ic3hWbVlOR1EzT1VDVjN3XC85bjZOVXVkWitKektWZkVDbVpKYlRyRzh5Yz0iLCJtYWMiOiI0MzUwZGI2MTk2YzYwNGI2MWI2NTgxMWQ3NTQ2MWRkMDdiNzk3NmU2ODUwMWI1YTM4ZDliMWJmZTNlNGY5Y2EyIn0%3D; chkcookie=1366488888760; __utma=1.886770279.1360611520.1372430500.1372435095.60; __utmc=1; __atsa=sh=print%2Cemail%2Ccompact; __atuvc=0%7C50%2C5%7C51%2C120%7C2%2C5%7C3%2C36%7C4; mjx.menu=zscale%3A400%25%26%3Bzoom%3AClick' -H 'Origin: http://localhost' -H 'Accept-Encoding: gzip,deflate,sdch' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.77 Safari/537.36' -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' -H 'Accept: */*' -H 'Referer: http://localhost/couchdb/start/indx2.html' -H 'X-Requested-With: XMLHttpRequest' -H 'Connection: keep-alive' --data '_id=&_rev=&id=1&parent_id=&lang=ES&label=Terminologia+Morphologica&uri=http%3A%2F%2Flocalhost%2Ftematres%2Fvocab%2F%3Ftema%3D59266&ancestors_json=&kids_json=%5B%7B%22data%22%3A%22Terminologia+Anatomica%22%2C%22state%22%3A%22closed%22%2C%22attr%22%3A%7B%22data-uri%22%3A%22http%3A%5C%2F%5C%2Flocalhost%5C%2Ftematres%5C%2Fvocab%5C%2F%3Ftema%3D51522%22%7D%7D%2C%7B%22data%22%3A%22Terminologia+Embryologica%22%2C%22state%22%3A%22closed%22%2C%22attr%22%3A%7B%22data-uri%22%3A%22http%3A%5C%2F%5C%2Flocalhost%5C%2Ftematres%5C%2Fvocab%5C%2F%3Ftema%3D1%22%7D%7D%2C%7B%22data%22%3A%22Terminologia+Histologica%22%2C%22state%22%3A%22closed%22%2C%22attr%22%3A%7B%22data-uri%22%3A%22http%3A%5C%2F%5C%2Flocalhost%5C%2Ftematres%5C%2Fvocab%5C%2F%3Ftema%3D46901%22%7D%7D%5D' --compressed
that would put it on the fast track.
The relevant data is "_id=&_rev=&id=1&parent_id=&lang=ES&label=Terminologia+Morphologica"; the rest is optional.
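Or, the same request trimmed down to what I believe are the essentials (the cookies and browser headers shouldn't matter):

curl 'http://localhost/subject.ro/db2/tree_cache2/1' -X PUT \
     -H 'Content-Type: application/x-www-form-urlencoded; charset=UTF-8' \
     --data '_id=&_rev=&id=1&parent_id=&lang=ES&label=Terminologia+Morphologica'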
Yes! I was able to replicate it with Postman:
a PUT to
http://localhost/subject.ro/db2/tree_cache2/1
with form data:
id: 1
lang: ES
returns:
{
"error": {
"code": 204,
"status": "No Content"
}
}
@ctzurcanu I guess I should have documented that... ArrestDB doesn't handle multipart/form-data payloads, not even in POST requests. You have to use application/x-www-form-urlencoded until I implement multipart parsing. With the cURL CLI, I think this is done with the --data parameter.
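To illustrate the difference with a rough sketch (my own, not the actual ArrestDB parsing code):

$body = file_get_contents('php://input');

// an application/x-www-form-urlencoded body looks like "id=1&lang=ES"
// and decodes with a one-liner:
parse_str($body, $fields); // $fields === array('id' => '1', 'lang' => 'ES')

// a multipart/form-data body instead arrives with boundary markers, e.g.
//   ------WebKitFormBoundaryABC123
//   Content-Disposition: form-data; name="id"
//
//   1
//   ------WebKitFormBoundaryABC123--
// which parse_str() cannot decode, so those requests come up empty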
Please let me know if this solves it so I can mark the issue as closed.
This is very good info.
And the update now shows in the log.
Now I'm back in business :) Thank you for such a fast reaction!