Fatal error while trying to refresh the metadata after an erroneuous attempt to prepare the schema
livejake opened this issue · 2 comments
I'm seeing the following error after going through the QuickStart setup and using my own data.
2019-03-03T01:37:24.010 INFO REPL - 0 documents have been cloned to main.jobs
2019-03-03T01:38:14.494 INFO REPL - 1000 documents have been cloned to main.jobs
2019-03-03T01:38:44.983 INFO REPL - 2000 documents have been cloned to main.jobs
2019-03-03T01:38:59.047 INFO REPL - 3000 documents have been cloned to main.jobs
2019-03-03T01:39:09.963 INFO REPL - 5000 documents have been cloned to main.jobs
2019-03-03T01:44:50.514 FATAL TOROD - Fatal error while trying to refresh the metadata after an erroneuous attempt to prepare the schema
2019-03-03T01:44:50.520 ERROR LIFECYCLE - Error reported by com.torodb.torod.impl.sql.schema.Logic@10b4ff31. Stopping ToroDB Stampede
2019-03-03T01:44:50.521 INFO LIFECYCLE - Shutting down ToroDB Stampede
2019-03-03T01:44:50.525 INFO REPL - Shutting down replication service
2019-03-03T01:44:50.526 ERROR REPL - Fatal error while starting recovery mode: Metadata hasn't been initalized
2019-03-03T01:44:50.527 ERROR REPL - Catched an error on the replication layer. Escalating it
2019-03-03T01:44:50.527 ERROR LIFECYCLE - Error reported by replication supervisor. Stopping ToroDB Stampede
2019-03-03T01:44:50.527 INFO REPL - Recived a request to stop the recovering service
2019-03-03T01:44:50.586 INFO REPL - Topology service shutted down
2019-03-03T01:44:50.592 INFO REPL - Replication service shutted down
Looking through the logs with --log4j2-file log4j2.xml, I see the following error preceding the fatal one.
com.torodb.backend.exceptions.BackendException: On context ADD_COLUMN with sqlState 54011: ERROR: tables can have at most 1600 columns
What is the best approach here? I see you can filter databases and collections, but is there a way to exclude part of a collection that is generating tons of columns? Or should I try to copy the mongo database and remove part of the collection?
Hi @livejake, you have actually hit a ToroDB Stampede limit. I will leave this bug open, but we are out of resources to fix it any time soon.
Probably the best way to work around the limitation is to transform the documents in the source database so that their structure is not flat, reducing the number of distinct top-level fields. A trivial example would be something like:
db.main.jobs.find().forEach(function (i) {
    var o = {};
    for (var key in i) {
        if (key != "_id") {
            // Group each field under a subdocument keyed by the first
            // character of its name, so many fields collapse into a few
            // top-level entries (and thus far fewer backend columns).
            if (!o[key.substring(0, 1)]) {
                o[key.substring(0, 1)] = {};
            }
            o[key.substring(0, 1)][key] = i[key];
        } else {
            o[key] = i[key];
        }
    }
    db.main.mapped.jobs.insert(o);
});
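To see what this mapping does to a single document, the same grouping can be written as a plain JavaScript function (hypothetical name groupByFirstChar, runnable outside the mongo shell; this is just a sketch of the transform, not part of ToroDB):

```javascript
// Standalone version of the grouping used in the script above.
// Every top-level field except _id is moved under a subdocument keyed
// by the first character of the field name.
function groupByFirstChar(doc) {
    var out = {};
    for (var key in doc) {
        if (key === "_id") {
            out[key] = doc[key];
        } else {
            var bucket = key.substring(0, 1);
            if (!out[bucket]) {
                out[bucket] = {};
            }
            out[bucket][key] = doc[key];
        }
    }
    return out;
}

// Example: "salary" and "status" share the "s" bucket, "title" gets "t".
var mapped = groupByFirstChar({ _id: 1, salary: 100, status: "open", title: "dev" });
// mapped → { _id: 1, s: { salary: 100, status: "open" }, t: { title: "dev" } }
```

With hundreds of fields per document, this leaves at most one top-level subdocument per starting character instead of one column per field, which keeps the backend table well under PostgreSQL's 1600-column ceiling.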
The problem with this approach is that replication will not work as-is: the mapped copy of the original collection will become stale as new writes arrive in the source collection.