Benchmark Onyx
kidroca opened this issue · 7 comments
Problem
Measure the performance of Onyx
Execution time
- How much time it takes from start to finish for a public method exposed by the API
- E.g. how much time passes between the initial call to `connect` and resolving data to the caller. Same for other methods like `merge`, `set`, `get`...
Method call counts and keys used
- Is `connect` called often to retrieve the same key? How many times?
- When does that happen? Are there many calls during init? Record the timestamps of the calls.
Onyx memory footprint
- How much of the data provided by Onyx is in memory at a given time
- Have a way to check which of the Objects returned by Onyx are still in memory, and the amount of memory they take
Solution
TBD
You don't have to patch timer logic inside Onyx code
You can decorate existing methods like this:
```js
import Onyx from './lib/Onyx';
import withOnyx from './lib/withOnyx';

if (process.env.ONYX_INSTRUMENTATION === 'true') {
    const decorator = require('./lib/benchmarkDecorator');
    decorator.decorate(Onyx);
}

export default Onyx;
export {withOnyx};
```
And the decorator would do something like this:

```js
// `Log` is an assumed timing utility: `timeStart` returns an id
// that `timeEnd` uses to close the measurement.
function decorate(Onyx) {
    const promiseReturningMethods = ['merge', 'set', 'get' /* , ... */];
    promiseReturningMethods.forEach((methodName) => {
        const original = Onyx[methodName];
        const decorated = (...args) => {
            const logId = Log.timeStart(methodName, args);
            return original.apply(Onyx, args)
                .finally(() => Log.timeEnd(logId));
        };
        Onyx[methodName] = decorated;
    });
}
```
If Onyx had a prototype (i.e. were a class), this decorating pattern would also let you track internal usage.
Something similar can be done for methods like `connect` that can't return a promise because they return a connectionID.
This could be further extended to track call counts and memory usage.
- Data returned by Onyx can be tagged, and you'd be able to use dev tools to see how much of the data is in memory, what the actual content is, how much memory it takes, etc.
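For `connect`, which returns a connectionID synchronously, the decorator could record call counts and timestamps per key instead. A minimal sketch, assuming `Onyx.connect` takes a mapping object with a `key` property as described above (the `getConnectStats` helper is hypothetical, added just for inspection):

```javascript
// Sketch: count connect() calls per key and note when they happen.
// Assumes Onyx.connect(mapping) synchronously returns a connectionID.
function decorateConnect(Onyx) {
    const stats = new Map(); // key -> array of call timestamps

    const original = Onyx.connect;
    Onyx.connect = (mapping) => {
        const timestamps = stats.get(mapping.key) || [];
        timestamps.push(Date.now());
        stats.set(mapping.key, timestamps);
        return original.call(Onyx, mapping);
    };

    // Hypothetical helper: expose the collected stats for inspection.
    Onyx.getConnectStats = () => {
        const summary = {};
        stats.forEach((times, key) => {
            summary[key] = {callCount: times.length, timestamps: times};
        });
        return summary;
    };
}
```

This answers "is `connect` called often to retrieve the same key, and when?" without touching Onyx internals.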
Oh, this is cool! Some questions to help me understand it:
- Are you suggesting that this be implemented as some kind of "profile mode", where when enabled it begins tracking this data? Or are you suggesting this is something "always on"? (I would vote for an explicit "profile mode".)
- Are you suggesting that the profiling data be harvested centrally via some kind of server, or just perhaps output in the debug console? (I would vote just logging it to the debug console, so we'd use this internally for testing specific builds.)
- What question is this profiling attempting to answer, and what will we do with the answer?
- For example, the question can't be "Is it faster to read from an in-memory cache than disk?" because we already know the answer is "yes".
- Similarly, it wouldn't be useful to ask "how much time is spent reading from disk?" because I don't know what we'd do with that information: we know it's "a lot", and we know that caching past reads will reduce that (at the expense of RAM).
Basically, I don't think there's any doubt that adding a RAM cache to Onyx would improve read performance. The only question is "by how much" and "can we afford it?" Because my understanding (but please correct me if I'm wrong) is that we regularly run out of RAM even without this cache, so in practice we just don't have enough RAM to spare. So it doesn't really matter how well a cache might work: we just don't have enough RAM to use it.
Incidentally, I'm looking into the implementation of AsyncStorage.get() on Android, and it looks pretty inefficient -- so far as I can tell, it isn't using prepared statements, so it's compiling a totally distinct SQL statement for every single get. This feels like we could pretty easily optimize by:
- Upon opening the database, call `SQLiteDatabase.compileStatement()` to create a prepared statement with something like `"SELECT value FROM catalyst WHERE key=?;"` and save it for repeated use.
- Change `AsyncStorage.getItemImpl()` to call `SQLiteProgram.bindString(1, key)` on the prepared statement (bind indices are 1-based), and then `SQLiteStatement.simpleQueryForString()`.
- Do the same thing for `setImpl()`, which currently uses `SQLiteDatabase.insertWithOnConflict()`; it similarly appears not to use prepared statements and thus unnecessarily compiles a separate SQL statement for each insert.
- Do a native implementation of `AsyncStorage.multiGet()` to select multiple keys in one query.
I think the benchmarking I'd like to see would be of AsyncStorage today, and AsyncStorage with the above optimizations. I bet we could significantly improve read/write performance of AsyncStorage itself on Android, and then contribute this upstream so everyone using ReactNative can benefit.
Additionally, we should look at what settings are used when opening the sqlite database itself.
Specifically, I'd suggest `PRAGMA synchronous = OFF`. This might risk the local database getting corrupted if the phone itself crashes (i.e., runs out of battery) while the app is running, but I think that's a pretty rare occurrence -- and if it does, we can just download all the content again from the server. Configured this way, writes will return to the caller instantly, without waiting to flush to disk.
Additionally, we should look into enabling memory mapping of SQLite, as that will likely provide another substantial performance boost.
The nice thing about memory mapping SQLite is it provides us a RAM cache for free, using the virtual memory subsystem of Linux. So we wouldn't need to actually build a RAM cache into Onyx, because recently used sqlite pages would already be in RAM.
> Are you suggesting that this be implemented as some kind of "profile mode"
Yes, but it's not an on-demand switch: you either make a build with this enabled or not. This way no unused benchmark code ends up in the production bundle.
> Are you suggesting that the profiling data be harvested centrally via some kind of server, or just perhaps output in the debug console?

`Log.timeStart` / `Log.timeEnd` is an abstraction; it can be implemented however you need. It can be used to monitor for a while and print a summary, post it somewhere, print information as each execution runs, or other combinations.
- You can even make an E2E test and measure any benefits or degradations
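One possible shape for that `Log` abstraction, assuming it simply accumulates durations in memory and aggregates them on demand (names and structure here are illustrative, not an existing Onyx API):

```javascript
// Minimal timing log: timeStart/timeEnd pairs accumulate per-method
// durations; summary() aggregates call counts and total time.
const Log = (() => {
    let nextId = 0;
    const pending = new Map(); // logId -> {name, start}
    const finished = [];       // {name, durationMs}

    return {
        timeStart(name, args) {
            const logId = nextId++;
            pending.set(logId, {name, start: Date.now()});
            return logId;
        },
        timeEnd(logId) {
            const entry = pending.get(logId);
            pending.delete(logId);
            finished.push({name: entry.name, durationMs: Date.now() - entry.start});
        },
        summary() {
            const byName = {};
            finished.forEach(({name, durationMs}) => {
                const s = byName[name] || {calls: 0, totalMs: 0};
                s.calls += 1;
                s.totalMs += durationMs;
                byName[name] = s;
            });
            return byName;
        },
    };
})();
```

The same object could just as easily POST the summary somewhere or stream each entry to the console; the decorator doesn't care.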
> What question is this profiling attempting to answer, and what will we do with the answer?
How changes to Onyx affect performance.
Some examples not related to an in-memory cache:
- Is resolving `set` optimistically, without waiting for the actual write, a significant enough improvement?
- Your question: "How much time is spent reading from disk?"
- Are 5 reads for the same key happening at the same time, or close enough?
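That last question could be answered with the same decoration approach, e.g. flagging reads of the same key that start within a short window. A sketch (the 100ms window and the `duplicates` helper are arbitrary illustrations, not Onyx API):

```javascript
// Sketch: detect near-simultaneous reads of the same key.
// Wraps a get-like function and counts keys requested again
// within `windowMs` of the previous request.
function trackDuplicateReads(getFn, windowMs = 100) {
    const lastRead = new Map();   // key -> timestamp of previous read
    const duplicates = new Map(); // key -> reads that fell inside the window

    const wrapped = (key) => {
        const now = Date.now();
        const previous = lastRead.get(key);
        if (previous !== undefined && now - previous <= windowMs) {
            duplicates.set(key, (duplicates.get(key) || 0) + 1);
        }
        lastRead.set(key, now);
        return getFn(key);
    };

    wrapped.duplicates = () => Object.fromEntries(duplicates);
    return wrapped;
}
```

A nonzero count for a key is direct evidence that a cache (or request deduplication) would have saved disk reads.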
> Basically, I don't think there's any doubt that adding a RAM cache to Onyx would improve read performance. The only question is "by how much" and "can we afford it?" Because my understanding (but please correct me if I'm wrong) is that we regularly run out of RAM even without this cache, so in practice we just don't have enough RAM to spare. So it doesn't really matter how well a cache might work: we just don't have enough RAM to use it.
I guess this will be one more thing ending up in the "unproven solutions" bucket, but you are actually using more RAM because Onyx is constantly reading from disk. I've tried to explain that in the Slack thread.
I don't want to delve into details, but here's a short list:
- Everything read from file creates a new object, even when multiple components use it (already explained this part in Slack).
- Every call to `AsyncStorage.getItem` makes a new query (addressed by your comments above).
- Every call to `AsyncStorage.getItem` needs synchronization through the native bridge, and the more calls the bridge has to handle at once, the worse it gets. I've identified 45 individual `AsyncStorage` calls while the app initializes. Here's a good read if you want more details: https://soshace.com/performance-optimizations-for-react-native-applications/
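Those 45 individual bridge calls could be collapsed with your `multiGet` suggestion above. A hypothetical wrapper that coalesces `getItem` calls issued in the same tick into one batched call (assuming `multiGet` resolves to `[key, value]` pairs, as AsyncStorage's does):

```javascript
// Sketch: coalesce getItem calls from the same tick into a single
// multiGet, cutting the number of bridge round trips.
function makeBatchedGet(multiGet) {
    let pendingKeys = [];
    let pendingPromise = null;

    return (key) => {
        pendingKeys.push(key);
        if (!pendingPromise) {
            // Flush once the current tick's synchronous calls have queued up.
            pendingPromise = Promise.resolve().then(() => {
                const keys = pendingKeys;
                pendingKeys = [];
                pendingPromise = null;
                return multiGet(keys);
            });
        }
        // Each caller picks its own value out of the shared batch result.
        return pendingPromise.then(
            (pairs) => pairs.find(([k]) => k === key)[1]
        );
    };
}
```

With instrumentation in place, the before/after bridge-call counts during init would show exactly how much this saves.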
If e.cash is running out of RAM, I'm 99% sure it's caused by a memory leak, not because the actual data takes too much RAM.
How much do you expect memory usage to grow after app initialization? E.g. I don't think opening a chat, or a few at once, would ever grow more than 50-60MB beyond that initial memory footprint. And then as you switch between chats, resources should be freed and allocated again.
For comparison, 1MB is about 210 pages of uncompressed plain text, or around 1,000 pages with good compression. Even if you have 210 pages in memory, they would take about 1MB (well, a bit more if it's split between multiple keys, but I hope you get the point).
The chat's underlying implementation is a VirtualList: resources that aren't visible are freed from memory, so they would only take as much as their text content represented in Onyx.
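As a rough sanity check of the 210-pages figure, assuming about 5,000 single-byte characters per page (an assumption, roughly 500 words of 10 characters each including spaces):

```javascript
// Back-of-the-envelope: pages of plain ASCII text per megabyte,
// assuming ~5,000 bytes per page (an assumed figure, not measured).
const bytesPerPage = 5000;
const pagesPerMB = Math.round((1024 * 1024) / bytesPerPage);
console.log(pagesPerMB); // ≈ 210
```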