lukehsiao/CribSense

Gather metrics to compare the effects of optimizations for report


Here’s what Shane and I are looking for in your second update report. Your first report gave a good overview of the project and how it’s changed from the original proposal. I suspect that hasn’t changed much in the past two weeks. So there is no need to go into that. Instead, we’d like to see 2 pages of technical details.

You’ve encountered some challenges — what were they, and how can you show them quantitatively? For example, the JARVAS team might show some ranging results and accuracy, the baby monitor team might show some pixel processing rate results with and without some of the optimizations, and the pinball group might show the RAM use of some sample pinball configurations. Include figures describing your design, algorithms, or configuration languages. In short, give a report on some of the details of your technical progress.

We can measure this using the 2-minute test video on the Raspberry Pi we access over SSH.

We'll start by measuring way back at commit 774ff8f, which predates any multithreading.
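All of the rates below are total pixels processed divided by wall-clock runtime. A minimal harness along these lines would gather them; the video filename and the empty `processFrame()` stub are placeholders for the real pipeline, not CribSense's actual code:

```cpp
#include <chrono>
#include <cstdio>
#include <opencv2/opencv.hpp>

// Stand-in for CribSense's per-frame magnification work (hypothetical).
static void processFrame(const cv::Mat &frame) { (void)frame; }

int main() {
    cv::VideoCapture video("test-2min-640x480-10fps.mp4");  // assumed filename
    cv::Mat frame;
    long long frames = 0, pixels = 0;

    const auto start = std::chrono::steady_clock::now();
    while (video.read(frame)) {
        processFrame(frame);
        ++frames;
        pixels += (long long)frame.rows * frame.cols;  // 307200 px per 640x480 frame
    }
    const auto stop = std::chrono::steady_clock::now();

    const double sec = std::chrono::duration<double>(stop - start).count();
    std::printf("%lld frames in %.1f s -> %.0f px/s\n", frames, sec, pixels / sec);
    return 0;
}
```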

Without any optimizations (basically just the changes necessary to build on the Pi), running in command-line mode, we have a pixel processing rate of

307200 px/frame * 1200 frames / 936 sec = 393,846 pixels per second

when measured using the 2min, 10fps, 640x480 video.

The std::async-based multithreaded implementation brings this up to

307200 px/frame * 1200 frames / 393 sec = 938,015 pixels per second
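For context, the std::async fan-out looks roughly like the sketch below. How CribSense actually partitions the work isn't shown here; `magnify()` is a hypothetical stand-in for the per-frame magnification step:

```cpp
#include <future>
#include <utility>
#include <vector>
#include <opencv2/opencv.hpp>

// Hypothetical stand-in for the per-frame magnification work.
static cv::Mat magnify(cv::Mat frame) { return frame; }

// Fan a batch of frames out to worker threads with std::async.
static std::vector<cv::Mat> processBatch(std::vector<cv::Mat> frames) {
    std::vector<std::future<cv::Mat>> futures;
    futures.reserve(frames.size());
    for (cv::Mat &f : frames) {
        // std::launch::async forces a real thread instead of deferred execution.
        futures.push_back(std::async(std::launch::async, magnify, std::move(f)));
    }
    std::vector<cv::Mat> results;
    results.reserve(futures.size());
    for (auto &fut : futures) {
        results.push_back(fut.get());  // blocks until that frame is done
    }
    return results;
}

int main() {
    std::vector<cv::Mat> batch(4, cv::Mat::zeros(480, 640, CV_8UC3));
    return processBatch(batch).size() == 4 ? 0 : 1;
}
```

One task per frame is the simplest version of this pattern; on a Pi with four cores, capping the number of in-flight tasks would be the natural refinement.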

The remaining speedup needed to keep up with the video stream comes from cropping and compiler flags. These slightly improve raw throughput (up to about 1,243,724 pixels per second) and reduce the number of pixels we look at per frame.
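Cropping can be expressed as an OpenCV region-of-interest view over each frame; the rectangle below is illustrative, since the real crop comes from CribSense's configuration. Likewise, the exact compiler flags live in the build files; something along the lines of `-O3 -mfpu=neon-vfpv4` is the usual shape on a Pi, but treat that as an assumption:

```cpp
#include <opencv2/opencv.hpp>

// Process only a region of interest instead of the full 640x480 frame.
// This rectangle is made up; the real crop is set in the config file.
static cv::Mat cropToRoi(const cv::Mat &frame) {
    const cv::Rect roi(160, 120, 320, 240);  // x, y, width, height
    return frame(roi);  // returns a view; no pixels are copied
}

int main() {
    cv::Mat frame = cv::Mat::zeros(480, 640, CV_8UC3);
    cv::Mat cropped = cropToRoi(frame);
    // 320x240 = 76,800 px/frame, a 4x reduction from 307,200.
    return cropped.total() == 76800 ? 0 : 1;
}
```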