PXL 2000 DECODER

This is a *very* experimental decoder that converts raw PXL 2000 analog audio
to a set of png frames and an audio wav file. An external program (like
avconv/ffmpeg) is required to combine the raw image and audio data into a
video format. This app will create a sample avconv script that can be used on
Linux.

In theory, you may be able to take a very weak signal and boost/compress it to
recover video data that will not play in a PXL 2000 camcorder.

QUICKSTART

System requirements:

    Java JDK (6+)
    avconv (or ffmpeg)

Note: if you don't have avconv, ffmpeg will also work as a drop-in
replacement. avconv is a fork of ffmpeg.

To compile, package and run, use:

    bash build.sh

The graphical interface should launch by default.

    1. Click "wav file"
    2. Select the sample data file (included): pxl2000_192khtz_section.wav
    3. Click "start/stop" (you can click this multiple times, and adjust tuning)

Then, go to the capture/00001/ directory and run the avconv script:

    bash convert.sh

The convert script will include the calculated frames per second. The video
will be saved to a file named movie.flv.

The code has been compiled and tested with Java 1.7.0_65.

CONVERTING WAV FILES

If using wav files, set the sample rate to 192kHz. It is also important that
the audio signal is loud, but not clipped. If the signal is too quiet, you
will lose a lot of greyscale data. If the signal is too loud, you will lose
sync data. For a badly damaged signal, you'll get better audio sync if you
decode smaller sections of data at a time.

USING THE GUI (DEFAULT INTERFACE)

    To convert a wav file:  select the wav file, and click "start/stop".
    To convert line in:     select "line in", and click "start/stop".

    Image tab - adjust brightness/contrast.
    Audio tab - adjust the speed of the re-sampled audio.
    Sync tab  - controls to adjust for sync problems.

A file "convert.sh" will be created that can combine the frames using a third
party tool called avconv.

The GUI is the default, though there is also a command line mode.

COMMAND LINE INTERFACE/FLAGS

Boolean switches use a single hyphen (-flag). Key/value options use two
hyphens (--key value). An example invocation is shown after the flag lists
below.

BOOLEAN SWITCH:

    -cli                        use command line mode (no gui)

VALUE OPTIONS:

    --thread_sleep <NUMBER>     sleep between frames
    --min_tic <NUMBER>          minimum number of samples between peaks
    --max_tick <NUMBER>         maximum number of samples between peaks
    --width <NUMBER>            size of raw image buffer
    --height <NUMBER>           size of raw image buffer
    --speed <NUMBER>            audio conversion speed (multiple of normal)
    --low_level <NUMBER>        white level cutoff
    --high_level <NUMBER>       black level cutoff
    --max_row_no_sync <NUMBER>  protect pixels from sync break
    --sync_threshold <NUMBER>   ratio between baseline and sync pulse
    --base_inertia <NUMBER>     increase sample size of rolling avg
    --sync_inertia <NUMBER>     increase sample size of rolling avg
    --buffer_size <NUMBER>      set audio read buffer size
    --capture_dir <STRING>      where data is saved
    --theme <STRING>            Java Swing theme name
    --max_frames <NUMBER>       cap the number of frames created

BOOLEAN DEBUG FLAGS (DUMP RAW DATA):

    -debug          general debug messages
    -debug_am       dump raw AM data
    -debug_dy       dump first derivative
    -debug_peak     dump peaks data
    -debug_delta    dump delta data
    -debug_pixels   dump pixel buffer

MISC:

    -am_only        skip first derivative; possibly faster, but grainy
    -tuning         don't save images, just show a preview in the GUI
    -no_repaint     do not completely repaint the image for each frame
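
For example, a run in command line mode might look like the line below. This
is only an illustration: the jar name is a placeholder (bash build.sh compiles
and packages the actual artifact, and normally launches it), and the option
values are just examples.

    java -jar pxl2000-decoder.jar -cli --max_frames 100 --sync_threshold 2.0 --capture_dir capture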

FILES CREATED

    frame_0000x.png   individual frames
    audio.wav         resampled audio
    info.txt          conversion stats (dropped sync, fps, time)
    convert.sh        sample script that builds a movie from the png frames
                      and audio.wav
    movie.flv         video generated by convert.sh
    timecode.txt      timecode information about each frame

PXL DATA NOTES:

    The right track is data.  The left track is audio.
    Tape speed in the PXL 2000 is around 8x normal speed.
    Images are shown at around 15 fps (or less if the signal is damaged).
    A complete frame of data is around .55 seconds (at normal tape speed).
    There are ~230 AM samples in one row of video, taking the top and bottom
    of the AM signal.
    There are ~91 rows in an image.
    Audio is sampled at 192kHz to capture peaks in the signal above 10kHz.
    This creates about 10-12 sample ticks per cycle.
    A high signal is black, a low signal is white.

When handling a damaged video signal, the decoder assumes the dropped frames
are distributed equally throughout the video. The timecode for each frame is
saved in timecode.txt.

VIDEO DECODING:

Here's what the decoder is doing (a rough sketch follows the list):

    1. Sample raw data at 192kHz
    2. Get the first derivative
    3. Extract peaks from the derivative
    4. Find deltas between peaks (helps smooth out DC offset)
    5. Find sync pulses for each row and image
    6. Translate deltas into greyscale. High is black, low is white.
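
The sketch below illustrates steps 2-4 and 6 only. It is not the project's
actual code: the class and method names are hypothetical, the peak picking is
deliberately naive, and a real pass also has to detect the row/image sync
pulses described in the next section.

    // A rough illustration of decoding steps 2-4 and 6 (hypothetical names).
    import java.util.ArrayList;
    import java.util.List;

    public class DecodeSketch {

        // Step 2: first derivative of the raw 192kHz samples.
        static double[] derivative(double[] samples) {
            double[] dy = new double[samples.length - 1];
            for (int i = 1; i < samples.length; i++) {
                dy[i - 1] = samples[i] - samples[i - 1];
            }
            return dy;
        }

        // Step 3: keep local extrema of the derivative (deliberately naive).
        static List<Double> peaks(double[] dy) {
            List<Double> out = new ArrayList<Double>();
            for (int i = 1; i < dy.length - 1; i++) {
                boolean localMax = dy[i] > dy[i - 1] && dy[i] >= dy[i + 1];
                boolean localMin = dy[i] < dy[i - 1] && dy[i] <= dy[i + 1];
                if (localMax || localMin) {
                    out.add(dy[i]);
                }
            }
            return out;
        }

        // Step 4: deltas between consecutive peaks (cancels slow DC offset).
        static List<Double> deltas(List<Double> peaks) {
            List<Double> out = new ArrayList<Double>();
            for (int i = 1; i < peaks.size(); i++) {
                out.add(peaks.get(i) - peaks.get(i - 1));
            }
            return out;
        }

        // Step 6: map a delta to greyscale. A large swing (high AM level) is
        // black, a small swing is white; low/high play the role of the
        // --low_level/--high_level cutoffs.
        static int toGrey(double delta, double low, double high) {
            double amplitude = Math.abs(delta);
            double scaled = (amplitude - low) / (high - low);
            scaled = Math.max(0.0, Math.min(1.0, scaled));
            return 255 - (int) Math.round(scaled * 255); // 0 = black, 255 = white
        }
    }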

SYNC PULSE DETECTION:

Currently the code uses two rolling averages to detect sync pulses (see the
sketch below):

    baseLine:    the average "data" signal level. It stops tracking if the
                 signal gets too high.
    syncLevel:   a rolling average of high-level data. This resets to any
                 peak that is found.
    signalTrip:  sync pulses are found by looking at the ratio of the signal
                 to the baseLine and syncLevel. For example, look for places
                 where the signal is larger than twice the baseLine.
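
A minimal sketch of that idea follows. It only illustrates the rolling-average
and ratio approach described above, not the decoder's actual implementation:
the class and field names are made up, and the inertia and threshold values
merely stand in for the --base_inertia, --sync_inertia and --sync_threshold
options.

    // Illustration only: ratio-based sync detection with two rolling averages.
    public class SyncDetectSketch {

        double baseLine      = 0;    // rolling average of ordinary "data" peaks
        double syncLevel     = 0;    // rolling average of high-level peaks
        double baseInertia   = 100;  // stands in for --base_inertia
        double syncInertia   = 10;   // stands in for --sync_inertia
        double syncThreshold = 2;    // stands in for --sync_threshold

        // Feed one peak amplitude at a time; returns true when the peak looks
        // like a sync pulse ("signalTrip").
        boolean isSync(double peak) {

            // baseLine only tracks ordinary data, so sync pulses do not drag
            // the average up.
            if (syncLevel == 0 || peak < syncLevel) {
                baseLine += (peak - baseLine) / baseInertia;
            }

            // syncLevel tracks the high end of the signal, and resets to any
            // new peak above it.
            if (peak > syncLevel) {
                syncLevel = peak;
            } else {
                syncLevel += (peak - syncLevel) / syncInertia;
            }

            // Trip when the peak is much larger than the baseline, e.g. more
            // than twice the baseLine.
            return baseLine > 0 && peak > syncThreshold * baseLine;
        }
    }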

SAMPLE DATA

This project includes one frame of sample data (a picture of a drum set):

    pxl2000_192khtz_frame.wav

And an example of a set of frames:

    pxl2000_192khtz_section.wav

AUTHORS

2014-09-27

    Kevin Seifert (Java code, AM analysis)
    Mike Leslie (C code, FM analysis)