Compilation?
Opened this issue · 14 comments
I had previously tried sevkeifert's and found it to be somewhat touchy.
Not being a Java guy, how does one compile your code?
At what bitrate should PXL tapes be digitized for your decoder (192kHz)?
Thanks for keeping the dream alive! I've got dozens of old tapes to save!
It's still a work in progress, and not to a point where it's ready to be used yet, unfortunately. However, it's still under very active development in what little free time I have, and I'm highly motivated to get it working well so I can get my tapes translated and viewable.
I ran across sevkeifert's project a couple weeks ago and I'm quite impressed by everything he got accomplished. But I, too, found it a bit touchy and wasn't able to get much out of it with my tapes. At first, I started trying to modify his code, but in the end I decided the approach I wanted to try was different enough that I'd start from scratch. So far, I am able to get the frames out as images reliably from the datasets I have (ones I recorded, and the ones that sevkeifert posted). That part doesn't seem touchy. There is a brightness/contrast setting that does need to be tuned per recording at this point (although I think I can make that automatic, too).
What I haven't done yet is anything beyond getting image frames. So, grabbing the audio track and scaling it, putting together the scripts needed to generate a video, a decent UI, etc.
I went with the same format as sevkeifert-- 192kHz, (and of course, stereo). That seems sufficient to get the data out, and I'd hate to have to record it again or transcode it just to use a different tool.
If you have any short samples of video you'd like to make publicly available for testing, it would be great to incorporate samples from several different sources. I could add a "samples" directory here for such samples.
I would love to provide samples... But here's the thing.
Using the old code, part of the problem was figuring out what the optimal digitization levels should be for the original tapes. The software gave wildly different results for low- and high-level digitized PXL cassettes. And it was a pain to have to go back and try it over and over to dial it in.
If this code's workflow could auto-normalize, and/or detect when the signal is too noisy and hint that the tape might need to be digitized at a higher level... that would be helpful for people digitizing their tapes.
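To sketch the kind of check I mean (the method and thresholds here are just my guesses, not anything from this project): compute the RMS level and the fraction of clipped samples, and print a hint accordingly.

```java
// Rough sketch of an input-level sanity check. The 0.05 RMS floor and
// 1% clipping ceiling are made-up thresholds, not values from PxlMagic.
public class LevelCheck {
    static String verdict(short[] samples) {
        long sumSquares = 0;
        int clipped = 0;
        for (short s : samples) {
            sumSquares += (long) s * s;
            if (s == Short.MAX_VALUE || s == Short.MIN_VALUE) clipped++;
        }
        double rms = Math.sqrt((double) sumSquares / samples.length) / 32768.0;
        if ((double) clipped / samples.length > 0.01)
            return "Signal is clipping; try a lower recording level.";
        if (rms < 0.05)
            return "Signal looks too quiet; try a higher recording level.";
        return "Levels look OK.";
    }

    public static void main(String[] args) {
        // Stand-in data: one hard-clipped sample out of six.
        short[] demo = {1200, -900, 30000, Short.MIN_VALUE, 400, 250};
        System.out.println(verdict(demo));
    }
}
```

Something that cheap, run once over the whole capture before decoding, would have saved me a lot of re-digitizing trips.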
So - until I can predictably digitize and see the output, I'm reluctant to provide .wav as samples, ya know?
If you nail the pipeline... you could conceivably have a cottage industry just digitizing peoples' old PXL cassettes!
I totally get it. I think the hardest part is getting reliable frames and images, and I think I've got that working fairly well at this point. Auto-calibrating brightness per frame will be pretty easy. Once everything seems to be working well for the samples I have, I'll be more interested in getting more samples to make it more robust, but the software will be publicly usable by then.
As for the cottage industry-- I'm sure there's a small one out there, but I'm planning on just releasing this code with a friendly open source license. The more people that can retrieve their old memories, the better.
Just to give you an idea of my timeframe on this-- I'm planning to get the translation done before Christmas, so I can share old videos with my family then, although I'm hoping to have this put together before then.
I just uploaded some changes that should start to make this usable by others. You can build by running build.sh
Then, drop your wav file into the source directory (yeah, kinda ugly-- I'll do something about it later) and run the following command:
java -jar PxlMagic.jar [VideoFile.Wav]
[VideoFile.Wav] must be an uncompressed WAV file recorded at 192000 samples per second
This will first run the FFT step (which can take a long time -- potentially hours for a full cassette of video), and when it's done, it will create a bunch of files in a new directory with the name of the WAV file. For example, if you used VideoFile.Wav, it would create all of its output in a new directory called VideoFile.
In that directory, you can find all of the images and an extracted wav file, as well as a few (potentially large) intermediate files.
You can then use ffmpeg to stitch the output into an MP4:
ffmpeg -r 15 -f image2 -i image%05d.png -i final_audio.wav out.mp4
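If it helps anyone debugging input problems: before kicking off the multi-hour FFT step, you could sanity-check the WAV against the requirements above using the standard javax.sound.sampled API. This is just a hypothetical sketch, not part of PxlMagic:

```java
import java.io.File;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class CheckWav {
    // True when the stream is uncompressed signed PCM, stereo, 192000 samples/sec.
    static boolean isSupported(AudioFormat f) {
        return AudioFormat.Encoding.PCM_SIGNED.equals(f.getEncoding())
                && f.getChannels() == 2
                && Math.abs(f.getSampleRate() - 192000f) < 1f;
    }

    public static void main(String[] args) throws Exception {
        try (AudioInputStream in = AudioSystem.getAudioInputStream(new File(args[0]))) {
            AudioFormat f = in.getFormat();
            System.out.println(isSupported(f) ? "Format OK" : "Unsupported format: " + f);
        }
    }
}
```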
I'm using OpenJDK 10.0.2, although I'm not sure that really matters.
If you try it, let me know how it works for you.
The code looks great. I ran a few tests, but I'm getting an error on decoding. I tried a couple of different WAVs. For example:
./build.sh Test12.wav
Produces:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
at pxlConverter.findValidSoundLocations(pxlConverter.java:856)
at Main.main(Main.java:54)
My guess is there's no data collected in the pass findValidSoundLocations?
I'm using java 8 (though it compiles fine)
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-0ubuntu0.16.04.1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Thank you for the feedback-- I really appreciate it. I forgot I had those old test files checked in. Both of them had the right and left channels swapped. Also, Test12 didn't have enough recording before and after the frame pulses to be captured by my current algorithm. I've added two new files, either of which should work fine. Also, you should be able to try any samples you have. Let me know if you run into any more issues. Overall, I definitely need better error messaging.
I was able to process some freshly digitized PXL2000 tapes - with caveats...
One 45-minute tape processed, but generated only 256 PNG image files. No obvious errors.
The two tapes I've digitized... look like doo-doo. Audio is fine, but the video is mostly noise, with some visually detectable signal. I'm curious whether you have advice on whether digitization levels matter. These were Chrome tapes, so I played them as Chrome, with no Dolby/NR. Here is a typical resulting frame:
Have you found that any post-digitization audio processing can result in better video? I don't want to digitize too many tapes until I've locked-in on a workflow that gives decent results.
Thanks!
I have some of mine that look good and some that look like what you posted above. I'm not sure why, yet. Interestingly, I do have one tape that seems to transition from one frame to the next, going from "good" to "bad." I'll need to look at it closer to see if it seems to be a difference in the recording or something in the code. I do have the code doing automatic level detection per frame, so that might be having issues.
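Conceptually, the per-frame auto-levels is just a min/max stretch of each frame's pixel values to full range. A simplified sketch of the idea (an illustration, not the decoder's actual code):

```java
// Simplified per-frame auto-levels: stretch a frame's grayscale values
// (0..255) so its darkest pixel maps to 0 and its brightest to 255.
// This illustrates the idea only; it is not PxlMagic's implementation.
public class AutoLevels {
    static int[] stretch(int[] pixels) {
        int min = 255, max = 0;
        for (int p : pixels) {
            min = Math.min(min, p);
            max = Math.max(max, p);
        }
        int range = Math.max(1, max - min); // avoid divide-by-zero on flat frames
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++)
            out[i] = (pixels[i] - min) * 255 / range;
        return out;
    }

    public static void main(String[] args) {
        int[] dimFrame = {60, 100, 140}; // a dim, low-contrast "frame"
        for (int p : stretch(dimFrame)) System.out.print(p + " "); // 0 127 255
    }
}
```

The failure mode with this approach is exactly what you'd expect on noisy frames: a few stray extreme samples can grab the min/max and wash out the stretch, which might be part of what's happening on the "bad" frames.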
The last patch works great for me! I ran a couple tests and the sync tracking is working perfectly on my samples. I think the quality actually looks better than the original camcorder did :-) I'll need to digitize some more tapes.
@xpollen8, that last image looks like the sync was lost midway through (the diagonal line looks like the frame sync edge).
I just used a standard cassette player, with no special features. This is how the signal looks for me, when digitized:
Volume levels do make a huge difference, since the signal format is difficult to record. (Either it wants to clip, or the data is too quiet). Overall, I just made sure it looked like a sine when zoomed in, and that there was some "meat" to the data segments. When recording, it could help to normalize/compress the input.
Looking at some of my samples where the video is really grainy, this is what I see when I zoom in on a single line:
This really looks to me like there's another signal superimposed on the one we care about. I'm wondering if it would be possible to run each line through a bandpass filter and still get a good signal out. I don't know what effect such a filter would have on blurring the image. Probably somewhere between horrible and awesome?
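For concreteness, the kind of filter I have in mind is a standard two-pole (biquad) bandpass in the style of the RBJ audio EQ cookbook, run over each line. The 15 kHz center frequency and Q of 2 below are placeholders, not values derived from the PXL2000 signal:

```java
// Standard RBJ-style biquad bandpass (constant 0 dB peak gain). The
// 15 kHz center and Q of 2 are placeholder values for illustration.
public class Bandpass {
    final double b0, b2, a1, a2; // b1 is always 0 for this bandpass
    double x1, x2, y1, y2;       // filter state (previous inputs/outputs)

    Bandpass(double centerHz, double q, double sampleRate) {
        double w = 2 * Math.PI * centerHz / sampleRate;
        double alpha = Math.sin(w) / (2 * q);
        double a0 = 1 + alpha;
        b0 = alpha / a0;
        b2 = -alpha / a0;
        a1 = -2 * Math.cos(w) / a0;
        a2 = (1 - alpha) / a0;
    }

    double step(double x) {
        double y = b0 * x + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }

    // Measure the steady-state output peak for a unit sine at freqHz.
    static double peakGain(double freqHz) {
        Bandpass bp = new Bandpass(15000, 2.0, 192000);
        double peak = 0;
        for (int n = 0; n < 8000; n++) {
            double y = bp.step(Math.sin(2 * Math.PI * freqHz * n / 192000.0));
            if (n > 4000) peak = Math.max(peak, Math.abs(y)); // skip transient
        }
        return peak;
    }

    public static void main(String[] args) {
        System.out.printf("gain near 15 kHz: %.3f, near 1 kHz: %.3f%n",
                peakGain(15000), peakGain(1000));
    }
}
```

With these placeholder settings, a tone at the center passes at roughly unity gain while a 1 kHz tone is knocked down to a few percent, so the real question is how much the narrowed band would smear detail horizontally.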
@sevkeifert The diagonal in the frame I posted is actually the curb(!) - the video was taken while cycling down a street.
Hmm that is a weird signal... I agree, it does look like another frequency is superimposed. You can even see a notch moving through the peaks.
I wonder if the magnetism between rolls of tape could be bleeding over, on the video channels. If that is the case it should be close to the initial frequency but maybe not exact. The tape rotation would change speed as the reel radius changes/unwinds. That or maybe an erased image coming back if the tape was recorded on twice?
Is there any pattern to how these sections show up? For example, do they appear as pulses, or as long continuous sections? Do they fade in gradually, or abruptly?
How does one go about building the required jar file?
The easiest way is probably to install IntelliJ, load the project in that IDE, and build from there. At some point, I'd like to get back to this project and put a nice front-end on it, but I'm not sure when or if I'll get to that.