How to encode bitmap images (raw bitmap byte data) to an mp4 file?
xxtkidxx opened this issue · 3 comments
I have a list of Bitmap frames collected from an industrial camera, and I want to encode them to an mp4 file (ideally with GPU encoding support).
I don't know how to convert the bitmap data bytes into frames for FFMpeg.
How can I do this?
To the best of my knowledge, there isn't a built-in method for merging byte arrays into a video. The closest you get is JoinImageSequence, defined here:
FFMpegCore/FFMpegCore/FFMpeg/FFMpeg.cs
Line 97 in eb221c3
You could probably create a custom method to replicate that functionality, using that method as a base. For example, a quick and dirty and completely untested attempt:
using System;
using System.Drawing;
using System.IO;
using FFMpegCore;
using FFMpegCore.Enums;
using FFMpegCore.Exceptions;
using FFMpegCore.Helpers;
using Instances;
namespace MyFFMpegCoreExtensionsLib
{
public static class MyFFMpegCoreExtensions
{
public static bool JoinImageSequence(string output, string imageExtension, double frameRate = 30, params byte[][] images)
{
//TODO Need to replace with something that checks the magic bytes to verify the image encoding
// var fileExtensions = images.Select(Path.GetExtension).Distinct().ToArray();
// if (fileExtensions.Length != 1)
// {
// throw new ArgumentException("All images must have the same extension", nameof(images));
// }
var fileExtension = imageExtension; //TODO Update this to use the correct extension based on the check above;
int? width = null, height = null;
var tempFolderName = Path.Combine(GlobalFFOptions.Current.TemporaryFilesFolder, Guid.NewGuid().ToString());
Directory.CreateDirectory(tempFolderName);
try
{
var index = 0;
foreach (byte[] fileData in images)
{
using var ms = new System.IO.MemoryStream(fileData);
var analysis = FFProbe.Analyse(ms);
FFMpegHelper.ConversionSizeExceptionCheck(analysis.PrimaryVideoStream!.Width, analysis.PrimaryVideoStream!.Height);
width ??= analysis.PrimaryVideoStream.Width;
height ??= analysis.PrimaryVideoStream.Height;
var destinationPath = Path.Combine(tempFolderName, $"{index++.ToString().PadLeft(9, '0')}{fileExtension}");
// Creates a temp file using the image's binary data
System.IO.File.WriteAllBytes(destinationPath, fileData);
}
return FFMpegArguments
.FromFileInput(Path.Combine(tempFolderName, $"%09d{fileExtension}"), false)
.OutputToFile(output, true, options => options
.ForcePixelFormat("yuv420p")
.Resize(width!.Value, height!.Value)
.WithFramerate(frameRate))
.ProcessSynchronously();
}
finally
{
Directory.Delete(tempFolderName, true);
}
}
}
}
The primary differences are:
- Add a parameter for the extension associated with the image type
- Change the last parameter to accept an array of byte arrays
- Create a MemoryStream for each byte array - FFProbe.Analyse() has an overload which accepts a Stream object
- Use System.IO.File.WriteAllBytes() to write a temp file for each byte array

Everything else is pretty much the same. No idea what the perf on this is - I assume it's going to be a bit of a memory hog.
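For what it's worth, a hypothetical call site for a method like that could look like the sketch below. The folder path and file names are made up, and it assumes the parameters are ordered (output, imageExtension, frameRate, images) so that the required extension comes before the optional frame rate:

```csharp
using System.IO;
using System.Linq;
using MyFFMpegCoreExtensionsLib;

// Hypothetical: load each captured frame as an already-encoded PNG byte array
var frames = Directory.GetFiles(@"C:\captures", "*.png")
    .OrderBy(path => path)          // keep the frames in capture order
    .Select(File.ReadAllBytes)
    .ToArray();

// imageExtension must include the leading dot, since it is appended
// directly to the generated temp file names (e.g. "000000000.png")
var ok = MyFFMpegCoreExtensions.JoinImageSequence(
    @"C:\captures\output.mp4",
    ".png",
    30,
    frames);
```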
Thanks for the feedback, but this method won't work for my case - the frames are captured one by one from the camera, so I can't write them all out as an image sequence first.
I need another solution, similar to this one: https://github.com/Ruslan-B/FFmpeg.AutoGen/blob/master/FFmpeg.AutoGen.Example/Program.cs
You can do it using a RawVideoPipeSource, as shown in the example file: https://github.com/rosenbjerg/FFMpegCore/blob/main/FFMpegCore.Examples/Program.cs
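To illustrate, here is a rough sketch of the pipe-based approach from that example file: each Bitmap is wrapped in a BitmapVideoFrameWrapper (from the FFMpegCore System.Drawing extension package) and streamed to ffmpeg through a RawVideoPipeSource, so no intermediate image files are written. The `myBitmaps` source and the `h264_nvenc` codec name are assumptions - substitute your own frame source and whichever GPU encoder your ffmpeg build supports:

```csharp
using System.Collections.Generic;
using System.Drawing;
using FFMpegCore;
using FFMpegCore.Pipes;
using FFMpegCore.Extensions.System.Drawing.Common; // BitmapVideoFrameWrapper

static IEnumerable<IVideoFrame> CreateFrames(IEnumerable<Bitmap> bitmaps)
{
    // Wrap each Bitmap so RawVideoPipeSource can serialize its raw pixel data
    foreach (var bitmap in bitmaps)
        yield return new BitmapVideoFrameWrapper(bitmap);
}

// myBitmaps is hypothetical - e.g. the frames arriving from your camera
var videoFramesSource = new RawVideoPipeSource(CreateFrames(myBitmaps))
{
    FrameRate = 30 // frame rate of the raw input stream
};

FFMpegArguments
    .FromPipeInput(videoFramesSource)
    .OutputToFile("output.mp4", true, options => options
        .WithVideoCodec("h264_nvenc") // assumed GPU encoder; use what your ffmpeg provides
        .ForcePixelFormat("yuv420p"))
    .ProcessSynchronously();
```

Because the frames are consumed lazily from the enumerable, this also works when frames arrive one at a time from the camera rather than existing up front.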