How do I run ffmpeg's v360 filter on a stream with limited memory?
I am currently using ffmpeg-python in a microservice to change the projection of a VR video stored in an S3 bucket. Ideally I would like to use as little memory as possible: reading the video file from S3 as a stream, transcoding it, and uploading it back to a separate file, all in memory.
However, when I run this code with large videos (above 10 GB), the ffmpeg process gets terminated with no exception and boto3 uploads a 0-byte "file" to S3. When I run it with a small (100 MB) video, or on my local machine with 16 GB of RAM, the upload finishes fine.
Code:
with (
    ffmpeg
    .input(input_url)
    .filter("v360", inputProjection, outputProjection, in_stereo=inputStereo, out_stereo=outputStereo)
    .output("pipe:", format=outputFormat)
    .run_async(pipe_stdout=True)
).stdout as dataStream:
    client.upload_fileobj(dataStream, aws_s3_bucket, outputFile)
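To rule out a silent crash, one thing I can check is how the ffmpeg child process actually exits: with subprocess, a negative returncode means the child was killed by a signal (e.g. -9 when the kernel OOM killer strikes), rather than exiting on its own. A minimal sketch of that check, using a stand-in command instead of the real ffmpeg invocation:

```python
import subprocess
import sys

# Stand-in for the ffmpeg child: a subprocess that writes a few
# bytes to stdout and exits cleanly.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stdout.write('data')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()

# A negative returncode means the child was killed by a signal
# (e.g. -9 == SIGKILL, typical of the Linux OOM killer).
print(proc.returncode)  # 0 for a clean exit
print(out)              # b'data'
```

With the real pipeline, the same check after `upload_fileobj` returns would show whether ffmpeg died mid-stream.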
I expect ffmpeg to apply the filter to the video as a stream, holding only a small section in memory at a time, but instead it seems to try to load the entire video into local memory before applying the filter (when the script fails, stdout contains either no data or only the video's metadata).
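For reference, the streaming behaviour I expect is the chunked-read pattern below: memory use stays bounded by the chunk size no matter how large the source is. The helper and the in-memory buffer here are purely illustrative, not part of my actual code:

```python
import io

def stream_chunks(src, chunk_size=8 * 1024 * 1024):
    # Yield fixed-size chunks from a file-like object, so at most
    # chunk_size bytes are held in memory at any one time.
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Simulate a 20 KiB pipe with an in-memory buffer, read in 8 KiB chunks.
fake_pipe = io.BytesIO(b"x" * (20 * 1024))
sizes = [len(c) for c in stream_chunks(fake_pipe, chunk_size=8 * 1024)]
print(sizes)  # [8192, 8192, 4096]
```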
I cannot afford a full download, since I plan to run the script on 100 GB+ videos and it needs to run as a microservice in a Kubernetes cluster.
