Transcoder | Hardware-accelerated video transcoding using Android MediaCodec APIs. Supports cropping, concatenation, and more | Video Utils library
kandi X-RAY | Transcoder Summary
Transcodes and compresses video files into the MP4 format, with audio support, using hardware-accelerated Android codecs available on the device. Works on API 18+.
Top functions reviewed by kandi - BETA
- Provides a valid media format for the given media type
- Feeds input for a single track
- Drains buffer information once
- Decodes media format
- Creates an output format based on the input formats
- Returns the average inter frame interval
- Returns the best input size
- Region Transcoder
- Synchronizes the parameters from the radio button
- Interpolates a specific track
- Sets the thumbnail
- Override this method to retrieve the thumbnails
- Performs the transcode of the activity
- Creates and returns the output format based on the given options
- Reads a track frame
- Called when the transcoder has completed
- Downmixes the input buffer to the output buffer
- Seek to the specified position
- Resets the state
- Reads a track
- Resamples the input buffer using the given channels
- Initializes the MediaExtractor
- Resamples a short buffer to an output buffer using the provided channels
- Stretches all the input samples into the output buffer
- Initializes the view
- Computes the output size for the given input size
Transcoder Key Features
Transcoder Examples and Code Snippets
Community Discussions
Trending Discussions on Transcoder
QUESTION
How can I know when my application is running out of memory?
For me, I'm doing some video transcoding on the server and sometimes it leads to out of memory errors.
So, I wish to know when the application is running out of memory so I can immediately kill the Video Transcoder.
Thank you.
...ANSWER
Answered 2022-Apr-10 at 08:28: You can see how much memory is being used with the built-in process module.
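For illustration, a minimal sketch of that approach in Node.js (the 1.5 GB budget and the killTranscoder hook are assumptions, not part of the original answer):

```typescript
// Poll process.memoryUsage() and stop the transcoder when the resident set
// size crosses a budget. Replace killTranscoder with your own cleanup logic.
const LIMIT_BYTES = 1.5 * 1024 * 1024 * 1024; // assumed 1.5 GB budget

function killTranscoder(): void {
  // Hypothetical hook: terminate the child process doing the transcoding here.
  console.error("Memory limit exceeded, stopping transcoder");
}

setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`rss=${rss} heapUsed=${heapUsed}`);
  if (rss > LIMIT_BYTES) {
    killTranscoder();
  }
}, 5_000); // check every 5 seconds
```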
QUESTION
I'm implementing a pipeline where I receive inbound RTP packets in memory but I'm having trouble figuring out how to set up libavformat to handle/unwrap the RTP packets.
I have all the relevant information about the underlying codec, but since it's h264 I cannot simply strip the RTP header trivially. I create the input context with goInputFunction writing one packet per invocation.
ANSWER
Answered 2022-Mar-01 at 04:31: Turns out this is possible through the use of an undocumented flag, FLAG_RTSP_CUSTOM_IO. I've detailed how to do that here: https://blog.kevmo314.com/custom-rtp-io-with-ffmpeg.html
QUESTION
After upgrading to Android 12, the application is not compiling. It shows
"Manifest merger failed with multiple errors, see logs"
Error showing in Merged manifest:
Merging Errors: Error: android:exported needs to be explicitly specified for . Apps targeting Android 12 and higher are required to specify an explicit value for android:exported when the corresponding component has an intent filter defined. See https://developer.android.com/guide/topics/manifest/activity-element#exported for details. main manifest (this file)
I have set all the activities with android:exported="false", but it is still showing this issue.
My manifest file:
...ANSWER
Answered 2021-Aug-04 at 09:18: I'm not sure what you're using to code, but in order to set it in Android Studio, open the manifest of your project and, under the "activity" section, put android:exported="true" (or false if that is what you prefer). I have attached an example.
QUESTION
I am trying to transcode an RTSP stream into an MPEG-4 stream over HTTP for use on a webpage in a video tag. I am using vlcj and a 32-bit version of VLC installed locally. I thought I had the right sout settings, but I am getting the following error when finished.
...ANSWER
Answered 2021-Dec-29 at 17:50: The streaming options need to be passed in an array, not a single string with the separator as you have used.
This is a fragment of what you have:
QUESTION
I am trying to write a program to generate frames to be encoded via ffmpeg/libav into an mp4 file with a single h264 stream. I found these two examples and am sort of trying to merge them together to make what I want: [video transcoder] [raw MPEG1 encoder]
I have been able to get video output (a green circle changing size), but no matter how I set the PTS values of the frames or what time_base I specify in the AVCodecContext or AVStream, I'm getting frame rates of about 7000-15000 instead of 60, resulting in a video file that lasts 70 ms instead of 1000 frames / 60 fps ≈ 16.7 seconds. Every time I change some of my code, the frame rate changes a little bit, almost as if it's reading from uninitialized memory. Other references to an issue like this on StackOverflow seem to be related to incorrectly set PTS values; however, I've tried printing out all the PTS, DTS, and time base values I can find and they all seem normal. Here's my proof-of-concept code (with the error-catching stuff around the libav calls removed for clarity):
ANSWER
Answered 2021-Nov-22 at 22:52: You are getting a high frame rate because you have failed to set the packet duration. Set the time_base to a higher resolution (like 1/60000) as described here:
QUESTION
I am building a platform, that allows users to upload some video files (20-40 seconds) from their phone to server. All this upload is working great for now, files are stored in google storage bucket via nodejs cloud functions.
Now I want to create a gcp transcoder-job, that will convert uploaded .mp4 video file to .hls video stream with .ts chunks of 2-4 seconds duration.
Probable success scenario:
- User uploads mp4 file [Done]
- .mp4 file after upload triggers functions.storage.object().onFinalize [Done]
- onFinalize triggers a Google Cloud Transcoder job that converts the mp4 to HLS. [We are here]
- .mp4 file is removed from Google Storage.
Will appreciate any help on creating such job. I am using firebase cloud functions with nodejs.
...ANSWER
Answered 2021-Nov-26 at 14:09: I would do this using the Transcoder API in GCP. It supports mp4 input and HLS output. See supported formats.
Note that videos must be greater than 5 seconds in length. If they can't be, consider avoiding this API and using a tool on App Engine instead.
A rough flow of events to accomplish this could be something like:
- Upload MP4 to bucket
- pubsub is triggered from the upload event
- This can trigger a Cloud Function that creates a new Transcoder job (see the sketch after this list)
- The job status updates a pubsub topic
- Pubsub topic triggers a cloud function when the status indicates the job is done
- Cloud function deletes the original mp4 from the bucket
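A minimal sketch of the Cloud Function step using the Node.js client library (@google-cloud/video-transcoder); the project, location, and bucket URIs are placeholders, and the 4-second segment duration matches the 2-4 s requirement above:

```typescript
import { TranscoderServiceClient } from "@google-cloud/video-transcoder";

const client = new TranscoderServiceClient();

// Placeholder project/location; called from the Cloud Function triggered above.
async function createHlsJob(inputUri: string, outputUri: string) {
  const [job] = await client.createJob({
    parent: client.locationPath("my-project", "us-central1"),
    job: {
      inputUri,  // e.g. "gs://my-bucket/uploads/video.mp4"
      outputUri, // e.g. "gs://my-bucket/hls/video/"
      config: {
        elementaryStreams: [
          {
            key: "video-stream0",
            videoStream: {
              h264: { widthPixels: 1280, heightPixels: 720, frameRate: 30, bitrateBps: 2500000 },
            },
          },
          { key: "audio-stream0", audioStream: { codec: "aac", bitrateBps: 64000 } },
        ],
        muxStreams: [
          {
            key: "ts",
            container: "ts",
            elementaryStreams: ["video-stream0", "audio-stream0"],
            segmentSettings: { segmentDuration: { seconds: 4 } }, // .ts chunks of ~4 s
          },
        ],
        manifests: [{ fileName: "manifest.m3u8", type: "HLS", muxStreams: ["ts"] }],
      },
    },
  });
  return job.name; // completion arrives on the Pub/Sub topic configured for the job
}
```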
QUESTION
Since I have fallen into the AWS trap of not ignoring the info message on the Elastic Transcoder page saying that we should start using Elemental MediaConvert instead, pretty much nothing is working as expected.
I have set up the Elemental MediaConvert following these steps. I have to admit that setting up everything in the console was pretty easy and I was soon able to transcode my videos which are stored on my S3 bucket.
Unfortunately, the time had to come when I was forced to do the transcoding from my web application, using the @aws-sdk/client-mediaconvert. Apart from not finding any docs on how to use the SDK, I cannot even successfully connect to the service, since apparently MediaConvert does not support CORS.
So my question is, did anyone use MediaConvert with the SDK successfully? And if yes, could you please tell me what to do?
Here is my configuration so far:
...ANSWER
Answered 2021-Nov-25 at 17:02: So, after almost two entire days of trial and error plus digging into the source code, I finally found a solution! To make it short: unauthenticated access and MediaConvert will not work!
The entire problem is Cognito, which does not allow access to MediaConvert operations for unauthenticated identities. Here is the access list.
Since I am using Auth0 for my user authentication, I was simply following this guide and basically all my problems were gone! To attach the token I was using
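For reference, a sketch of what wiring authenticated Cognito credentials into the v3 client can look like (region, pool ID, Auth0 domain, and endpoint below are placeholders; the exact logins key depends on how the identity pool is configured):

```typescript
import { MediaConvertClient, CreateJobCommand } from "@aws-sdk/client-mediaconvert";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

// Hypothetical: the token issued by Auth0 after the user logs in.
declare const auth0IdToken: string;

const client = new MediaConvertClient({
  region: "eu-central-1", // placeholder
  endpoint: "https://abcd1234.mediaconvert.eu-central-1.amazonaws.com", // account-specific endpoint
  credentials: fromCognitoIdentityPool({
    clientConfig: { region: "eu-central-1" },
    identityPoolId: "eu-central-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    // Authenticated access: map the Auth0-issued token into the identity pool
    // so the resulting role is the authenticated one with MediaConvert permissions.
    logins: { "my-tenant.eu.auth0.com": auth0IdToken },
  }),
});

// Jobs are then submitted as usual, e.g. client.send(new CreateJobCommand({ ... })).
```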
QUESTION
I am using Rails 6.1 + Ruby 3 and trying to allow users to upload videos and then watch them in app.
clip.rb
...ANSWER
Answered 2021-Nov-06 at 23:22: While I'm not 100% sure this is the problem, I believe the Paperclip library is deprecated and does not work with Ruby 3. If you use the latest Ruby 2 version, it should work for you. However, I would recommend not depending on Paperclip for new projects.
QUESTION
I have two mp4 files. One contains only video and the other only audio. Both have exactly the same length (some seconds extracted from a HLS Stream).
I want them to be mixed together through a GCS Transcoder API job, which gets triggered by a Dataflow pipeline. Digging through the documentation has not yet turned up a solution.
My current Job Config looks like that:
...ANSWER
Answered 2021-Oct-25 at 19:52: There are some defaults not listed in the documentation. Try adding the following to your config and see whether it works.
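The config the answer refers to is not shown here, but a common gotcha with this API (my assumption, not necessarily what the answerer meant) is that the default editList only covers the first input, so a job muxing two files needs an explicit edit atom listing both. A sketch of such a config:

```typescript
// Hypothetical job config: one video-only and one audio-only input, muxed into
// a single mp4. URIs and stream settings are placeholders.
const jobConfig = {
  inputs: [
    { key: "video0", uri: "gs://my-bucket/video-only.mp4" },
    { key: "audio0", uri: "gs://my-bucket/audio-only.mp4" },
  ],
  // Assumption: without an explicit editList, the service defaults to a single
  // atom over the first input only, so the second file would be ignored.
  editList: [{ key: "atom0", inputs: ["video0", "audio0"] }],
  elementaryStreams: [
    {
      key: "video-stream0",
      videoStream: { h264: { widthPixels: 1280, heightPixels: 720, frameRate: 30, bitrateBps: 2500000 } },
    },
    { key: "audio-stream0", audioStream: { codec: "aac", bitrateBps: 64000 } },
  ],
  muxStreams: [
    { key: "muxed", container: "mp4", elementaryStreams: ["video-stream0", "audio-stream0"] },
  ],
};
```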
QUESTION
I'm using Amazon Elastic Transcoder in conjunction with Lambda and Step Functions to transcode MP3s from WAV files.
I need to store the MD5 / S3 ETag header value of the transcoded MP3s in my database.
At the moment I'm having to fetch these in a separate process, which is really slow:
...ANSWER
Answered 2021-Oct-14 at 14:13:
I was hoping that Elastic Transcoder would provide the transcoded files' ETag in the job response
Unfortunately, it doesn't. As of the current AWS Python Boto3 1.18.60 SDK (or any other SDK, including the REST API), the entity tag(s) for the output object(s) isn't returned anywhere in the job response object.
This is most likely because an entity tag represents a specific version of the object and is mostly used for efficient caching invalidation.
Elastic Transcoder jobs do not produce multiple versions of the same output, and as such, why would it return the ETag value? If someone requires the ETag, they can get this from the S3 object.
Another question is what happens if there is multi-part output for large inputs? What does the SDK return? A list of ETag header values? You have multiple parts, but you don't have multiple versions.
This implementation would go against the RFC 7232 specification for the ETag header:
An entity-tag is an opaque validator for differentiating between multiple representations of the same resource, regardless of whether those multiple representations are due to resource state changes over time, content negotiation resulting in multiple representations being valid at the same time, or both.
Your actual problem, in this case, is that you want the MD5 hash of the file(s) even if they are multi-part.
Now your code will work for getting the MD5 hash of a single file, but multipart uploads aren't hashed the way you would expect. Instead of calculating the hash of the entire file, Amazon calculates the hash of each part and then combines them into a single hash as the ETag header.
This makes a lot of sense: they calculate the hashes of each part as they receive it. After all the parts have been transferred, they combine the hashes instead of trying to calculate a final MD5 hash by reading a file that could possibly be up to the AWS object size limit - 5 terabytes. Try generating MD5 hashes for everyone's files when you're at Amazon's scale & you'll find that their way is quite clever :)
This is probably why the S3 API Reference says the below:
The ETag may or may not be an MD5 digest of the object data.
Objects created by either the Multipart Upload or Part Copy operation have ETags that are not MD5 digests, regardless of the method of encryption.
It is an MD5 hash, but when it's a multi-part upload it isn't, so the above is technically correct.
To calculate the MD5 hash correctly for multi-part uploads, try checking out this great answer in response to "What is the algorithm to compute the Amazon-S3 Etag for a file larger than 5GB?".
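A sketch of that algorithm (assuming a uniform part size, which must match whatever the uploading client used; 8 MB is only a common default):

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";

// Computes an S3-style ETag: plain MD5 for single-part objects, otherwise the
// MD5 of the concatenated per-part MD5 digests, suffixed with the part count.
function multipartEtag(path: string, partSizeBytes = 8 * 1024 * 1024): string {
  const data = readFileSync(path);
  const partDigests: Buffer[] = [];
  for (let offset = 0; offset < data.length; offset += partSizeBytes) {
    const part = data.subarray(offset, offset + partSizeBytes);
    partDigests.push(createHash("md5").update(part).digest());
  }
  if (partDigests.length === 1) {
    return partDigests[0].toString("hex"); // single-part: plain MD5 hex digest
  }
  const combined = createHash("md5").update(Buffer.concat(partDigests)).digest("hex");
  return `${combined}-${partDigests.length}`; // e.g. "9b2cf535f27731c974343645a3985328-5"
}
```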
To summarise: you, unfortunately, don't have the MD5 digest hash of the object(s) in the response back from Amazon Elastic Transcoder - you'll have to do this heavy lifting yourself, and it can take a long time if you have huge files.
There's no workaround or quicker solution - you have the quickest solution as you're already getting the ETag value from the HEAD in the most efficient way.
I would perhaps recommend trying to parallelise the HTTP HEAD requests made to get the objects' metadata (s3_cli.head_object(...)) before trying to determine the final MD5 digest of the file(s).
That will definitely speed things up if you have 500 files for example - don't make the API requests separately.
You'll definitely be able to send them in parallel, so offload the time you'd spend between requests all onto Amazon's infrastructure.
Get the responses & then process together.
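The answer's example is Python/boto3, but the idea is the same in any SDK; here is a sketch with the JavaScript v3 SDK, firing all HeadObject calls concurrently (bucket, keys, and region are placeholders):

```typescript
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Issue every HeadObject request at once and collect key -> ETag.
async function fetchEtags(bucket: string, keys: string[]): Promise<Map<string, string>> {
  const responses = await Promise.all(
    keys.map((Key) => s3.send(new HeadObjectCommand({ Bucket: bucket, Key })))
  );
  return new Map(keys.map((key, i) => [key, responses[i].ETag ?? ""]));
}
```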
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported