mediamux | writing responsive React components in a concise, maintainable way | Frontend Utils library
kandi X-RAY | mediamux Summary
A utility for writing responsive React components in a concise, maintainable, mobile-first way. At Klarna we use inline styles extensively. In responsive web applications this can lead to verbose, complicated components where we check against specific media queries like isMobile or isDesktop. Mediamux is a React Hook which returns a function accepting any number of arguments, and returning the argument matching the currently active breakpoint. It is heavily inspired by the array syntax for applying responsive styles in theme-ui and styled-system.
Community Discussions
Trending Discussions on mediamux
QUESTION
For testing purposes I am creating a new video from an existing one using MediaExtractor and MediaMuxer. I expect the new video to be exactly the same duration as the original, but it is not: the new video is slightly shorter than the original.
...ANSWER
Answered 2022-Jan-06 at 10:38

No answer, but some inputs that may help:

I believe the media extractor and media muxer are vendor-owned; Google has the default C++ implementation, but vendors can override it. You can review the Google implementation here: https://cs.android.com/android/platform/superproject/+/master:frameworks/av/media/libstagefright/MediaMuxer.cpp;l=173

It helped me solve one of the voodoo bugs in the engine related to "last frame" / time mismatch. NOTE: you need to make sure you are looking at the right version (check the blame tool to the right).

The media can contain any metadata value that was pushed while creating the file, so you can calculate the duration again or just use what you get.

Did you try taking the video you created and running the test again with that file as the input?
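On the duration point, one quick way to see where the missing time goes (a sketch, not from the thread; the file path and track index 0 are assumptions) is to compare the container's declared duration with the timestamp of the last extracted sample. The declared duration normally also covers the display time of the last frame, so a muxer that only knows sample timestamps can come out roughly one frame shorter:

import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.util.Log;

// Sketch: compare a file's declared duration with its last sample timestamp.
void checkDuration(String inputPath) throws Exception {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(inputPath);                // placeholder path

    MediaFormat format = extractor.getTrackFormat(0);  // assume track 0 is video
    long declaredUs = format.getLong(MediaFormat.KEY_DURATION);

    extractor.selectTrack(0);
    long lastSampleUs = 0;
    while (extractor.getSampleTime() >= 0) {
        lastSampleUs = extractor.getSampleTime();
        extractor.advance();
    }
    // declaredUs usually exceeds lastSampleUs by about one frame interval,
    // since the last frame is displayed for a full interval after its PTS.
    Log.d("DurationCheck", "declared=" + declaredUs + "us last=" + lastSampleUs + "us");
    extractor.release();
}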
QUESTION
The following code is an attempt to use AAC on Android to encode a floating-point sine tone into an MP4 file. However, it fails: either I get a distorted tone or nothing at all, and the length of the MP4 file is also wrong.
I would appreciate some help solving this problem. Thanks!
...ANSWER
Answered 2021-Aug-28 at 05:10

Maybe endianness matters. The byte arrays queued in MediaCodec's input buffer have to store all the PCM sample values in native byte order.
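A minimal sketch of that advice; the codec, sampleRate, and frameIndex names are placeholders, and it assumes KEY_PCM_ENCODING was set to ENCODING_PCM_FLOAT when configuring the encoder. The crucial step is setting the input buffer to native byte order before writing the float samples:

// Sketch: queue float PCM sine samples into the AAC encoder's input buffer.
// Needs java.nio.ByteBuffer and java.nio.ByteOrder.
int inputIndex = codec.dequeueInputBuffer(10_000);
if (inputIndex >= 0) {
    ByteBuffer input = codec.getInputBuffer(inputIndex);
    input.order(ByteOrder.nativeOrder());    // native order, per the answer
    int frames = input.remaining() / 4;      // 4 bytes per float sample, mono
    for (int i = 0; i < frames; i++, frameIndex++) {
        double t = (double) frameIndex / sampleRate;
        input.putFloat((float) Math.sin(2.0 * Math.PI * 440.0 * t));
    }
    long ptsUs = 1_000_000L * (frameIndex - frames) / sampleRate;
    codec.queueInputBuffer(inputIndex, 0, frames * 4, ptsUs, 0);
}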
QUESTION
I’m muxing an audio file and a video file. I have the URLs where both files are hosted, and I use android.media.MediaMuxer and android.media.MediaExtractor to mux them. During the muxing, those classes download the files, but I want to show progress, so I need to know the total file size of the audio and video files.
Does anyone know how to get the file size using those Android APIs? Or do I have to hit the URLs myself (again) to get the content length?
Example files
...ANSWER
Answered 2021-Aug-14 at 13:45

In the end I did a HEAD request to get the "content-length" header (which gives me the file size) before handing off to MediaExtractor. I've realised now I could also have downloaded the file myself, obtained a local FileDescriptor, and passed that to MediaExtractor, avoiding extra network requests.
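That approach looks roughly like this with HttpURLConnection (the method name and URL parameter are placeholders):

import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: HEAD request to read Content-Length without downloading the body.
long fetchContentLength(String fileUrl) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
    try {
        conn.setRequestMethod("HEAD");
        return conn.getContentLengthLong();  // -1 if the server omits the header
    } finally {
        conn.disconnect();
    }
}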
QUESTION
My Android app creates an MP4 using the MediaCodec and the MediaMuxer. I use the MediaPlayer to play back the video. While the video plays, it isn't possible to seek to any location in the video using Android's MediaPlayer. More specifically, the seekTo function will not work. Using other apps to play the video and seek is somewhat sketchy. Some apps seem to work while others do not.
I have swapped my mp4 with a video that I recorded on the camera as well as various videos I've found on the Internet and none of them have the problem seeking. The fact that the stock camera app can generate an MP4 and let you seek clearly indicates a problem in the way the codec is being setup. This leads me to believe that the problem is most likely in the format settings used to create the video. I have tried modifying a number of settings without any success including the profile (used both baseline and main), the profile level, the I-frame interval (GOP) as well as the bitrate and video size. I also made sure the presentation time for each frame matched the frame rate exactly. Here is the info I am getting for both the video that doesn't support seek and one that does (the camera video). Is there anything in these settings that could be causing the problem?
A short test file can be downloaded here. The seek works if you play this in QuickTime or VLC:
https://drive.google.com/file/d/15QiDPYdPd_tVQTkqXuP0v2L7eKoMbWQo/view?usp=sharing
Video that doesn't support seek:
...ANSWER
Answered 2021-Jul-05 at 13:20

An MP4 player needs to know the location of the sync samples (I-frames or IDR frames). The sync sample locations are usually signaled with the Sync Sample Box 'stss', located at moov->trak->mdia->minf->stbl->stss.
In your sample file the 'stss' box is missing.
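One plausible cause, offered as an assumption rather than the thread's confirmed diagnosis: if the encoder drain loop clears or rebuilds BufferInfo.flags before writeSampleData, MediaMuxer never sees BUFFER_FLAG_KEY_FRAME and so writes no 'stss' entries. A sketch of a drain loop that keeps the flags intact:

// Sketch: drain the encoder and pass BufferInfo through unchanged so the
// muxer can record sync samples in the 'stss' box.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = encoder.dequeueOutputBuffer(info, 10_000);
if (outIndex >= 0) {
    ByteBuffer encoded = encoder.getOutputBuffer(outIndex);
    boolean isConfig = (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0;
    if (!isConfig && info.size > 0) {
        // Do not clear info.flags: BUFFER_FLAG_KEY_FRAME marks the
        // seekable sync samples for the muxer.
        muxer.writeSampleData(videoTrackIndex, encoded, info);
    }
    encoder.releaseOutputBuffer(outIndex, false);
}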
QUESTION
I used MediaMuxer and MediaCodec to generate an mp4 video. The video is playable after I call mMediaMuxer.stop(). However, when the user quits the app before I get the chance to call the stop() method, I am left with a big mp4 file that is not playable. Is there any way to repair this mp4 file to make it playable?

Edit: Here is one example of a corrupted mp4 file. I was able to repair the file using this online tool, but the tool asked me to upload a non-corrupted video as a reference. Here is the non-corrupted mp4 video that I used as the reference; when I uploaded this video, the tool repaired my broken mp4 file. So it is possible to repair the file, but how did they do it? If useful, here is the code I used to generate both the corrupted and non-corrupted files.
...ANSWER
Answered 2020-Oct-13 at 12:02

In general, MP4 is not a good recording format: usually the sample table is kept in memory and written on close, so in case of a power loss or an application bug you lose the recording. Use an MPEG-2 Transport Stream or a fragmented MP4 instead; then most of the written media remains playable. Most likely your file contains just an MP4 'ftyp' and 'mdat' atom with the audio and video interleaved. With some educated guessing and knowledge about the video stream, there is a chance to extract the audio and video. https://fix.video seems to do it.
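As a preventive complement to that answer, a sketch (the field names follow the question; placing this in onPause is an assumption) that finalizes the muxer when the user leaves the app, so the sample tables get written:

// Sketch: stop the muxer from a lifecycle callback so the 'moov' box is
// written even when the user backgrounds or quits the app mid-recording.
@Override
protected void onPause() {
    super.onPause();
    try {
        if (mMuxerStarted) {
            mMediaMuxer.stop();      // writes the sample tables
            mMuxerStarted = false;
        }
    } catch (IllegalStateException e) {
        // stop() throws if nothing was written; the file is unusable anyway
    } finally {
        mMediaMuxer.release();
    }
}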
QUESTION
I have a long-running video task in a Xamarin.Android app; it decodes a video file using a MediaPlayer into a custom OpenGL ES Surface, then queues another Surface, encodes the data using a MediaCodec, and drains into a ByteBuffer, which is then passed into a MediaMuxer based upon encoder output ByteBuffer availability. The operation works well and fast until the video file's written byte total is over ~1.3GB, at which point the video (but not the audio) will lock up.
The app seems to have too many GREFs; I'm watching them go up and down in real time until they're finally well above 46000 GREFs. It seems like the operating system (or the app?) is having trouble dumping all of the GREFs via GC, which causes the app to get stuck in the middle of video processing. I'm monitoring the Android resources: the total available memory never changes very much at the OS level, and the CPU also seems to always have plenty of idle headroom (~28%).
I am outputting to the system console and watching the gref output logging using:
adb shell setprop debug.mono.log gref
The garbage collector seems unable to keep up after about 14 minutes. The GREF count goes up, then down, up, then down; eventually it goes so high that it stays above 46k, with the following message looping:
...ANSWER
Answered 2020-Oct-10 at 21:12

It turns out that all I had to do was comment out the line I had mentioned as being suspect:
var curDisplay = EGLContext.EGL.JavaCast<IEGL10>().EglGetCurrentDisplay();
It runs in a loop that gets called thousands of times for a complete video to finish.
What must have been happening is that these EGLDisplay instances (var) were not being properly garbage collected. I would have thought they'd be automatically collected when the method finished, but something was stopping that from happening. If you know more about this, feel free to give a better answer; I'm not exactly sure what caused the finalizer to get hung up on those objects.
That alone won't do much for anyone else facing this type of problem, so here's how I figured it out: first I added this code to the MainActivity's OnCreate. It writes the GREF logs to a file in the /download folder at the root of the device, then loops and updates every 120 seconds (or whatever interval you choose).
QUESTION
I'm trying to run some deep learning experiments on video samples on Android, and I've got stuck on remuxing videos. I have a couple of questions to arrange the information in my head :) I have read some pages: https://vec.io/posts/android-hardware-decoding-with-mediacodec and https://bigflake.com/mediacodec/#ExtractMpegFramesTest but I'm still confused.
My questions:
- Can I read video with MediaExtractor and then pass the data to MediaMuxer to save the video in another file, without using MediaCodec?
- If I want to modify frames before saving, can I do that without using a Surface, just by modifying the ByteBuffer? I assume that I need to decode the data from MediaExtractor, then modify the content, then encode it to MediaMuxer.
- Is a sample the same as a frame in the context of the method MediaExtractor::readSampleData?
- Do I need to decode each sample?
ANSWER
Answered 2020-Aug-08 at 22:08

This is a brief description of what each class does:
- MediaExtractor: Extracts encoded video/audio data.
- MediaCodec: Depending on how it's configured, it can be a decoder or an encoder.
- MediaMuxer: Muxes streams of data into an output file.
This is how your pipeline should generally look:
MediaExtractor -> MediaCodec (as decoder) -> your editing -> MediaCodec (as encoder) -> MediaMuxer
To answer your questions:
- MediaExtractor will give you encoded data; if you want to do anything with its content, you will have to decode it using a MediaCodec. (A pass-through remux, with no editing, is sketched after this list.)
- It might be possible to do so without a surface, but it would be pretty limited; surfaces are the way to go. You can find more info here: Editing frames and encoding with MediaCodec
- Sample can be a video frame or an audio sample
- Yes you do need to decode samples to edit them
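To make the first bullet concrete, here is a minimal pass-through remux sketch: samples flow straight from MediaExtractor to MediaMuxer with no MediaCodec in between. It assumes a single video track at index 0 and placeholder file paths:

import android.media.MediaCodec;
import android.media.MediaExtractor;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

// Sketch: copy the first track into a new file without decoding.
void remux(String inPath, String outPath) throws Exception {
    MediaExtractor extractor = new MediaExtractor();
    extractor.setDataSource(inPath);

    MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    MediaFormat format = extractor.getTrackFormat(0);  // assume track 0
    int dstTrack = muxer.addTrack(format);
    extractor.selectTrack(0);
    muxer.start();

    ByteBuffer buffer = ByteBuffer.allocate(1 << 20);
    MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
    while (true) {
        info.size = extractor.readSampleData(buffer, 0);
        if (info.size < 0) break;                   // no more samples
        info.offset = 0;
        info.presentationTimeUs = extractor.getSampleTime();
        info.flags = extractor.getSampleFlags();    // keeps sync-sample flags
        muxer.writeSampleData(dstTrack, buffer, info);
        extractor.advance();
    }
    muxer.stop();
    muxer.release();
    extractor.release();
}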
QUESTION
I am trying to merge audio and video, but I am getting java.lang.IllegalStateException: Failed to add the track to the muxer. I think the problem is that I cannot add .weba audio to the muxer. If that is the case, how can I go about merging them?
ANSWER
Answered 2020-Apr-25 at 22:15

The simple answer is that you cannot merge them directly with the MediaMuxer API. The documentation of the class has a table with all supported formats: for mpeg4 outputs, the supported audio codecs are AAC, AMR_NB and AMR_WB.
If you only want to use Android APIs, you could use the MediaCodec class to decode your audio data and then encode it to one of the supported formats. Otherwise, there are plenty of libraries out there that allow muxing with a much greater variety of codecs.
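A defensive sketch of that advice (variable names are placeholders): inspect the audio track's MIME type before calling addTrack, and fall back to transcoding when the codec is unsupported:

// Sketch: verify the audio codec before MediaMuxer.addTrack, since mp4
// output only accepts AAC and AMR audio.
MediaExtractor audioExtractor = new MediaExtractor();
audioExtractor.setDataSource(audioPath);             // placeholder path
MediaFormat audioFormat = audioExtractor.getTrackFormat(0);
String mime = audioFormat.getString(MediaFormat.KEY_MIME);

if (MediaFormat.MIMETYPE_AUDIO_AAC.equals(mime)
        || MediaFormat.MIMETYPE_AUDIO_AMR_NB.equals(mime)
        || MediaFormat.MIMETYPE_AUDIO_AMR_WB.equals(mime)) {
    int audioTrack = muxer.addTrack(audioFormat);    // safe to add
} else {
    // A .weba file typically carries Opus or Vorbis: decode and re-encode
    // to AAC with MediaCodec first, or use a third-party muxer library.
    Log.w("Mux", "Unsupported audio codec for mp4 output: " + mime);
}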
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.