d-video | :movie_camera: Native Video plugin | Video Utils library
kandi X-RAY | d-video Summary
:movie_camera: Native Video plugin
Community Discussions
Trending Discussions on d-video
QUESTION
Currently I am developing a simple app to record video using Kotlin, but I am having difficulty getting the absolute path of a video I recorded earlier. Here is my code, written in Kotlin:
...ANSWER
Answered 2021-May-27 at 10:59
Could I get the file path instead of the Uri of my recorded video?
There is no requirement for the video to be recorded as a file on the filesystem, let alone in a location that you can access via the filesystem APIs.
If the scheme of the returned Uri happens to be file, then getPath() will return a filesystem path to the file. However, there is no guarantee that you can use that file, and the file scheme has largely been banned since Android 7.0.
It is far more likely that the scheme will be content. That Uri could point to:
- A local file on external storage that your app could read
- A local file on external storage, but in a place that your app will lack read access to
- A local file on internal storage for the other app
- A local file on removable storage
- A local file that is encrypted and needs to be decrypted on the fly
- A stream of bytes held in a BLOB column in a database
- A piece of content that needs to be downloaded by the other app first
- ...and so on
And, just as with an https URL, there is no magic way to convert the path portion of the Uri to some filesystem path.
I need the filesystem path because I want to process it with FFmpeg later.
I would start by seeing if anyone has an FFmpeg wrapper that can work with a Uri.
If that is unavailable, then you will need to make a copy of the content in a file that you control:
- Use ContentResolver and openInputStream() to get an InputStream on the content, passing in your Uri
- Open a FileOutputStream on some file (e.g., in getCacheDir())
- Copy the bytes from the InputStream to the FileOutputStream (Kotlin has some great extension functions for this; see the sketch below)
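As a rough illustration of those three steps, here is a minimal Kotlin sketch; the function name copyUriToCache and the default file name are my own inventions, and it assumes you have a Context available:

```kotlin
import android.content.Context
import android.net.Uri
import java.io.File

// Copies the bytes behind a content Uri into the app's cache directory
// and returns the resulting File, whose path the app fully controls.
fun copyUriToCache(context: Context, uri: Uri, fileName: String = "recorded_video.mp4"): File {
    val outFile = File(context.cacheDir, fileName)
    val input = context.contentResolver.openInputStream(uri)
        ?: error("Could not open an InputStream for $uri")
    input.use { source ->
        outFile.outputStream().use { sink ->
            source.copyTo(sink) // Kotlin's extension function does the byte copying
        }
    }
    return outFile
}
```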
Or, rather than use ACTION_VIDEO_CAPTURE, use a camera library and take the video directly within your app, instead of relying on a third-party app to capture it.
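If you take the in-app route, CameraX's video capture use case is one option. A minimal sketch, assuming CameraX 1.1+ (the androidx.camera.video artifacts) and that camera permission has already been granted:

```kotlin
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.video.FileOutputOptions
import androidx.camera.video.Quality
import androidx.camera.video.QualitySelector
import androidx.camera.video.Recorder
import androidx.camera.video.VideoCapture
import androidx.camera.video.VideoRecordEvent
import androidx.core.content.ContextCompat
import java.io.File

// Bind a Recorder-backed VideoCapture use case and record straight to a
// file whose path the app controls (no content Uri involved).
fun startRecording(activity: AppCompatActivity, outFile: File) {
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val recorder = Recorder.Builder()
            .setQualitySelector(QualitySelector.from(Quality.HD))
            .build()
        val videoCapture = VideoCapture.withOutput(recorder)
        provider.bindToLifecycle(activity, CameraSelector.DEFAULT_BACK_CAMERA, videoCapture)

        // Keep the returned Recording and call stop() on it when finished.
        val recording = videoCapture.output
            .prepareRecording(activity, FileOutputOptions.Builder(outFile).build())
            .start(ContextCompat.getMainExecutor(activity)) { event ->
                if (event is VideoRecordEvent.Finalize) {
                    // outFile now holds the finished clip: a real filesystem
                    // path you can hand to FFmpeg.
                }
            }
    }, ContextCompat.getMainExecutor(activity))
}
```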
QUESTION
I'm trying to use the Windows.Media.Playback MediaPlayer. I am attempting to follow the information here: Play audio and video with MediaPlayer. I am able to hear the audio of a video by calling it from C#.
...ANSWER
Answered 2021-May-26 at 19:53
You are trying to apply UWP controls to a cross-platform Xamarin.Forms project. Windows.Media.Playback is only for Windows and will not work on Android or iOS. There are techniques you can use to include platform-specific controls in a Xamarin project, or you can use a cross-platform control like MediaElement.
QUESTION
I added some controls, like rewind and reload, to the video on my page, but I want these added controls to fade in and out like the video's default controls: fade in when the mouse moves, fade out after it stops moving for 2 seconds.
...ANSWER
Answered 2021-May-25 at 20:34
After testing a few approaches, I found the answer.
QUESTION
I am trying to declare a ClientFunction on a factory page and then call it from a test page, but I am doing something wrong and it doesn't work. I have two pages: a factory page and a test page. When the ClientFunction lives on the test page it works fine, but when I move it to the factory page it stops working.
...ANSWER
Answered 2021-May-19 at 11:02
You can do this:
QUESTION
I have been looking into this problem for the last two days but haven't found any solution. I prepared this jsFiddle to show you my exact problem.
How do I make the image visible only in section 2? It should scroll behind the layer of section 1. It works from section 2 into section 3, but I can't find a way to place it behind section 1.
...ANSWER
Answered 2021-May-11 at 13:00
I added this CSS:
QUESTION
I've got a moderately complicated AVAssetWriterInput setup that I'm using to be able to flip the camera while recording. Basically, I run two sessions; when the user taps to flip the camera, I disconnect session 1 from the output and attach session 2.
This works really great. I can export the video and it plays just fine.
Now that I'm trying to do more advanced stuff with the resulting video some problems are popping up, specifically the AVAssetTracks on the inside of the exported AVAsset are slightly mismatched (always by less than 1 frame). Specifically I'm trying to do this: https://www.raywenderlich.com/6236502-avfoundation-tutorial-adding-overlays-and-animations-to-videos but a significant amount of the time there ends up being an all black frame, sometimes at the head of the video, sometimes at the tail of the video, that appears for a split second. The time varies, but it's always less than a frame (see logs below, 1/30 or 0.033333333s)
I did a bit of back-and-forth debugging and I managed to record a video using my recorder that consistently produced a trailing black frame, BUT using the tutorial code I have not been able to create a video that produces a trailing black frame. I added some similar logging (to what's pasted below) to the tutorial code and I'm seeing deltas of no greater than 2/100ths of a second. So around 1/10th of 1 frame at most. It's even a perfect 0 on one occasion.
So my sense right now is that what's happening is I record my video, both assetInputs start to gobble data, and then when I say "stop" they stop. The video input stops with the last complete frame, and the audio input does similarly. But since the audio input is sampling at a much higher rate than the video they're not synced up perfectly and I end up with more audio than video. This isn't a problem until I compose an asset with the two tracks and then the composition engine thinks I mean "yes, actually use 100% of all the time for all tracks even if there is a mismatch" which results in the black screen.
(Edit: This is basically what's happening - https://blender.stackexchange.com/questions/6268/audio-track-and-video-track-are-not-the-same-length)
I think the correct solution is, instead of worrying about the composition construction and timing and making sure it's all right, just make the captured audio and video match up as nicely as possible. Ideally 0, but I'd be fine with anything around 1/10th of a frame.
So my question is: How do I make two AVAssetWriterInputs, one audio and one video, attached to a AVAssetWriter line up better? Is there a setting somewhere? Do I mess with the framerates? Should I just trim the exported asset to the length of the video track? Can I duplicate the last captured frame when I stop recording? Can I have it so that the inputs stop at different times - basically have the audio stop first and then wait for the video to 'catch up' and then stop the video? Something else? I'm at a loss for ideas here :|
MY LOGGING
...ANSWER
Answered 2021-May-05 at 19:44
TL;DR: don't just call AVAssetWriter.finishWriting {}, because then the last written frame becomes T_End. Instead, use AVAssetWriter.endSession(atSourceTime:) to set T_End to the time of the last written video frame.
AVCaptureVideoDataOutputSampleBufferDelegate TO THE RESCUE!!
Use AVCapture(Video|Audio)DataOutputSampleBufferDelegate to write buffers to the AVAssetWriter (attach delegates to AVCaptureVideoDataOutput and AVCaptureAudioDataOutput)
Once the session is started and your outputs are going they're going to constantly be spitting out data onto this delegate
- canWrite is a flag that tracks whether you should be recording (writing sampleBuffers to the AVAssetWriter) or not
- In order to prevent leading black frames, we need to make sure the first frame is a video frame. Until we get a video frame, we ignore incoming frames even while recording. startSession(atSourceTime:) sets T0 for the asset, which we set to the time of the first video frame
- Every time a video frame is written, record that time on a separate queue. This frees up the delegateQueue to do only frame processing/writing, as well as guaranteeing that stopping recording (which will be triggered from the main queue) will not have collisions or memory issues when reading the lastVideoFrameWrite
Now for the fun part!
- In order to prevent trailing black frames, we have the AVAssetWriter end its session at T_lastVideoFrameTime. This discards all frames (audio and video) that were written after T_lastVideoFrameTime ensuring that both assetTracks inside the AVAssetWriter are as synced up as possible.
RESULTS
QUESTION
Background: Could having audio as stream 0 and video as stream 1 explain why my MPG will play on OSX QuickTime Player, but not Win10 Movies & TV?
I've got an mpg file with audio as stream 0 and video as stream 1.
It plays fine under OSX QT Player, but not under Win10's default app.
For lack of a better idea, I'm assuming the unusual stream ordering is my problem, and I'm trying to fix it with ffmpeg.
What luck! https://trac.ffmpeg.org/wiki/Map describes exactly my case!
Re-order streams
The order of your -map options determines the order of the streams in the output. In this example the input file has audio as stream #0 and video as stream #1 (which is possible but unusual). Example to re-position video so it is listed first, followed by the audio:
ffmpeg -i input.mp4 -map 0:v -map 0:a -c copy output.mp4
This example stream copies (re-mux) with -c copy to avoid re-encoding.
I use exactly that command, but the flipping doesn't seem to work, like so:
...ANSWER
Answered 2021-Apr-13 at 00:26
This seemed a tricky one at first. I wondered if this old FFmpeg trac ticket might hold the key:
There is no stream order in MPEG-PS; what you see in the ffmpeg output order is likely just whether an audio or video packet comes first.
That's not actually the problem; however, it is worth noting that your file has a .mpg extension when you should be outputting MP4 or MKV. ".mpg" is only valid if it contains legacy MPEG-1 and MPEG-2 formats; H.264 or AAC elementary streams are invalid.
If you've not created this file yourself, it's either a mislabelled container (e.g. MKV or MP4), or someone has bizarrely muxed the streams to .mpg. Note how FFmpeg warns you of the incompatible codec during your stream reorder attempt.
MPEG-PS is a packetised format, so there are no elementary streams as such. If it's genuinely an MPEG-PS file, it may be that an audio sample simply appears first. Either way, you should stop using .mpg for these formats.
See the end of this answer for how you can use FFprobe to fairly accurately identify the actual container format.
I had another think, and finally a neuron reminded me about how the -map output follows the order of assignment.
An important thing to note is that -map 0:v -map 0:a doesn't quite work how you might expect with files containing more than one stream of a given type, as that syntax matches all applicable streams of that type.
Gyan has clarified that if you have a file with exactly one video and one audio stream, -map 0:v -map 0:a will function equivalently to -map 0:1 -map 0:0.
If you want to use the 0:a syntax and you have more than one audio stream, you must address them individually, otherwise FFmpeg will group them when reordering: -map 0:a will move both audio streams, while -map 0:a:0 will move just the first one.
The alternative, remembering to always check the stream order in every file you manipulate, is to specify absolute stream numbers in the order you wish them to appear in the output: -map 0:1 -map 0:0 if your video is the second of two streams in the source file.
For files with one video and one audio stream, you can use either method. A sketch of the absolute-number variant follows below.
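To make that concrete, here is a hedged Kotlin wrapper around the ffmpeg invocation (the file names are hypothetical, and it assumes ffmpeg is on the PATH):

```kotlin
import java.io.File

// Re-muxes the input so that stream 0:1 (video) comes before stream 0:0
// (audio), copying both codecs rather than re-encoding, into a container
// that actually supports them (MKV here).
fun reorderStreams(input: File, output: File) {
    val cmd = listOf(
        "ffmpeg", "-i", input.absolutePath,
        "-map", "0:1",  // the video, second stream in the source
        "-map", "0:0",  // the audio, first stream in the source
        "-c", "copy",
        output.absolutePath
    )
    val exit = ProcessBuilder(cmd).inheritIO().start().waitFor()
    check(exit == 0) { "ffmpeg exited with code $exit" }
}

// Usage, following the answer's advice to prefer MKV/MP4 over .mpg:
// reorderStreams(File("input.mpg"), File("output.mkv"))
```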
Tests
I created an .MP4 file containing one H.264 video as stream 0:0 and one MP3 audio as stream 0:1.
Original file:
QUESTION
I followed the embedding tutorial on this page, but the video in my iframe is not autoplaying. I have the exact parameters set for autoplay and looping, but for some reason it's not working. Here is the code:
...ANSWER
Answered 2021-May-04 at 00:20
You can only autoplay if the video is muted; try adding &muted=1 to the URL. For example:
QUESTION
I am trying to autoplay a sound which would act as an app startup sound. Below is what I am using. When I add controls = TRUE and click play, the audio file plays fine, but when I set autoplay = TRUE, it doesn't play on its own. How can I fix this?
...ANSWER
Answered 2021-Apr-28 at 02:03
After you download the silence.mp3 file from the link you gave (and keep it in the www folder), you can do the following.
QUESTION
2021-03-06 17:56:41.069 2475-3124/com.example.app I/org.webrtc.Logging: EglRenderer: local_video_viewDropping frame - No surface
That's the error/warning I am experiencing, and the remote/local video track is not rendering in the QBRTCSurfaceView.
...ANSWER
Answered 2021-Apr-17 at 03:11
I found my answer: if you are having issues with QBRTCSurfaceViews, check the Android SurfaceView options and configure them based on your setup: https://developer.android.com/reference/android/view/SurfaceView
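For illustration: QBRTCSurfaceView ultimately extends android.view.SurfaceView, so the standard Z-order options apply. A minimal Kotlin sketch (which view overlays which is an assumption about your layout; call this before the views are attached to the window):

```kotlin
import android.view.SurfaceView

// Typical Z-order configuration when one video surface must render on top
// of another, e.g. a local preview over the remote video. Both calls are
// plain android.view.SurfaceView API, which QBRTCSurfaceView inherits.
fun configureVideoSurfaces(remoteView: SurfaceView, localView: SurfaceView) {
    remoteView.setZOrderMediaOverlay(false) // remote video in the normal surface layer
    localView.setZOrderMediaOverlay(true)   // local preview in an overlay layer above it
}
```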
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported