livedl | live streaming video downloader | Video Utils library
Trending Discussions on Video
QUESTION
This code works for downloading all photos and videos from a profile:
from instaloader import Instaloader, Profile

L = Instaloader()
PROFILE = "username"
profile = Profile.from_username(L.context, PROFILE)
posts_sorted_by_likes = sorted(profile.get_posts(), key=lambda post: post.likes, reverse=True)

for post in posts_sorted_by_likes:
    L.download_post(post, PROFILE)
Now I want to download only videos, but I can't. How can I filter this code to download only videos?
ANSWER
Answered 2022-Feb-19 at 06:17

Post has an is_video property:

for post in posts_sorted_by_likes:
    if post.is_video:
        L.download_post(post, PROFILE)
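Not part of the original answer, but worth knowing: the loader itself can be told to skip the non-video files it normally saves next to each post. A minimal sketch, assuming the standard Instaloader constructor options:

from instaloader import Instaloader, Profile

# Skip pictures, video thumbnails and metadata files; only videos are saved.
L = Instaloader(download_pictures=False,
                download_video_thumbnails=False,
                save_metadata=False)
profile = Profile.from_username(L.context, "username")
for post in profile.get_posts():
    if post.is_video:
        L.download_post(post, target=profile.username)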
QUESTION
I don't want to show the playback speed option in my video player. Is there any controls or controlsList property to disable that option, like controls disablepictureinpicture controlslist="nodownload"?
ANSWER
Answered 2021-Sep-08 at 10:36

According to the docs only three options are available (nodownload, nofullscreen, and noremoteplayback) and none seems to do what you want.
And you can't style the browser's default control set, but you can use the (JavaScript) Media API to build your own control set which of course you can style in any way that you like. See this CodePen.
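For illustration, a minimal sketch of that approach (the element IDs and markup are assumptions, not from the docs): omit the controls attribute so the native control set, including the playback-speed menu, never appears, then drive the player with the Media API:

// Assumes <video id="player" src="movie.mp4"></video> (no "controls" attribute)
// and <button id="toggle">Play/Pause</button> somewhere in the page.
const player = document.getElementById('player');
const toggle = document.getElementById('toggle');

toggle.addEventListener('click', () => {
  // play() and pause() are standard HTMLMediaElement methods.
  if (player.paused) {
    player.play();
  } else {
    player.pause();
  }
});

Since the browser UI is never shown, there is simply no speed menu left to hide.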
QUESTION
I imported a tif movie into Python which has the dimensions (150, 512, 512). I would like to calculate the mean pixel intensity for each of the 150 frames and then plot it over time. I could figure out how to calculate the mean intensity over the whole stack (see below), but I am struggling to calculate it for each frame individually.
from skimage import io
im1 = io.imread('movie.tif')
print("The mean of the TIFF stack is:")
print(im1.mean())
How can I get the mean pixel intensities for each frame?
ANSWER
Answered 2022-Feb-18 at 23:25

You could slice the matrix and obtain the mean for each frame like below:
from skimage import io
im1 = io.imread('movie.tif')
for i in range(im1.shape[0]):
    print(im1[i,:,:].mean())
To plot it you can use a library like matplotlib:
from skimage import io
import matplotlib.pyplot as plt
im1 = io.imread('movie.tif')
y = []
for i in range(im1.shape[0]):
    y.append(im1[i,:,:].mean())
x = [*range(0, im1.shape[0], 1)]
plt.plot(x,y)
plt.show()
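Since io.imread returns a NumPy array, the same per-frame means can be computed in one vectorized call; a small sketch of that variant (same file name assumed as above):

from skimage import io
import matplotlib.pyplot as plt

im1 = io.imread('movie.tif')    # shape (150, 512, 512)
means = im1.mean(axis=(1, 2))   # one mean per frame, shape (150,)
plt.plot(means)
plt.xlabel('frame')
plt.ylabel('mean intensity')
plt.show()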
QUESTION
I'm trying to add rotation metadata to video recorded from an RTSP stream. All works fine until I try to run the recording with the segment format. My command looks like this:
ffmpeg -rtsp_transport tcp -stimeout 1000000 -i "" -vcodec copy -map_metadata 0 -metadata:s:v rotate=270 -an -dn -y -segment_time 60 -strftime 1 -reset_timestamps 1 -t 25 -f segment /home/short.mp4
I can see in the logs that the rotation should be written to the metadata as displaymatrix: rotation of -90.00 degrees:
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7.3.0 (GCC)
configuration: --disable-stripping --enable-pic --enable-shared --enable-pthreads --cross-prefix=aarch64-poky-linux- --ld='aarch64-poky-linux-gcc -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot' --cc='aarch64-poky-linux-gcc -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot' --cxx='aarch64-poky-linux-g++ -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot' --arch=aarch64 --target-os=linux --enable-cross-compile --extra-cflags=' -O2 -pipe -g -feliminate-unused-debug-types -fdebug-prefix-map=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0=/usr/src/debug/ffmpeg/4.2.2-r0 -fdebug-prefix-map=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot= -fdebug-prefix-map=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot-native= -march=armv8-a+crc -fstack-protector-strong -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot' --extra-ldflags='-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -fstack-protector-strong -Wl,-z,relro,-z,now' --sysroot=/home/ubuntu/przemoch/safeway-by-sternkraft/build/tmp/work/aarch64-poky-linux/ffmpeg/4.2.2-r0/recipe-sysroot --libdir=/usr/lib --shlibdir=/usr/lib --datadir=/usr/share/ffmpeg --disable-mipsdsp --disable-mipsdspr2 --cpu=generic --pkg-config=pkg-config --disable-static --enable-alsa --enable-avcodec --enable-avdevice --enable-avfilter --enable-avformat --enable-avresample --enable-bzlib --disable-libfdk-aac --enable-gpl --disable-libgsm --disable-indev=jack --disable-libvorbis --enable-lzma --disable-libmfx --disable-libmp3lame --disable-openssl --enable-postproc --disable-sdl2 --disable-libspeex --enable-swresample --enable-swscale --enable-libtheora --disable-vaapi --disable-vdpau --disable-libvpx --enable-libx264 --disable-libx265 --enable-libxcb --enable-outdev=xv --enable-zlib
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, rtsp, from 'rtsp://admin:dupa.666@192.168.77.122:554/user=admin_password=tlJwpbo6_channel=1_stream=0.sdp?real_stream':
Metadata:
title : RTSP Session
Duration: N/A, start: 0.200000, bitrate: N/A
Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, 5 fps, 5 tbr, 90k tbn, 180k tbc
Output #0, segment, to '/home/short.mp4':
Metadata:
title : RTSP Session
encoder : Lavf58.29.100
Stream #0:0: Video: h264 (Main), yuvj420p(pc, bt709, progressive), 1920x1080, q=2-31, 5 fps, 5 tbr, 90k tbn, 90k tbc
Side data:
displaymatrix: rotation of -90.00 degrees
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[segment @ 0x5594be0340] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[segment @ 0x5594be0340] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
frame= 67 fps=8.6 q=-1.0 Lsize=N/A time=00:00:13.00 bitrate=N/A speed=1.67x
Unfortunately, there is no rotation metadata in the output video:
$ ffmpeg -i short.mp4
ffmpeg version 4.2.2 Copyright (c) 2000-2019 the FFmpeg developers
  (build configuration and library versions identical to the banner above)
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'short.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
title : RTSP Session
encoder : Lavf58.29.100
Duration: 00:00:13.00, start: 0.000000, bitrate: 1024 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuvj420p(pc, bt709), 1920x1080, 1023 kb/s, 5.15 fps, 5 tbr, 90k tbn, 180k tbc (default)
Metadata:
handler_name : VideoHandler
At least one output file must be specified
Any idea how to record RTSP using both segment and metadata? I'd really prefer using metadata for rotation, as I plan to save more information in metadata in the future.
ANSWER
Answered 2022-Feb-11 at 10:03

I found out it has been resolved upstream, and it works fine in ffmpeg 5.0. You can also apply this patch to 4.4.
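To verify that the rotation side data actually landed in a segment after upgrading, ffprobe can print it directly; a sketch (treat the stream_side_data section name as an assumption, it may differ on older ffprobe builds):

ffprobe -v error -select_streams v:0 -show_entries stream_side_data=rotation short.mp4

If the displaymatrix was written, this should report a rotation of -90 for the video stream.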
QUESTION
I have several videos on my site that have the same class.
I want to play a video only while hovering over it; as soon as the hover ends, the video should pause after a delay of 1 second.
I learned how to start a video and pause it. But as soon as I add setTimeout I get an error: Uncaught TypeError: Cannot read properties of undefined (reading 'pause')
Below is the JavaScript code of my solution:
var figure = $(".cases__item").hover(hoverVideo, hideVideo);

function hoverVideo(e) {
  $('.cases__item-video', this).get(0).play();
}

function hideVideo(e) {
  setTimeout(function () {
    $('.cases__item-video', this).get(0).pause();
  }, 1000);
}
I also attach a working jsfiddle with my code: https://jsfiddle.net/v7certb6/1/
Please help me resolve the video pause issue.
I will be very grateful for any help.
ANSWER
Answered 2022-Feb-04 at 20:36

The issue is because this in the setTimeout() function handler refers to that function, not to the element reference provided in the invocation of the outer hoverVideo() or hideVideo() functions.
To fix this issue, create a variable in the outer scope to retain the reference to this, which you then use within the setTimeout():
var figure = $(".cases__item").hover(hoverVideo, hideVideo);

function hoverVideo(e) {
  let _this = this;
  $('.cases__item-video', _this).get(0).play();
}

function hideVideo(e) {
  let _this = this;
  setTimeout(function() {
    $('.cases__item-video', _this).get(0).pause();
  }, 1000);
}
Example Fiddle - note the example is in a fiddle as the cross-site content has issues when played on the main SO site.
As a side note, the JS can be reduced to the following, which has the exact same behaviour just using arrow functions:
$(".cases__item").hover(
e => e.currentTarget.querySelector('.cases__item-video').play(),
e => setTimeout(() => e.currentTarget.querySelector('.cases__item-video').pause(), 1000));
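A further tweak worth considering (not part of the original answer): with the code above, re-entering an item within the 1-second window still lets the pending pause fire. Storing the timer per element and clearing it on hover avoids that; a sketch assuming the same markup and classes:

$(".cases__item").hover(
  function () {
    clearTimeout($(this).data('pauseTimer')); // cancel a pending pause on re-enter
    $('.cases__item-video', this).get(0).play();
  },
  function () {
    const video = $('.cases__item-video', this).get(0);
    $(this).data('pauseTimer', setTimeout(() => video.pause(), 1000));
  });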
QUESTION
I have an interlaced video stream and need to apply a filter that takes two frames as input (for example tblend or lut2) to consecutive frames, placing each filter output between the original frames, like this:
Input frames: [1] [2] [3] ..... FPS=25
| \ / | \ / |
| \ / | \ / |
Output frames: [1] [f(1,2)] [2] [f(2,3)] [3] ..... FPS=50 i/p
I think I need the select filter plus expressions to select frames, but I don't know how to do it. Please help.
Notes:
- Input has no audio stream.
- Output = uncompressed yuv422 8-bit in an AVI container.
- The output scan type can be interlaced or progressive.
- I have to do this with just one command.
I tried ffmpeg -i in.avi -vf tblend -vcodec rawvideo out.avi, but the output of this command is not what I want.
ANSWER
Answered 2022-Feb-04 at 10:13

You may chain the tblend, interleave and setpts filters, where the two inputs to the interleave filter are the output of tblend and the original video:
Example (assuming input framerate is 25Hz):
ffmpeg -y -i in.avi -filter_complex "[0:v]tblend=all_mode=average[v1],[v1][0:v]interleave,setpts=N/50/TB" -r 50 -vcodec rawvideo -pix_fmt bgr24 out.avi
- [0:v]tblend=all_mode=average[v1] creates a stream with the tblend filter, and gives the output the temporary name [v1].
- [v1][0:v]interleave applies the interleave filter to [v1] and the original video.
- setpts=N/50/TB fixes the timestamps to match the 50 fps output video.
- -r 50 sets the output frame rate to 50Hz.
Note: I selected -pix_fmt bgr24, because yuv422 is not played by MPC-HC.
Testing:

Build a synthetic pattern (the -r 25, rate=1 and setpts=N/25/TB are used for creating a counting-number pattern at 25Hz):
ffmpeg -y -f lavfi -r 25 -i testsrc=size=192x108:rate=1:duration=10 -vf setpts=N/25/TB -vcodec rawvideo -pix_fmt bgr24 in.avi
Execute the command:
ffmpeg -y -i in.avi -filter_complex "[0:v]tblend=all_mode=average[v1],[v1][0:v]interleave,setpts=N/50/TB" -r 50 -vcodec rawvideo -pix_fmt bgr24 out.avi
Checking frame by frame:
As you can see, the frames 0 and 2 are the original frames, and 1 and 3 are blended output of two original frames.
Examples for two cases of interlaced video frames:
tinterlace filter is used for creating synthetic interlaced video.
Simulating two fields that originated from a single video frame:
'drop_even, 1'
Only output odd frames, even frames are dropped, generating a frame with unchanged height at half frame rate.
------> time
Input:
Frame 1 Frame 2 Frame 3 Frame 4
11111 22222 33333 44444
11111 22222 33333 44444
11111 22222 33333 44444
11111 22222 33333 44444
Output:
11111 33333
11111 33333
11111 33333
11111 33333
ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=2:duration=10 -vf tinterlace=drop_even,fieldorder=tff -vcodec rawvideo -pix_fmt bgr24 in_drop_even.avi
Simulating two fields that captured at different times (not originated from the same video frame):
'interleave_top, 4'
Interleave the upper field from odd frames with the lower field from even frames, generating a frame with unchanged height at half frame rate.
------> time
Input:
Frame 1 Frame 2 Frame 3 Frame 4
11111<- 22222 33333<- 44444
11111 22222<- 33333 44444<-
11111<- 22222 33333<- 44444
11111 22222<- 33333 44444<-
Output:
11111 33333
22222 44444
11111 33333
22222 44444
ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=2:duration=10 -vf tinterlace=interleave_top,fieldorder=tff -vcodec rawvideo -pix_fmt bgr24 in_interleave_top.avi
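To sanity-check either result, you can have ffprobe decode and count the output frames; a small sketch using standard options (the output should contain twice as many frames as the input):

ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nw=1:nk=1 out.avi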
QUESTION
I would like to be able to robustly stop a video when the video arrives on some specified frames in order to do oral presentations based on videos made with Blender, Manim...
I'm aware of this question, but the problem is that the video does not stop exactly at the right frame. Sometimes it continues forward for one frame, and when I force it to come back to the initial frame we see the video going backward, which is weird. Even worse, if the next frame is completely different (different background...), this will be very visible.
To illustrate my issues, I created a demo project here (just click "next" and see that when the video stops, sometimes it goes backward). The full code is here.
The important part of the code I'm using is:
var video = VideoFrame({
  id: 'video',
  frameRate: 24,
  callback: function(curr_frame) {
    // Stops the video when arriving on a frame to stop at.
    if (stopFrames.includes(curr_frame)) {
      console.log("Automatic stop: found stop frame.");
      pauseMyVideo();
      // Ensure we are on the proper frame.
      video.seekTo({frame: curr_frame});
    }
  }
});
So far, I avoid this issue by stopping one frame before the end and then using seekTo (not sure how sound this is), as demonstrated here. But as you can see, sometimes when going to the next frame it "freezes" a bit: I guess this is when the stop happens right before the seekTo.
PS: if you know a reliable way in JS to know the number of frames of a given video, I'm also interested.
Concerning the idea to cut the video beforehand on the desktop, this could be used... but I have had bad experiences with that in the past, notably because changing videos sometimes produces glitches. Also, it can be more complicated to use, as it means that the video would have to be cut many times and re-encoded...
EDIT: Is there any solution, for instance based on WebAssembly (more compatible with old browsers) or WebCodecs (more efficient, but not yet widespread)? WebCodecs seems to allow pretty amazing things, but I'm not sure how to use it for this. I would love to hear solutions based on both of them, since Firefox does not support WebCodecs yet. Note that it would be great if audio is not lost in the process. Bonus if I can also make controls appear on request.
EDIT: I'm not sure I understand what's happening here (source)... but it seems to do something close to my need (using WebAssembly, I think), since it manages to play a video in a canvas, frame by frame... Here is another website that does something close to my need using WebCodecs. But I'm not sure how to reliably synchronize sound and video with WebCodecs.
EDIT: answer to the first question
Concerning the video frame rate: indeed, I chose my frame rate poorly; it was 25, not 24. But even using a framerate of 25, I still don't get a frame-precise stop, in both Firefox and Chromium. For instance, here is a recording (using OBS) of your demo (I see the same with mine when I use 25 instead of 24):
One frame later, see that the butterfly "flies backward" (this is maybe not very visible with still screenshots, but see for instance the position of the lower left wing in the flowers):
I can see three potential reasons. First (I think it is the most likely reason), I heard that video.currentTime was not always reporting the time accurately; maybe that explains why it fails here? It seems to be accurate enough for changing the current frame (I can go forward and backward by one frame quite reliably as far as I can see), but people reported here that video.currentTime is computed using the audio time and not the video time in Chromium, leading to some inconsistencies (I observe similar inconsistencies in Firefox), and here that it may reflect either the time at which the frame is sent to the compositor or the time at which the frame is actually painted by the compositor (if it is the latter, it could explain the delay we see sometimes). This would also explain why requestVideoFrameCallback is better, as it also provides the current media time.
The second reason that could explain the problem is that setInterval may not be precise enough... However, requestAnimationFrame is not really better (requestVideoFrameCallback is not available in Firefox), even though it should fire 60 times per second, which should be quick enough.
The third option I can see is that maybe the .pause function takes a while to fire... and that by the end of the call the video has already played another frame. On the other hand, your example using requestVideoFrameCallback https://mvyom.csb.app/requestFrame.html seems to work pretty reliably, and it's using .pause! Unfortunately it only works in Chromium, not in Firefox. I see that you use metadata.mediaTime instead of currentTime; maybe this is more precise than currentTime.
The last option is that there is maybe something subtle concerning vsync, as explained in this page. It also reports that expectedDisplayTime may help to solve this issue when using requestVideoFrameCallback.
ANSWER
Answered 2022-Jan-21 at 19:18

The video has a frame rate of 25fps, not 24fps:
After putting in the correct value it works OK: demo
The VideoFrame API heavily relies on the FPS you provide. You can find the FPS of your videos offline and send it as metadata, along with the stop frames, from the server.
The site videoplayer.handmadeproductions.de uses window.requestAnimationFrame() to get the callback.
There is a new, better alternative to requestAnimationFrame: requestVideoFrameCallback() lets us do per-video-frame operations on a video.
The same functionality you demoed in the OP can be achieved like this:
const callback = (now, metadata) => {
  if (startTime == 0) {
    startTime = now;
  }
  elapsed = metadata.mediaTime;
  currentFrame = metadata.presentedFrames - doneCount;
  fps = (currentFrame / elapsed).toFixed(3);
  fps = !isFinite(fps) ? 0 : fps;
  updateStats();
  if (stopFrames.includes(currentFrame)) {
    pauseMyVideo();
  } else {
    video.requestVideoFrameCallback(callback);
  }
};
video.requestVideoFrameCallback(callback);
And here is what the demo looks like.
The API works on Chromium-based browsers like Chrome, Edge, Brave etc.
There is a JS library named mediainfo.js which finds the frame rate from the video binary file.
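Since requestVideoFrameCallback() is Chromium-only for now, a common pattern (a sketch, not from the original answer; it reuses the video, stopFrames and pauseMyVideo names from the snippet above) is to feature-detect it and fall back to polling elsewhere:

if ('requestVideoFrameCallback' in HTMLVideoElement.prototype) {
  // Precise per-frame callbacks (Chromium): use the code above.
  video.requestVideoFrameCallback(callback);
} else {
  // Fallback (e.g. Firefox): estimate the frame from currentTime at ~60Hz.
  const fps = 25; // must match the real frame rate of the video
  const poll = () => {
    const currentFrame = Math.round(video.currentTime * fps);
    if (stopFrames.includes(currentFrame)) {
      pauseMyVideo();
    } else {
      requestAnimationFrame(poll);
    }
  };
  requestAnimationFrame(poll);
}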
QUESTION
In my Facebook Video Downloader Android application I want to show video resolutions like SD and HD, with sizes. Currently I am using InputStreamReader and Pattern.compile to find the SD and HD URLs of a video. This method rarely gets me the HD link of a video and usually provides only the SD URL, which can be downloaded.
Below is my link-parsing code:
fun linkParsing(url: String, loaded: (item: DownloadItem) -> Unit) {
    val showLogs: Boolean = true
    Log.e("post_url", url)
    return try {
        val getUrl = URL(url)
        val urlConnection = getUrl.openConnection() as HttpURLConnection
        var reader: BufferedReader? = null
        urlConnection.setRequestProperty("User-Agent", POST_USER_AGENT)
        urlConnection.setRequestProperty("Accept", "*/*")
        val streamMap = StringBuilder()
        try {
            reader = BufferedReader(InputStreamReader(urlConnection.inputStream))
            var line: String?
            while (reader.readLine().also { line = it } != null) {
                streamMap.append(line)
            }
        } catch (E: Exception) {
            E.printStackTrace()
            reader?.close()
            urlConnection.disconnect()
        } finally {
            reader?.close()
            urlConnection.disconnect()
        }
        if (streamMap.toString().contains("You must log in to continue.")) {
            // Page requires login; nothing to parse.
        } else {
            // Note: the string literals inside several Pattern.compile("") and
            // replace("", "") calls below were stripped when the question was
            // posted; the empty strings are placeholders from the original post.
            val metaTAGTitle = Pattern.compile("")
            val metaTAGTitleMatcher = metaTAGTitle.matcher(streamMap)
            val metaTAGDescription = Pattern.compile("")
            val metaTAGDescriptionMatcher = metaTAGDescription.matcher(streamMap)
            var authorName: String? = ""
            var fileName: String? = ""
            if (metaTAGTitleMatcher.find()) {
                var author = streamMap.substring(metaTAGTitleMatcher.start(), metaTAGTitleMatcher.end())
                Log.e("Extractor", "AUTHOR :: $author")
                author = author.replace("", "")
                authorName = author
            } else {
                authorName = "N/A"
            }
            if (metaTAGDescriptionMatcher.find()) {
                var name = streamMap.substring(
                    metaTAGDescriptionMatcher.start(),
                    metaTAGDescriptionMatcher.end()
                )
                Log.e("Extractor", "FILENAME :: $name")
                name = name.replace("", "")
                fileName = name
            } else {
                fileName = "N/A"
            }
            val sdVideo = Pattern.compile("")
            val sdVideoMatcher = sdVideo.matcher(streamMap)
            val imagePattern = Pattern.compile("")
            val imageMatcher = imagePattern.matcher(streamMap)
            val thumbnailPattern = Pattern.compile("")
            val thumbnailMatcher = thumbnailPattern.matcher(streamMap)
            val hdVideo = Pattern.compile("(hd_src):\"(.+?)\"")
            val hdVideoMatcher = hdVideo.matcher(streamMap)
            val facebookFile = DownloadItem()
            facebookFile?.author = authorName
            facebookFile?.filename = fileName
            facebookFile?.postLink = url
            if (sdVideoMatcher.find()) {
                var vUrl = sdVideoMatcher.group()
                vUrl = vUrl.substring(8, vUrl.length - 1) // strip the sd_src: prefix (8 chars)
                facebookFile?.sdUrl = vUrl
                facebookFile?.ext = "mp4"
                var imageUrl = streamMap.substring(sdVideoMatcher.start(), sdVideoMatcher.end())
                imageUrl = imageUrl.replace("", "").replace("&amp;", "&")
                Log.e("Extractor", "FILENAME :: NULL")
                Log.e("Extractor", "FILENAME :: $imageUrl")
                facebookFile?.sdUrl = URLDecoder.decode(imageUrl, "UTF-8")
                if (showLogs) {
                    Log.e("Extractor", "SD_URL :: Null")
                    Log.e("Extractor", "SD_URL :: $imageUrl")
                }
                if (thumbnailMatcher.find()) {
                    var thumbNailUrl = streamMap.substring(thumbnailMatcher.start(), thumbnailMatcher.end())
                    thumbNailUrl = thumbNailUrl.replace("", "").replace("&amp;", "&")
                    Log.e("Extractor", "Thumbnail :: NULL")
                    Log.e("Extractor", "Thumbnail :: $thumbNailUrl")
                    facebookFile?.thumbNailUrl = URLDecoder.decode(thumbNailUrl, "UTF-8")
                }
            }
            if (hdVideoMatcher.find()) {
                var vUrl1 = hdVideoMatcher.group()
                vUrl1 = vUrl1.substring(8, vUrl1.length - 1) // strip the hd_src: prefix (8 chars)
                facebookFile?.hdUrl = vUrl1
                if (showLogs) {
                    Log.e("Extractor", "HD_URL :: Null")
                    Log.e("Extractor", "HD_URL :: $vUrl1")
                }
            } else {
                facebookFile?.hdUrl = null
            }
            if (imageMatcher.find()) {
                var imageUrl = streamMap.substring(imageMatcher.start(), imageMatcher.end())
                imageUrl = imageUrl.replace("", "").replace("&amp;", "&")
                Log.e("Extractor", "FILENAME :: NULL")
                Log.e("Extractor", "FILENAME :: $imageUrl")
                facebookFile?.imageUrl = URLDecoder.decode(imageUrl, "UTF-8")
            }
            if (facebookFile?.sdUrl == null && facebookFile?.hdUrl == null) {
                // Neither URL was found; nothing to report.
            }
            loaded(facebookFile!!)
        }
    } catch (e: Exception) {
        e.printStackTrace()
    }
}
I want to implement a feature where I can show different resolutions with sizes, as shown in this image.
Please note that I have tested my linkParsing method with videos that have an HD URL, but it gives only the SD URL.
This is a sample video link: https://fb.watch/aENyxV7gxs/
How can this be done? I am unable to find any proper method or GitHub library for this.
ANSWER
Answered 2022-Jan-26 at 12:11

Found a solution for this, so posting it as an answer. This can be done by extracting the page source of the video's web page, then parsing that XML and fetching the list of BASE URLs.
Steps as follows:
1- Load that specific video URL in a WebView and get the page source inside onPageFinished:
private fun webViewSetupNotLoggedIn() {
    webView?.settings?.javaScriptEnabled = true
    webView?.settings?.userAgentString = AppConstants.USER_AGENT
    webView?.settings?.useWideViewPort = true
    webView?.settings?.loadWithOverviewMode = true
    webView?.addJavascriptInterface(this, "mJava")
    webView?.post {
        run {
            webView?.loadUrl("url of your video")
        }
    }
    object : WebViewClient() {
        override fun shouldOverrideUrlLoading(view: WebView, url: String): Boolean {
            if (url == "https://m.facebook.com/login.php" || url.contains("https://m.facebook.com/login.php")) {
                webView?.loadUrl("url of your video")
            }
            return true
        }

        override fun onPageStarted(view: WebView?, url: String?, favicon: Bitmap?) {
            super.onPageStarted(view, url, favicon)
        }
    }
    webView.webChromeClient = object : WebChromeClient() {
        override fun onProgressChanged(view: WebView?, newProgress: Int) {
            super.onProgressChanged(view, newProgress)
            if (progressBarBottomSheet != null) {
                if (newProgress == 100) {
                    progressBarBottomSheet.visibility = View.GONE
                } else {
                    progressBarBottomSheet.visibility = View.VISIBLE
                }
                progressBarBottomSheet.progress = newProgress
            }
        }
    }
    webView?.webViewClient = object : WebViewClient() {
        override fun onPageFinished(view: WebView?, url: String?) {
            try {
                if (webView?.progress == 100) {
                    var original = webView?.originalUrl
                    var post_link = "url of your video"
                    if (original.equals(post_link)) {
                        var listOfResolutions = arrayListOf()
                        val progressDialog = activity?.getProgressDialog(false)
                        progressDialog?.show()
                        // Fetch resolutions from the rendered page source.
                        webView.evaluateJavascript(
                            "(function(){return window.document.body.outerHTML})();"
                        ) { value ->
                            val reader = JsonReader(StringReader(value))
                            reader.isLenient = true
                            try {
                                if (reader.peek() == JsonToken.STRING) {
                                    val domStr = reader.nextString()
                                    domStr?.let {
                                        val xmlString = it
                                        CoroutineScope(Dispatchers.Main).launch {
                                            CoroutineScope(Dispatchers.IO).async {
                                                try {
                                                    getVideoResolutionsFromPageSource((xmlString)) {
                                                        listOfResolutions = it
                                                    }
                                                } catch (e: java.lang.Exception) {
                                                    e.printStackTrace()
                                                    Log.e("Exception", e.message!!)
                                                }
                                            }.await()
                                            progressDialog?.hide()
                                            if (listOfResolutions.size > 0) {
                                                setupResolutionsListDialog(listOfResolutions)
                                            } else {
                                                Toast.makeText(
                                                    context,
                                                    "No Resolutions Found",
                                                    Toast.LENGTH_SHORT
                                                ).show()
                                            }
                                        }
                                    }
                                }
                            } catch (e: IOException) {
                                e.printStackTrace()
                            } finally {
                                reader.close()
                            }
                        }
                    }
                }
            } catch (ex: Exception) {
                ex.printStackTrace()
            }
            super.onPageFinished(view, url)
        }

        @TargetApi(android.os.Build.VERSION_CODES.M)
        override fun onReceivedError(
            view: WebView?,
            request: WebResourceRequest?,
            error: WebResourceError
        ) {
        }

        @SuppressWarnings("deprecation")
        override fun onReceivedError(
            view: WebView?,
            errorCode: Int,
            description: String?,
            failingUrl: String?
        ) {
            super.onReceivedError(view, errorCode, description, failingUrl)
        }

        override fun onLoadResource(view: WebView?, url: String?) {
            Log.e("getData", "onLoadResource")
            super.onLoadResource(view, url)
        }
    }
}
2- When the page source is fetched, parse it to get the video resolution URLs:
fun getVideoResolutionsFromPageSource(
    pageSourceXmlString: String?,
    finished: (listOfRes: ArrayList) -> Unit
) {
    // pageSourceXmlString is the page source of the web page of that specific copied video.
    // We need to find the list of Base URLs inside pageSourceXmlString.
    // Base URLs are inside an attribute named data-store, which is inside a div whose class name starts with '_53mw'.
    // We need to find that div, then read data-store, which holds a JSON as a string.
    // Parse that JSON and we get a list of adaptationset elements.
    // Each adaptationset has a list of representation tags.
    // representation is the actual element which contains the BASE URLs.
    // Note: BASE URLs have a specific attribute called mimeType.
    // mimeType is audio/mp4 for audio and video/mp4 for video, which tells us whether the URL is an audio or a video stream.
    val listOfResolutions = arrayListOf()
    if (!pageSourceXmlString?.isEmpty()!!) {
        val document: org.jsoup.nodes.Document = Jsoup.parse(pageSourceXmlString)
        val sampleDiv = document.getElementsByTag("body")
        if (!sampleDiv.isEmpty()) {
            val bodyDocument: org.jsoup.nodes.Document = Jsoup.parse(sampleDiv.html())
            val dataStoreDiv: org.jsoup.nodes.Element? = bodyDocument.select("div._53mw").first()
            val dataStoreAttr = dataStoreDiv?.attr("data-store")
            val jsonObject = JSONObject(dataStoreAttr)
            if (jsonObject.has("dashManifest")) {
                val dashManifestString: String = jsonObject.getString("dashManifest")
                val dashManifestDoc: org.jsoup.nodes.Document = Jsoup.parse(dashManifestString)
                val mdpTagVal = dashManifestDoc.getElementsByTag("MPD")
                val mdpDoc: org.jsoup.nodes.Document = Jsoup.parse(mdpTagVal.html())
                val periodTagVal = mdpDoc.getElementsByTag("Period")
                val periodDocument: org.jsoup.nodes.Document = Jsoup.parse(periodTagVal.html())
                val subBodyDiv: org.jsoup.nodes.Element? = periodDocument.select("body").first()
                subBodyDiv?.children()?.forEach {
                    val adaptionSetDiv: org.jsoup.nodes.Element? = it.select("adaptationset").first()
                    adaptionSetDiv?.children()?.forEach {
                        if (it is org.jsoup.nodes.Element) {
                            val representationDiv: org.jsoup.nodes.Element? = it.select("representation").first()
                            val resolutionDetail = ResolutionDetail()
                            if (representationDiv?.hasAttr("mimetype")!!) {
                                resolutionDetail.mimetype = representationDiv?.attr("mimetype")
                            }
                            if (representationDiv?.hasAttr("width")!!) {
                                resolutionDetail.width = representationDiv?.attr("width")?.toLong()!!
                            }
                            if (representationDiv?.hasAttr("height")!!) {
                                resolutionDetail.height = representationDiv.attr("height").toLong()
                            }
                            if (representationDiv?.hasAttr("FBDefaultQuality")!!) {
                                resolutionDetail.FBDefaultQuality = representationDiv.attr("FBDefaultQuality")
                            }
                            if (representationDiv?.hasAttr("FBQualityClass")!!) {
                                resolutionDetail.FBQualityClass = representationDiv.attr("FBQualityClass")
                            }
                            if (representationDiv?.hasAttr("FBQualityLabel")!!) {
                                resolutionDetail.FBQualityLabel = representationDiv.attr("FBQualityLabel")
                            }
                            val representationDoc: org.jsoup.nodes.Document = Jsoup.parse(representationDiv.html())
                            val baseUrlTag = representationDoc.getElementsByTag("BaseURL")
                            if (!baseUrlTag.isEmpty() && !resolutionDetail.FBQualityLabel.equals(
                                    "Source", ignoreCase = true
                                )
                            ) {
                                resolutionDetail.videoQualityURL = baseUrlTag[0].text()
                                listOfResolutions.add(resolutionDetail)
                            }
                        }
                    }
                }
            }
        }
    }
    finished(listOfResolutions)
}
class ResolutionDetail {
    var width: Long = 0
    var height: Long = 0
    var FBQualityLabel = ""
    var FBDefaultQuality = ""
    var FBQualityClass = ""
    var videoQualityURL = ""
    var mimetype = "" // [audio/mp4 for audios and video/mp4 for videos]
}
3- Pass videoQualityURL to your video download function, and the video in the selected resolution will be downloaded.
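The question also asks for sizes next to each resolution. The DASH manifest does not always carry them, but an HTTP HEAD request against each videoQualityURL usually returns a Content-Length header. A minimal sketch (the helper name is mine, not from the original answer; call it off the main thread):

import java.net.HttpURLConnection
import java.net.URL

// Hypothetical helper: fetch the approximate download size of a resolution URL
// so the picker can show entries like "720p - 24 MB".
fun fetchContentLength(urlString: String): Long {
    val conn = URL(urlString).openConnection() as HttpURLConnection
    return try {
        conn.requestMethod = "HEAD"  // headers only, no body download
        conn.connect()
        conn.contentLengthLong       // -1 if the server omits Content-Length
    } catch (e: Exception) {
        -1L
    } finally {
        conn.disconnect()
    }
}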
QUESTION
So I have a page with a grid layout, with a header and a footer and a black content container in the middle.
html, body {
  height: 100%;
  margin: 0;
  padding: 0;
}

.container {
  display: grid;
  height: 100%;
  grid-template-rows: max-content 1fr max-content;
}

.container div {
  border: 1px solid red;
}

.videoContainer {
  background-color: black;
}

video {
  width: 100%;
  height: 100%;
  object-fit: contain;
}
This is the header
This is the footer
So far so good.
Now I want to put in a video that will stretch to fit this container (and be centered). Here's an attempt with object-fit: contain:
html, body {
  height: 100%;
  margin: 0;
  padding: 0;
}

.container {
  display: grid;
  height: 100%;
  grid-template-rows: max-content 1fr max-content;
}

.container div {
  border: 1px solid red;
}

.videoContainer {
  background-color: black;
}

video {
  width: 100%;
  height: 100%;
  object-fit: contain;
}
This is the header
This is the footer
But it doesn't work. Instead of fitting the video to the container, the container expands to fit the video.
How can I keep the container at its inherent dimensions and make the video fit its container?
ANSWER
Answered 2022-Jan-21 at 00:57

1fr

The first thing you need to know is that 1fr is equivalent to minmax(auto, 1fr), meaning that the container won't be smaller than its content, by default.
So, start by replacing 1fr with minmax(0, 1fr). That will solve the overflow problem.
html, body {
  height: 100%;
  margin: 0;
  padding: 0;
}

.container {
  display: grid;
  height: 100%;
  grid-template-rows: max-content minmax(0, 1fr) max-content;
}

.container div {
  border: 1px solid red;
}

.videoContainer {
  background-color: black;
}

video {
  width: 100%;
  height: 100%;
  object-fit: contain;
}
This is the header
This is the footer
object-fit

If you want the video to actually "fit this container" (as in cover the full width and height), then try object-fit: cover as opposed to contain.
QUESTION
I'm regularly having an issue with hvc1 videos getting an inconsistent number of frames between ffprobe and FFmpeg, and I would like to know what could be the reason for this issue and whether it's possible to solve it without re-encoding the video.
I wrote the following sample script with a test video I have.
I split the video into 5-second segments; ffprobe gives the expected video length, but FFmpeg reports 3 frames less than expected on every segment but the first one.
The issue is exactly the same if I split by 10 seconds or any other interval; I always lose 3 frames.
I noted that the first segment is always 3 frames smaller (in ffprobe) than the other ones, and it's the only consistent one.
Here is an example script I wrote to test this issue:
# get total video frame number using ffprobe or ffmpeg
total_num_frames=$(ffprobe -v quiet -show_entries stream=nb_read_packets -count_packets -select_streams v:0 -print_format json test_video.mp4 | jq '.streams[0].nb_read_packets' | tr -d '"')
echo $total_num_frames
ffmpeg -hwaccel cuda -i test_video.mp4 -vsync 2 -f null -
# Check ffprobe of each segment is consistent
rm -rf clips && mkdir clips && \
ffmpeg -i test_video.mp4 -acodec copy -f segment -vcodec copy -reset_timestamps 1 -segment_time 5 -map 0 clips/part_%d.mp4
count_frames=0
for i in {0..5}
do
    num_packets=$(ffprobe -v quiet -show_entries stream=nb_read_packets -count_packets -select_streams v:0 -print_format json clips/part_$i.mp4 | jq '.streams[0].nb_read_packets' | tr -d '"')
    count_frames=$(($count_frames+$num_packets))
    echo $num_packets $count_frames $total_num_frames
done
The output is the following:
3597
ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'test_video.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2mp41
encoder : Lavf58.29.100
Duration: 00:00:59.95, start: 0.035000, bitrate: 11797 kb/s
Stream #0:0(und): Video: hevc (Main) (hvc1 / 0x31637668), yuv420p(tv, bt709), 1920x1080, 11692 kb/s, 60.01 fps, 60 tbr, 19200 tbn, 19200 tbc (default)
Metadata:
handler_name : Core Media Video
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, mono, fltp, 91 kb/s (default)
Metadata:
handler_name : Core Media Audio
Stream mapping:
Stream #0:0 -> #0:0 (hevc (native) -> wrapped_avframe (native))
Stream #0:1 -> #0:1 (aac (native) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, null, to 'pipe:':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2mp41
encoder : Lavf58.29.100
Stream #0:0(und): Video: wrapped_avframe, nv12, 1920x1080, q=2-31, 200 kb/s, 60 fps, 60 tbn, 60 tbc (default)
Metadata:
handler_name : Core Media Video
encoder : Lavc58.54.100 wrapped_avframe
Stream #0:1(und): Audio: pcm_s16le, 44100 Hz, mono, s16, 705 kb/s (default)
Metadata:
handler_name : Core Media Audio
encoder : Lavc58.54.100 pcm_s16le
frame= 3597 fps=788 q=-0.0 Lsize=N/A time=00:00:59.95 bitrate=N/A speed=13.1x
video:1883kB audio:5162kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
then
297 297 3597
300 597 3597
300 897 3597
300 1197 3597
300 1497 3597
300 1797 3597 <--- outputs are consistent based on ffprobe
But then if I check the segment size with FFmpeg using the following command:
ffmpeg -hwaccel cuda -i clips/part_$i.mp4 -vsync 2 -f null -
For part 0 it's OK:
frame= 297 fps=0.0 q=-0.0 Lsize=N/A time=00:00:04.95 bitrate=N/A speed=12.5x
video:155kB audio:424kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
For all other parts it's inconsistent; it should be 300:
frame= 297 fps=0.0 q=-0.0 Lsize=N/A time=00:00:04.95 bitrate=N/A speed=12.3x
video:155kB audio:423kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
The issue is exactly the same with any other interval size, e.g. with 10 seconds I would get the following frame counts:
ffprobe 597 - 600 ...
ffmpeg 597 597 ...
I thought it could be related to the source being VFR or CFR, but I tried to convert the input to CFR and nothing changed.
Moreover, I tried forcing a keyframe every second to check whether it was a keyframe issue, with the following arg: -force_key_frames "expr:gte(t,n_forced*1)", but the problem is exactly the same.
What am I doing wrong? It happens to me a lot with files in hvc1 and I really don't know how to deal with it.
ANSWER
Answered 2022-Jan-11 at 22:08

The source of the differences is that FFprobe counts the discarded packets, and FFmpeg doesn't count the discarded packets as frames.
Your results are consistent with a video stream that is created with 3 B-Frames (3 consecutive B-Frames for every P-Frame or I-Frame).
According to Wikipedia:
I‑frames are the least compressible but don't require other video frames to decode.
P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.
When splitting a video with P-Frames and B-Frames into segments without re-encoding, the dependency chain breaks:
- There are (almost) always frames that depend upon frames from the previous segment or the next segment.
- The above frames are kept, but the matching packets are marked as "discarded" (marked with the AV_PKT_FLAG_DISCARD flag).
For the purpose of working on the same dataset, we may build a synthetic video (to be used as input).
Building synthetic video with the following command:
ffmpeg -y -r 60 -f lavfi -i testsrc=size=384x256:rate=1 -vf "setpts=N/60/TB" -g 60 -vcodec libx265 -x265-params crf=28:bframes=3:b-adapt=0 -tag:v hvc1 -pix_fmt yuv420p -t 20 test_video.mp4
- -g 60 sets the GOP size to 60 frames (insert a key frame every 60 frames).
- bframes=3:b-adapt=0 forces 3 consecutive B-Frames.
For verifying the number of I/P/B frames, we may use FFprobe:
ffprobe -i test_video.mp4 -show_frames -show_entries frame=pict_type
The output is like:
pict_type=I
pict_type=B
pict_type=B
pict_type=B
pict_type=P
pict_type=B
pict_type=B
pict_type=B
...
Segment the video by time (5 seconds per segment):
ffmpeg -i test_video.mp4 -f segment -vcodec copy -reset_timestamps 1 -segment_time 5 clips/part_%d.mp4
FFprobe counting:
297 1497 1200
300 1797 1200
300 2097 1200
303 2400 1200
FFmpeg counting:
frame= 297
frame= 297
frame= 297
frame= 300
As you can see, the result is consistent with your output.
We may identify the "discarded" packets using FFprobe:
ffprobe -i part_1.mp4 -show_packets
Look for flags=_D. A packet with flags=_D is marked as "discarded".
Note: In a video stream every packet matches a frame.
FFprobe output begins with:
flags=K_
flags=_D
flags=_D
flags=_D
flags=__
flags=__
flags=__
...
For every middle segment, 3 packets are marked as "discarded", and that is the reason for the 3 missing frames in FFmpeg compared to FFprobe.
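To count the discarded packets per segment directly, one option is to print only the packet flags and count the _D entries; a sketch using standard ffprobe and grep options:

ffprobe -v error -select_streams v:0 -show_entries packet=flags -of csv=p=0 part_1.mp4 | grep -c '_D'

For each middle segment this should print 3, matching the 3 frames that FFmpeg skips.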
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.