rtmp | RTMP Server, RTMP Pusher, RTMP Client | HTTP library
kandi X-RAY | rtmp Summary
RTMP Server, RTMP Pusher, RTMP Client
Community Discussions
Trending Discussions on rtmp
QUESTION
I created a live stream session on instafeed.me, then used ffmpeg to send an MP4 file to the stream, but I get an IO error. The command is:
...
ANSWER
Answered 2021-Oct-02 at 00:09
Instagram apparently does not like MP3. Use AAC instead: replace -acodec libmp3lame / -c:a libmp3lame with -c:a aac.
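As a minimal sketch of the corrected command wrapped in Python (the input file name and the ingest URL/stream key are placeholders, not the asker's values):

```python
# Sketch: copy the existing H.264 video, re-encode the audio to AAC.
# "input.mp4" and the ingest URL/stream key are placeholder values.
import subprocess

subprocess.run([
    "ffmpeg", "-re", "-i", "input.mp4",
    "-c:v", "copy",                  # keep the existing video stream
    "-c:a", "aac",                   # AAC instead of libmp3lame
    "-ar", "44100", "-b:a", "128k",
    "-f", "flv", "rtmps://<ingest-host>/<stream-key>",
], check=True)
```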
QUESTION
I am trying to stream live to Facebook with the following settings:
...
ANSWER
Answered 2022-Mar-05 at 09:19
Place ...
QUESTION
I have some code in React Native as follows. As can be seen, I am calling a routine within componentDidMount that is intended to load some previously saved variables from storage:
...
ANSWER
Answered 2022-Mar-04 at 22:36
I think you have to tweak your syntax a bit to do string concatenation in your React component:
QUESTION
When calling the Get Broadcast Object REST API, the RTMP URL in the response shows a private IP instead of a public IP, like below:
"rtmpURL": "rtmp://172.58.0.1/LiveApp/test",
Is there any way we can get the public IP address?
...
ANSWER
Answered 2022-Mar-04 at 10:51
Thank you for this question. You can get the public IP by enabling the useGlobalIp setting:
- Go to /conf/red5.properties (i.e., /usr/local/antmedia/conf/red5.properties).
- Edit the properties file and set useGlobalIp=true.
- Save the settings.
- Restart AMS with sudo service antmedia restart.
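As a minimal sketch of those steps scripted in Python (assuming the default install path from the answer and permission to write the file):

```python
# Sketch: flip useGlobalIp to true in red5.properties, then restart AMS.
import re
import subprocess

conf = "/usr/local/antmedia/conf/red5.properties"

with open(conf) as f:
    text = f.read()

if re.search(r"^useGlobalIp=", text, flags=re.M):
    # Replace an existing useGlobalIp line.
    text = re.sub(r"^useGlobalIp=.*$", "useGlobalIp=true", text, flags=re.M)
else:
    # Append the key if it is missing.
    text += "\nuseGlobalIp=true\n"

with open(conf, "w") as f:
    f.write(text)

subprocess.run(["sudo", "service", "antmedia", "restart"], check=True)
```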
QUESTION
I downloaded ffmpeg through scoop, and I can use ffmpeg in PowerShell.
My script is below:
...
ANSWER
Answered 2022-Feb-07 at 02:28
I think the issue might be that zx is using the bash inside WSL. Assuming you also don't want to use ffmpeg in WSL, here is how I fixed it:
- Install Git.
- Find the bash.exe inside the installed Git. In my case, it is located at C:/Program Files/Git/usr/bin/bash.exe.
- Insert the following line after the shebang: $.shell = `Location to your bash`. In my case, it is $.shell = `C:/Program Files/Git/usr/bin/bash.exe`;.
I am fairly sure this works with other distributions of bash for Windows, but I have not tested them.
QUESTION
From the SRS how-to-transmux-HLS wiki, we know SRS generates the corresponding M3U8 playlist in hls_path. Here is my config file:
...
ANSWER
Answered 2022-Jan-31 at 16:53
As you use OriginCluster, you must have lots of streams to serve, with lots of encoders publishing streams to your media servers. The keys to solving the problem:
- Never use a single server; use a cluster for elastic scaling, because you might get many more streams in the future. Forwarding is not good here, because you would have to configure a special set of streams to forward to, similar to a manual hash algorithm.
- Besides bandwidth, disk IO is also a bottleneck. You definitely need a high-performance network storage cluster. But be careful: never let SRS write directly to that storage, as it will block the SRS coroutine.
So the best solution, as far as I know, is to:
- Use the SRS Origin Cluster to write HLS to your local disk, or better, a RAM disk, to make sure disk IO never blocks the SRS coroutine (which is driven by state-threads network IO).
- Use a network storage cluster to store the HLS files, for example cloud storage like AWS S3, or NFS/K8s PV/a distributed file system, whatever. Use nginx or a CDN to deliver the HLS.
Now the problem is: how do you move data from memory/disk to the network storage cluster? You must build a service, in Python or Go, that does the following (a sketch follows this answer):
- Use the on_hls callback to notify your service to move the HLS files.
- Use the on_publish callback to notify your service to start FFmpeg to convert RTMP to HLS.
Note that FFmpeg should pull the stream from an SRS edge, never from the origin server directly.
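As a minimal sketch of such a mover service (the callback payload field names, the port, and the destination path are assumptions to verify against your SRS version, not values from the answer):

```python
# Sketch: listen for SRS on_hls callbacks and copy each finished HLS segment
# from local/RAM disk to network storage. Field names ("action", "file") and
# the destination are assumptions; check your SRS version's callback payload.
import json
import shutil
from http.server import BaseHTTPRequestHandler, HTTPServer

DEST = "/mnt/network-storage/hls"  # e.g. an NFS mount or an S3 staging dir

class HlsCallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        event = json.loads(body)
        if event.get("action") == "on_hls":
            # Move the segment SRS just finished writing to network storage.
            shutil.copy(event["file"], DEST)
        # SRS treats a "0" response body as success.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"0")

HTTPServer(("0.0.0.0", 8085), HlsCallbackHandler).serve_forever()
```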
QUESTION
From the wiki, coworkers means "The HTTP APIs of other origin servers in the cluster". In our origin cluster, I configured it like this:
...
ANSWER
Answered 2022-Jan-31 at 16:53
For OriginCluster, a set of origin servers works as a cluster to provide service to edge servers, like this:
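The asker's config was truncated above, so here is a rough sketch of a coworkers setup in the style of the SRS OriginCluster wiki (the addresses and ports are illustrative assumptions):

```
# Sketch of one origin's srs.conf in an origin cluster (illustrative values).
listen              19350;
http_api {
    enabled         on;
    listen          9090;          # this origin's HTTP API
}
vhost __defaultVhost__ {
    cluster {
        origin_cluster  on;
        # HTTP APIs of the OTHER origin servers in the cluster:
        coworkers       127.0.0.1:9091 127.0.0.1:9092;
    }
}
```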
QUESTION
I use this code, but the picture is not visible:
os.system(f'ffmpeg -stream_loop -1 -re -i {dir_path}/maxresdefault.jpg -re -i "{url}" -c:v libx264 -preset superfast -b:v 2500k -bufsize 3000k -maxrate 5000k -c:a aac -ar 44100 -b:a 128k -pix_fmt yuv420p -f flv rtmp://a.rtmp.youtube.com/live2/###########')
This is the main part of the code:
ANSWER
Answered 2022-Jan-18 at 23:16
It looks like -stream_loop is not working as it is supposed to (I don't know the reason).
- Add -f image2 before -i im001.jpg to force FFmpeg to read the input as an image (it is supposed to be the default, but I think there is an issue when using -stream_loop).
- Remove the -re from the video input (I think there is an issue when using -re with -stream_loop).
- Add -r 1 to set the frame rate of the input to 1 Hz (it is not a must, but the default frame rate is too high for a single image).
- Add -vn before the audio input to make sure FFmpeg doesn't decode the video from the URL (in case there is a video stream).
- I added -vf "setpts=N/1/TB" -af "asetpts=PTS-STARTPTS" -map:v 0:v -map:a 1:a for correcting the timestamps and the mapping (we probably don't need it). The 1 in "setpts=N/1/TB" is the video frame rate.
- I added -bsf:v dump_extra (I think we need it for FLV streaming).
- Add -shortest -fflags +shortest to quit when the audio ends.
I don't know how to stream it to YouTube.
I used localhost and FFplay as a listener (it takes some time for the video to appear).
Here is the complete code sample:
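The sample itself was not captured in this page, so below is a reconstruction assembled from the bullet points above, in the asker's os.system style (the image name, audio URL, and localhost RTMP target are placeholders):

```python
# Reconstruction of the fixed command from the bullets above (not the
# answer's original sample). Inputs and the RTMP target are placeholders.
import os

img = "maxresdefault.jpg"
url = "https://example.com/audio"            # placeholder audio input
out = "rtmp://127.0.0.1:1935/live/test"      # localhost listener, not YouTube

os.system(
    f'ffmpeg -f image2 -stream_loop -1 -r 1 -i {img} '
    f'-vn -re -i "{url}" '
    f'-vf "setpts=N/1/TB" -af "asetpts=PTS-STARTPTS" -map:v 0:v -map:a 1:a '
    f'-c:v libx264 -preset superfast -pix_fmt yuv420p '
    f'-c:a aac -ar 44100 -b:a 128k '
    f'-bsf:v dump_extra -shortest -fflags +shortest '
    f'-f flv {out}'
)
```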
QUESTION
I'm trying to stream with FFmpeg, including audio.
I will show my code below:
Import module ...
ANSWER
Answered 2021-Dec-24 at 21:58
Assuming you actually need to use OpenCV for the video, you have to add the audio directly to FFmpeg, as Gyan commented, because OpenCV does not support audio. The -re argument is probably required for live streaming.
For testing, I modified the RTMP URL from YouTube to localhost.
FFplay sub-process is used for capturing the stream (for testing).
Complete code sample:
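The sample itself was not captured here; the following sketch shows the pattern the answer describes, with OpenCV feeding raw frames to FFmpeg's stdin while FFmpeg reads the audio directly (file names, frame size, and the RTMP URL are placeholders):

```python
# Sketch: OpenCV writes raw BGR frames to FFmpeg's stdin; FFmpeg pulls the
# audio itself because OpenCV has no audio support. Values are placeholders.
import subprocess
import cv2

width, height, fps = 1280, 720, 30
rtmp_url = "rtmp://127.0.0.1:1935/live/test"   # localhost for testing

ffmpeg = subprocess.Popen([
    "ffmpeg", "-y",
    "-f", "rawvideo", "-pix_fmt", "bgr24",
    "-s", f"{width}x{height}", "-r", str(fps),
    "-i", "pipe:0",                  # video: raw frames from OpenCV
    "-re", "-i", "audio.mp3",        # audio: read directly by FFmpeg
    "-map", "0:v", "-map", "1:a",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    "-c:a", "aac",
    "-f", "flv", rtmp_url,
], stdin=subprocess.PIPE)

cap = cv2.VideoCapture("input.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (width, height))  # match the declared raw size
    ffmpeg.stdin.write(frame.tobytes())

ffmpeg.stdin.close()
ffmpeg.wait()
```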
QUESTION
I'm trying to make a continuous livestream of videos downloaded via yt-dlp. I need to port this (working) bash command into Python.
...
ANSWER
Answered 2021-Nov-07 at 18:06
Closing the stdin pipe is required for "pushing" the sub-process's remaining (buffered) data to the stdout pipe. For example, add encoder_p.stdin.close() after you finish writing all the data to encoder_p.stdin.
I don't understand how your code is working; on my machine, it gets stuck at encoder_buf = encoder_p.stdout.read(COPY_BUFSIZE). I solved the problem using a "writer thread". The "writer thread" reads data from yt_dlp_p and writes it to encoder_p.stdin.
Note: in your specific case, it could work without a thread (because the data is just passed through FFmpeg, not re-encoded), but usually the encoded data is not ready right after writing the input to FFmpeg.
My code sample uses an FFplay sub-process for playing the video (we need the video player because RTMP streaming requires a "listener" in order to keep streaming). Here is a complete code sample:
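The sample was not captured here; the following sketch shows the writer-thread pattern the answer describes (the yt-dlp URL and the downstream consumer are placeholders):

```python
# Sketch of the "writer thread" fix: one thread pumps yt-dlp's stdout into
# FFmpeg's stdin (closing it when done), while the main thread keeps reading
# FFmpeg's stdout so neither pipe fills up and blocks.
import shutil
import subprocess
import threading

COPY_BUFSIZE = 65536

yt_dlp_p = subprocess.Popen(
    ["yt-dlp", "-o", "-", "https://example.com/watch?v=PLACEHOLDER"],
    stdout=subprocess.PIPE)

encoder_p = subprocess.Popen(
    ["ffmpeg", "-i", "pipe:0", "-c", "copy", "-f", "flv", "pipe:1"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE)

def writer():
    # Copy everything from yt-dlp to FFmpeg, then close stdin so FFmpeg
    # flushes its remaining buffered output and can exit.
    shutil.copyfileobj(yt_dlp_p.stdout, encoder_p.stdin, COPY_BUFSIZE)
    encoder_p.stdin.close()

threading.Thread(target=writer, daemon=True).start()

while True:
    encoder_buf = encoder_p.stdout.read(COPY_BUFSIZE)
    if not encoder_buf:
        break
    # ... forward encoder_buf to the RTMP pusher / next consumer ...
```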
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported