rtsp-stream | box solution for RTSP - HLS live stream transcoding | Video Utils library
kandi X-RAY | rtsp-stream Summary
Trending Discussions on rtsp-stream
QUESTION
Description:
I have an API (ASP.NET 5) which connects to an IP camera through RTSP. The camera sends an H.264 stream, which is converted with FFmpeg into an m3u8 stream and returned to the Angular client as follows:
public async Task<IActionResult> GetCameraH264Stream()
{
string deviceIp = "rtsp://[CAMERA_IP]/";
string recordingUri = "rtsp://[USER:PASSWORD]@[CAMERA_IP]/axis-media/media.amp";
string output = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".m3u8");
var mediaInfo = await FFmpeg.GetMediaInfo(recordingUri);
var conversionResult = FFmpeg.Conversions.New()
.AddStream(mediaInfo.Streams)
.SetOutput(output)
.Start();
// Allow any Cors
Response.Headers.Add("Access-Control-Allow-Origin", "*");
Response.Headers.Add("Cache-Control", "no-cache");
// Open the file, and read the stream to return to the client
FileStreamResult result = new FileStreamResult(System.IO.File.Open(output, FileMode.Open, FileAccess.Read, FileShare.Read), "application/octet-stream");
result.EnableRangeProcessing = true;
return result;
}
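(For reference, the Xabe.FFmpeg calls above should boil down to roughly the following ffmpeg invocation; the output path is a placeholder, and ffmpeg picks the HLS muxer from the .m3u8 extension.)
ffmpeg -i "rtsp://[USER:PASSWORD]@[CAMERA_IP]/axis-media/media.amp" output.m3u8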
If I call this method directly, the browser downloads a file, which I can read with VLC.
In my Angular app, I have this component:
app-vjs-player:
@Component({
selector: 'app-vjs-player',
template: '<video #target class="video-js"></video>',
encapsulation: ViewEncapsulation.None,
})
export class VjsPlayerComponent implements OnInit, OnDestroy {
@ViewChild('target', {static: true}) target: ElementRef;
@Input() options: {
fluid: boolean,
aspectRatio: string,
autoplay: boolean,
sources: {
src: string,
type: string,
}[],
vhs: {
overrideNative: true
},
};
player: videojs.Player;
constructor(
private elementRef: ElementRef,
) { }
ngOnInit() {
// instantiate Video.js
this.player = videojs(this.target.nativeElement, this.options, function onPlayerReady() {
console.log('onPlayerReady', this);
});
}
ngOnDestroy() {
// destroy player
if (this.player) {
this.player.dispose();
}
}
}
This component is used like this:
TS:
playerOptions = {
fluid: false,
aspectRatio: "16:9",
autoplay: false,
sources: [{
src: 'https://localhost:44311/api/GetCameraH264Stream',
type: 'application/x-mpegURL',
}],
}
HTML:
<app-vjs-player [options]="playerOptions"></app-vjs-player>
Problem
All this seems to work pretty well, until video.js throws this error when the API returns the stream:
ERROR: (CODE:4 MEDIA_ERR_SRC_NOT_SUPPORTED) The media could not be loaded, either because the server or network failed or because the format is not supported
When I open the network dev tools, the request status is "Canceled", but I don't know if video.js cancels it because the file stream can't be read, or if it is because of the way the API returns the stream.
Any idea?
Source
Forwarding RTSP stream from IP Camera to Browser in ASP.NET Core
EDIT
- I tried to limit the resolution and the bitrate, but I can't configure the camera like that; there are other applications using it. The camera does not have any streaming URL allowing this configuration.
- I have been able to get an image from my code after changing the content type of the API response. I changed:
FileStreamResult result = new FileStreamResult(System.IO.File.Open(output, FileMode.Open, FileAccess.Read, FileShare.Read), "application/octet-stream");
to
FileStreamResult result = new FileStreamResult(System.IO.File.Open(output, FileMode.Open, FileAccess.Read, FileShare.Read), "application/x-mpegURL");
With this the first packet is displayed, but the next requests are still canceled.
ANSWER
Answered 2022-Jan-04 at 10:49
The change to the response ContentType works (see the last edit on the question).
It seems that the canceled requests were due to the slow network. All the code above works as is, except for the last modification (application/octet-stream => application/x-mpegURL). Here is the updated API method:
public async Task<IActionResult> GetCameraH264Stream()
{
string deviceIp = "rtsp://[CAMERA_IP]/";
string recordingUri = "rtsp://[USER:PASSWORD]@[CAMERA_IP]/axis-media/media.amp";
string output = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".m3u8");
var mediaInfo = await FFmpeg.GetMediaInfo(recordingUri);
var conversionResult = FFmpeg.Conversions.New()
.AddStream(mediaInfo.Streams)
.SetOutput(output)
.Start();
// Allow any Cors
Response.Headers.Add("Access-Control-Allow-Origin", "*");
Response.Headers.Add("Cache-Control", "no-cache");
// Open the file, and read the stream to return to the client
FileStreamResult result = new FileStreamResult(System.IO.File.Open(output, FileMode.Open, FileAccess.Read, FileShare.Read), "application/x-mpegURL");
result.EnableRangeProcessing = true;
return result;
}
EDIT
It seems that the code above will create an ffmpeg.exe process each time a request is made. This process will never end, as this is a stream from a camera that never ends. I don't know how to kill the ffmpeg process yet, but I have modified the stream conversion retrieval so it uses an existing ffmpeg process for the stream if one already exists:
public async Task<IActionResult> GetCameraH264Stream()
{
string deviceIp = "rtsp://[CAMERA_IP]/";
string streamingUri = "rtsp://[USER:PASSWORD]@[CAMERA_IP]/axis-media/media.amp";
if (!this.cache.GetCache("camstream").TryGetValue(streamingUri, out object output))
{
output = Path.Combine(Path.GetTempPath(), Guid.NewGuid() + ".m3u8");
var mediaInfo = await FFmpeg.GetMediaInfo(streamingUri);
var conversionResult = FFmpeg.Conversions.New()
.AddStream(mediaInfo.Streams)
.SetOutput((string) output)
.Start();
this.cache.GetCache("camstream").Set(streamingUri, output);
// Delay until the file is created
while (!System.IO.File.Exists((string)output))
{
await Task.Delay(100);
}
}
// Allow any Cors
Response.Headers.Add("Access-Control-Allow-Origin", "*");
Response.Headers.Add("Cache-Control", "no-cache");
// Open the file, and read the stream to return to the client
FileStreamResult result = new FileStreamResult(System.IO.File.Open((string)output, FileMode.Open, FileAccess.Read, FileShare.Read), "application/x-mpegURL");
result.EnableRangeProcessing = true;
return result;
}
And for the .ts files:
private async Task<IActionResult> GetCameraH264StreamTSFile(string tsFileName)
{
string output = Path.Combine(Path.GetTempPath(), tsFileName);
Response.Headers.Add("Access-Control-Allow-Origin", "*");
return File(System.IO.File.OpenRead(output), "application/octet-stream", enableRangeProcessing: true);
}
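For context, the .m3u8 file that ffmpeg writes is just a text playlist referencing segment files, roughly like the sketch below (segment names and durations are illustrative only); this is why the player keeps coming back for the .ts files served by the method above.
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:4.000000,
output0.ts
#EXTINF:4.000000,
output1.ts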
QUESTION
I'm currently working on a remotely controlled robot that is sending two camera streams from a Jetson Nano to a PC/Android Phone/VR Headset.
I've been able to create a stable link between the robot and PC using gst-rtsp-server running this pipeline:
./test-launch "nvarguscamerasrc sensor-id=1 ! video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1,format=NV12 ! nvvidconv flip-method=2 ! omxh264enc iframeinterval=15 ! h264parse ! rtph264pay name=pay0 pt=96"
And receiving it on PC using:
gst-launch-1.0 -v rtspsrc location=rtspt://192.168.1.239:8554/test ! application/x-rtp, payload=96 ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
On PC, there's an excellent latency of about ~120ms, so I thought there wouldn't be a problem running the same thing on Android. Using GStreamer's prebuilt binaries from here, and a modification from here to be able to use rtspsrc, I've successfully managed to receive the RTSP stream. But this time the video is "slowed down" (probably some buffer problem, or HW acceleration?).
I worked my way around that by using the latency=150 drop-on-latency=true parameters of rtspsrc, which only keeps the frames with lower latency, but as expected the resulting image quality is trash.
So my question is: why is there such a difference between a phone and a PC receiving the stream?
It seems that gst-rtsp-server defaults to sending via TCP, which I tried to configure with gst_rtsp_media_factory_set_protocols(factory, GST_RTSP_LOWER_TRANS_UDP_MCAST), but doing that I can no longer receive the stream, even on a PC with the same pipeline.
Is there a way to force gst-rtsp-server to send via UDP? Or is there a way to optimize the phone's decoding performance so it runs as quickly as the PC does? (I have a Galaxy S10+, so I guess it should be able to handle that.)
My goal is clear video on the Android/VR headset with minimal latency (preferably the same ~120ms as on the PC).
ANSWER
Answered 2022-Feb-16 at 20:31
The RTSP server uses TCP because your client query asked for it by using rtspt, where the trailing t requests TCP transport. Just using rtsp instead should use UDP. You may have a look at the protocols property of rtspsrc for more details.
Full story is in the comments here and continued to solution here: Gstreamer Android HW accelerated H.264 encoding
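For illustration only (this is not from the original answer), a minimal PyGObject sketch of the receiving side that requests UDP transport via the protocols property might look like this; the stream URL is the one from the question, and the latency value is an assumption:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)
# Plain rtsp:// (not rtspt://) plus protocols=udp asks the server for UDP transport.
pipeline = Gst.parse_launch(
    'rtspsrc location=rtsp://192.168.1.239:8554/test protocols=udp latency=150 ! '
    'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=false'
)
pipeline.set_state(Gst.State.PLAYING)
# Block until an error or end-of-stream message arrives, then shut down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)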
QUESTION
How can I send all webcams to be collected by one server? For example:
There are pc_1, pc_2, ..., pc_n sending their camera view to some Ubuntu server where I can connect with ssh name@ip_adress, and all the PCs have Windows on them.
I looked at "Sending live video frame over network in python opencv", but this worked only on localhost.
Secondly, I looked at "Forward RTSP stream to remote socket (RTSP Proxy?)" but couldn't figure out how to apply it to my situation.
ANSWER
Answered 2022-Jan-28 at 22:52
Each IPC (IP camera) is an RTSP server; it allows you to pull/play an RTSP stream from it:
IPC ---RTSP--> Client(Player/FFmpeg/OBS/VLC etc.)
And because it's an internal IPC whose IP is on the intranet, the client should be in the same intranet; that's why it works only on localhost and the like.
Rather than pulling from the internet client, which does not work, you could forward the stream to an internet server, like this:
IPC ---RTSP--> Client --RTMP--> Internet Server(SRS/Nginx etc.)
For example, use FFmpeg as the Client to do this; please replace the xxx with your internet server:
ffmpeg -i "rtsp://user:password@ip" -c:v libx264 -f flv rtmp://xxx/live/stream
Note: You can quickly deploy an internet server with srs-droplet-template in 3 minutes, without any CLI work or knowledge about media servers.
Then you can play the stream with any client and any protocol, like PC/H5 via HTTP-FLV/HLS/WebRTC, or mobile iOS/Android via HTTP-FLV/HLS; please read this post.
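If the forwarding should run unattended on each Windows PC from the question, a minimal Python wrapper around the ffmpeg command above might look like the sketch below; the camera credentials and server address are placeholders, not values from the original thread:

import subprocess

# Placeholders: each PC's local camera URL and your internet server's RTMP endpoint.
CAMERA_URL = "rtsp://user:password@192.168.1.10:554/stream"
SERVER_URL = "rtmp://your-server.example.com/live/pc_1"

# Pull the local RTSP feed and push it to the server over RTMP,
# mirroring the ffmpeg command shown in the answer.
subprocess.run(
    ["ffmpeg", "-i", CAMERA_URL, "-c:v", "libx264", "-f", "flv", SERVER_URL],
    check=True,
)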
QUESTION
I'm working on a project that takes individual images from an RTSP stream and manipulates them (drawing bounding boxes). Those images should be re-streamed (h264 encoded) as a separate RTSP stream at another address and shouldn't be saved to the local disk.
My current code so far is:
{
// OpenCV VideoCapture: Sample RTSP-Stream
var capture = new VideoCapture("rtsp://195.200.199.8/mpeg4/media.amp");
capture.Set(VideoCaptureProperties.FourCC, FourCC.FromFourChars('M', 'P', 'G', '4'));
var mat = new Mat();
// LibVlcSharpStreamer
Core.Initialize();
var libvlc = new LibVLC();
var player = new MediaPlayer(libvlc);
player.Play();
while (true)
{
if (capture.Grab())
{
mat = capture.RetrieveMat();
// Do some manipulation in here
var media = new Media(libvlc, new StreamMediaInput(mat.ToMemoryStream(".jpg")));
media.AddOption(":no-audio");
media.AddOption(":sout=#transcode{vcodec=h264,fps=10,vb=1024,acodec=none}:rtp{mux=ts,sdp=rtsp://192.168.xxx.xxx:554/video}");
media.AddOption(":sout-keep");
player.Media = media;
// Display screen
Cv2.ImShow("image", mat);
Cv2.WaitKey(1);
}
}
}
It is a little bit messy because of testing purposes, but it works if I just use the given RTSP stream as the Media instead of the fetched images. I had some success with piping the images (as bytes) into the cvlc command line (python get_images.py | cvlc -v --demux=rawvideo --rawvid-fps=25 --rawvid-chroma=RV24 --sout '#transcode{vcodec=h264,fps=25,vb=1024,acodec=none}:rtp{sdp="rtsp://:554/video"}'), but it should be integrated in C#. get_images.py just reads images in a while loop, writes a text on them and forwards them to stdout.
My thought on solving this problem is to input the images via the StreamMediaInput class and to dynamically change the media whenever a new image has been retrieved. But it doesn't work; nothing can be seen with VLC or FFplay. Has someone faced a similar problem? How can the StreamMediaInput object be changed dynamically, such that new images are broadcast correctly?
Thank you for taking the time to read this post. Have a nice day!
EDIT:
I tried to implement my own MediaInput class (very similar to StreamMediaInput) with the modification of UpdateMemoryStream(). The MemoryStream gets updated with every newly retrieved image, but read() will not get called a second time (read() is called once per Medium). I am trying to implement the blocking read(), but I am struggling to find a good way of implementing it. The code so far is:
EDIT 2:
I decided to implement the blocking with a ManualResetEvent, which blocks the read() if the Position is at the end of the Stream. Furthermore, the read is looped in a while loop to keep the data in the stream updated. It still does not work. My code so far:
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using LibVLCSharp.Shared;
namespace LibVlcSharpStreamer
{
///
/// A implementation that reads from a .NET stream
///
public class MemoryStreamMediaInput : MediaInput
{
private Stream _stream;
private ManualResetEvent manualResetEvent = new ManualResetEvent(false);
#if NET40
private readonly byte[] _readBuffer = new byte[0x4000];
#endif
///
/// Initializes a new instance of , which reads from the given .NET stream.
///
/// You are still responsible to dispose the stream you give as input.
/// The stream to be read from.
public MemoryStreamMediaInput(Stream stream)
{
_stream = stream ?? throw new ArgumentNullException(nameof(stream));
CanSeek = stream.CanSeek;
}
///
/// Initializes a new instance of , which reads from the given .NET stream.
///
/// You are still responsible to dispose the stream you give as input.
/// The stream to be read from.
public void UpdateMemoryStream(Stream stream)
{
stream.CopyTo(_stream);
_stream.Position = 0;
manualResetEvent.Set();
manualResetEvent.Reset();
Console.WriteLine("released");
}
///
/// LibVLC calls this method when it wants to open the media
///
/// This value must be filled with the length of the media (or ulong.MaxValue if unknown)
/// true if the stream opened successfully
public override bool Open(out ulong size)
{
try
{
try
{
size = (ulong)_stream.Length;
}
catch (Exception)
{
// byte length of the bitstream or UINT64_MAX if unknown
size = ulong.MaxValue;
}
if (_stream.CanSeek)
{
_stream.Seek(0L, SeekOrigin.Begin);
}
return true;
}
catch (Exception)
{
size = 0UL;
return false;
}
}
///
/// LibVLC calls this method when it wants to read the media
///
/// The buffer where read data must be written
/// The buffer length
/// strictly positive number of bytes read, 0 on end-of-stream, or -1 on non-recoverable error
public unsafe override int Read(IntPtr buf, uint len)
{
try
{
while (_stream.CanSeek)
{
if (_stream.Position >= _stream.Length)
{
manualResetEvent.WaitOne();
}
var read = _stream.Read(new Span<byte>(buf.ToPointer(), (int)Math.Min(len, int.MaxValue)));
// Debug Purpose
Console.WriteLine(read);
}
return -1;
}
catch (Exception)
{
return -1;
}
}
///
/// LibVLC calls this method when it wants to seek to a specific position in the media
///
/// The offset, in bytes, since the beginning of the stream
/// true if the seek succeeded, false otherwise
public override bool Seek(ulong offset)
{
try
{
_stream.Seek((long)offset, SeekOrigin.Begin);
return true;
}
catch (Exception)
{
return false;
}
}
///
/// LibVLC calls this method when it wants to close the media.
///
public override void Close()
{
try
{
if (_stream.CanSeek)
_stream.Seek(0, SeekOrigin.Begin);
}
catch (Exception)
{
// ignored
}
}
}
}
I have marked the spot in the code, where the blocking clause would fit well, in my opinion.
ANSWER
Answered 2021-Aug-31 at 13:23
Because you are creating a new Media for each frame, you won't be able to stream it as a single stream.
What you could do is create an MJPEG stream: put .jpg images one after another into a single stream, and use that stream with LibVLCSharp to stream it.
However, if LibVLCSharp reads the data from your memory stream faster than you write data to it, it will detect the end of the file and will stop the playback/streaming (a Read() call that returns no data is considered the end of the file). To avoid that, the key is to "block" the Read() call until there is actually data to read. Blocking the call is not a problem, as it happens on the VLC thread.
The default MemoryStream/StreamMediaInput won't let you block the Read() call, and you would need to write your own Stream implementation or your own MediaInput implementation.
Here are a few ideas to block the Read call:
- Use a BlockingCollection to push Mat instances to the stream input. BlockingCollection has a Take() method that blocks until there is actually some data to read
- Use a ManualResetEvent to signal when data is available to be read (it has a Wait() method)
It would be easier to talk about that on the LibVLC Discord, feel free to join!
If you manage to do that, please share your code as a new libvlcsharp-sample project!
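To make the blocking-Read idea concrete, here is a minimal illustration of the pattern in Python (the question itself is C#/LibVLCSharp, so this is only a sketch of the concept, not LibVLCSharp API code; the class and method names are invented for the example):

import queue

class BlockingFrameSource:
    """Illustrates 'block the read until data is available' so the consumer
    never sees an empty read, which a player would treat as end of stream."""

    def __init__(self):
        self._frames = queue.Queue()

    def push(self, jpeg_bytes):
        # Producer side: called whenever a new encoded frame is ready.
        self._frames.put(jpeg_bytes)

    def read(self):
        # Consumer side: blocks until a frame has been pushed.
        return self._frames.get()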
QUESTION
I use Python and opencv-python to capture frames from a video, then use an ffmpeg command to push an RTSP stream through a pipe. I can play the RTSP stream via GStreamer and VLC. However, a display device cannot decode and play the RTSP stream because it does not receive SPS and PPS frames. Using Wireshark to capture the stream, I found that it doesn't send SPS and PPS NAL units, only IDR frames.
The key codes are as follows.
# ffmpeg command
command = ['ffmpeg',
'-re',
'-y',
'-f', 'rawvideo',
'-vcodec', 'rawvideo',
'-pix_fmt', 'bgr24',
'-s', "{}x{}".format(width, height),
'-r', str(fps),
'-i', '-',
'-c:v', 'libx264',
'-preset', 'ultrafast',
'-f', 'rtsp',
'-flags', 'local_headers',
'-rtsp_transport', 'tcp',
'-muxdelay', '0.1',
rtsp_url]
p = sp.Popen(command, stdin=sp.PIPE)
while (cap.isOpened()):
    ret, frame = cap.read()
    if not ret:
        cap = cv2.VideoCapture(video_path)
        continue
    p.stdin.write(frame.tobytes())
Maybe I am missing some options in the ffmpeg command?
ANSWER
Answered 2021-Dec-01 at 14:40
Try adding the arguments '-bsf:v', 'dump_extra'.
According to FFmpeg Bitstream Filters Documentation:
dump_extra
Add extradata to the beginning of the filtered packets except when said packets already exactly begin with the extradata that is intended to be added.
The filter is supposed to add SPS and PPS NAL units to every key frame.
Here is a complete code sample:
import subprocess as sp
import cv2
rtsp_url = 'rtsp://localhost:31415/live.stream'
video_path = 'input.mp4'
# We have to start the server up first, before the sending client (when using TCP). See: https://trac.ffmpeg.org/wiki/StreamingGuide#Pointtopointstreaming
ffplay_process = sp.Popen(['ffplay', '-rtsp_flags', 'listen', rtsp_url]) # Use FFplay sub-process for receiving the RTSP video.
cap = cv2.VideoCapture(video_path)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) # Get video frames width
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) # Get video frames height
fps = int(cap.get(cv2.CAP_PROP_FPS)) # Get video framerate
# FFmpeg command
command = ['ffmpeg',
'-re',
'-y',
'-f', 'rawvideo',
'-vcodec', 'rawvideo',
'-pix_fmt', 'bgr24',
'-s', "{}x{}".format(width, height),
'-r', str(fps),
'-i', '-',
'-c:v', 'libx264',
'-preset', 'ultrafast',
'-f', 'rtsp',
#'-flags', 'local_headers',
'-rtsp_transport', 'tcp',
'-muxdelay', '0.1',
'-bsf:v', 'dump_extra',
rtsp_url]
p = sp.Popen(command, stdin=sp.PIPE)
while (cap.isOpened()):
    ret, frame = cap.read()
    if not ret:
        break
    p.stdin.write(frame.tobytes())
p.stdin.close() # Close stdin pipe
p.wait() # Wait for FFmpeg sub-process to finish
ffplay_process.kill() # Forcefully close FFplay sub-process
Notes:
- '-flags', 'local_headers' are not valid arguments in my version of FFmpeg.
- I don't know how to verify my solution, so I could be wrong...
QUESTION
I want to capture an RTSP stream from a live cam, which I then want to re-stream to another RTSP server. Basically, my computer will work as a relay server using FFmpeg.
I have tried this temporary command, but I cannot get it working, i.e.
ffmpeg.exe -i rtsp://InputIPAddress:554/mystream -preset medium -vcodec libx264 -tune zerolatency -f rtsp -rtsp_transport tcp rtsp://localhost:8554/mysecondstream
I have then tried, for testing purposes, using FFplay to watch the stream from localhost as follows:
ffplay rtsp://localhost:8554/mysecondstream
but no luck.
Anyone who can help me out? Thanks.
ANSWER
Answered 2021-Mar-15 at 16:25
Well, I found that this one works:
ffmpeg -rtsp_transport tcp -i "rtsp://123.123.123.123:554/mystream1" -rtsp_transport tcp -c:v copy -f rtsp rtsp://234.234.234.234:554/mystream2
Works even on an Android phone where I have FFmpeg running. However, I am not really pleased with it. I hope I can improve it further.
EDIT: adding "-use_wallclock_as_timestamps 1" makes the stream stable.
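Combining both, the full command would presumably be (untested; the original post does not show the combined form):
ffmpeg -use_wallclock_as_timestamps 1 -rtsp_transport tcp -i "rtsp://123.123.123.123:554/mystream1" -rtsp_transport tcp -c:v copy -f rtsp rtsp://234.234.234.234:554/mystream2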
QUESTION
Currently I'm able to stream to YouTube using this library: https://github.com/pedroSG94/rtmp-rtsp-stream-client-java with the Android mobile camera.
When I tried with a USB camera attached to the mobile, I could achieve the same using this library: https://github.com/pedroSG94/Stream-USB-test
What I need now is to use the mobile's microphone for audio and the USB camera for video for the RTMP streaming to YouTube. Please suggest some solutions.
ANSWER
Answered 2021-Feb-08 at 07:30
You can use DroidCam OBS with OBS (camera only, not sure), but you can also use Iriun Webcam for both audio and camera.
QUESTION
What I am trying to do is save an RTSP stream to a file with some text overlay (so copy is not an option) on a Raspberry Pi. I tried using FFmpeg, but even with ultrafast settings the CPU load is way too high. Is there a faster encoding method or a completely different approach that I am missing?
ffmpeg -rtsp_transport tcp -i rtsp://x:y@ip/stream1 -vcodec libx264 -preset ultrafast -crf 0 -segment_time 3600 -t 3600 -f segment -y -strftime 1 -vf drawtext="fontcolor=white:fontsize=30:text='%{localtime}'",drawtext="fontcolor=white:fontsize=30:textfile=text.txt:x=600" /home/pi/NAS1/Elements/Videos/%Y-%m-%d_%H-%M-%S_file.mp4
ANSWER
Answered 2020-Dec-02 at 02:04
Use the hardware encoder, i.e.
ffmpeg -codec:v h264_omx -b:v 2048k
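Applied to the original command, that would presumably look something like the line below (untested; h264_omx does not take -crf, so a bitrate replaces it, and on newer Raspberry Pi OS builds the hardware encoder may instead be named h264_v4l2m2m):
ffmpeg -rtsp_transport tcp -i rtsp://x:y@ip/stream1 -codec:v h264_omx -b:v 2048k -segment_time 3600 -t 3600 -f segment -y -strftime 1 -vf drawtext="fontcolor=white:fontsize=30:text='%{localtime}'",drawtext="fontcolor=white:fontsize=30:textfile=text.txt:x=600" /home/pi/NAS1/Elements/Videos/%Y-%m-%d_%H-%M-%S_file.mp4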
QUESTION
I'm streaming H.264 video and AAC audio over RTMP on Android using the native MediaCodec APIs. Video and audio look great; however, while the video is shot in portrait mode, playback on the web or with VLC is always in landscape.
Having read through the h264 spec, I see that this sort of extra metadata can be specified in Supplemental Enhancement Information (SEI), and I've gone about adding it to the raw h264 bit stream. My SEI NAL unit for this follows this rudimentary format; I plan to optimize later:
val displayOrientationSEI = {
val prefix = byteArrayOf(0, 0, 0, 1)
val nalHeader = byteArrayOf(6) // forbidden_zero_bit:0; nal_ref_idc:0; nal_unit_type:6
val display = byteArrayOf(47 /* Display orientation type*/, 3 /*payload size*/)
val displayOrientationCancelFlag = "0" // u(1); Rotation information follows
val horFlip = "1" // hor_flip; u(1); Flip horizontally
val verFlip = "1" // ver_flip; u(1); Flip vertically
val anticlockwiseRotation = "0100000000000000" // u(16); value / 2^16 -> 90 degrees
val displayOrientationRepetitionPeriod = "010" // ue(v); Persistent till next video sequence
val displayOrientationExtensionFlag = "0" // u(1); No other value is permitted by the spec atm
val byteAlignment = "1"
val bitString = displayOrientationCancelFlag +
horFlip +
verFlip +
anticlockwiseRotation +
displayOrientationRepetitionPeriod +
displayOrientationExtensionFlag +
byteAlignment
prefix + nalHeader + display + BigInteger(bitString, 2).toByteArray()
}()
Using Jcodec's SEI class, I can see that my SEI message is parsed properly. I write out these packets to the RTMP stream using an Android JNI wrapper for LibRtmp.
Despite this, ffprobe does not show the orientation metadata, and the video when played remains in landscape.
At this point I think I'm missing a very small detail about how FLV headers work when the raw h264 units are written out by LibRtmp. I have tried appending this displayOrientationSEI NAL unit:
- To the initial SPS and PPS configuration only.
- To each raw h264 NAL unit straight from the encoder.
- To both.
What am I doing wrong? Going through the source of some RTMP libraries, like rtmp-rtsp-stream-client-java, it seems the SEI message is dropped when creating FLV tags.
Help is much, much appreciated.
ANSWER
Answered 2020-May-30 at 00:32
Does RTMP support the Display Orientation SEI message in h264 streams?
RTMP is unaware of the very concept. From RTMP's perspective, the SEI is just a series of bytes it copies. It never looks at them; it never parses them.
The thing that needs to support it is the H.264 decoder (which RTMP is also unaware of) and the player software. If it is not working for you, you must check the player, or the validity of the encoded SEI, not the transport.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported