media-server | Open UPnP/DLNA media server for Windows/Mac/Linux
kandi X-RAY | media-server Summary
Open UPnP/DLNA media server for Windows/Mac/Linux
Top functions reviewed by kandi - BETA
- Start the UPnP server.
- Middleware to render a React component.
- Swap the menu.
- ScrollSpy wrapper.
- Return true if the transition end event is valid.
- Stop the UPnP server.
- Get the closest parent element.
- Clear all popups.
- Create the list of media servers.
- List the repositories.
media-server Key Features
media-server Examples and Code Snippets
Community Discussions
Trending Discussions on media-server
QUESTION
I have a web application I am working on that allows the user to stream video from their browser and simultaneously livestream to both YouTube and Twitch using ffmpeg. The application works fine when I don't need to send any of the audio. Currently I am getting the error below when I try to record video and audio. I am new to using ffmpeg, so any help would be greatly appreciated. Here is also my repo if needed: https://github.com/toshvelaga/livestream
Here is my node.js server with ffmpeg
...ANSWER
Answered 2021-Jul-26 at 17:51
So I got the audio to work after a little bit of trial and error with ffmpeg. Not sure if this is the optimal approach, but it works for the time being.
Here is also the full file: https://github.com/toshvelaga/livestream/blob/main/server/server.js
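One pattern that often fixes missing audio in this kind of pipeline is mapping both streams explicitly and transcoding the audio to AAC, since browsers typically produce Opus in WebM and RTMP/FLV ingest expects AAC. A minimal Python sketch of building such a command; the flag choices are illustrative assumptions, not the repo's exact invocation:

```python
def build_ffmpeg_args(rtmp_url):
    # Read the browser's WebM stream from stdin, select the first video and
    # first audio stream explicitly, transcode to H.264 + AAC, and push
    # the result over RTMP as FLV.
    return [
        "ffmpeg",
        "-i", "pipe:0",
        "-map", "0:v:0",
        "-map", "0:a:0",
        "-c:v", "libx264", "-preset", "veryfast", "-tune", "zerolatency",
        "-c:a", "aac", "-ar", "44100", "-b:a", "128k",
        "-f", "flv",
        rtmp_url,
    ]

args = build_ffmpeg_args("rtmp://a.rtmp.youtube.com/live2/STREAM_KEY")
print(" ".join(args))
```

The stream key here is a placeholder; in the repo the actual ingest URLs come from the user's YouTube/Twitch settings.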
QUESTION
import socket
import threading

host = '127.0.0.1'
port = 36250

def RetrFile(sock):
    from account import posts
    filename = 'posts.txt'
    for item in posts(filename):
        sock.send(item)
    sock.send("DONE".encode())

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    h_name = socket.gethostname()
    IP_address = socket.gethostbyname(h_name)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((IP_address, port))
    print("Server started")
    print("IP address:", IP_address)
    while True:
        print("Waiting for clients...")
        s.listen()
        c, addr = s.accept()
        print("Client connected, IP: " + str(addr))
        data = s.recv(1024)
        if data == "POSTS".encode():
            t = threading.Thread(target=RetrFile, args=(c))
            t.start()
    s.close()
...ANSWER
Answered 2021-Jul-09 at 17:54
Here is the problem: recv() is called on the listening socket s instead of the accepted connection c, so the server never reads the client's request, and args=(c) is not a tuple (a one-element tuple is written args=(c,)), so threading.Thread cannot unpack the arguments.
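A self-contained sketch of the corrected pattern, with a hypothetical in-memory list standing in for the original posts() generator: recv on the accepted connection c rather than the listener s, and pass the thread arguments as a one-element tuple.

```python
import socket
import threading

received = {}

def retr_file(conn):
    # Hypothetical stand-in for the original posts() generator.
    for item in (b"first post\n", b"second post\n"):
        conn.sendall(item)
    conn.sendall("DONE".encode())
    conn.close()

def serve_one_client(listener):
    c, addr = listener.accept()
    data = c.recv(1024)          # fix 1: recv on the connection, not the listener
    if data == "POSTS".encode():
        t = threading.Thread(target=retr_file, args=(c,))  # fix 2: tuple, not (c)
        t.start()
        t.join()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
    s.listen()
    port = s.getsockname()[1]

    # Exercise the server with an in-process client.
    def client():
        with socket.create_connection(("127.0.0.1", port)) as sock:
            sock.sendall(b"POSTS")
            chunks = b""
            while not chunks.endswith(b"DONE"):
                chunks += sock.recv(1024)
            received["data"] = chunks

    ct = threading.Thread(target=client)
    ct.start()
    serve_one_client(s)
    ct.join()
    print(received["data"].decode())
```

The original's endless accept loop is reduced to a single client here so the sketch terminates on its own.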
QUESTION
I have trouble hosting my RTMP server on a DigitalOcean droplet. I have 2 node applications: one is just an API written with Hapi.js that runs on port 8000.
The second one is a node-media-server app running on port 8888, and on 1935 for RTMP, which I have integrated as a Hapi.js plugin, but they run as separate processes. I use Nginx as a reverse proxy to pass requests to the node apps, and everything works fine. All the endpoints provided by node-media-server work.
But I can't think of a way to access my node-media-server's port 1935 and send an RTMP stream to it.
On localhost I use OBS like this: rtmp://localhost:1935/live/{stream_key}, but the same doesn't work for the hosted app.
Please show me a way to receive the stream from my OBS on the server.
Maybe I could use the nginx-rtmp module to receive the stream and just push it to my node-media-server app on the server...
/etc/nginx/sites-available/default
ANSWER
Answered 2021-Jun-26 at 19:45
After some research, I got the solution. The solution was painfully obvious.
node-media-server listens for RTMP on port 1935, so the natural fix is to configure the firewall to allow TCP connections through port 1935. For an Ubuntu 18.04 droplet the following does the trick.
First, confirm the port with lsof -i :1935, then allow TCP connections over the port with sudo ufw allow 1935/tcp. If node-media-server is running, congrats! You can now use OBS like this: rtmp://your_ip:1935/live/stream_key
Note: watch the host your app runs on. For me localhost worked, but with some droplet configurations you might need to bind to 0.0.0.0.
QUESTION
In general, everything is fine, but ant-media-server.log is filling up fast with entries like these:
2021-05-08 12:14:08,756 [Thread-89] INFO i.a.streamsource.StreamFetcher - last dts 4156022956 is bigger than incoming dts 4236715763
2021-05-08 12:14:08,756 [Thread-89] INFO i.a.streamsource.StreamFetcher - dts (4236715764) is bigger than pts (4156022956)
A few hundred megabytes accumulate in moments. How can I solve this?
...ANSWER
Answered 2021-May-11 at 12:58
There are several ways of doing that.
If you want to keep the log level at INFO but still don't want these messages, open conf/logback.xml and add a logger override for the StreamFetcher class before the closing tag. The file should look something like the example below.
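Assuming the abbreviated logger name in the log, i.a.streamsource.StreamFetcher, expands to io.antmedia.streamsource.StreamFetcher, an override along these lines inside the existing configuration element would silence these messages while keeping the root level at INFO:

```xml
<!-- Raise only this noisy logger to WARN; everything else stays at INFO -->
<logger name="io.antmedia.streamsource.StreamFetcher" level="WARN"/>
```

Logback applies the most specific matching logger, so the rest of the server keeps its configured level.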
QUESTION
I'm trying to run node-media-server on an EC2 instance, but I am not able to make OBS connect to the server. Here is my Nginx config:
...ANSWER
Answered 2021-Apr-16 at 15:57
I found the problem: set up Nginx to listen on port 80 only, as node-media-server takes care of listening on ports 8000 and 1935.
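As a sketch, a config following this advice might look like the block below; the domain is a placeholder, and node-media-server's own ports (8000 for HTTP, 1935 for RTMP) are left untouched:

```nginx
server {
    listen 80;
    server_name example.com;  # placeholder domain

    # Proxy HTTP traffic to node-media-server's HTTP port.
    # RTMP on 1935 is handled by node-media-server directly, not Nginx.
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```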
QUESTION
Is there any information, updates, or documentation regarding Codename One's WebRTC support? There was a mention of it months ago in this comment on Stack Overflow (AntMedia Native Interface issues), but then I haven't heard anything more about it.
For the time being, I'm supporting live streaming on AntMedia via native interfaces that do live streaming with RTMP, as on my own I couldn't find a way to support WebRTC in Codename One. Unfortunately, I realized just today that the RTMP support on Android doesn't work anymore (I don't know why; in past months it worked)... anyway, I've always considered RTMP a temporary workaround, so maybe this trouble is a good opportunity to switch to WebRTC.
I've seen that Steve has quietly created this cn1lib, which has not been announced (maybe because the work is not yet finished?) nor is it present among the extensions that can be installed via Codename One's Control Center: https://github.com/shannah/CN1WebRTC
I found the documentation here: https://shannah.github.io/CN1WebRTC/javadoc/ but comparing this javadoc with the documentation provided by AntMedia, I just don't understand what I have to do: AntMedia provides its own SDKs for Android and iOS and documentation for using them, but I don't understand how I can use Steve's cn1lib in their place. Obviously porting their SDKs is not easy, otherwise I would have already done it as the first option. In any case, the AntMedia server should be independent of the SDKs used, as it should use standard protocols, if I understand correctly.
Specifically, I have a server running AntMedia Enterprise Edition 2.1.0, whose documentation on WebRTC support is here: https://github.com/ant-media/Ant-Media-Server/wiki
Thank you
...ANSWER
Answered 2021-Jan-13 at 17:04
I haven't used AntMedia Server, so my following comment is based on 10 minutes looking through their documentation.
It looks like they provide their own API that is distinct from the standard WebRTC APIs. The Codename One WebRTC lib is built on the standard WebRTC APIs. I think the best route, if you want to use Ant Media Server's APIs, is to create native interface wrappers for them.
It is also possible, and likely, that you can just run Ant Media Server and connect to it using the standard WebRTC API. If that is the case, you would be able to use the cn1lib with it. However, their documentation only seems to show how to use their custom API on the client.
QUESTION
I have been coming across an issue with Ant Media not authorizing my one-time play tokens. I am on the latest version, am on enterprise edition and have turned on one-time tokens for play in settings of my live app.
I am following the docs at https://github.com/ant-media/Ant-Media-Server/wiki/Stream-Security-Documentation.
Send a GET request (using Postman) in the recommended format: https://[IP_Address]:5443//rest/v2/broadcasts//token?expireDate=&type=play
Copy the tokenId from the response and insert it into the playback format I wish to use (I've tried HLS and WebRTC): https://[IP_Address]//play.html?name=streamID&playOrder=hls&token=tokenId
Then I get a 403 "Invalid token or cannot play media" message with HLS.
Am I missing something in the steps I'm taking? I am following the docs step by step.
Thanks, Nathan.
...ANSWER
Answered 2020-Dec-02 at 20:45
Could you please make sure your expire date and application name are correct? You can check the current timestamp on this page: https://www.epochconverter.com/
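To illustrate the expireDate check: it must be a future Unix timestamp in seconds, or the token is invalid from the start, which produces exactly the 403 described. A small sketch of building the token request URL; the host, app name, and stream id here are placeholders:

```python
import time
from urllib.parse import urlencode

def build_token_url(host, app, stream_id, ttl_seconds=3600):
    # expireDate is a Unix timestamp in seconds; a value in the past
    # makes the token expire immediately.
    expire = int(time.time()) + ttl_seconds
    query = urlencode({"expireDate": expire, "type": "play"})
    return (f"https://{host}:5443/{app}/rest/v2/broadcasts/"
            f"{stream_id}/token?{query}")

url = build_token_url("203.0.113.7", "LiveApp", "myStream")
print(url)
```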
QUESTION
We set up a new server with the latest release.
The issue is that when a user publishes a video stream and another user connects to the room, the server sends the streams via websocket to the newcomer in the streams array of the joinedTheRoom message. But when the newcomer joins the room and starts publishing, the users already in the room do not receive the streamJoined message.
Looking at the server's graphical interface, all streams are published on the server, but it does not send the info via websocket. Logging everything received from the ws, we only get joinedTheRoom, initialized, and pings.
We used to have another server with release 2.1.0, and we did not have such issues. We tried to see what changed in the latest release, but most of the issues are empty. Can you see what went wrong with our server? Do we need an updated version of the Javascript SDK (if so, where can I find it)?
...ANSWER
Answered 2020-Nov-13 at 15:02
As you guessed, streamJoined has been removed. You can check here for further details. The current implementation relies on the client getting room information from the server at 5-second intervals, so you need to change your implementation from streamJoined to getroominfo. You can check here for the new implementation of the conference sample, and I suggest you look here for the updated Javascript SDK. If you look at the new conference sample, you should be able to get it done.
QUESTION
I have a node.js server that uses node-media-server:
...ANSWER
Answered 2020-Oct-27 at 12:57
You can add text to a video in a number of ways; the most common are probably:
Add a text track to the video container, i.e. the MP4 file. This is usually done server side, and the client then uses this info to display the text client side. You can see more info, and an example with a commonly used tool, here: https://www.bento4.com/developers/dash/subtitles/
Embed the text in the frames themselves. This requires more processing and burns the text into the video frames, so you can't easily turn the text on and off at the client. If you do want to do this, FFmpeg is probably a good place to start.
Add a text overlay on the client itself, e.g. a text 'div' or element in a browser app, or a TextView on Android. You mention that synchronisation may be a problem, but you could take timing events from the video to trigger changing the text. This avoids any extra processing of the video or video container.
A simple example of using timing to trigger text is below. You would likely want to update it to avoid checking everything on each 'onTimeUpdate' event, and maybe put the text over the video itself, but it shows how the basic mechanism works:
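A sketch of that mechanism, with hypothetical cue times and element ids: keep the cue lookup as a pure function, then call it from the video element's 'timeupdate' handler.

```javascript
// Minimal sketch of the overlay approach: derive the caption text
// from the current playback time instead of burning it into frames.
const cues = [
  { start: 0, end: 5, text: "Welcome!" },
  { start: 5, end: 10, text: "Chapter one" },
];

function textForTime(t) {
  const cue = cues.find((c) => t >= c.start && t < c.end);
  return cue ? cue.text : "";
}

// In a browser you would wire it up roughly like this:
// const video = document.getElementById("video");
// const overlay = document.getElementById("overlay");
// video.addEventListener("timeupdate", () => {
//   overlay.textContent = textForTime(video.currentTime);
// });
```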
QUESTION
I have configured Kurento Media Server, but the SSL part requires a cert+key in PEM format and also a password. The issue is I have
...ANSWER
Answered 2020-Aug-13 at 03:47
Try this one instead:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install media-server
There are two ways to install it: using a Python script or a Grunt script.