Open-IM-Server | Instant messaging server | Websocket library
Trending Discussions on Websocket
QUESTION
I built a Docker container with Django, Uvicorn, Nginx, and Redis, and I am using django-channels. When I run it, it cannot connect to the WebSocket, and this is shown in the browser console:
WebSocket connection to 'ws://127.0.0.1:8080/ws/notifications/' failed
It is working fine when I use Django's runserver command for development but when I include Nginx and Uvicorn it breaks.
Entrypoint.sh:
gunicorn roomway.asgi:application --forwarded-allow-ips='*' --bind 0.0.0.0:8000 -k uvicorn.workers.UvicornWorker
Nginx config:
upstream django {
    server app:8000;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 8080;

    location /static {
        alias /vol/static;
    }

    location /ws/ {
        proxy_pass http://0.0.0.0:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    location / {
        proxy_pass http://django;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_redirect off;
        proxy_buffering off;
    }
}
settings.py:
CHANNEL_LAYERS={
'default': {
'BACKEND': 'channels_redis.core.RedisChannelLayer',
'CONFIG': {
'hosts': [('redis', 6379)], #Redis port
}
}
}
The JS file which handles the socket:
var wsStart = "ws://"
var webSocketEndpoint = wsStart + window.location.host + '/ws/notifications/'
const notificationSocket = new WebSocket(webSocketEndpoint)
asgi.py:
application = ProtocolTypeRouter({
"http": django_asgi_app,
"websocket": AuthMiddlewareStack(
URLRouter([
url(r'^ws/notifications/', NotificationConsumer.as_asgi()),
path("ws//", ChatConsumer.as_asgi())
])
)
})
Nginx throws this error with the above code:
[error] 23#23: *4 connect() failed (111: Connection refused) while connecting to upstream, server: , request: "GET /ws/notifications/ HTTP/1.1", upstream: "http://0.0.0.0:8000/ws/notifications/", host: "127.0.0.1:8080"
When I change the proxy_pass to http://django instead of 0.0.0.0, Nginx no longer throws that error, but I still get the same error in the browser console. This time Django also logs these warnings:
[WARNING] Unsupported upgrade request.
[WARNING] No supported WebSocket library detected. Please use 'pip install uvicorn[standard]', or install 'websockets' or 'wsproto' manually.
ANSWER
Answered 2022-Mar-26 at 10:20
As noted in a comment by Iain Shelvington, WebSocket support is not included in the base install of uvicorn. Reinstall it with the standard extras:
pip uninstall uvicorn
pip install 'uvicorn[standard]'
QUESTION
I have a Ratchet WebSocket server whose entityManager is initialized from the backend. However, if changes are made from one of the front-ends, the state of the WebSocket server's entityManager differs from the backend's, and the new changes are not reflected in the data served by the WebSocket server.
To address this, I wrote listeners on the backend that watch for changes to these entities and then send a request to the WebSocket server, like so:
public function postUpdate(Room $entity, LifecycleEventArgs $_)
{
    try {
        Loop::run(function () use ($entityName, $id) {
            $conn = yield connect('ws://localhost:8080');
            yield $conn->send(json_encode(['message' => $entityName, 'data' => ['duid' => $id]]));
            $conn->close();
        });
    } catch (Exception $e) {}
}
I then fetch the entity in the WebSocket server and simply refresh it, like so:
function onMessage(ConnectionInterface $from, $msg)
{
    try {
        $messageData = json_decode($msg);
        switch ($messageData->message) {
            case BookingSocketActions::ROOM_CHANGED_EVENT:
                // $room = $this->entityManager->getRepository('ResourcesBundle:Room')
                //     ->find(['id' => $id]);
                $room = $this->entityManager->getRepository('ResourcesBundle:Room')
                    ->findRoomDetailById($messageData->data->duid);
                // $this->entityManager->clear();
                $this->entityManager->refresh($room);
                break;
        }
    } catch (Exception $ex) {
        $from->send($ex);
    }
}
Now here is the strange bug: the state of the $entity that is refreshed in the WebSocket server is always one change behind the real entity. Suppose I change $entity->name from "1" to "2". After the refresh, $entity->name is still "1" on the WebSocket server. Only if I change it again to something else, e.g. "3", does it change to "2" (after the refresh). If I change it to "4", it goes to "3", and so on.
The event fires correctly from the backend and the entity is fetched correctly on the server. It's just that refresh() only works on a second request (and therefore a second refresh) to the WebSocket server, not on the first.
I have even tried things like $entityManager->merge($entity); but with no results.
I am on Symfony 3.4, Doctrine 2.7, and Ratchet 0.4.3.
ANSWER
Answered 2022-Mar-08 at 15:30
Doctrine uses an identity map, so a repeated fetch returns the instance already held in memory rather than re-reading the database. The WebSocket server is a long-running daemon, and all cleanup tasks are the responsibility of the developer. Either:
- use \Doctrine\ORM\EntityManager::find with the $lockMode argument set to \Doctrine\DBAL\LockMode::NONE, or
- call \Doctrine\ORM\EntityManager::clear before \Doctrine\ORM\EntityManager::find.
QUESTION
I know there are a lot of questions and answers on this topic, but nothing matched my specific issue.
I am using the following versions
- Angular 10.0.14
- @aspnet/signalr 1.0.27
- ASP.NET Core 3.1
VERSION UPDATE:
- I just replaced @aspnet/signalr 1.0.27 with @microsoft/signalr 5.0.11 -> same issue.
The SignalR connection works fine until I add an accessTokenFactory in the Angular frontend.
Frontend
this.hubConnection = new signalR.HubConnectionBuilder()
.withUrl(`${environment.bzApiBaseUrl}/hubs/heartbeat`, {
accessTokenFactory: () => token
})
.build();
this.hubConnection
.start()
.then(() => {
console.log('SignalR: Heartbeat - connection started');
this.hubConnection.on('beat', () => {
console.log('SignalR: Heartbeat - Heartbeat received');
});
})
.catch(err => console.log('SignalR: Heartbeat - error while starting connection: ' + err));
});
The connection gets established when I remove the accessTokenFactory from the HubConnectionBuilder.
Backend
services.AddCors(options =>
{
options.AddPolicy(
name: MyAllowSpecificOrigins,
builder =>
{
builder
.WithOrigins(CorsSettings.TargetDomain)
.AllowAnyHeader()
.AllowAnyMethod()
.SetIsOriginAllowed((host) => true)
.AllowCredentials();
});
});
The value of the CorsSetting domain is http://localhost:4200
where the frontend is running.
app.UseStaticFiles();
app.UseRouting();
app.UseCors(MyAllowSpecificOrigins);
app.UseAuthentication();
app.UseAuthorization();
app.UseEndpoints(e =>
{
e.MapControllers();
e.MapHub("/api/hubs/heartbeat");
});
The following error gets logged in the browser console after adding the accessTokenFactory:
WebSocketTransport.js:70
WebSocket connection to 'ws://localhost:33258/api/hubs/heartbeat?id=BpBytwkEatklNR-XqGtabA&access_token=eyJ0eXAiOiJKV1QiLCJhbGci... failed:
Utils.js:190
Error: Failed to start the transport 'WebSockets': undefined
dashboard:1
Access to resource at 'http://localhost:33258/api/hubs/heartbeat?id=iWuMeeOKkWCUa8X9z7jXyA&access_token=eyJ0eXAiOiJKV1QiLCJhbGci...' from origin 'http://localhost:4200' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I don't understand the CORS issue, since it seems to be triggered by adding a token to the query string with the accessTokenFactory.
I tried:
- Adding the backend URL to CORS origin (not needed according to documentation)
- app.UseSignalR instead of app.UseEndpoints
- Disable app.UseHttpsRedirection()
- Changed the order of the registered middlewares in different ways
- .SetIsOriginAllowed((host) => true) after .AllowCredentials()
UPDATE
The connection gets established when setting the HttpTransportType to LongPolling. It does not work with WebSockets or ServerSentEvents.
app.UseEndpoints(e =>
{
e.MapControllers();
e.MapHub("/api/hubs/heartbeat", options =>
{
options.Transports = HttpTransportType.LongPolling;
});
});
It seems the token gets sent as an HTTP header instead of a query string parameter when using long polling. The token is very long, so I exceed the maximum allowed URL length when using WebSockets or ServerSentEvents. But I still have no idea why this leads to a CORS exception.
ANSWER
Answered 2021-Oct-19 at 12:06
Browsers do not support custom headers for WebSockets, therefore the bearer token has to be added as a query string parameter. We hit the maximum URL length due to the length of our bearer token. We could shorten our token or use a reference token; see also: https://github.com/aspnet/SignalR/issues/1266
Hope this helps others as well.
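As a hedged illustration of the reference-token workaround (not from the original answer): the sketch below assumes a hypothetical /api/ws-token endpoint that exchanges the full JWT for a short, opaque token, and hands that short token to accessTokenFactory so the negotiated WebSocket URL stays well under length limits.
import * as signalR from '@microsoft/signalr';

// Hedged sketch: the /api/ws-token endpoint and its response shape are assumptions.
async function buildHeartbeatConnection(baseUrl: string): Promise<signalR.HubConnection> {
  return new signalR.HubConnectionBuilder()
    .withUrl(`${baseUrl}/hubs/heartbeat`, {
      // The factory may return a Promise, so the short-lived reference token
      // can be fetched lazily on every (re)connect instead of embedding the full JWT.
      accessTokenFactory: async () => {
        const res = await fetch(`${baseUrl}/api/ws-token`, { credentials: 'include' });
        const { token } = await res.json();
        return token as string;
      },
    })
    .build();
}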
QUESTION
I am currently running a web server with ASP.NET Core 3.1 and a Blazor project. After upgrading to .NET 6.0, I encountered a WebSocket error message in the browser (even with a blank Blazor project), but only when deployed on my web server (see message below).
Locally (on Windows 11 x64, VS 22 Preview 4) there are no error messages...
Webserver: Debian 10 x64, .NET 6.0 SDK installed, running on NGINX with websockets enabled (reverse proxy).
Am I missing something, or is it a problem with the current state of .NET 6.0 and NGINX? I already tried accessing the page locally on the Debian server, and the same error message occurs.
Help would be much appreciated!
Greetings!
Error messages, in order:
Information: Normalizing '_blazor' to 'http://192.168.178.35/_blazor'.
blazor.server.js:1 WebSocket connection to 'ws://192.168.178.35/_blazor?id=wnPt_fXa9H4Jpia530vPWQ' failed:
Information: (WebSockets transport) There was an error with the transport.
Error: Failed to start the transport 'WebSockets': Error: WebSocket failed to connect. The connection could not be found on the server, either the endpoint may not be a SignalR endpoint, the connection ID is not present on the server, or there is a proxy blocking WebSockets. If you have multiple servers check that sticky sessions are enabled.
Warning: Failed to connect via WebSockets, using the Long Polling fallback transport. This may be due to a VPN or proxy blocking the connection. To troubleshoot this, visit https://aka.ms/blazor-server-using-fallback-long-polling.
ANSWER
Answered 2022-Feb-26 at 12:07
Here is the solution described again, perhaps a little more conveniently:
To fix this problem, I changed the following directive in the nginx site configuration (/etc/nginx/sites-available):
proxy_set_header Connection $connection_upgrade;
to
proxy_set_header Connection $http_connection;
For me this solved the problem.
QUESTION
I am implementing a simple chatbot using Keras and WebSockets. I now have a model that can make a prediction about the user input and send the corresponding answer.
It works fine on the command line; however, when I try to send the answer through my WebSocket, the WebSocket server doesn't even start anymore.
Here is my working WebSocket code:
@sock.route('/api')
def echo(sock):
    while True:
        # get user input from browser
        user_input = sock.receive()
        # print user input on console
        print(user_input)
        # read answer from console
        response = input()
        # send response to browser
        sock.send(response)
Here is my code to communicate with the keras model on command line:
while True:
    question = input("")
    ints = predict(question)
    answer = response(ints, json_data)
    print(answer)
The methods used are these:
def predict(sentence):
    bag_of_words = convert_sentence_in_bag_of_words(sentence)
    # pass bag as list and get index 0
    prediction = model.predict(np.array([bag_of_words]))[0]
    ERROR_THRESHOLD = 0.25
    accepted_results = [[tag, probability] for tag, probability in enumerate(prediction) if probability > ERROR_THRESHOLD]
    accepted_results.sort(key=lambda x: x[1], reverse=True)
    output = []
    for accepted_result in accepted_results:
        output.append({'intent': classes[accepted_result[0]], 'probability': str(accepted_result[1])})
    print(output)
    return output

def response(intents, json):
    tag = intents[0]['intent']
    intents_as_list = json['intents']
    for i in intents_as_list:
        if i['tag'] == tag:
            res = random.choice(i['responses'])
            break
    return res
So when I start the WebSocket with the working code I get this output:
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Serving Flask app 'server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
But as soon as I have anything from my model in the server.py file, I get this output:
2022-02-13 11:31:38.887640: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-02-13 11:31:38.887734: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )
Metal device set to: Apple M1
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
Just having an import at the top like this is enough: from chatty import response, predict, even though they are unused.
ANSWER
Answered 2022-Feb-16 at 19:53
There is no problem with your WebSocket route. Could you please share how you are triggering it? WebSocket is a different protocol, and I suspect you are using an HTTP client (for example, Postman) to test the WebSocket.
HTTP requests are different from WebSocket requests, so you should use an appropriate client to test the WebSocket.
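To make the point concrete, here is a minimal sketch of a matching browser-side client for the /api route above, assuming the Flask app from the question serves the page on the same host; it opens a real WebSocket rather than issuing an HTTP request.
// Minimal browser-side WebSocket client for the /api route (a sketch, not from the answer).
const sock = new WebSocket(`ws://${window.location.host}/api`);

sock.addEventListener('open', () => {
  sock.send('Hello bot'); // arrives at sock.receive() in the Flask-Sock handler
});
sock.addEventListener('message', (event: MessageEvent) => {
  console.log('bot answered:', event.data); // whatever sock.send(response) sent back
});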
QUESTION
I fail to enable CORS for testing with the latest NestJS 8.0.6 and a fresh HTTP + WS project. That said, I want to see the Access-Control-Allow-Origin header in the server's response (so that the client will accept it). Here is my main.ts, where I've tried three approaches: 1) with options, 2) with a method, 3) with app.use. None of them works.
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { microserviceConfig } from "./msKafkaConfig";

async function bootstrap() {
  const app = await NestFactory.create(AppModule, { cors: true }); // DOESN'T WORK
  app.enableCors(); // DOESN'T WORK
  app.connectMicroservice(microserviceConfig);
  await app.startAllMicroservices();

  // DOESN'T WORK
  app.use((req, res, next) => {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,PATCH,OPTIONS,UPGRADE,CONNECT,TRACE');
    res.header('Access-Control-Allow-Headers', 'Content-Type, Accept');
    next();
  });

  await app.listen(3000);
}
bootstrap();
Please do NOT give me a lesson on how dangerous CORS (XSForgery) is if we accept all domains; there is enough material about that, and I'm well aware of it. This is about NestJS not returning the Access-Control-Allow-Origin header.
The browser console reports:
Access to XMLHttpRequest at 'http://localhost:3000/socket.io/?EIO=4&transport=polling&t=Nm4kVQ1' from origin 'http://localhost:4200' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
In the chrome header inspection I see:
Request URL: http://localhost:3000/socket.io/?EIO=4&transport=polling&t=Nm4kUZ-
Referrer Policy: strict-origin-when-cross-origin
Connection: keep-alive
Content-Length: 97
Content-Type: text/plain; charset=UTF-8
Date: Mon, 20 Sep 2021 19:41:05 GMT
Keep-Alive: timeout=5
Accept: */*
Accept-Encoding: gzip, deflate, br
Accept-Language: en,de-DE;q=0.9,de;q=0.8,en-US;q=0.7,es;q=0.6
Cache-Control: no-cache
Connection: keep-alive
Host: localhost:3000
Origin: http://localhost:4200
Pragma: no-cache
Referer: http://localhost:4200/
sec-ch-ua: "Google Chrome";v="93", " Not;A Brand";v="99", "Chromium";v="93"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: cors
Sec-Fetch-Site: same-site
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/93.0.4577.82 Safari/537.36
EIO: 4
transport: polling
t: Nm4kUZ-
Does the Referrer Policy: strict-origin-when-cross-origin have an influence?
(btw, it works just fine with a simple express setup. So it cannot be my browser's fault.)
ANSWER
Answered 2021-Sep-20 at 20:29
The enableCors and { cors: true } options are for the HTTP server (Express or Fastify). The URL showing the CORS error comes from a socket.io connection. To enable CORS for socket.io you need to pass the options in the @WebSocketGateway() decorator, like
@WebSocketGateway({ cors: '*:*' })
export class FooGateway {}
Make sure to have both the host and the port set for the WebSocket CORS as host:port.
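A slightly fuller sketch, assuming socket.io v4 under NestJS 8, where the gateway-level cors option takes the same object shape as socket.io's own CORS config; FooGateway and the 'ping' event are illustrative names, not part of the original answer.
import { WebSocketGateway, SubscribeMessage, MessageBody } from '@nestjs/websockets';

@WebSocketGateway({
  cors: {
    origin: 'http://localhost:4200', // the Angular dev server from the question
    credentials: true,
  },
})
export class FooGateway {
  @SubscribeMessage('ping')
  handlePing(@MessageBody() data: string): string {
    return data; // echo back so the gateway is complete and testable
  }
}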
QUESTION
Forgive me for the newbie question, but I am confused and obviously not understanding the fundamentals of how to use a WebSocket server hosted over HTTPS. Everything I find online leads me to more questions than answers.
I have a WebSocket server hosted on my HTTPS website using Java code. This is my WebsocketServer.java file:
import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;
import java.net.InetSocketAddress;
import java.util.HashSet;
import java.util.Set;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class WebsocketServer extends WebSocketServer {

    private static final Logger logger = LogManager.getLogger(WebsocketServer.class);
    private static int TCP_PORT = 6868;
    private static Set conns;

    public WebsocketServer() {
        super(new InetSocketAddress(TCP_PORT));
        conns = new HashSet<>();
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) {
        conns.add(conn);
        logger.info("New connection from " + conn.getRemoteSocketAddress().getAddress().getHostAddress());
        logger.info("Size of connection list: " + conns.size());
    }

    @Override
    public void onClose(WebSocket conn, int code, String reason, boolean remote) {
        conns.remove(conn);
        logger.info("Closed connection to " + conn.getRemoteSocketAddress().getAddress().getHostAddress());
    }

    @Override
    public void onMessage(WebSocket conn, String message) {
        logger.info("Message from client: {}", message);
        // for (WebSocket sock : conns) {
        //     sock.send("SENDING BACK" + message);
        // }
    }

    @Override
    public void onError(WebSocket conn, Exception ex) {
        // ex.printStackTrace();
        try {
            if (conn != null) {
                conns.remove(conn);
                // do some thing if required
            }
            logger.info("ERROR from {}", conn.getRemoteSocketAddress().getAddress().getHostAddress());
        } catch (Exception e) {
            logger.info("onError: WebSocketServer may already be running");
        }
    }

    public Set getConns() {
        return conns;
    }
}
Then I started the WebsocketServer like this:
WebsocketServer websocketServer;
// Start socket server
websocketServer = new WebsocketServer();
websocketServer.start();
And on the client side, I connect to it like this:
// APP_WEB_SOCKET is the url to my site: api.my_custom_domain.com
var connection = new WebSocket("wss://" + APP_WEB_SOCKET + ":6868");
QUESTIONS: I keep reading that I need a certificate if I want to use wss over HTTPS, but I cannot find any documents that explain what this means in a way I can understand.
My app is hosted in an AWS Elastic Beanstalk environment. Do I need to somehow add a certificate to the setup of the WebsocketServer in my Java code? Example:
WebsocketServer websocketServer;
// Start socket server
websocketServer = new WebsocketServer();
// example guessing
websocketServer.cert = "SOMETHING";??
websocketServer.start();
Does the client code need to be changed at all?
Who needs the certificate?
If someone could please explain what I am missing or point me in the correct direction, I would really appreciate it.
ANSWER
Answered 2022-Jan-13 at 14:50
Keep it easy. Certs inside your application are complex: they are hard to manage, and you will run into problems operating your application in a modern cloud environment (starting new environments, renewing certs, scaling your application, ...).
Simple conclusion: don't implement any certs.
How to get encrypted connections? As Mike already pointed out in the comments: WebSockets are just upgraded HTTP(S) connections. A normal web server (nginx, Apache) takes care of the certs. It can be done in Kubernetes (as an ingress controller) or with a "bare-metal" web server.
Either way, it should act as a reverse proxy. This means your Java application doesn't know anything about certs; it has only unencrypted connections, like in your code on port 6868.
But the client will not use this port; 6868 is only reachable internally. The client will call your reverse proxy at the normal HTTPS port (443), and the reverse proxy will forward the connection to your Java application.
Here are some links for further information:
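Separately, as a hedged sketch of what the client side looks like under this setup: the browser speaks wss:// to the reverse proxy on the default HTTPS port (no explicit :6868), and the proxy forwards to the plain WebSocket server internally. The /ws path is an assumption and must match whatever location the proxy maps to the backend.
// Sketch only: wss:// with no port means 443, which the reverse proxy terminates.
const APP_WEB_SOCKET = 'api.my_custom_domain.com';
const connection = new WebSocket(`wss://${APP_WEB_SOCKET}/ws`);

connection.addEventListener('open', () => connection.send('hello'));
connection.addEventListener('message', (event: MessageEvent) => {
  console.log('server said:', event.data);
});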
QUESTION
I have a task, but I can't seem to get it done. I've created a very simple WebRTC stream on a Raspberry Pi which will function as a videochat-camera. With ionic I made a simple mobile application which can display my WebRTC stream when the phone is connected to the same network. This all works.
So right now I have my own local stream which shows on my app. I now want to be able to broadcast this stream from my phone to a live server, so other people can spectate it.
I know how to create a NodeJS server which deploys my webcam with the 'getUserMedia' function. But I want to 'push' my WebRTC stream to a live server so I can retrieve a public URL for it.
Is there a way to push my local Websocket to a live environment? I'm using a local RTCPeerConnection to create a MediaStream object
this.peerconnection = new RTCPeerConnection(this.peerservers);
this.peerconnection.onicecandidate = (event) => {
  if (event.candidate && event.candidate.candidate) {
    var candidate = {
      sdpMLineIndex: event.candidate.sdpMLineIndex,
      sdpMid: event.candidate.sdpMid,
      candidate: event.candidate.candidate
    };
    var request = {
      what: "addIceCandidate",
      data: JSON.stringify(candidate)
    };
    this.websockets.send(JSON.stringify(request));
  } else {
    console.log("End of candidates.");
  }
};
And to bind the stream object to my HTML video tag I'm using this:
onTrack(event) {
  this.remoteVideo.srcObject = event.streams[0];
}
My stream url is something like: MyLocalIP:port/streams/webrtc So I want to create a public URL out of it to broadcast it.
ANSWER
Answered 2021-Dec-10 at 16:54
"Is there a way to push my local Websocket to a live environment?"
It's not straightforward, because you need more than vanilla WebRTC (which is peer-to-peer). What you want is an SFU (Selective Forwarding Unit). Take a look at mediasoup.
To see why this is needed, think about how the WebRTC connection is established in your current app: it is a negotiation between two parties (facilitated by a signaling server). To turn this into a multi-cast setup, you need a proxy of sorts that establishes separate peer-to-peer connections to all senders and receivers.
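For orientation only, here is a hedged sketch of the publishing side with mediasoup-client; the signaling helper (how router capabilities and transport parameters travel between browser and SFU) is an assumption you would implement over your existing WebSocket, and the method names passed to it are made up for the example.
import { Device } from 'mediasoup-client';

// `signaling` is a hypothetical request/response wrapper around your own WebSocket.
async function publishToSfu(signaling: { request(method: string, data?: unknown): Promise<any> }): Promise<void> {
  // 1. Load the device with the SFU router's RTP capabilities.
  const routerRtpCapabilities = await signaling.request('getRouterRtpCapabilities');
  const device = new Device();
  await device.load({ routerRtpCapabilities });

  // 2. Ask the server to create a WebRTC transport and mirror it locally.
  const transportInfo = await signaling.request('createSendTransport');
  const sendTransport = device.createSendTransport(transportInfo);

  sendTransport.on('connect', ({ dtlsParameters }, callback, errback) => {
    signaling.request('connectTransport', { dtlsParameters }).then(callback).catch(errback);
  });
  sendTransport.on('produce', ({ kind, rtpParameters }, callback, errback) => {
    signaling.request('produce', { kind, rtpParameters })
      .then(({ id }) => callback({ id }))
      .catch(errback);
  });

  // 3. Send the camera track; the SFU can then forward it to any number of viewers.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  await sendTransport.produce({ track: stream.getVideoTracks()[0] });
}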
QUESTION
With Ajax requests, capturing the last response can be done with this code:
let oldXHROpen = window.XMLHttpRequest.prototype.open;
window.lastXhr = '';

window.XMLHttpRequest.prototype.open = function(method, url, async, user, password) {
  this.addEventListener('load', function() {
    window.lastXhr = this.responseText;
  });
  return oldXHROpen.apply(this, arguments);
};
The lastXhr variable will hold the last response.
But how can this be achieved for websockets too?
ANSWER
Answered 2021-Dec-09 at 17:16
The question/bounty/OP is specifically asking for a reputable source. Instead of rolling a custom solution, my proposal is to use a known, proven library: one that has been used, audited, forked, and generally adopted by the community, and that is hosted on GitHub.
The second option is to roll your own (though not recommended); there are many excellent answers on how to do it involving addEventListener.
wsHook is a library (hosted on GitHub) that allows you to easily intercept and modify WebSocket requests and message events. It has been starred and forked multiple times.
Disclaimer: I don't have any relationship with the specific project.
Example:
wsHook.before = function(data, url, wsObject) {
  console.log("Sending message to " + url + " : " + data);
}

// Make sure your program calls `wsClient.onmessage` event handler somewhere.
wsHook.after = function(messageEvent, url, wsObject) {
  console.log("Received message from " + url + " : " + messageEvent.data);
  return messageEvent;
}
From the documentation, you will find:
wsHook.before - function(data, url, wsObject):
Invoked just before calling the actual WebSocket's send() method. This method must return data, which can be modified as well.
wsHook.after - function(event, url, wsObject):
Invoked just after receiving the MessageEvent from the WebSocket server and before calling the WebSocket's onmessage event handler.
WebSocket addEventListener
The WebSocket object supports .addEventListener().
Please see: Multiple Handlers for Websocket Javascript
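If you do roll your own instead of using wsHook, a minimal sketch (mirroring the XHR override in the question) is to wrap the native WebSocket constructor; lastWsMessage is an assumed global for the example, not part of any library.
const NativeWebSocket = window.WebSocket;
(window as any).lastWsMessage = '';

window.WebSocket = function (url: string | URL, protocols?: string | string[]) {
  const ws = protocols === undefined ? new NativeWebSocket(url) : new NativeWebSocket(url, protocols);
  // addEventListener does not clobber any onmessage handler the page installs later.
  ws.addEventListener('message', (event: MessageEvent) => {
    (window as any).lastWsMessage = event.data;
  });
  return ws;
} as any;
(window.WebSocket as any).prototype = NativeWebSocket.prototype;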
QUESTION
I would like to know how many TCP connections are created when a WebSocket call is made from the browser to an Apache HTTP server to a backend web service. Does it create a separate TCP connection from the browser to the Apache HTTP server and another from Apache to the web service?
ANSWER
Answered 2021-Oct-18 at 14:57
When Apache is proxying WebSockets, there is one TCP connection between the client and Apache and one TCP connection between Apache and the backend. Apache watches both connections for activity and forwards reads from one onto the other.
This is the only way it can be in a layer 7 (application layer, HTTP) proxy. Something tunnelling at a much lower layer, like a NAT device or a MAC-forwarding IP sprayer, could tunnel a single connection, but not on the basis of anything higher up the stack, like headers.
The second connection is observable with netstat. It is opened when mod_proxy_wstunnel calls ap_proxy_connect_to_backend(), which calls apr_socket_create(), which calls the portable socket() routine. When recent releases of mod_proxy_http handle this tunneling automatically, there is a similar flow through ap_proxy_acquire_connection().
Community discussions and code snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported