http2-examples | Example code for the blog posts on http://fstab | Blog library
kandi X-RAY | http2-examples Summary
Example code for the blog posts on
Top functions reviewed by kandi - BETA
- Handle a GET request.
- Handle the incoming request.
- Initialize HTTP2 connection.
- Main method.
- Creates the SSL context.
- Waits for all responses to complete.
- Handle a GET request.
- Waits for the settings to complete.
- Get the current time in seconds.
- Get the singleton instance.
http2-examples Key Features
http2-examples Examples and Code Snippets
Community Discussions
Trending Discussions on http2-examples
QUESTION
We have a Jetty Http2 Client constructed roughly as in the example here.
Each request processed by the client calls session.newStream(...). It appears that old streams are not GC-ed. However, we can't seem to find a good way, in the API, to either recycle them or close them.
Should we set a very small idle timeout using streamPromise.get().setIdleTimeout(t)?
Should we keep the Stream object, mark it when an exchange finishes, then reuse it? In this case though, we also need to recycle the listener, which makes it stateful.
Is there a way to "close" a Stream object, or mark it for GC? Simply setting it to null doesn't seem very API-ish.
ANSWER
Answered 2017-Jul-26 at 09:28
Streams that are closed are GCed. Streams support half closes, so in order for a stream to be closed you need to send a frame with the end_stream flag set, and receive a frame with the end_stream flag set.
If you use HTTP2Client directly, chances are that you're not ending the stream on your side (i.e. you send frames, but forget to set the end_stream flag on the last frame you send), or the server does not end the stream (which would be a server bug).
In either case, turning on DEBUG logging for category org.eclipse.jetty.http2 on the client will tell you whether the frames have the end_stream flag set, and report when streams are removed - you just need to parse the possibly large-ish log files.
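To make the half-close rule concrete, here is a minimal sketch against Jetty 9.x's low-level HTTP2Client API; the host, port and trust-all SslContextFactory are placeholder assumptions, not taken from the question.

```java
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import org.eclipse.jetty.http.HttpFields;
import org.eclipse.jetty.http.HttpURI;
import org.eclipse.jetty.http.HttpVersion;
import org.eclipse.jetty.http.MetaData;
import org.eclipse.jetty.http2.api.Session;
import org.eclipse.jetty.http2.api.Stream;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.frames.DataFrame;
import org.eclipse.jetty.http2.frames.HeadersFrame;
import org.eclipse.jetty.util.Callback;
import org.eclipse.jetty.util.FuturePromise;
import org.eclipse.jetty.util.Promise;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class EndStreamExample {
    public static void main(String[] args) throws Exception {
        HTTP2Client client = new HTTP2Client();
        client.start();

        // Placeholder endpoint and a trust-all SslContextFactory, for illustration only.
        FuturePromise<Session> sessionPromise = new FuturePromise<>();
        client.connect(new SslContextFactory(true),
                new InetSocketAddress("localhost", 8443),
                new Session.Listener.Adapter(), sessionPromise);
        Session session = sessionPromise.get(5, TimeUnit.SECONDS);

        MetaData.Request request = new MetaData.Request("GET",
                new HttpURI("https://localhost:8443/"), HttpVersion.HTTP_2, new HttpFields());

        // endStream = true: the last (and only) frame we send carries the end_stream
        // flag, so our half of the stream closes immediately.
        HeadersFrame headersFrame = new HeadersFrame(request, null, true);

        session.newStream(headersFrame, new Promise.Adapter<Stream>(), new Stream.Listener.Adapter() {
            @Override
            public void onData(Stream stream, DataFrame frame, Callback callback) {
                callback.succeeded();
                // Once a frame with the end_stream flag arrives from the server,
                // the stream is fully closed and becomes eligible for GC.
            }
        });
    }
}
```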
QUESTION
I'm trying to use netty to implement an HTTP/2 client. In this example (line 93) they manually increment streamId and put it in the map. When the response comes they look for the HttpConversionUtil.ExtensionHeaderNames.STREAM_ID header parameter and thus associate the response with the request.
I don't like the idea of incrementing streamId myself. Can I somehow get the id netty is going to use to writeAndFlush the request?
Also, does it take a lot of resources to create a new stream? Or is it just an identifier?
ANSWER
Answered 2017-Jun-04 at 21:42
I'm pretty sure that, at this moment, you can't get the streamId generated and used by Netty. I don't like the idea of incrementing streamId myself either, but it looks like it's okay to do this with the current API.
I've checked the Netty sources and found the following:
- HttpToHttp2ConnectionHandler is used for writing requests. It has a private method getStreamId, which is used in the write method to get the value of currentStreamId, but we don't have any access to this variable. The getStreamId method uses another method called incrementAndGetNextStreamId. So, again, we can only increment and get the next streamId value, but can't get the current one.
Both of these classes are marked with the @UnstableApi annotation, so maybe this behavior will change in the future.
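For reference, the pattern the question and answer describe looks roughly like the sketch below, assuming Netty 4.1's HTTP/1-to-HTTP/2 translation layer is installed on the channel; the class, field and method names here are illustrative, not Netty API.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import io.netty.channel.Channel;
import io.netty.handler.codec.http.DefaultFullHttpRequest;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.handler.codec.http2.HttpConversionUtil;

public class StreamIdCorrelation {
    // Client stream ids are odd: 3, 5, 7, ...
    private final AtomicInteger nextStreamId = new AtomicInteger(3);
    private final Map<Integer, CompletableFuture<FullHttpResponse>> responses = new ConcurrentHashMap<>();

    public CompletableFuture<FullHttpResponse> send(Channel channel) {
        int streamId = nextStreamId.getAndAdd(2);

        FullHttpRequest request = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "/");
        // Tell the HTTP/1 <-> HTTP/2 translation layer which stream id to use.
        request.headers().setInt(
                HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text(), streamId);

        CompletableFuture<FullHttpResponse> future = new CompletableFuture<>();
        responses.put(streamId, future);
        channel.writeAndFlush(request);
        return future;
    }

    // Called from the response handler: the translation layer copies the stream id
    // back into the same extension header, which lets us find the matching request.
    public void onResponse(FullHttpResponse response) {
        int streamId = response.headers().getInt(
                HttpConversionUtil.ExtensionHeaderNames.STREAM_ID.text());
        responses.remove(streamId).complete(response);
    }
}
```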
Here are some related links:
QUESTION
I understand that HTTP2 in Jetty is mostly at the Connector, Transport and Channel levels.
I'm trying to decide which combination would be the best to transport binary data between client and server:
- Jetty HTTP2 server with async servlet + Jetty HTTP2 client
- Jetty HTTP2 server with sync servlet + Jetty HTTP2 client
- Jetty HTTP2 server with async servlet + Netty HTTP2 client
- GRPC client and server (both are default Netty based)
Details:
I would like to send binary data to my client and I would like the connections to be non-blocking/async. The number of concurrent client requests can be high and the server could take a few seconds (sometimes) to respond to some requests.
Each response is a small chunk of binary data. I would've liked it if I could send Netty's ByteBufs directly as the response instead of copying to byte[] or ByteBuffer, but that is not directly related to this particular question.
Method #4 is not my favorite because of the ProtoBuf wrapping (link) limitation (link).
Jetty references:
- http://download.eclipse.org/jetty/stable-9/apidocs/org/eclipse/jetty/http2/client/HTTP2Client.html
- https://github.com/eclipse/jetty.project/tree/jetty-9.4.x/jetty-http2/http2-server/src/main/java/org/eclipse/jetty/http2/server
- https://github.com/fstab/http2-examples/blob/master/multiplexing-examples/jetty-client/src/main/java/de/consol/labs/h2c/examples/client/jetty/JettyClientExample.java
- https://groups.google.com/forum/#!topic/grpc-io/z0rhhetN1rE
ANSWER
Answered 2017-Feb-05 at 22:32
Disclaimer: I am the Jetty HTTP/2 maintainer.
Given that you have a large number of clients and that processing could take seconds, I would recommend going with option 1: async servlet and Jetty HTTP/2 client.
With "async servlet" you have 2 flavors: 1) async processing with blocking I/O, or 2) async processing + async I/O.
Servlet async processing is triggered by the use of HttpServletRequest.startAsync().
Servlet async I/O is triggered by using ReadListener and WriteListener respectively with the ServletInputStream and ServletOutputStream.
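A compressed sketch combining both mechanisms, using only the standard Servlet 3.1 API; the servlet, its URL pattern and its payload are illustrative, not taken from the repository.

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/data", asyncSupported = true)
public class BinaryDataServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // Async processing: release the container thread while the response is produced.
        AsyncContext asyncContext = request.startAsync();

        ServletOutputStream output = response.getOutputStream();
        byte[] chunk = new byte[]{1, 2, 3};  // placeholder for the small binary payload

        // Async I/O: write only when the container says the stream is ready.
        output.setWriteListener(new WriteListener() {
            private boolean written;

            @Override
            public void onWritePossible() throws IOException {
                while (output.isReady() && !written) {
                    output.write(chunk);
                    written = true;
                }
                if (written && output.isReady()) {
                    asyncContext.complete();
                }
            }

            @Override
            public void onError(Throwable failure) {
                asyncContext.complete();
            }
        });
    }
}
```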
You definitely want async processing because you have a large number of clients and processing in the order of seconds - this will optimize the use of server threads.
Whether or not to use async I/O should probably be measured, since you have small binary responses. Blocking I/O is much much easier to code and debug, while async I/O is definitely more complicated to code and debug. Async I/O really shines when you have large contents and slow clients that may congest the TCP connection.
If you want to be fully async, go with async I/O. If you can tolerate a bit of blocking in exchange for simpler code, stay on blocking I/O. Worth repeating, in both cases - either async I/O or blocking I/O - you want to use async processing.
Regarding the issue of copying data, if you are willing to cast down to Jetty classes, you can avoid the copy by directly writing a ByteBuffer to the ServletOutputStream subclass; see this example.
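A hedged sketch of that cast-down, assuming the response's ServletOutputStream is Jetty's org.eclipse.jetty.server.HttpOutput; binaryChunk and asyncContext are illustrative names from the surrounding servlet.

```java
import java.nio.ByteBuffer;
import org.eclipse.jetty.server.HttpOutput;
import org.eclipse.jetty.util.Callback;

// Inside the servlet, after request.startAsync():
ByteBuffer buffer = ByteBuffer.wrap(binaryChunk);                  // no extra copy into byte[]
HttpOutput jettyOutput = (HttpOutput) response.getOutputStream();  // cast down to Jetty's implementation
jettyOutput.sendContent(buffer, new Callback() {
    @Override
    public void succeeded() {
        asyncContext.complete();  // finish the async cycle once the write completes
    }

    @Override
    public void failed(Throwable failure) {
        asyncContext.complete();
    }
});
```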
Finally, with the Jetty client you can use the high-level HttpClient with the HTTP/2 transport as detailed here. The benefit would be a high-level API that only deals with HTTP concepts, rather than using the low-level HTTP2Client that deals with HTTP/2 frames, streams and such.
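A minimal sketch of that high-level route, assuming Jetty 9.4's HttpClientTransportOverHTTP2; the URL and the trust-all SslContextFactory are placeholders.

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class HighLevelHttp2ClientExample {
    public static void main(String[] args) throws Exception {
        // Wrap the low-level HTTP2Client in the HTTP/2 transport for HttpClient.
        HTTP2Client http2Client = new HTTP2Client();
        HttpClient httpClient = new HttpClient(
                new HttpClientTransportOverHTTP2(http2Client), new SslContextFactory(true));
        httpClient.start();

        // Plain HTTP-level API: no frames, streams or settings to deal with.
        ContentResponse response = httpClient.GET("https://localhost:8443/data");
        byte[] body = response.getContent();
        System.out.println("Status " + response.getStatus() + ", " + body.length + " bytes");

        httpClient.stop();
    }
}
```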
Report back what you end up choosing, and how it goes for you!
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install http2-examples
You can use http2-examples like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the http2-examples component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.