webpieces | project containing all the web pieces | HTTP library

 by deanhiller | Java | Version: 2.1.29 | License: Non-SPDX

kandi X-RAY | webpieces Summary

webpieces is a Java library typically used in networking and HTTP applications. webpieces has no reported bugs or vulnerabilities, has a build file available, and has low support. However, webpieces has a Non-SPDX license. You can download it from GitHub or Maven.

We only allow merges to master if (1) the PR is up to date with the latest master and (2) the PR passed the CircleCI build, so master should never be broken. Note: Codecov.io/JaCoCo has two bugs, one related to aggregation reporting, so actual coverage is higher than the reported number. It's hard to narrow down my 5 favorite features, but here is a try. The 23-minute video below barely scratches the surface but demonstrates the basics. One thing to note in the video: I was caught off guard by a minor bug (that is easily worked around) and had to restart the DevelopmentServer because, for some reason, the hibernate rescan of entities and table creation did not work. We may have that fixed by the time you watch the video (hopefully).

            kandi-support Support

              webpieces has a low active ecosystem.
              It has 30 stars, 10 forks, and 6 watchers.
              It had no major release in the last 12 months.
              webpieces has no issues reported. There are 3 open pull requests and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of webpieces is 2.1.29.

            kandi-Quality Quality

              webpieces has no bugs reported.

            kandi-Security Security

              webpieces has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              webpieces has a Non-SPDX License.
              A Non-SPDX license may be an open-source license that is simply not SPDX-compliant, or a non-open-source license; you need to review it closely before use.

            kandi-Reuse Reuse

              webpieces releases are available to install and integrate.
              Deployable package is available in Maven.
              Build file is available. You can build the component from source.
              Installation instructions are available. Examples and code snippets are not available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed webpieces and discovered the functions below to be its top functions. This is intended to give you an instant insight into the functionality webpieces implements, and help decide if it suits your requirements.
            • Unwrap a packet.
            • Fetch the server names if available.
            • Load an application class.
            • Feed an encrypted packet.
            • Modify the user directory for the project.
            • Fill the request with host headers from the request.
            • Redirect to the post bean.
            • Send the render response.
            • Process the end.
            • Send an HTTP request to the given endpoint.

            webpieces Key Features

            No Key Features are available at this moment for webpieces.

            webpieces Examples and Code Snippets

            No Code Snippets are available at this moment for webpieces.

            Community Discussions

            QUESTION

            Gradle monobuild and map of jar files for all gradle composite builds
            Asked 2020-Oct-15 at 14:38

            We have a directory structure like so

            • java
              • build/build.gradle (This does NOT exist yet, but we want this)
              • servers
                • server1/build.gradle
                • server2/build.gradle
              • libraries
                • lib1/build.gradle
                • lib2/build.gradle

            We have 11 servers and 14 libraries with varying uses of dependencies. EACH server is a composite build that depends ONLY on libraries (we don't allow servers to depend on each other). In this way, as our mono-repo grows, opening server1 does NOT get slower and slower as more and more gradle code is added (i.e. gradle loads only server1 and all its libraries; none of the other libraries OR servers are loaded, keeping things FAST).
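A composite-build wiring matching this description might look like the following settings file; the paths and library names are illustrative, not the actual repo layout.

```groovy
// java/servers/server1/settings.gradle (illustrative)
rootProject.name = 'server1'

// Pull in ONLY the libraries this server uses. Other servers and libraries
// are never loaded, which keeps IDE import and configuration time fast.
includeBuild '../../libraries/lib1'
includeBuild '../../libraries/lib2'
```

Gradle's composite builds then substitute the included builds for the matching dependency coordinates declared in server1's build.gradle.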

            Ok, so one problem we are running into now is duplication, which is why we need the build/build.gradle file, AND we want EVERY module in our mono-repo to include it somehow for a few goals (each goal may need a different solution)

            GOAL 1: To have an ext { … } section containing a Map of Strings to gradle dependencies much like so

            ...

            ANSWER

            Answered 2020-Aug-27 at 13:50

            I'm a bit confused about why you don't just use a "standard" gradle top-level build file and compose the others as subprojects. This solves all 3 of your goals.

            If you are concerned about build speed, you can target each server individually simply by running

            Source https://stackoverflow.com/questions/63437962
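For reference, the "standard" single-build layout the answer suggests might look like this; the project names are illustrative, and shared configuration would then live in ordinary allprojects/subprojects blocks in the top-level build file.

```groovy
// java/settings.gradle (illustrative)
rootProject.name = 'monorepo'

include 'servers:server1', 'servers:server2'
include 'libraries:lib1', 'libraries:lib2'
```

A single server can still be built in isolation with a targeted task path such as `./gradlew :servers:server1:build`.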

            QUESTION

            Does anyone know if cloud run supports http/2 streaming while it does NOT support http1.1 streaming?
            Asked 2020-Jun-28 at 18:41

            We have a streaming endpoint where data streams through our api.domain.com service to our backend.domain.com service and then as chunks are received in backend.domain.com, we write those chunks to the database. In this way, we can ndjson a request into our servers and IT IS FAST, VERY FAST.

            We were very, very disappointed to find out the cloud-run firewalls, for http1.1 at least (via curl), do NOT support streaming!!!! curl is doing http2 to the google cloud run firewall, and google is by default hitting our servers with http1.1 (though I saw an option to start in http2 mode that we have not tried).

            What I mean by "they don't support streaming" is that google does not send our servers a request UNTIL the whole request is received by them!!! (ie. not just headers, it needs to receive the entire body....this makes things very slow as opposed to streaming straight through firewall 1, cloud run service 1, firewall 2, cloud run service 2, database).

            I am wondering if google's cloud run firewall by chance supports http/2 streaming and actually sends the request headers instead of waiting for the entire body.

            I realize google has body size limits.......AND I realize we respond to clients with 200 OK before the entire body is received (ie. we stream back while a request is being streamed in), sooooo I am totally ok with google killing the connection if size limits are exceeded.

            So my second question in this post is: if they do support streaming, what will they do when the size is exceeded, since I will have already responded with 200 OK at that point?

            In this post, my definition of streaming is 'true streaming'. You can stream a request into a system and that system can forward it to the next system and keep reading/forwarding rather than waiting for the whole request. The google cloud run firewall is NOT MY definition of streaming since it does not pass through chunks it receives! Our servers send data as they receive it, so if there are many hops, there is no impact, thanks to the webpieces webserver.
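As an illustration of "true streaming" on the client side, Java 11's built-in java.net.http client can send a request body of unknown length, so chunks go out as they are produced. The endpoint URL below is hypothetical, and nothing is actually sent in this sketch.

```java
import java.io.ByteArrayInputStream;
import java.net.URI;
import java.net.http.HttpRequest;

public class StreamingSketch {
    // Build (but do not send) a POST whose body is supplied lazily from a stream.
    static HttpRequest streamingPost(String url, byte[] firstChunk) {
        return HttpRequest.newBuilder(URI.create(url))
                // ofInputStream reads the body on demand, so chunks can be sent
                // as they are produced instead of buffering the entire request
                .POST(HttpRequest.BodyPublishers.ofInputStream(
                        () -> new ByteArrayInputStream(firstChunk)))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = streamingPost("https://backend.domain.com/ingest",
                "{\"row\":1}\n".getBytes());
        // A negative contentLength() means "length unknown": the body is streamed
        // (chunked transfer in HTTP/1.1, DATA frames in HTTP/2)
        System.out.println(req.bodyPublisher().get().contentLength());
    }
}
```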

            ...

            ANSWER

            Answered 2020-Jun-28 at 16:33

            Unfortunately, Cloud Run doesn't support HTTP/2 end-to-end to the serving instance.

            Server-side streaming is in ALPHA. Not sure if it helps solve your problem. If it does, please fill out the following form to opt in, thanks!

            https://docs.google.com/forms/d/e/1FAIpQLSfjwvwFYFFd2yqnV3m0zCe7ua_d6eWiB3WSvIVk50W0O9_mvQ/viewform

            Source https://stackoverflow.com/questions/62616183

            QUESTION

            Hibernate merge vs. persist. Were there changes?
            Asked 2020-May-29 at 18:39

            I had an app with the following code working just fine until I upgraded hibernate (5.3.2 to 5.4.10)

            ...

            ANSWER

            Answered 2020-Feb-08 at 13:02

            Possible duplicate of Update Vs Merge.

            What's happening here is: Edit Mode:

            Source https://stackoverflow.com/questions/59923474

            QUESTION

            Trying to install logging on google cloud run but it's failing
            Asked 2020-May-07 at 16:47

            I am trying to follow these instructions to log correctly from java through logback to cloud run...

            https://cloud.google.com/logging/docs/setup/java

            If I use jdk8, I get jetty "alpn missing" issues, so I moved to the Docker image openjdk:10-jre-slim

            and my Dockerfile is simple

            ...

            ANSWER

            Answered 2020-Mar-06 at 08:51

            Since your current question is how to simulate the project ID for local testing:

            You should download a service account key file from https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=MY_PROJECT, make it accessible inside the docker container, and activate it via

            Source https://stackoverflow.com/questions/60557616
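A common way to wire a downloaded key into a container for local testing is the standard GOOGLE_APPLICATION_CREDENTIALS variable that Google client libraries look for; the file names below are illustrative.

```dockerfile
# Illustrative only: bake the key into the image (acceptable for local testing,
# NOT for production images).
COPY my-service-account.json /config/my-service-account.json

# Google client libraries automatically pick up credentials from this variable.
ENV GOOGLE_APPLICATION_CREDENTIALS=/config/my-service-account.json
```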

            QUESTION

            javax.net.ssl.SSLHandshakeException: No available authentication scheme
            Asked 2020-May-06 at 17:51

            A google search reveals a bug in jdk11.0.2, but I upgraded to jdk11.0.3 and this still exists for me. Steps to reproduce:

            1. git clone https://github.com/deanhiller/webpieces.git
            2. add the line "org.gradle.java.home=/Library/Java/JavaVirtualMachines/jdk-11.0.3.jdk/Contents/Home" to ~/.gradle/gradle.properties to set jdk to 11.0.3
            3. run ./gradlew :core:core-asyncserver:test from webpieces directory

            The test case hangs and in the logs, it shows

            Caused by: javax.net.ssl.SSLHandshakeException: No available authentication scheme
                at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:128)
                at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:117)
                at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:308)
                at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
                at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:255)
                at java.base/sun.security.ssl.CertificateMessage$T13CertificateProducer.onProduceCertificate(CertificateMessage.java:945)
                at java.base/sun.security.ssl.CertificateMessage$T13CertificateProducer.produce(CertificateMessage.java:934)
                at java.base/sun.security.ssl.SSLHandshake.produce(SSLHandshake.java:436)
                at java.base/sun.security.ssl.ClientHello$T13ClientHelloConsumer.goServerHello(ClientHello.java:1224)
                at java.base/sun.security.ssl.ClientHello$T13ClientHelloConsumer.consume(ClientHello.java:1160)
                at java.base/sun.security.ssl.ClientHello$ClientHelloConsumer.onClientHello(ClientHello.java:849)
                at java.base/sun.security.ssl.ClientHello$ClientHelloConsumer.consume(ClientHello.java:810)
                at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
                at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
                at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1065)
                at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask$DelegatedAction.run(SSLEngineImpl.java:1052)
                at java.base/java.security.AccessController.doPrivileged(Native Method)
                at java.base/sun.security.ssl.SSLEngineImpl$DelegatedTask.run(SSLEngineImpl.java:999)
                at org.webpieces.ssl.impl.AsyncSSLEngine2Impl.createRunnable(AsyncSSLEngine2Impl.java:94)
                ... 12 common frames omitted

            Should I file another jdk bug that it still doesn't work, or does anyone have any thoughts?

            jdk bug that is resolved/related: https://bugs.openjdk.java.net/browse/JDK-8211426

            NOTE that, for some reason, this fixes it: System.setProperty("jdk.tls.server.protocols", "TLSv1.2");

            hmmm, anyone know how to generate a self-signed certificate that works for TLSv1.2 and TLSv1.3?

            ...

            ANSWER

            Answered 2020-May-06 at 17:51

            Assuming it is the issue that is linked and not another issue around TLS 1.3:

            Your certificate uses the DSA algorithm, which was deprecated a while ago in favor of RSA and is not supported at all in TLS 1.3. Make sure to create RSA certificates instead.

            It seems that not-so-old versions of the java keytool might have created DSA certificates by default... an unfortunate default. You can use this command to verify a certificate's type:

            openssl x509 -in certificate.crt -text

            Source https://stackoverflow.com/questions/55854904

            QUESTION

            Does protobuf support json properly for nulls since null support is odd?
            Asked 2020-May-01 at 12:58

            I am reading through this post on protobuf as I work on grpc json to move existing customers over to our json on grpc ... https://github.com/protocolbuffers/protobuf/issues/1451

            builder.setName(null) is not supported since the wire format does 'not' send a value for null. If this is the case, how do I do the json equivalent of these cases

            • "name": 1234
            • "name": 0 //{defaultValue=0 for int} is on the wire
            • "name": null
            • {name not exist}

            to me, 'not' calling setName in protobuf would == {name not exist} in the binary format, but instead it is name=0 in binary. How do I specify this case in protobuf so I can remain compatible with our existing customers in json while switching to protobuf?

            FOR Java specifically, we want a setAge() method and, if I do NOT call it, the field will not exist. If I call setAge(null), it marshals "age":null. If I call setAge(0), it marshals "age":0 (the default value). If I call setAge(56), a non-default value, it marshals "age":56.

            We then want the same thing with protobuf. Sure, the wire format MAY have to add additional fields. That is fine and due to the fact that defaults do not get marshalled to the wire, so you can't tell between null and the default value :(. We are ok with the extra data on the wire for KISS for developers.

            Is there a proto schema we can use for this as well, to have easy setAge(null) methods that marshal to "age":null?

            Here is what I am trying to do right now for the above

            ...

            ANSWER

            Answered 2020-May-01 at 12:58

            For the next person (and I use my own SO posts and answers as my personal knowledge base), I found this amazing article...

            https://itnext.io/protobuf-and-null-support-1908a15311b6

            Source https://stackoverflow.com/questions/61541583
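For later readers: since protoc 3.15, proto3 supports explicit field presence via the `optional` keyword, which generates a hasAge() accessor in Java so "unset" can be distinguished from the default value. A sketch, not the poster's actual schema:

```protobuf
syntax = "proto3";

message Person {
  // `optional` enables explicit presence tracking: the generated Java class
  // gains hasAge(), so age == 0 and age-not-set are distinguishable.
  optional int32 age = 1;
}
```

Note that the canonical proto3 JSON mapping still omits unset fields rather than emitting "age": null, so exact null round-tripping needs custom handling (e.g. wrapper types such as google.protobuf.Int32Value, or the approach in the article above).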

            QUESTION

            Can I have my cloudrun server receive http/2 requests?
            Asked 2020-Apr-15 at 17:08

            Can I have Google send http/2 requests to my server in cloud run?

            I am not sure how google would know my server supports it since google terminates the SSL on the loadbalancer and sends http to the stateless servers in cloud run.

            If possible, I am thinking of grabbing a few pieces from webpieces and creating a pure http/2 server with no http1.1 for microservices that I 'know' will only be doing http/2.

            Also, if I have a pure http/2 server, is there a way that google translates from http1 requests to http/2 when needed so I could host websites as well?

            The only info I could find was a great FAQ that seems to be missing whether it supports http/2 on the server side (rather than the client)...

            https://github.com/ahmetb/cloud-run-faq

            thanks, Dean

            ...

            ANSWER

            Answered 2020-Apr-15 at 17:08

            The Cloud Run container contract requires your application to serve on an unencrypted HTTP endpoint. However, this can be either HTTP/1 or HTTP/2.

            Today, gRPC apps work on Cloud Run and gRPC actually uses HTTP/2 as its transport. This works because the gRPC servers (unless configured with TLS certificates) use the H2C (HTTP/2 unencrypted cleartext) protocol.

            So, if your application can actually serve traffic unencrypted using h2c protocol, the traffic between Cloud Run load balancer <=> your application can be HTTP/2, without ever being downgraded to HTTP/1.

            For example, in Go, you can use https://godoc.org/golang.org/x/net/http2/h2c package to automatically detect and upgrade http2 connections.

            To test if your application implements h2c correctly, you need to locally run:

            Source https://stackoverflow.com/questions/61231930
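For comparison with the Go example, here is a minimal Java 11 sketch of a client that prefers HTTP/2; over https:// the version is negotiated via ALPN, while over plain http:// the JDK client attempts the h2c upgrade.

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class H2ClientSketch {
    // Build a client that prefers HTTP/2. Whether a given exchange actually
    // uses HTTP/2 depends on what the server negotiates.
    static HttpClient http2Client() {
        return HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .connectTimeout(Duration.ofSeconds(5))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(http2Client().version()); // HTTP_2
    }
}
```

A quick local h2c check from the command line is also possible with curl's --http2-prior-knowledge flag, which forces an unencrypted HTTP/2 request without the upgrade dance.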

            QUESTION

            How to use libraries in codename one?
            Asked 2020-Apr-06 at 02:06

            I am wondering about the limitations of using libraries in Codename One. Specifically, I would like to use a specific http client library that uses nio, but I am not sure if that will even work in Codename One. There is an http1 client and an http2 client here

            https://github.com/deanhiller/webpieces

            Can the nio stuff actually be compiled into iOS? Or does it have to be synchronous socket http client implementations?

            thanks, Dean

            ...

            ANSWER

            Answered 2020-Apr-06 at 02:06

            It won't work and you can't. This article is from 2016 but it's still mostly accurate. The gist of it is that most of these APIs aren't essential, and if we added all of them, performance/size would balloon to huge numbers.

            E.g. a Codename One application can weigh less than 3mb for iOS production builds with 32 and 64 bit support. Our closest competitors clock in at 50mb for the same functionality with only 64bit support. This isn't just a matter of size; it's a matter of quality (QA), maintenance etc.

            This also reduces portability as we have to test this on all ports including iOS, UWP, Web etc.

            Having said that we're open to adding things and have added some features to the core since the publication of that article. But either way, you can't just use an arbitrary jar and need to use a cn1lib.

            Source https://stackoverflow.com/questions/61048323

            QUESTION

            Dockerfile environment variables snafoo
            Asked 2020-Feb-10 at 15:51

            I have the following Dockerfile

            ...

            ANSWER

            Answered 2020-Feb-10 at 15:51

            Consider the following Dockerfile:

            Source https://stackoverflow.com/questions/60153736

            QUESTION

            copying * files in Docker to directory gives error
            Asked 2020-Jan-26 at 17:56

            I have the following Dockerfile

            ...

            ANSWER

            Answered 2020-Jan-26 at 17:56

            When using COPY with more than one source file, the destination must be a directory and end with a /.

            Change the COPY line to

            Source https://stackoverflow.com/questions/59919757

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install webpieces

            NOTE: last tested running eclipse on jdk-11.0.3.jdk with Eclipse 2019-06 (4.12.0), build id 20190614-1200.
            Import the project into eclipse using gradle: click the File menu -> Import..., expand the Gradle folder, choose Existing Gradle Project, click Next, click Next again, then click Finish.
            The eclipse buildship gradle plugin works except for passing -parameters to the settings file like ./gradlew eclipse did, so you have to do a few more steps here: open eclipse preferences, expand 'Java' and click 'Compiler', then select the checkbox near the bottom that says 'Store information about method parameters'.
            From the IDE, expand {yourapp-all}/{yourapp}-dev/src/main/java/{yourpackage}
            Run OR Debug the class named DevelopmentServer.java which compiles your code as it changes so you don't need to restart the webserver (even in debug mode)
            In a browser go to http://localhost:8080
            refactor your code like crazy and hit the website again(no restart needed)
            As you upgrade: we just started (7/20/17) to have a legacy project we run the webpieces build against. This means we HAVE to make upgrades to it to see how they affect clients. You can copy the upgrades needed (some are not strictly needed but recommended) here: https://github.com/deanhiller/webpiecesexample-all/commits/master (We are going to try to standardize the comments better as well.)
            For Documentation go to http://localhost:8080/@documentation and you can access the references and tutorials
            NOTE: last tested on Intellij 2019.3.4.
            I want to try something new on this project: if you want something fixed, I will pair with you to fix it, ramping up your knowledge while fixing the issue. We do this with screenhero (remote desktop sharing and control). May have to deal with me ;).
            webpieces is a project containing all the web pieces (WITH apis) to create a web server (and an actual web server, an actual http proxy, an http 1.1 client, an http2 client, an independent async http 1.1 parser, an independent http2 parser, a templating engine, an http router......getting the idea yet? self-contained pieces). This webserver is also built for extreme 'Feature' Test-Driven Development for web app developers, such that tests can be written that test all your filters, controllers, views, and redirects together in one. This gives GREAT whitebox QE-type testing that can be done by the developer. Don't write brittle low-layer tests; instead write high-layer tests that are less brittle than their fine-grained counterparts (something many of us do at twitter). This project is essentially pieces that can be used to build any http-related software and full stacks as well. There is a jpeg of these pieces and their relationships in the http://localhost:8080/@documentation pages (on the DevelopmentServer only).
            Import Project: from the Welcome screen, choose Import Project, select your folder {yourapp}-all and click OK, choose 'Import project from external model', choose gradle, and click Finish.
            Modify some build settings and compile with the parameters option (Intellij does not suck this setting in from gradle :( ): Open Preferences, expand 'Build, Execution, and Deployment', expand 'Compiler', and click 'Java Compiler'. Add -parameters to 'Additional Command Line Parameters' and click Apply to save the setting. Expand 'Build Tools' (also under 'Build, Execution, and Deployment'), click 'Gradle', and change the two settings under 'Build and run' to IntelliJ (rather than the default, Gradle); this step is CRITICAL for scanning new entities because the gradle build doesn't hot-compile the jar on changes, so the changes are not found. Click OK to save and close the dialog. Then click the Build menu and click Rebuild Project (VERY important after changing -parameters as, unlike eclipse, intellij doesn't detect the need to rebuild).
            Modify TWO auto-recompile settings documented here https://stackoverflow.com/questions/12744303/intellij-idea-java-classes-not-auto-compiling-on-save
            From the IDE, expand {yourapp-all}/{yourapp}-dev/src/main/java/{yourpackage}
            Run OR Debug the class named DevelopmentServer.java which compiles your code as your code changes so you don't need to restart the webserver (even in debug mode)
            In a browser go to http://localhost:8080
            refactor your code like crazy and hit the website again(no restart needed)
            As you upgrade: we just started (7/20/17) to have a legacy project we run the webpieces build against. This means we HAVE to make upgrades to it to see how they affect clients. You can copy the upgrades needed (some are not strictly needed but recommended) here: https://github.com/deanhiller/webpiecesexample-all/commits/master (We are going to try to standardize the comments better as well.)
            For Documentation go to http://localhost:8080/@documentation and you can access the references and tutorials
            clone webpieces
            The automated build runs "./gradlew -Dorg.gradle.parallel=false -Dorg.gradle.configureondemand=false build -PexcludeSelenium=true -PexcludeH2Spec=true" as it can't run the selenium tests or H2Spec tests at this time
            If you have selenium setup and h2spec, you can just run "./gradlew build" in which parallel=true and configureondemand=true so it's a faster build
            Debugging with eclipse works better than with intellij; intellij IDE support is better than eclipse's (so pick your poison, but it works in both)
            Over 370 customer facing tests(QA tests testing from customers point of view)
            your project is automatically setup with code coverage (for java and the generated html groovy)
            LoginFilter automatically adds correct cache headers so if you are logged out, back button will not go back to some logged in page instead redirecting to login
            built in 'very loose' checkstyle such that developers don't create 70+ line methods or 700+ line files or nasty anti-arrow pattern if statements
            unlike Seam/JSF and heavyweight servers, you can slap down 1000+ nodes of this server as it is built for clustering, scale, and being stateless!!! especially with noSQL databases. With Seam/JSF, you lock your users to one node, and when that node goes out, if they are in the middle of buying a plane ticket, they are pretty much screwed (ie. not a good design for large scale)
            be blown away by the optimistic locking pattern. If your end users both post a change to the same entity, one will win and the other will go through a path of code where you can decide to (1) show the user his changes and the other user's, (2) just tell the user it failed and to start over, or (3) let it overwrite the previous user's change
            prod server caches files using a hash of the content, so all *.js and *.css files are cached for a year, and if a file changes, the hash changes, causing an 'immediate' reload that avoids the 1-year cache time. No more confusion about why client X is not working after the last deploy
            dev server never tells browser to cache files so developer can modify file and not need to clear browser cache
            %[..]% will verify a file actually exists at that route at build time so that you do not accidentally deploy web pages that link to nonexistent files
            no erasing users input from forms which many websites do....soooo annoying
            one-liner for declaring a form field which keeps the user's input for you, as well as i18n and error handling, and decorates ALL your fields with your declared field template
            custom tags can be created in any myhtml.tag file to be re-used very easily(much like playframework 1.3.x+)
            production server does not contain a compiler (this was a mistake I believe in the play 1.3.x+ framework)
            production server creates a compressed static file cache on startup and serves pre-compressed files(avoiding on-demand compression) later, we may even have a cache in memory so we don't hit disk at all
            production server has no need to compile templates as they are precompiled in production mode which increases speed for end users
            You should find, we were so anal, we cover way more developer mistakes and way more error messages on what the developer did wrong so they don't have to wonder why something is not working and waste time.
            Override ANY component in your web application for testing to mock remote endpoints and tests can simulate those
            Override ANY component in the platform server just by binding a subclass of the component(fully customizable server to the extreme unlike any server before it)
            Debug one of the tests after creating the example project and you step right into the platform code making it easier to quickly understand the underlying platform you are using and how componentized it is. (if you happen to run into a bug, this makes it way easier to look into, but of course, we never run into bugs with 3rd party software, right, so you won't need this one) That was in my sarcastic font
            Selenium test case provided as part of the template so skip setting it up except for getting the right firefox version
            Route files are not in yml but are in java so you can have for loops, dynamic routes and anything you can dream up related to http routing
            Full form support for arrays (which is hard, and play1.3.x+ never got it right to make it dead simple...let's not even talk about J2EE)
            Protects developers from the frequent caching css/js/html files screwup!!! This is bigger than people realize until they get bitten. ie. you should not change any static js/css files without also renaming them so that the browser cache is avoided and it loads the new one as soon as a new version is deployed. Nearly all webservers just let developers screw this up and then customers wonder why things are not working(and it's only specific customers that have old versions that complain making it harder to pinpoint the issue). Finally, you can live in a world where this is fixed!!!
            supports multiple domains over SSL with multiple certificates but only for advanced users
            JPA/hibernate plugin with filter all setup/working so if your backend is a database, you can crank out db models. NoSQL scales better but for startups, start simple with a database
            NoSql works AMAZINGLY when using nosql asynch clients as this server supports complete async controllers
            CRUD in routemodules can be done with one method call to create all routes(list, edit and add, post, and delete) or you can vary it with a subset easily
            no session timeout on login EVER(unlike JSF and seam frameworks)
            going to a secure page can redirect you to login and once logged in automatically redirect you back to your original page
            Security - cookie is hashed so can't be modified without failing next request
            Security - Form auth token in play1.3.x+ can be accidentally missed leaving security hole unless app developer is diligent. By default, we make it near impossible to miss the auth token AND check that token in forms for the developer(putting it in is automatic in play 1.3 but checking it is not leaving a hole if you don't know and many don't know)
            State per tab rather than just per session. All web frameworks have a location to store session state but if you go to buy a plane ticket in 3 different tabs, the three tabs can step on each other. A location to store information for each tab is needed
            Tests now test for backwards compatibility so we(developers) do not accidentally break compatibility with your app except on major releases
            better pipelining of requests fixing head of line blocking problem
            Server push - sending responses before requests even come based on the first page requests (pre-emptively send what you know they will need)
            Data compression of HTTP headers
            Multiplexing multiple requests over TCP connection
            channelmanager - a very thin layer on nio for speed (used instead of netty, but with its very clean api, anyone could plug in any nio layer including netty!!!)
            asyncserver - a thin wrapper on channelmanager to create a one call tcp server (http-frontend sits on top of this and the http parsers together)
            http/http1_1-parser - An asynchronous http parser that can accept partial payloads (ie. nio payloads don't have full messages). Can be used with ANY nio library.
            http/http1_1-client - http1.1 client
            http/http2-parser - An asynchronous http2 parser with all the advantages of no "head of line blocking" issues and pre-emptively sending responses, etc. etc.
            http/http2-client - http 2 client built on the above core components, because you know if your server supports http2, AND not doing 1.1 keeps it simple!!!
            http/http2to1_1-client - http1.1 client with an http2 interface SOOOOOO http2-client and http2to1_1-client are swappable as they implement the same api
            http/http-frontend - a very thin http webserver. Call frontEndMgr.createHttpServer(svrChanConfig, serverListener) with a listener and it just fires incoming http server requests to your listener (webserver/http-webserver uses this piece for the front end and adds its own http-router and templating engine)
            webserver/http-webserver - a webserver with http2 and http1.1 support and tons of overriddable pieces via guice
            core/runtimecompiler - create a runtime compiler with a list of source paths and then just use this to call compiler.getClass(String className) and it will automatically recompile when it needs to. this is only used in the dev servers and is not on any production classpaths (unlike play 1.4.x)
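The content-hash cache busting described above (hashed *.js/*.css names) can be sketched with a hypothetical helper; this is an illustration of the technique, not webpieces' actual implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CacheBustSketch {
    // Hypothetical helper: derive a fingerprinted file name from the file's
    // content so the name changes whenever the content does, making a 1-year
    // browser cache safe.
    static String hashedName(String fileName, byte[] content) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(content);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        int dot = fileName.lastIndexOf('.');
        // app.js -> app-<first 8 hex chars of the content hash>.js
        return fileName.substring(0, dot) + "-" + hex.substring(0, 8) + fileName.substring(dot);
    }

    public static void main(String[] args) throws Exception {
        byte[] v1 = "console.log('v1');".getBytes(StandardCharsets.UTF_8);
        byte[] v2 = "console.log('v2');".getBytes(StandardCharsets.UTF_8);
        System.out.println(hashedName("app.js", v1));
        System.out.println(hashedName("app.js", v2)); // different name -> old cache entry never served
    }
}
```

Because the name is derived from the content, a changed file gets a new URL and the browser fetches it immediately, while unchanged files keep hitting the long-lived cache.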

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the community page at Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/deanhiller/webpieces.git

          • CLI

            gh repo clone deanhiller/webpieces

          • sshUrl

            git@github.com:deanhiller/webpieces.git
