
Conversation

@robotdan (Member) commented May 16, 2025

Summary

  • bug fixes
  • additional logging
  • logging updates
  • performance updates
  • consideration for 1.0.0
  • add support for extensions and trailers in chunked transfer encoding. By support, I mean ignore, don't explode.
  • 104k RPS with fixed-length requests in load testing.

Related

}
} else {
- read = delegate.read(buffer);
+ read = delegate.read(buffer, offset, length);
@robotdan (Member Author) May 16, 2025

Fixed bug.

This was the primary bug identified that was causing runtime issues on the new virtual thread version of java-http.
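
As a sketch of the failure mode (the class and field names below are illustrative, not the java-http types): when a delegating InputStream's three-argument read ignores offset and length, bytes are written at index 0 and the caller's positioning is silently corrupted.

import java.io.IOException;
import java.io.InputStream;

// Illustrative delegating stream (not the java-http class). The three-argument read
// must forward offset and length; calling delegate.read(buffer) instead writes the
// bytes at index 0 and ignores how many bytes the caller actually asked for.
public class DelegatingInputStream extends InputStream {
  private final InputStream delegate;

  public DelegatingInputStream(InputStream delegate) {
    this.delegate = delegate;
  }

  @Override
  public int read() throws IOException {
    return delegate.read();
  }

  @Override
  public int read(byte[] buffer, int offset, int length) throws IOException {
    // Correct: forward all three arguments
    return delegate.read(buffer, offset, length);
  }
}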

double result = ((double) numberOfBytesRead / (double) millis) * 1_000;
return Math.round(result);
// Always zero
return numberOfBytesWritten;
@robotdan (Member Author) May 16, 2025

No functional change in behavior. This should provide the same behavior as before.

bodyBytes = inputStream.readAllBytes();
} catch (IOException e) {
throw new BodyException("Unable to read the HTTP request body bytes", e);
if (bodyBytes == null) {
@robotdan (Member Author) May 16, 2025

Fixed bug.

Previously this would return a new byte[] on the second call, causing the caller to get a different result than the first call.
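
A minimal sketch of the caching pattern implied by the fix, reusing the identifiers from the diff above (the getBodyBytes method name is an assumption):

private byte[] bodyBytes;

public byte[] getBodyBytes() {
  // Read the underlying stream only once. Subsequent calls return the cached array
  // rather than re-reading an exhausted stream and handing back an empty byte[].
  if (bodyBytes == null) {
    try {
      bodyBytes = inputStream.readAllBytes();
    } catch (IOException e) {
      throw new BodyException("Unable to read the HTTP request body bytes", e);
    }
  }
  return bodyBytes;
}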

if (bodyBytes != null) {
- read = Math.min(bodyBytes.length, length);
+ int remaining = bodyBytes.length - bodyBytesIndex;
+ read = Math.min(remaining, length);
@robotdan (Member Author)

Fixed bug.

The left arg to .min needs to take into account the current value of bodyBytesIndex.
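
For example, with a 10-byte body, bodyBytesIndex == 8, and length == 10, the old code reported 10 readable bytes when only 2 remained. A sketch of the corrected read path (the copy and index bookkeeping below are assumptions about the surrounding code):

if (bodyBytes != null) {
  // Only the bytes that have not yet been consumed may be returned
  int remaining = bodyBytes.length - bodyBytesIndex;
  read = Math.min(remaining, length);
  System.arraycopy(bodyBytes, bodyBytesIndex, buffer, offset, read);
  bodyBytesIndex += read;
}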

- if (ch == 'H' || ch == 'T' || ch == 'P' || ch == '/' || ch == '1' || ch == '.') {
+ // While this server only supports HTTP/1.1, allow the request protocol to be parsed for any valid version.
+ // - The supported version will be validated elsewhere.
+ if (ch == 'H' || ch == 'T' || ch == 'P' || ch == '/' || ch == '.' || (ch >= '0' && ch <= '9')) {
@robotdan (Member Author)

This allows us to parse HTTP/1.0 so we can allow this protocol.


socket.setSoTimeout(0); // Always block
- socket.bind(new InetSocketAddress(listener.getBindAddress(), listener.getPort()));
+ socket.bind(new InetSocketAddress(listener.getBindAddress(), listener.getPort()), configuration.getMaxPendingSocketConnections());
@robotdan (Member Author)

Performance.

The default backlog on the ServerSocket.bind() is 50. This meant when I hammered the crap out of it, or ran ab (Apache Benchmark) with more than 50 workers all starting at the same time, I would get socket exceptions because the server would reject the connections.

So this was a bottleneck to ramping up load testing.

Apache Tomcat calls this acceptCount in its config, and it defaults to 100.

I have our default at 200, which in theory is adequate. This is essentially the internal server socket queue; we pull pending connections off as client sockets and create a virtual thread to work each one.

I have been able to run load tests with 100 or more workers and achieve 100k RPS with this config.
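
For reference, a self-contained sketch of the bind call with an explicit backlog (the address and the 200 value are illustrative; java-http reads the value from configuration.getMaxPendingSocketConnections() as shown in the diff):

import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BacklogSketch {
  public static void main(String[] args) throws Exception {
    ServerSocket socket = new ServerSocket();
    socket.setSoTimeout(0); // Always block in accept()
    // The second argument is the accept backlog. The single-argument bind() defaults
    // it to 50, which is why more than ~50 simultaneous connections were rejected.
    socket.bind(new InetSocketAddress("0.0.0.0", 8080), 200);
    socket.close();
  }
}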

// Tomcat also has 'maxThreads' which defaults to 200 - this is per "connector", not sure how this is different than maxConnections.
// Reading doc, perhaps 'maxConnections' is across all listeners (connectors)? So maybe we go close to the maxThreads config which is 200.
// Without a config, we may be vulnerable to a DOS attack, so if we did want to do something, I would suggest using a blocking queue
// for the clients collection and cause it to block when we reach capacity.
@robotdan (Member Author)

Tomcat has a config for maxThreads which defaults to 200.

Since we are using virtual threads, in theory we don't need this. But we may want to consider how many virtual threads we have running at any given time, or how many requests per worker thread we allow.

For example, some HTTP servers just kill off workers after n requests or a certain amount of time. This would effectively be a maximum lifetime for a virtual thread regardless of the keep-alive state. This can be useful to prevent DOS attacks where sockets are just kept open forever.

This would be super easy to implement if we wanted to do so. I am already keeping track of the create instant and the number of requests handled by each HTTP worker.
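
If we did want that, a sketch of what a capped worker loop could look like, using the create instant and request count already tracked (all names and limits below are hypothetical, not existing java-http configuration):

import java.time.Duration;
import java.time.Instant;

// Hypothetical keep-alive loop. The limits and method names are illustrative only.
public class WorkerLifetimeSketch {
  private static final int MAX_REQUESTS_PER_WORKER = 1_000;
  private static final Duration MAX_WORKER_LIFETIME = Duration.ofMinutes(5);

  private final Instant createInstant = Instant.now();
  private int handledRequests;

  public void run() {
    while (true) {
      handleOneRequest();
      handledRequests++;

      boolean tooOld = Duration.between(createInstant, Instant.now()).compareTo(MAX_WORKER_LIFETIME) > 0;
      if (handledRequests >= MAX_REQUESTS_PER_WORKER || tooOld) {
        break; // close the connection even if the client asked for keep-alive
      }
    }
  }

  private void handleOneRequest() {
    // read the request, invoke the handler, write the response (omitted)
  }
}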

*/
public static boolean isURICharacter(byte ch) {
// TODO : Fully implement RFC 3986 for accurate parsing
// https://www.rfc-editor.org/rfc/rfc3986#section-1.3
@robotdan (Member Author)

No change here, just added some doc.
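
For anyone following the TODO, a sketch of the RFC 3986 unreserved and sub-delims character classes (this is not the java-http implementation, just the classes the RFC defines):

// RFC 3986 section 2: unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
//                     sub-delims = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="
public static boolean isUnreservedOrSubDelim(byte ch) {
  boolean unreserved = (ch >= 'a' && ch <= 'z') || (ch >= 'A' && ch <= 'Z') || (ch >= '0' && ch <= '9')
      || ch == '-' || ch == '.' || ch == '_' || ch == '~';
  boolean subDelim = ch == '!' || ch == '$' || ch == '&' || ch == '\'' || ch == '(' || ch == ')'
      || ch == '*' || ch == '+' || ch == ',' || ch == ';' || ch == '=';
  return unreserved || subDelim;
}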

* @param state the current parser state
* @return a throwable exception
*/
public static ParseException makeParseException(byte b, Enum<? extends Enum<?>> state) {
@robotdan (Member Author)

This is just to allow for a consistent ParseException thrown from various places.
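
A sketch of what such a factory could look like (the message format and the ParseException(String) constructor are assumptions, not the actual code):

public static ParseException makeParseException(byte b, Enum<? extends Enum<?>> state) {
  // One message format used everywhere the parser rejects a byte
  return new ParseException("Invalid character [" + (char) b + "] (0x" + Integer.toHexString(b & 0xFF)
      + ") while in parser state [" + state + "]");
}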

}

@Override
public void acceptedRequests() {
Contributor

I think for consistency this should be named acceptedRequest().

@robotdan (Member Author)

Yes, good point. Thank you.

res.setContentLength(response.length);
res.setContentType("text/plain");
try {
res.getOutputStream().write(response);
Contributor

Most of the other handlers are calling flush(), but not this one. I'm assuming it's called downstream, so it's likely redundant.

@robotdan (Member Author)

Good point, I can take a closer look. We always call close() on the HTTPOutputStream in the HTTPWorker after we invoke the handler, so I suppose I could remove flush() in the other locations.

*/
@Override
public HTTPServerConfiguration withMaxPendingSocketConnections(int maxPendingSocketConnections) {
this.maxPendingSocketConnections = maxPendingSocketConnections;
@robotdan (Member Author)

Daniel note: Need to validate this is not less than some number. The server socket default is 50 - this could be a good minimum.
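
A sketch of the validation being considered (not in this PR; the floor of 50 mirrors the JDK's default backlog):

@Override
public HTTPServerConfiguration withMaxPendingSocketConnections(int maxPendingSocketConnections) {
  if (maxPendingSocketConnections < 50) {
    // 50 is the ServerSocket default backlog, so anything lower would be a regression
    throw new IllegalArgumentException("maxPendingSocketConnections must be at least 50");
  }
  this.maxPendingSocketConnections = maxPendingSocketConnections;
  return this;
}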

* @return This.
*/
// TODO : Daniel : Review : It would be cool to offer some additional methods such as withOptionalExpectValidator?
// Or would there be another way that we could allow the caller to designate null as don't set?
@robotdan (Member Author)

I initially was not going to allow this to be set to null. I am still thinking that is maybe the better option.

This would mean we force you to be explicit and provide an expect handler. If it is null, then there is a "default" behavior in the code instead of it being prescribed.

The reason I took it out initially was we had a test that just passed in null and expected it to set this to the default behavior - hence the comment about the withOptional.

Leaning towards requiring a non-null value here.
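
If we do require a non-null value, a sketch of what that could look like (the method and field names here are assumptions based on the conversation, not the actual signature):

@Override
public HTTPServerConfiguration withExpectValidator(ExpectValidator expectValidator) {
  // Force callers to be explicit rather than passing null to get a hidden default
  this.expectValidator = Objects.requireNonNull(expectValidator, "You cannot set the expect validator to null");
  return this;
}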

@robotdan robotdan merged commit 8715655 into main Jun 7, 2025
@robotdan robotdan deleted the degroff/virtual_threads_update branch June 7, 2025 00:57