This repository was archived by the owner on Oct 9, 2023. It is now read-only.

Commit 1255806

When streaming, assume network latency >=1ms (#210)
## What is the goal of this PR?

When streaming (e.g. `match` queries), we now assume the network latency is >=1ms, and have the server compensate accordingly. This sharply improves performance (up to 3x) when fetching answers from a server hosted at `localhost`.

## What are the changes implemented in this PR?

When streaming from `localhost` with a per-answer performance profiler attached, the answers are revealed to arrive in "batches" of size 50: the first 50 answers arrive in, say, 0.005s, then there is a 0.005s gap, then another 50 answers arrive, then another 0.005s gap, and so on. This indicated a bug in the implementation of prefetch size - and sure enough, we tracked it down. It manifests when connecting to `localhost`, and occurs due to the following logical flaw.

Answers are streamed from the server in batches of size N (where N = `prefetch_size`, default 50), to prevent the server doing unnecessary work in case the client does not end up consuming all answers. Once the client has seen N answers, it sends a "CONTINUE" request asking the server to resume streaming. However, while the Nth answer is in flight, and while the server is waiting to receive the CONTINUE request, streaming should actually continue; if it doesn't, we end up with "wasted time" in which the server waits and sends nothing. Thus, the server must predict, to the best of its ability, when the client will send the next CONTINUE. This is typically one _network latency_ after the Nth answer is sent.

However, when connecting to `localhost`, the measured network latency is 0, yet it is physically impossible for the client to respond to the server at the exact moment the server sends the Nth answer. `localhost` is an edge case that was previously unhandled. To mitigate the problem, we now coerce the measured network latency to be at least 1ms.
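The cost of the stalls described above can be illustrated with a toy calculation (the function name and numbers are illustrative, not taken from the codebase):

```typescript
// Toy model of batched streaming: each batch of answers takes
// perBatchMillis to stream, and without latency compensation the
// server stalls for stallMillis between batches while it waits
// for the client's CONTINUE request to arrive.
function totalStreamMillis(
    totalAnswers: number,
    batchSize: number,
    perBatchMillis: number,
    stallMillis: number
): number {
    const batches = Math.ceil(totalAnswers / batchSize);
    return batches * perBatchMillis + (batches - 1) * stallMillis;
}

// With the 5ms batches and 5ms gaps observed in the profile,
// 1000 answers take nearly twice as long as with no stalls:
const withStalls = totalStreamMillis(1000, 50, 5, 5);    // 195ms
const withoutStalls = totalStreamMillis(1000, 50, 5, 0); // 100ms
```

The gap grows with the answer count, which is why eliminating the stalls yields the large speedups reported above.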
1 parent a3a0260 commit 1255806

File tree

1 file changed: +2 -2 lines changed


connection/TypeDBSessionImpl.ts

Lines changed: 2 additions & 2 deletions
```diff
@@ -59,9 +59,9 @@ export class TypeDBSessionImpl implements TypeDBSession {
         this._database = await this._client.databases.get(this._databaseName);
         const start = (new Date()).getMilliseconds();
         const res = await this._client.stub().sessionOpen(openReq);
-        const end = (new Date()).getMilliseconds(); // TODO will this work?
+        const end = (new Date()).getMilliseconds();
         this._id = res.getSessionId_asB64();
-        this._networkLatencyMillis = (end - start) - res.getServerDurationMillis();
+        this._networkLatencyMillis = Math.max((end - start) - res.getServerDurationMillis(), 1);
         this._isOpen = true;
         this._pulse = setTimeout(() => this.pulse(), 5000);
     }
```
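The changed line can be sketched in isolation (the function name is hypothetical). Note that this sketch uses plain millisecond timestamps such as those from `Date.now()` rather than `getMilliseconds()`, since `Date.prototype.getMilliseconds` returns only the 0-999ms component of a timestamp and wraps at each second boundary - possibly what the removed TODO was asking about:

```typescript
// Hypothetical helper mirroring the fix: estimate network latency as
// round-trip time minus server processing time, clamped to >= 1ms so
// that localhost (measured ~0ms) still leaves the server a window to
// keep streaming while it awaits the client's CONTINUE.
function computeNetworkLatencyMillis(
    startMillis: number,         // e.g. Date.now() before sessionOpen
    endMillis: number,           // e.g. Date.now() when the response arrives
    serverDurationMillis: number // time the server reports it spent
): number {
    return Math.max((endMillis - startMillis) - serverDurationMillis, 1);
}
```

For a remote server the clamp is a no-op (e.g. a 10ms round trip with 4ms of server time yields 6ms), while on `localhost` a 3ms round trip with 3ms of server time is coerced from 0ms up to 1ms.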
