This results in a file called `ca.crt` containing a PEM-encoded x509 CA
certificate.
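
How you feed this certificate to your client application depends on the driver
you use. Below is a minimal sketch with the Go driver; the endpoint, password,
and file path are placeholders, and the `my-deployment` Service name is an
assumption for your own deployment.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"

	driver "github.com/arangodb/go-driver"
	"github.com/arangodb/go-driver/http"
)

func main() {
	// Load the CA certificate extracted from the Kubernetes secret.
	caPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse ca.crt")
	}

	// Create an HTTP connection that trusts the CA of the deployment.
	// The endpoint is a placeholder; use the Service of your deployment.
	conn, err := http.NewConnection(http.ConnectionConfig{
		Endpoints: []string{"https://my-deployment.default.svc:8529"},
		TLSConfig: &tls.Config{RootCAs: roots},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Authenticate with placeholder credentials.
	client, err := driver.NewClient(driver.ClientConfig{
		Connection:     conn,
		Authentication: driver.BasicAuthentication("root", "my-password"),
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = client // use the client for requests
}
```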

## Query requests

For most client requests made by a driver, it does not matter if there is any
kind of load-balancer between your client application and the ArangoDB
deployment.

{% hint 'info' %}
Note that even a simple `Service` of type `ClusterIP` already behaves as a
load-balancer.
{% endhint %}

The exception to this is cursor-related requests made to an ArangoDB `Cluster`
deployment. The coordinator that handles the initial query request (the one
that results in a `Cursor`) keeps some in-memory state if the result of the
query is too big to be transferred back in the response to that initial
request.

Follow-up requests have to be made to fetch the remaining data. These follow-up
requests must be handled by the same coordinator to which the initial request
was made. As soon as there is a load-balancer between your client application
and the ArangoDB cluster, it is uncertain which coordinator will receive a
follow-up request.
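
To make the cursor mechanics concrete, here is a rough sketch with the Go
driver, reusing the `client` from the sketch above; the `examples` database and
`docs` collection are hypothetical. Every batch after the first is fetched with
one of these follow-up cursor requests.

```go
ctx := context.Background()

// Placeholder database; assumes it already exists.
db, err := client.Database(ctx, "examples")
if err != nil {
	log.Fatal(err)
}

// The initial request creates a server-side cursor on one coordinator when
// the result does not fit into a single batch.
cursor, err := db.Query(ctx, "FOR d IN docs RETURN d", nil)
if err != nil {
	log.Fatal(err)
}
defer cursor.Close()

for {
	var doc map[string]interface{}
	_, err := cursor.ReadDocument(ctx, &doc)
	if driver.IsNoMoreDocuments(err) {
		break // all batches consumed
	} else if err != nil {
		log.Fatal(err)
	}
	// Whenever the current batch is exhausted, ReadDocument issues a
	// follow-up request that must reach the coordinator holding the cursor.
}
```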

ArangoDB will transparently forward any mismatched requests to the correct
coordinator, so the requests can be answered correctly without any additional
configuration. However, this incurs a small latency penalty due to the extra
request across the internal network.

To prevent this uncertainty client-side, make sure to run your client
application in the same Kubernetes cluster and synchronize your endpoints
before making the initial query request. The driver will then use the internal
DNS names of all coordinators, so a follow-up request can be sent to exactly
the same coordinator.
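
With the Go driver, this endpoint synchronization can be done with
`SynchronizeEndpoints`, roughly as sketched below (again reusing `client` and
`db` from the earlier sketches):

```go
ctx := context.Background()

// Fetch the internal endpoints of all coordinators and make the driver use
// them directly. This assumes the client runs inside the same Kubernetes
// cluster, so those DNS names are resolvable and routable.
if err := client.SynchronizeEndpoints(ctx); err != nil {
	log.Fatal(err)
}

// Queries issued from now on go to a specific coordinator, and cursor
// follow-up requests are sent to exactly that coordinator.
cursor, err := db.Query(ctx, "FOR d IN docs RETURN d", nil)
if err != nil {
	log.Fatal(err)
}
defer cursor.Close()
```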

If your client application is running outside the Kubernetes cluster, the
easiest way to work around this is to make sure that the query results are
small enough to be returned in a single response (see the sketch after this
paragraph). When that is not feasible, you can also resolve this by exposing
the internal DNS names of your Kubernetes cluster to your client application,
so that the resulting coordinator IP addresses are routable from your client
application. To expose the internal DNS names of your Kubernetes cluster, you
can use [CoreDNS](https://coredns.io).
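
With the Go driver, one way to keep a result within a single response is to
raise the cursor batch size above the expected result size; the numbers below
are placeholders and this is only a sketch, reusing `db` from above:

```go
// With a batch size of at least the number of result documents, the whole
// result should come back in the initial response, so no follow-up cursor
// requests are needed.
ctx := driver.WithQueryBatchSize(context.Background(), 1000)

cursor, err := db.Query(ctx, "FOR d IN docs LIMIT 1000 RETURN d", nil)
if err != nil {
	log.Fatal(err)
}
defer cursor.Close()
```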