# Resolve latency issues
This page shows you how to resolve latency issues with Firestore.
## Latency
The following table describes possible causes of increased latency:
| Latency cause | Types of operations affected | Resolution | 
|---|---|---|
| Sustained traffic exceeding the 500-50-5 rule. | read, write | For rapid traffic increases, Firestore attempts to automatically scale to meet the increased demand. When Firestore scales, latency begins to decrease. Hotspots (high read, write, and delete rates to a narrow document range) limit Firestore's ability to scale. Review designing for scale, identify hotspots in your application, and prefer scattered document IDs (see the document-ID sketch after this table). |
| Contention, either from updating a single document too frequently or from transactions. | read, write | Reduce the write rate to individual documents, for example by sharding a frequently updated document (see the distributed-counter sketch after this table). Review data contention in transactions and how you use transactions. |
| Slow merge-join queries. | read | Queries with multiple equality (`==`) filters that are not backed by composite indexes can result in slow merge-join queries. To improve performance, add composite indexes for these queries (see the composite-index sketch after this table), and see Reason #3 in Why is my Firestore query slow? |
| Large reads that return many documents. | read | Use pagination to split large reads (see the pagination sketch after this table). |
| Too many recent deletes. | read (this especially affects operations that list collections in a database) | If latency is caused by too many recent deletes, the issue usually resolves automatically after some time. If the issue does not resolve, contact support. |
| Adding and removing listeners too quickly. | realtime listener queries | See the best practices for realtime updates and the listener sketch after this table. |
| Listening to large documents or to a query with many results. | realtime listener queries | See the best practices for realtime updates. |
| Index fanout, especially for array fields and map fields. | write | Review your usage of array and map fields. For map fields, you can exempt subfields from indexing. You can also use collection-level exemptions. |
| Large writes and batched writes. | write | Try reducing the number of writes in each batched write. Batched writes are atomic, and many writes in a single batch can increase latency and contention; for example, a batch of 10 writes performs better than a batch of 500 writes. For bulk data entry where you don't require atomicity, use a server client library with parallelized individual writes. Batched writes perform better than serialized writes but not better than parallel writes (see the batching sketch after this table). |
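The sketches below illustrate several of the resolutions above using the Python server client library (`google-cloud-firestore`); collection, document, and field names such as `readings` and `sensor` are hypothetical. First, avoiding a hotspot: if an application writes to monotonically increasing document IDs, sustained traffic lands on a narrow key range, so one fix is to let the client library generate scattered, auto-assigned IDs.

```python
from google.cloud import firestore

db = firestore.Client()

# Anti-pattern: monotonically increasing IDs (reading-1, reading-2, ...)
# concentrate sustained writes on a narrow key range, creating a hotspot:
# db.collection("readings").document(f"reading-{sequence_number}").set(data)

# Preferred: an auto-assigned document ID is scattered across the keyspace,
# which lets Firestore split the load as it scales.
doc_ref = db.collection("readings").document()  # auto-generated ID
doc_ref.set({"sensor": "a-17", "value": 23.4})
```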
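For write contention on a single hot document, one common mitigation is a distributed counter: spread increments across several shard documents and sum the shards on read. A sketch, assuming a hypothetical `counters/page-views` document and a shard count sized for your write rate:

```python
import random

from google.cloud import firestore

db = firestore.Client()
NUM_SHARDS = 10  # assumption: size this to your expected write rate

def increment_counter(counter_ref):
    """Increment a random shard instead of one hot document."""
    shard_id = str(random.randrange(NUM_SHARDS))
    shard_ref = counter_ref.collection("shards").document(shard_id)
    shard_ref.set({"count": firestore.Increment(1)}, merge=True)

def read_counter(counter_ref):
    """Sum all shards to recover the total count."""
    shards = counter_ref.collection("shards").stream()
    return sum(doc.to_dict().get("count", 0) for doc in shards)

counter = db.collection("counters").document("page-views")
increment_counter(counter)
print(read_counter(counter))
```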
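To see where a merge join comes from, consider a query with two equality filters, sketched below against a hypothetical `orders` collection. With only single-field indexes, Firestore merges the matches for each filter, which is slow when each filter alone matches many documents; a composite index over both fields lets the query be served directly.

```python
from google.cloud import firestore
from google.cloud.firestore_v1.base_query import FieldFilter

db = firestore.Client()

# Two equality filters. Without a composite index on
# (status ASC, region ASC), Firestore answers this by merging the two
# single-field indexes, which can be slow for broad filters.
query = (
    db.collection("orders")
    .where(filter=FieldFilter("status", "==", "pending"))
    .where(filter=FieldFilter("region", "==", "us-east1"))
)

for doc in query.stream():
    print(doc.id, doc.to_dict())
```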
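For large reads, page through the results with a query cursor instead of fetching everything at once. A sketch over the same hypothetical `orders` collection, assuming a `created_at` field to order on and a hypothetical `handle()` function for per-document processing:

```python
from google.cloud import firestore

db = firestore.Client()
PAGE_SIZE = 100  # assumption: pick a page size that fits your latency budget

query = db.collection("orders").order_by("created_at").limit(PAGE_SIZE)
last_doc = None

while True:
    page = query.start_after(last_doc) if last_doc else query
    docs = list(page.stream())
    if not docs:
        break
    for doc in docs:
        handle(doc)  # hypothetical per-document processing
    last_doc = docs[-1]  # cursor for the next page
```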
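For realtime listeners, attach a listener once and keep it for as long as the data is needed, detaching deliberately rather than churning subscriptions. A minimal sketch with the Python client's `on_snapshot`, assuming a hypothetical `rooms/lobby` document:

```python
from google.cloud import firestore

db = firestore.Client()

def on_change(doc_snapshots, changes, read_time):
    """Called on the initial snapshot and on every subsequent change."""
    for doc in doc_snapshots:
        print(f"{doc.id} -> {doc.to_dict()}")

# Attach one long-lived listener for the life of the view, rather than
# adding and removing listeners on every user interaction.
watch = db.collection("rooms").document("lobby").on_snapshot(on_change)

# ... later, when the data is no longer displayed:
watch.unsubscribe()
```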
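Finally, a sketch of the batching advice with hypothetical `events` payloads: commit small atomic batches instead of one large one, and for non-atomic bulk loads let the client library's BulkWriter parallelize individual writes.

```python
from google.cloud import firestore

db = firestore.Client()
BATCH_SIZE = 10  # smaller batches reduce latency and contention
items = [{"event": f"e{i}"} for i in range(1000)]  # hypothetical payload

# Atomic path: several small batches instead of one 500-write batch.
for start in range(0, len(items), BATCH_SIZE):
    batch = db.batch()
    for item in items[start:start + BATCH_SIZE]:
        batch.set(db.collection("events").document(), item)
    batch.commit()

# Non-atomic bulk load: BulkWriter sends individual writes in parallel,
# which outperforms both serialized and batched writes.
bulk = db.bulk_writer()
for item in items:
    bulk.create(db.collection("events").document(), item)
bulk.close()  # flush pending writes and wait for completion
```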