
Conversation

@andreiborza (Member) commented on Nov 14, 2025:

Looks like we swallowed the metric that triggers a flush when `MAX_METRIC_BUFFER_SIZE` is surpassed.

Test demonstrating the issue: f0737fa
Fix: 1a4e02a

Related logs PR: #18207

Comment on lines +259 to +262 of `packages/core/test/lib/metrics/internal.test.ts`:

```ts
// After flushing the 1000 metrics, the new metric starts a fresh buffer with 1 item
const buffer = _INTERNAL_getMetricBuffer(client);
expect(buffer).toHaveLength(1);
expect(buffer?.[0]?.name).toBe('trigger.flush');
```

Bug: A metric that triggers a buffer flush is lost because the flush operation clears the buffer after the new metric has been added but before it's processed.
Severity: CRITICAL | Confidence: 1.00

🔍 Detailed Analysis

The `_INTERNAL_captureSerializedMetric` function incorrectly handles metrics when the buffer reaches `MAX_METRIC_BUFFER_SIZE`. When a new metric arrives, it is added to the buffer via `bufferMap.set(client, [...metricBuffer, serializedMetric])`. However, the subsequent flush condition check `if (metricBuffer.length >= MAX_METRIC_BUFFER_SIZE)` and the flush operation `_INTERNAL_flushMetricsBuffer(client, metricBuffer)` use the *old* `metricBuffer` reference (before the new metric was added). As a result, the newly added metric is present in the map when `_INTERNAL_flushMetricsBuffer` is called, but is then lost when `_getBufferMap().set(client, [])` clears the entire buffer, including the new metric. This results in data loss for metrics that trigger a buffer flush.
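For illustration, here is a minimal sketch of the problematic flow described above. The function and constant names come from the comment; the types (`Client`, `SerializedMetric`), the buffer size, and the exact control flow are assumptions for the sketch, not the actual implementation:

```ts
// Sketch only: an assumed shape of the buggy flow, based on the review comment above.
type Client = object;
type SerializedMetric = { name: string };

const MAX_METRIC_BUFFER_SIZE = 1000; // assumed value, per the test comment ("the 1000 metrics")
const bufferMap = new Map<Client, SerializedMetric[]>();
const _getBufferMap = (): Map<Client, SerializedMetric[]> => bufferMap;

// Stand-in for the real flush, which would send the buffered metrics and clear the buffer.
function _INTERNAL_flushMetricsBuffer(client: Client, buffer: SerializedMetric[]): void {
  console.log(`flushing ${buffer.length} metrics`);
  _getBufferMap().set(client, []); // clears the map, including any metric added after `buffer` was read
}

function captureSerializedMetricBuggy(client: Client, serializedMetric: SerializedMetric): void {
  const metricBuffer = _getBufferMap().get(client) ?? [];

  // The new metric is written into the map...
  _getBufferMap().set(client, [...metricBuffer, serializedMetric]);

  // ...but the flush check and the flush call use the OLD `metricBuffer` reference,
  // so the flush clears the map and the just-added metric is lost.
  if (metricBuffer.length >= MAX_METRIC_BUFFER_SIZE) {
    _INTERNAL_flushMetricsBuffer(client, metricBuffer);
  }
}
```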

💡 Suggested Fix

Modify _INTERNAL_captureSerializedMetric to ensure that the _INTERNAL_flushMetricsBuffer function is called with the updated buffer, or that the new metric is handled correctly after the old buffer is flushed and cleared, perhaps by re-adding it to a newly cleared buffer.
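As a rough illustration of the second option (and consistent with the test assertions quoted above, where the triggering metric ends up alone in a fresh buffer), here is a hedged sketch reusing the stand-in helpers from the previous snippet; the actual fix in 1a4e02a may be structured differently:

```ts
// Sketch only: one way to avoid dropping the metric that triggers the flush.
function captureSerializedMetricFixed(client: Client, serializedMetric: SerializedMetric): void {
  const metricBuffer = _getBufferMap().get(client) ?? [];

  if (metricBuffer.length >= MAX_METRIC_BUFFER_SIZE) {
    // Flush the previously buffered metrics, then start a fresh buffer that
    // contains only the metric that triggered the flush.
    _INTERNAL_flushMetricsBuffer(client, metricBuffer);
    _getBufferMap().set(client, [serializedMetric]);
  } else {
    _getBufferMap().set(client, [...metricBuffer, serializedMetric]);
  }
}
```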


@andreiborza (Member, Author) replied:

That's what we are fixing with 1a4e02a

@github-actions (Contributor) commented:

size-limit report 📦

| Path | Size | % Change | Change |
| --- | --- | --- | --- |
| @sentry/browser | 24.6 kB | - | - |
| @sentry/browser - with treeshaking flags | 23.09 kB | - | - |
| @sentry/browser (incl. Tracing) | 41.26 kB | - | - |
| @sentry/browser (incl. Tracing, Profiling) | 45.53 kB | - | - |
| @sentry/browser (incl. Tracing, Replay) | 79.73 kB | - | - |
| @sentry/browser (incl. Tracing, Replay) - with treeshaking flags | 69.4 kB | - | - |
| @sentry/browser (incl. Tracing, Replay with Canvas) | 84.42 kB | - | - |
| @sentry/browser (incl. Tracing, Replay, Feedback) | 96.58 kB | - | - |
| @sentry/browser (incl. Feedback) | 41.27 kB | - | - |
| @sentry/browser (incl. sendFeedback) | 29.27 kB | - | - |
| @sentry/browser (incl. FeedbackAsync) | 34.2 kB | - | - |
| @sentry/react | 26.29 kB | - | - |
| @sentry/react (incl. Tracing) | 43.22 kB | - | - |
| @sentry/vue | 29.09 kB | - | - |
| @sentry/vue (incl. Tracing) | 43.03 kB | - | - |
| @sentry/svelte | 24.61 kB | - | - |
| CDN Bundle | 26.91 kB | +0.02% | +3 B 🔺 |
| CDN Bundle (incl. Tracing) | 41.81 kB | +0.01% | +4 B 🔺 |
| CDN Bundle (incl. Tracing, Replay) | 78.33 kB | +0.01% | +2 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Feedback) | 83.81 kB | +0.01% | +3 B 🔺 |
| CDN Bundle - uncompressed | 78.85 kB | +0.02% | +12 B 🔺 |
| CDN Bundle (incl. Tracing) - uncompressed | 124.01 kB | +0.01% | +12 B 🔺 |
| CDN Bundle (incl. Tracing, Replay) - uncompressed | 240.04 kB | +0.01% | +12 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed | 252.8 kB | +0.01% | +12 B 🔺 |
| @sentry/nextjs (client) | 45.34 kB | - | - |
| @sentry/sveltekit (client) | 41.64 kB | - | - |
| @sentry/node-core | 50.86 kB | - | - |
| @sentry/node | 158.08 kB | - | - |
| @sentry/node - without tracing | 92.73 kB | - | - |
| @sentry/aws-serverless | 106.5 kB | - | - |

View base workflow run

@github-actions (Contributor) commented:

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

| Scenario | Requests/s | % of Baseline | Prev. Requests/s | Change % |
| --- | --- | --- | --- | --- |
| GET Baseline | 8,701 | - | 11,316 | -23% |
| GET With Sentry | 1,357 | 16% | 1,654 | -18% |
| GET With Sentry (error only) | 6,253 | 72% | 7,684 | -19% |
| POST Baseline | 1,213 | - | 1,188 | +2% |
| POST With Sentry | 517 | 43% | 573 | -10% |
| POST With Sentry (error only) | 1,079 | 89% | 1,040 | +4% |
| MYSQL Baseline | 3,369 | - | 4,059 | -17% |
| MYSQL With Sentry | 466 | 14% | 562 | -17% |
| MYSQL With Sentry (error only) | 2,762 | 82% | 3,308 | -17% |

View base workflow run

andreiborza added a commit that referenced this pull request Nov 17, 2025
…18207): Looks like we swallowed the log that triggers a flush when `MAX_LOG_BUFFER_SIZE` is surpassed. Test demonstrating issue: 5697b7d. Fix: f7a4d8b. Related metrics PR: #18212. v9 backport: #18213
@andreiborza andreiborza requested a review from chargome November 17, 2025 08:36
@andreiborza andreiborza merged commit e3ef3f2 into develop Nov 17, 2025
380 of 383 checks passed
@andreiborza andreiborza deleted the ab/fix-metrics-swallowing branch November 17, 2025 08:50
