Fix maximum seek limit RecyclerBytesStreamOutput #95133
Conversation
We must only seek to max_capacity - 1 at most; that is the highest allowed index. Seeking to max_capacity itself created a situation where the current capacity became negative, because pages * page_size exceeded Integer.MAX_VALUE.
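A minimal, self-contained sketch (not code from this PR) of how that overflow comes about; the page size and the capacity arithmetic below are assumptions standing in for the real bookkeeping in `RecyclerBytesStreamOutput`:

```java
// Illustration only: field names are hypothetical stand-ins for the
// int-based capacity bookkeeping in RecyclerBytesStreamOutput.
public class SeekOverflowSketch {
    public static void main(String[] args) {
        final int pageSize = 16 * 1024;                               // assumed page size
        final long maxCapacity = (Integer.MAX_VALUE / pageSize) * (long) pageSize;

        // Seeking to maxCapacity - 1 is fine: the last page ends exactly at
        // maxCapacity, and pages * pageSize stays within Integer.MAX_VALUE.
        long ok = maxCapacity - 1;
        int pagesForOk = (int) (ok / pageSize) + 1;
        System.out.println((long) pagesForOk * pageSize);             // 2147467264, still fits in an int

        // Seeking to maxCapacity itself needs one page more, and
        // pages * pageSize no longer fits in an int.
        long tooFar = maxCapacity;
        int pagesForTooFar = (int) (tooFar / pageSize) + 1;
        int currentCapacity = pagesForTooFar * pageSize;              // overflows
        System.out.println(currentCapacity);                          // prints -2147483648
    }
}
```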
Hi @original-brownbear, I've created a changelog YAML for you.
Pinging @elastic/es-distributed (Team:Distributed)
Hi @original-brownbear, this fixes #95096? Maybe you can edit the PR/commit description for that. And would it be worth adding the relevant backport labels?
@kingherc no but I just linked the relevant fix in that issue and closed it.
Where is that happening? I only see [...], which is appropriately marked as long.
This seems a little odd to me. We certainly can't write anything after seeking to position [...]
@DaveCTurner you're right, we need to special case that seek ... Annoying, let me try :)
I think it's ok for [...]
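For context, a hedged illustration (not code from this PR) of the two ways a seek to an exact page boundary can be represented; staying at "offset pageSize of the previous page" is what makes a seek to the very last allowed position representable without touching a page beyond the limit:

```java
// Illustration only: representing a seek to an exact page boundary.
// "Start of page N" needs page N to exist; "end of page N - 1" does not,
// which matters when the boundary is the maximum capacity and page N can
// never be allocated without overflowing the int capacity bookkeeping.
static void pageBoundarySeek(long position, int pageSize) {
    if (position == 0 || position % pageSize != 0) {
        throw new IllegalArgumentException("expects a non-zero exact page boundary");
    }
    int startOfNextPageIndex = (int) (position / pageSize);   // page N, offset 0
    int endOfPreviousPageIndex = startOfNextPageIndex - 1;    // page N - 1, offset pageSize
    System.out.println("page " + startOfNextPageIndex + " @ 0   vs   page "
        + endOfPreviousPageIndex + " @ " + pageSize);
}
```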
yea that's pretty much what it comes down to, except that we need to make sure to still do it if we already have capacity available from a previous seek (that was the other bug).
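A self-contained sketch of the point being made here, not the PR's code: the end-of-page adjustment must also run when no new page gets allocated, for example because an earlier seek already grew the buffer past the current position.

```java
// Sketch only: capacity is already available, yet the bookkeeping still has
// to step onto the next page before the caller writes its bytes.
public class EndOfPageAdjustmentSketch {
    static final int PAGE_SIZE = 16;            // tiny page size to keep the numbers small
    static int allocatedPages = 4;              // a previous seek already allocated these
    static int pageIndex = 0;
    static int currentPageOffset = PAGE_SIZE;   // sitting at the very end of page 0

    static void ensureCapacity(int bytesNeeded) {
        long needed = (long) pageIndex * PAGE_SIZE + currentPageOffset + bytesNeeded;
        while ((long) allocatedPages * PAGE_SIZE < needed) {
            allocatedPages++;                   // grow only when required
        }
        // This must not be tied to the allocation loop above: no page was
        // allocated here, but we still have to advance to the next page.
        if (currentPageOffset == PAGE_SIZE) {
            pageIndex++;
            currentPageOffset = 0;
        }
    }

    public static void main(String[] args) {
        ensureCapacity(Integer.BYTES);          // no allocation happens, 4 pages already cover it
        System.out.println("pageIndex=" + pageIndex + ", offset=" + currentPageOffset); // 1, 0
    }
}
```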
How about this instead? (as a patch on f414d31)

```diff
diff --git a/server/src/main/java/org/elasticsearch/common/io/stream/RecyclerBytesStreamOutput.java b/server/src/main/java/org/elasticsearch/common/io/stream/RecyclerBytesStreamOutput.java
index b6a5cbe897c..f1986185720 100644
--- a/server/src/main/java/org/elasticsearch/common/io/stream/RecyclerBytesStreamOutput.java
+++ b/server/src/main/java/org/elasticsearch/common/io/stream/RecyclerBytesStreamOutput.java
@@ -101,23 +101,25 @@ public class RecyclerBytesStreamOutput extends BytesStream implements Releasable
 
     @Override
     public void writeInt(int i) throws IOException {
-        if (4 > (pageSize - currentPageOffset)) {
+        ensureCapacity(Integer.BYTES);
+        if (Integer.BYTES > (pageSize - currentPageOffset)) {
             super.writeInt(i);
         } else {
             BytesRef currentPage = pages.get(pageIndex).v();
             VH_BE_INT.set(currentPage.bytes, currentPage.offset + currentPageOffset, i);
-            currentPageOffset += 4;
+            currentPageOffset += Integer.BYTES;
         }
     }
 
     @Override
     public void writeLong(long i) throws IOException {
-        if (8 > (pageSize - currentPageOffset)) {
+        ensureCapacity(Long.BYTES);
+        if (Long.BYTES > (pageSize - currentPageOffset)) {
             super.writeLong(i);
         } else {
             BytesRef currentPage = pages.get(pageIndex).v();
             VH_BE_LONG.set(currentPage.bytes, currentPage.offset + currentPageOffset, i);
-            currentPageOffset += 8;
+            currentPageOffset += Long.BYTES;
         }
     }
 
@@ -151,15 +153,8 @@ public class RecyclerBytesStreamOutput extends BytesStream implements Releasable
 
     public void seek(long position) {
         ensureCapacityFromPosition(position);
-        int offsetInPage = (int) (position % pageSize);
-        int pageIndex = (int) position / pageSize;
-        if (offsetInPage == 0) {
-            this.pageIndex = pageIndex - 1;
-            this.currentPageOffset = pageSize;
-        } else {
-            this.pageIndex = pageIndex;
-            this.currentPageOffset = offsetInPage;
-        }
+        this.pageIndex = (int) position / pageSize;
+        this.currentPageOffset = (int) position % pageSize;
     }
 
     public void skip(int length) {
@@ -221,6 +216,12 @@ public class RecyclerBytesStreamOutput extends BytesStream implements Releasable
 
     private void ensureCapacity(int bytesNeeded) {
         if (bytesNeeded > pageSize - currentPageOffset) {
            ensureCapacityFromPosition(position() + bytesNeeded);
+
+            // We are at the end of the current page, increment page index
+            if (currentPageOffset == pageSize) {
+                pageIndex++;
+                currentPageOffset = 0;
+            }
         }
     }
 
@@ -236,10 +237,5 @@ public class RecyclerBytesStreamOutput extends BytesStream implements Releasable
             pages.add(newPage);
             currentCapacity += pageSize;
         }
-        // We are at the end of the current page, increment page index
-        if (currentPageOffset == pageSize) {
-            pageIndex++;
-            currentPageOffset = 0;
-        }
     }
 }
```
oh wait that has the original bug again :/
That goes away if we add the capacity check to the common fast path I think, but I'd rather not do that.
I see, ok that makes sense then; could you add a comment about that performance consideration here?
Comment in the primitive write methods? :)
IMO a comment makes more sense in [...]
++ sure thing, added a comment that I hope helps in ea0cf27
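A hedged sketch of the fast path being discussed, simplified from the patch above; the committed code and the comment added in ea0cf27 may differ in wording and detail:

```java
// Sketch only: the shape of the primitive-write fast path. The capacity
// check is deliberately kept off this path; if the value does not fit in
// the current page we fall back to the slower super implementation, which
// handles page boundaries and growth byte by byte.
@Override
public void writeInt(int i) throws IOException {
    if (Integer.BYTES > pageSize - currentPageOffset) {
        super.writeInt(i);                                    // slow path
    } else {
        BytesRef currentPage = pages.get(pageIndex).v();
        VH_BE_INT.set(currentPage.bytes, currentPage.offset + currentPageOffset, i);
        currentPageOffset += Integer.BYTES;                   // fast path: no capacity check here
    }
}
```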
DaveCTurner left a comment
LGTM thanks Armin
Jenkins run elasticsearch-ci/part-3
np + thanks David!