Feature pipelines #236

Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #236      +/-   ##
==========================================
- Coverage   44.83%   44.56%   -0.28%
==========================================
  Files         360      359       -1
  Lines        8261     8316      +55
  Branches     1022     1045      +23
==========================================
+ Hits         3704     3706       +2
- Misses       4258     4305      +47
- Partials      299      305       +6
```

Continue to review full report at Codecov.
NTaylorMullen left a comment
These are some super exciting changes @david-driscoll!
```csharp
}

public void Start()
public void Send(object value)
```
@TanayParikh this OutputHandler class is essentially what our AutoFlushingStreamWriter tries to work around. Changes here are a lot like what we'd have to do on our side until we consume these changes 😄
Directory.Build.targets Outdated
```xml
<PackageReference Update="MediatR" Version="8.0.1"/>
<PackageReference Update="MediatR.Extensions.Microsoft.DependencyInjection" Version="8.0.0"/>
<PackageReference Update="System.IO.Pipelines" Version="4.7.1"/>
<PackageReference Update="Nerdbank.Streams" Version="2.4.60"/>
```
We actually depend on this in VS to enable our LSP experience there. /cc @ToddGrun
I just found them and it was code I didn't have to 100% comprehend (I'm slowly catching on...)
src/JsonRpc/InputHandler.cs Outdated
```csharp
// some time to attach a debugger
// System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5));
// TODO: This might be simplified with SequenceReader...
// Not sure we can move to purely .netstandard 2.1
```
Please don't move to netstandard2.1, since it doesn't work on .NET Framework :(
Yeah, that's an old comment. This PR works on netframework, netcoreapp2.1 and netcoreapp3.1, and support will stay that way for the foreseeable future.

There are just things that netstandard2.1 makes easier... either way I learned a heck of a lot by not having it 😄
@NTaylorMullen hopefully it's not terrible pipelines code... this is my first time delving into them deeply.
```csharp
_inputThread = new Thread(ProcessInputStream) { IsBackground = true, Name = "ProcessInputStream" };
_scheduler = new ProcessScheduler(loggerFactory, concurrency,
    new EventLoopScheduler(_ => new Thread(_) { IsBackground = true, Name = "InputHandler" }));
_headersBuffer = new Memory<byte>(new byte[HeadersFinishedLength]);
```
@NTaylorMullen I was originally renting this from the shared memory pool, but I discovered some intermittent unit test failures where it seemed like the memory was getting shared somehow. I'm really not sure how or why, but this is a small enough amount of memory that it probably makes more sense to just hard code it here.
Yeah, all of this is super foreign to me sadly, I wish I could be of more help :(. @halter73 @davidfowl @Tratcher and @pakrym are the real pipeline experts, especially when it comes to reading headers.
It sounds like there might have been a use-after-free issue. If the memory was being returned to the pool by the CompositeDisposable instead of at the end of ProcessInputStream(), it could very well have been returning the pooled memory before ProcessInputStream was done using it. I don't think CompositeDisposable should be completing the PipeReader either, for the same reason. It would be better to complete it in a finally at the end of ProcessInputStream.

If possible, it would be best to have IInputHandler implement DisposeAsync (or, if that's too breaking, have the caller try to cast the IInputHandler to IAsyncDisposable and call DisposeAsync instead of Dispose if it's available), and have DisposeAsync wait on ProcessInputStream to complete, to ensure everything is cleaned up by the time DisposeAsync completes.
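A minimal sketch of that lifetime pattern, using hypothetical names (`_processingTask`, `ProcessInputStreamAsync`) rather than the PR's actual members:

```csharp
using System;
using System.IO.Pipelines;
using System.Threading.Tasks;

public class InputHandler : IAsyncDisposable, IDisposable
{
    private readonly PipeReader _pipeReader;
    private Task _processingTask = Task.CompletedTask;

    public InputHandler(PipeReader pipeReader) => _pipeReader = pipeReader;

    public void Start() => _processingTask = Task.Run(ProcessInputStreamAsync);

    private async Task ProcessInputStreamAsync()
    {
        try
        {
            await Task.Yield(); // placeholder for the actual read/parse loop
        }
        finally
        {
            // Complete the reader here rather than from a CompositeDisposable,
            // so nothing releases pooled memory while the loop still owns it.
            _pipeReader.Complete();
        }
    }

    // Waits for the read loop, so everything is cleaned up when this returns.
    public async ValueTask DisposeAsync() => await _processingTask;

    // Sync fallback for callers that only know about IDisposable.
    public void Dispose() => _processingTask.GetAwaiter().GetResult();
}
```

The key point is that the thing that owns the loop (not an unrelated disposable) is the only thing allowed to complete the reader.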
After reading this about 10 times, it actually makes sense. Thanks @halter73!
Here is the benchmark run:

```
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19603
Intel Core i7-7700K CPU 4.20GHz (Kaby Lake), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=3.1.300-preview-015048
  [Host]        : .NET Core 3.1.3 (CoreCLR 4.700.20.11803, CoreFX 4.700.20.12001), X64 RyuJIT
  .NET 4.7.2    : .NET Framework 4.8 (4.8.4161.0), X64 RyuJIT
  .NET Core 2.1 : .NET Core 2.1.17 (CoreCLR 4.6.28619.01, CoreFX 4.6.28619.01), X64 RyuJIT
  .NET Core 3.1 : .NET Core 3.1.3 (CoreCLR 4.700.20.11803, CoreFX 4.700.20.12001), X64 RyuJIT
```
Force-pushed from b1e0a69 to 865a41b:

* Fixed an issue where, if a cancellation exception was thrown and not caught, the scheduling thread would die (and potentially the entire process, if the exception went unhandled)
* Deal with unhandled exceptions as well, and report them down through logging
* Removed unused constants
* Added some more continued-work tests and a benchmark for comparisons
* Made improvements to unit tests for input and output; fixed a small issue related to not emptying out the shared memory for content-length parsing
* Added reference assemblies and removed the debugging buffer
Force-pushed from 0c2a521 to a54a9e8.
You know more than me to be honest. I've never done it to the extent that you do in this PR 😄

I'll take a look.

Oh no... @davidfowl is going to tear me apart now 😭

I have lots of questions about the InputHandler, it looks more complicated than it needs to be (outside of the pipelines usage). Remind me what TFMs this supports?

We support …

So I've been reading the code here and I want to suggest small changes to improve it, but I think it might be better to suggest a rewrite 😄. The …
```csharp
// don't be async: We already allocated a separate thread for this.
private void ProcessInputStream()
private bool TryParseHeaders(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> line)
```
The parsing logic in here can be simplified using SequenceReader<byte>. It's currently only in .NET Core 3.0, but Nerdbank.Streams has https://github.com/AArnott/Nerdbank.Streams/blob/master/doc/SequenceTextReader.md that you should be able to use to simplify this logic.

It's slightly allocatey though; it might make sense to copy the SequenceReader sources from .NET Core into this ns2.0 project.
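Not this PR's code, but a sketch of the shape that suggestion could take, using a hypothetical `TryReadHeaderLine` helper built on `SequenceReader<byte>`:

```csharp
using System;
using System.Buffers;

public static class HeaderParser
{
    // Slices one CRLF-terminated header line off the front of the buffer.
    public static bool TryReadHeaderLine(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> line)
    {
        ReadOnlySpan<byte> crlf = stackalloc byte[] { (byte)'\r', (byte)'\n' };
        var reader = new SequenceReader<byte>(buffer);
        if (reader.TryReadTo(out line, crlf, advancePastDelimiter: true))
        {
            // Consume the line (and delimiter) from the caller's buffer.
            buffer = buffer.Slice(reader.Position);
            return true;
        }
        return false;
    }
}
```

`TryReadTo` handles lines that span multiple segments of the sequence, which is most of the fiddly code in a hand-rolled parser.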
If you have some docs on how to do fan-in/fan-out (including concurrency control) with channels, I'd love to see them. The ProcessScheduler is there to manage back pressure; there are certain requests that are blocking (text changes) and we have to hold everything until the text requests are processed. We can easily make it take a …
Back pressure doesn't work if it isn't happening everywhere. RX can't do back pressure unless you're going to drop messages or block. Fan out is a reasonable use case for RX as long as consumers are fire and forget (there goes your back pressure). Where are you fanning in?
The spec doesn't specifically define response ordering, so we have certain handlers defined as … So we might see … So we can fan out to handle all the parallel requests, but then we have to fan back in to handle the serial ones. Something along the lines of …

Now if the editor is smart enough (which if I recall vscode is, but it's been a while) it will cancel any in-flight completions or code lens requests if text changes continue to be made, but if the editor isn't doing that we are handling requests such that our state is "correct".

Now it's possible for an editor to fire off like 100 events, and we may not have a server that can handle that kind of load (or we want to constrain resources in some way). We also have the ability to say "we only want at most 10 parallel requests in flight at a time", and the …
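For illustration only (the PR's ProcessScheduler is Rx-based), here is a BCL-only sketch of fan-out with a concurrency cap plus fan-in for serial requests; all names are hypothetical:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class RequestScheduler
{
    private readonly SemaphoreSlim _parallelGate;
    private readonly List<Task> _inFlight = new List<Task>();
    private readonly object _lock = new object();

    public RequestScheduler(int maxParallel) => _parallelGate = new SemaphoreSlim(maxParallel);

    // Fan out: at most maxParallel handlers run at once.
    public Task EnqueueParallel(Func<Task> handler)
    {
        var task = RunGated(handler);
        lock (_lock) _inFlight.Add(task);
        return task;
    }

    // Fan in: a serial request (e.g. a text change) first drains
    // everything currently in flight, then runs by itself.
    public async Task EnqueueSerial(Func<Task> handler)
    {
        Task[] pending;
        lock (_lock)
        {
            pending = _inFlight.ToArray();
            _inFlight.Clear();
        }
        await Task.WhenAll(pending);
        await handler();
    }

    private async Task RunGated(Func<Task> handler)
    {
        await _parallelGate.WaitAsync();
        try { await handler(); }
        finally { _parallelGate.Release(); }
    }
}
```

This deliberately ignores parallel requests that arrive while a serial one is running; a real scheduler has to queue those behind the serial request too.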
However, the output handler should be converted to Channels; that's a clear and easy win I think. Frankly I just forgot they existed as part of pipelines 😄
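A rough sketch of what a Channel-backed output queue could look like (illustrative only, not the PR's OutputHandler):

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public class ChannelOutputQueue
{
    // Unbounded keeps Send non-blocking; a bounded channel would
    // add back pressure on producers instead.
    private readonly Channel<object> _queue = Channel.CreateUnbounded<object>(
        new UnboundedChannelOptions { SingleReader = true });

    public void Send(object value) => _queue.Writer.TryWrite(value);

    public void Complete() => _queue.Writer.Complete();

    // A single consumer drains the queue and serializes writes to the transport.
    public async Task DrainAsync(Func<object, Task> writeToTransport)
    {
        await foreach (var item in _queue.Reader.ReadAllAsync())
        {
            await writeToTransport(item);
        }
    }
}
```

The single-reader option lets the channel optimize for the one-consumer case, which matches an output handler that owns the stream.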
```csharp
_pipeReader.AdvanceTo(buffer.Start, buffer.End);

// Stop reading if there's no more data coming.
if (result.IsCompleted && buffer.GetPosition(0, buffer.Start).Equals(buffer.GetPosition(0, buffer.End)))
```
`result.IsCompleted` plus an empty buffer (`buffer.IsEmpty`) is how you check for done.
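In other words, the canonical loop termination looks something like this (a doc-style sketch, not the PR's exact loop):

```csharp
using System.Buffers;
using System.IO.Pipelines;
using System.Threading.Tasks;

public static class ReadLoop
{
    public static async Task RunAsync(PipeReader reader)
    {
        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // ... parse messages here, slicing `buffer` past what was used ...

            // Consumed up to buffer.Start, examined to buffer.End,
            // so the next ReadAsync waits for more data instead of spinning.
            reader.AdvanceTo(buffer.Start, buffer.End);

            // Done: the writer has completed and nothing is left to read.
            if (result.IsCompleted && buffer.IsEmpty)
            {
                break;
            }
        }
        reader.Complete();
    }
}
```

`buffer.IsEmpty` replaces the `GetPosition(...)` comparison in the quoted diff with the same meaning and less ceremony.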
While I brew over how to refactor this code, check out these guidance docs on how to use pipelines: https://docs.microsoft.com/en-us/dotnet/standard/io/pipelines
@davidfowl thanks for taking a look at this PR. It's really awesome to see you getting involved for this - a ton of extensions out in the wild thank you 😃 @david-driscoll and friends have done an incredible job on this library so far and I'm excited to see how it grows in the future!
You can use a semaphore and write directly to the pipe. You only need the queue or channels if you want some more slack (potentially even more write batching).
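A sketch of that semaphore approach, with hypothetical names:

```csharp
using System;
using System.IO.Pipelines;
using System.Threading;
using System.Threading.Tasks;

public class DirectPipeOutput
{
    private readonly PipeWriter _writer;
    // One writer at a time; no intermediate queue needed.
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    public DirectPipeOutput(PipeWriter writer) => _writer = writer;

    public async Task SendAsync(ReadOnlyMemory<byte> payload)
    {
        await _gate.WaitAsync();
        try
        {
            // WriteAsync copies into the pipe and flushes in one call.
            await _writer.WriteAsync(payload);
        }
        finally
        {
            _gate.Release();
        }
    }
}
```

The trade-off versus a queue is that callers now await the actual write, which is exactly where the back pressure comes from.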
Yes indeed, it's come a long way since I was last here 😄. It's so much more mature now (I use it daily).
```csharp
try
{
    payload = JToken.Parse(request);
    using var textReader = new StreamReader(request.AsStream());
```
If you could get rid of this you could reduce allocations even further.
@davidfowl baby steps my friend!

Pipelines first. Then I have this branch where I explored moving to System.Text.Json, and it mostly works. I have to, however, refactor the language client, as it doesn't really mesh well with the server at this point.

https://github.com/OmniSharp/csharp-language-server-protocol/tree/feature/systemtextjson
And by works I mean it compiles 😆 (without the client project)
Whyyyyyy

@davidfowl the changes were included in this :)
This replaces the existing logic for reading and writing from the given streams using System.IO.Pipelines.
The goal is to make input and output as memory efficient as is logically possible.

For input parsing:

* … the `Content-Type` header, so currently we just throw away any other headers (without allocations as far as I can tell)

For output:

* …

closes #151