While running the agent version v2.12.1, it crashed after some time with an out-of-memory error and had to be killed by the OS. The machine I was using has 48GB of RAM.
It was happening for both mainnet and testnet.
I did not see anything particularly special about the resources it was using:

But the number of tasks seems large.

One can see 7721 tasks, many of which have been idle for some time.
Observing it over time, the number of tasks kept increasing until the point where the binary was killed.
The idle tasks are created here https://github.com/pyth-network/pyth-agent/blob/main/src/agent/services/oracle.rs#L132, as can be seen in the image, where the subscriber is handling `handle_price_account_update`. That line spawns tokio tasks without keeping track of their `JoinHandle`s; when tasks are created faster than they finish, this can lead to a leak. After the proposed change I can see a much more comfortable number of tasks:
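The idea of the fix can be sketched, independently of the agent's actual code, as a bounded worker pool draining a channel instead of spawning one task per incoming update. The sketch below uses std threads and `mpsc` rather than tokio so it compiles standalone; `run_pool`, the worker count, and the `u64` "updates" are illustrative stand-ins for the subscriber and `handle_price_account_update`, not the real signatures.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

/// Process `updates` messages with a fixed pool of `workers` threads and
/// return the sum of the processed values (a stand-in for real work).
/// Instead of spawning one task per update (which can grow without bound
/// when updates arrive faster than they are handled), updates queue in a
/// channel and the fixed pool drains it, so the task count stays bounded.
fn run_pool(workers: usize, updates: u64) -> u64 {
    let (tx, rx) = mpsc::channel::<u64>();
    // `Receiver` cannot be shared between threads directly, so guard it.
    let rx = Arc::new(Mutex::new(rx));

    let handles: Vec<_> = (0..workers)
        .map(|_| {
            let rx = Arc::clone(&rx);
            thread::spawn(move || {
                let mut acc = 0u64;
                loop {
                    // The guard is dropped at the end of this statement,
                    // so other workers can receive while we process.
                    let msg = rx.lock().unwrap().recv();
                    match msg {
                        // `handle_price_account_update` would go here.
                        Ok(update) => acc += update,
                        // Channel closed: all senders dropped, worker exits.
                        Err(_) => break,
                    }
                }
                acc
            })
        })
        .collect();

    for u in 1..=updates {
        tx.send(u).unwrap();
    }
    drop(tx); // close the channel so the workers terminate

    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    // 1 + 2 + ... + 1000 = 500500, however the work is split.
    println!("sum = {}", run_pool(4, 1000)); // prints "sum = 500500"
}
```

The key property is that memory use is bounded by the channel backlog plus a fixed number of workers, rather than by one task per pending update.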

The number of tasks is now stable at around 100 (114 in the attached image).
It is worth mentioning that I used 100 worker tasks to wait for the previously leaked ones to finish, so this number could be much smaller with a different configuration.
To reproduce the tokio console output, one can follow the instructions in https://github.com/tokio-rs/console, which are basically:

1. `cargo install --locked tokio-console`
2. Add `console-subscriber = "0.3.0"` as a dependency
3. Add `console_subscriber::init();` as the first line of the `main` function
4. Run `RUSTFLAGS="--cfg tokio_unstable" cargo run --bin agent -- --config <config file path>`
5. Run `tokio-console` to watch its data
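For reference, the instrumentation steps above combined in one place. This is a setup fragment, not part of the agent's code; it only compiles inside a project that declares the `console-subscriber` dependency.

```rust
// Cargo.toml:
// [dependencies]
// console-subscriber = "0.3.0"

fn main() {
    // Must run before the tokio runtime spawns any tasks, so the
    // console subscriber can record them from the start.
    console_subscriber::init();
    // ... rest of the agent's main ...
}

// Build with the unstable tokio cfg enabled, then attach the console:
//   RUSTFLAGS="--cfg tokio_unstable" cargo run --bin agent -- --config <config file path>
//   tokio-console
```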