
Conversation

@injectives
Contributor

This update makes sure that the `ReactiveDefaultBookmarkManager` returns an unmodifiable copy of its current bookmarks instead of returning a view of the bookmarks.

@injectives
Contributor Author

@injectives injectives force-pushed the bugfix/reactivebm branch 2 times, most recently from 69c7863 to 6cccde7 Compare July 20, 2023 22:03
Collaborator

@michael-simons michael-simons left a comment


Oops.

Thank you.

@michael-simons michael-simons merged commit 5e6b1ea into spring-projects:main Jul 24, 2023
michael-simons added a commit that referenced this pull request Jul 24, 2023
@injectives injectives deleted the bugfix/reactivebm branch July 24, 2023 08:34
@seabamirum

seabamirum commented Aug 18, 2023

At some point the code that @injectives suggested for `getBookmarks()`

```java
Collections.synchronizedSet(Collections.unmodifiableSet(new HashSet<>(this.bookmarks)));
```

got replaced with

```java
Set.copyOf(this.bookmarks);
```

which, although it looks a lot prettier, still doesn't prevent a `ConcurrentModificationException`, as can be seen in the stack trace below. However, I think we only need synchronization while making the copy. I would suggest using

```java
synchronized (this.bookmarks) {
    return Set.copyOf(this.bookmarks);
}
```

Stack trace:

```
java.util.ConcurrentModificationException: null
	at java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1605) ~[na:na]
Original Stack Trace:
	at java.base/java.util.HashMap$HashIterator.nextNode(HashMap.java:1605) ~[na:na]
	at java.base/java.util.HashMap$KeyIterator.next(HashMap.java:1628) ~[na:na]
	at java.base/java.util.AbstractCollection.addAll(AbstractCollection.java:337) ~[na:na]
	at java.base/java.util.HashSet.<init>(HashSet.java:121) ~[na:na]
	at java.base/java.util.Set.copyOf(Set.java:732) ~[na:na]
	at org.springframework.data.neo4j.core.transaction.ReactiveDefaultBookmarkManager.getBookmarks(ReactiveDefaultBookmarkManager.java:53) ~[spring-data-neo4j-7.1.3.jar:7.1.3]
```
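The suggested fix can be sketched as follows. This is a hypothetical, simplified holder class (field and method names are assumptions, not the actual `ReactiveDefaultBookmarkManager` source): the point is that writers and readers synchronize on the same monitor, and the lock is only held while `Set.copyOf` makes the snapshot.

```java
import java.util.HashSet;
import java.util.Set;

// Simplified sketch of the suggested fix, not the actual Spring Data Neo4j
// class: guard the copy with the same monitor that writers use, so
// Set.copyOf never iterates the backing set while it is being mutated.
final class BookmarkHolder {

    private final Set<String> bookmarks = new HashSet<>();

    Set<String> getBookmarks() {
        synchronized (this.bookmarks) {
            // Set.copyOf returns an unmodifiable snapshot; the lock only
            // needs to be held for the duration of the copy.
            return Set.copyOf(this.bookmarks);
        }
    }

    void add(String bookmark) {
        synchronized (this.bookmarks) {
            this.bookmarks.add(bookmark);
        }
    }
}
```

Callers receive an immutable snapshot, so iterating the result later requires no further synchronization.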
@michael-simons
Collaborator

That's indeed true. Excellent catch.

michael-simons added a commit that referenced this pull request Aug 21, 2023
The separate reactive bookmark manager was introduced because Project Reactor and BlockHound have issues with the `ReentrantReadWriteLock` approach we have taken on the imperative side. The first approach was using a synchronized set, but as @seabamirum correctly noted on #2769, this is not enough on the reading path:

> It is imperative that the user manually synchronize on the returned set when iterating over it.

(From the JavaDoc of `Collections.synchronizedSet`.) Using `ConcurrentHashMap.newKeySet` would solve that issue, but it would require a check for `null` values in the `usedBookmarks` argument of `updateBookmarks`, AND it would also not make removing the used bookmarks and adding the new ones an atomic operation (as it originally was in the imperative world). It is therefore easier to use a standard set and synchronize over it.
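The reasoning in this commit message can be sketched as follows. This is a hypothetical, simplified class (names are assumptions, not the actual Spring Data Neo4j source): a plain `HashSet` guarded by a single monitor keeps "remove the used bookmarks, add the new ones" atomic, which `ConcurrentHashMap.newKeySet` alone would not.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the approach described above: one monitor guards
// both the atomic update and the snapshot taken by readers.
final class SynchronizedBookmarkManager {

    private final Set<String> bookmarks = new HashSet<>();

    // Removing the used bookmarks and adding the new ones happens under a
    // single lock, so no reader can observe the intermediate state where
    // the used bookmarks are gone but the new ones are not yet added.
    void updateBookmarks(Collection<String> usedBookmarks, Collection<String> newBookmarks) {
        synchronized (this.bookmarks) {
            this.bookmarks.removeAll(usedBookmarks);
            this.bookmarks.addAll(newBookmarks);
        }
    }

    Set<String> getBookmarks() {
        synchronized (this.bookmarks) {
            return Set.copyOf(this.bookmarks);
        }
    }
}
```

Unlike a concurrent set, this also tolerates whatever the backing `HashSet` tolerates (including `null` elements in `usedBookmarks`), which is part of why the commit message prefers it.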
meistermeier pushed a commit that referenced this pull request Aug 21, 2023
meistermeier pushed a commit that referenced this pull request Aug 21, 2023

Labels

status: waiting-for-triage An issue we've not yet triaged

4 participants