
Conversation

@joelanford
Member

When multiple controllers are watching the same resource type with a metadata-only informer, a data race occurs in sigs.k8s.io/controller-runtime/pkg/cache/internal.(*handlerPreserveGVK).resetGroupVersionKind()

This data race is resolved by setting the GVK before the objects are written to the cache.

Signed-off-by: Joe Lanford joe.lanford@gmail.com

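As background, here is a minimal, self-contained sketch of the racy pattern the description refers to (an illustration written for this summary, not code from the PR): a shared informer hands the same object pointer to every registered handler, and each handler then writes the object's TypeMeta. Running it with `go run -race` reports the same kind of conflicting writes as the race detector trace further down.

package main

import (
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// One object shared by all handlers, mimicking the single pointer a
	// shared informer's cache hands to every registered event handler.
	obj := &metav1.PartialObjectMetadata{}
	gvk := schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two "controllers" watching the same resource type
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Unsynchronized write to obj's TypeMeta; with a second goroutine
			// doing the same, the race detector flags conflicting writes.
			obj.SetGroupVersionKind(gvk)
		}()
	}
	wg.Wait()
}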
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 2, 2021
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 2, 2021
@alvaroaleman
Member

This data race is resolved by setting the GVK before the objects are written to the cache.

What exactly races there?

@joelanford
Member Author

joelanford commented Sep 2, 2021

What exactly races there?

With just the added test, the race detector shows this:

Race detector details

==================
WARNING: DATA RACE
Write at 0x00c000535b10 by goroutine 173:
  k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).SetGroupVersionKind() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/meta.go:123 +0x10c
  k8s.io/apimachinery/pkg/apis/meta/v1.(*PartialObjectMetadata).SetGroupVersionKind() <autogenerated>:1 +0x90
  sigs.k8s.io/controller-runtime/pkg/cache/internal.(*handlerPreserveGVK).resetGroupVersionKind() /home/joe/projects/work/kubernetes-sigs/controller-runtime/pkg/cache/internal/metadata_infomer_wrapper.go:53 +0x141
  sigs.k8s.io/controller-runtime/pkg/cache/internal.(*handlerPreserveGVK).OnAdd() /home/joe/projects/work/kubernetes-sigs/controller-runtime/pkg/cache/internal/metadata_infomer_wrapper.go:58 +0x32
  k8s.io/client-go/tools/cache.(*processorListener).run.func1() /home/joe/go/pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/shared_informer.go:777 +0x1ef
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/util/wait/wait.go:155 +0x75
  k8s.io/apimachinery/pkg/util/wait.BackoffUntil() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/util/wait/wait.go:156 +0xba
  k8s.io/apimachinery/pkg/util/wait.JitterUntil() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/util/wait/wait.go:133 +0x114
  k8s.io/apimachinery/pkg/util/wait.Until() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/util/wait/wait.go:90 +0xa4
  k8s.io/client-go/tools/cache.(*processorListener).run() /home/joe/go/pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/shared_informer.go:771 +0x4d
  k8s.io/client-go/tools/cache.(*processorListener).run-fm() /home/joe/go/pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/shared_informer.go:765 +0x4a
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/util/wait/wait.go:73 +0x6d

Previous write at 0x00c000535b10 by goroutine 162:
  (stack identical to the write above)

Goroutine 173 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/util/wait/wait.go:71 +0x70
  k8s.io/client-go/tools/cache.(*sharedProcessor).addListener() /home/joe/go/pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/shared_informer.go:593 +0x30e
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).AddEventHandlerWithResyncPeriod() /home/joe/go/pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/shared_informer.go:521 +0x2c6
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).AddEventHandler() /home/joe/go/pkg/mod/k8s.io/client-go@v0.22.1/tools/cache/shared_informer.go:457 +0x69
  sigs.k8s.io/controller-runtime/pkg/cache/internal.(*sharedInformerWrapper).AddEventHandler() /home/joe/projects/work/kubernetes-sigs/controller-runtime/pkg/cache/internal/metadata_infomer_wrapper.go:39 +0x114
  sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start.func1() /home/joe/projects/work/kubernetes-sigs/controller-runtime/pkg/source/source.go:133 +0x4e3

Goroutine 162 (running) created at:
  (stack identical to goroutine 173's creation above)
==================

==================
WARNING: DATA RACE
Write at 0x00c000535b00 by goroutine 173:
  k8s.io/apimachinery/pkg/apis/meta/v1.(*TypeMeta).SetGroupVersionKind() /home/joe/go/pkg/mod/k8s.io/apimachinery@v0.22.1/pkg/apis/meta/v1/meta.go:123 +0x138
  (remaining frames identical to the first report)

Previous write at 0x00c000535b00 by goroutine 162:
  (stack identical to the write above)

Goroutine 173 (running) created at:
  (identical to the first report)

Goroutine 162 (running) created at:
  (identical to the first report)
==================

TL;DR: it's multiple goroutines (one from each controller) trying to call SetGroupVersionKind on the same underlying object concurrently.
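For contrast, a sketch of the ordering the fix establishes (again an illustration, not the PR diff): the GVK is written exactly once, before the pointer is shared, so handlers only ever read it, and concurrent reads are race-free.

package main

import (
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	obj := &metav1.PartialObjectMetadata{}
	// Single write, performed before the object is handed to any handler
	// (in the PR, before the object is written to the cache).
	obj.SetGroupVersionKind(schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"})

	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// Handlers now only read the GVK; concurrent reads do not race.
			_ = obj.GetObjectKind().GroupVersionKind()
		}()
	}
	wg.Wait()
}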

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alvaroaleman, joelanford

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [alvaroaleman,joelanford]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 2, 2021
@k8s-ci-robot k8s-ci-robot merged commit 498ee8a into kubernetes-sigs:master Sep 2, 2021
@k8s-ci-robot k8s-ci-robot added this to the v0.10.x milestone Sep 2, 2021
Comment on lines +419 to +447
// newGVKFixupWatcher wraps watcher and sets the given GVK on every event's
// object before delivering the event downstream.
func newGVKFixupWatcher(gvk schema.GroupVersionKind, watcher watch.Interface) watch.Interface {
	ch := make(chan watch.Event)
	w := &gvkFixupWatcher{
		gvk:     gvk,
		watcher: watcher,
		ch:      ch,
	}
	w.wg.Add(1)
	go w.run()
	return w
}

// run pumps events from the wrapped watcher, stamping the GVK on each object
// before forwarding it (and therefore before it can reach the cache).
func (w *gvkFixupWatcher) run() {
	for e := range w.watcher.ResultChan() {
		e.Object.GetObjectKind().SetGroupVersionKind(w.gvk)
		w.ch <- e
	}
	w.wg.Done()
}

// Stop stops the wrapped watcher, waits for the pump goroutine to drain,
// and only then closes the outgoing channel.
func (w *gvkFixupWatcher) Stop() {
	w.watcher.Stop()
	w.wg.Wait()
	close(w.ch)
}

func (w *gvkFixupWatcher) ResultChan() <-chan watch.Event {
	return w.ch
}
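To make the wrapper's contract concrete, here is a usage sketch in the style of a Go example test (watch.NewFake is real apimachinery API; placing this in the same package as newGVKFixupWatcher is an assumption, since the function is unexported): every event leaving the wrapper already carries the fixed GVK, so no downstream consumer ever needs to write it.

package internal // assumed: same package as newGVKFixupWatcher

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/watch"
)

func ExampleGVKFixupWatcher() {
	gvk := schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}
	fake := watch.NewFake()
	w := newGVKFixupWatcher(gvk, fake)

	// The underlying watch delivers an object with an empty TypeMeta...
	go fake.Add(&metav1.PartialObjectMetadata{})

	// ...but it leaves the wrapper with the GVK already set.
	e := <-w.ResultChan()
	fmt.Println(e.Object.GetObjectKind().GroupVersionKind())

	w.Stop()
	// Output: apps/v1, Kind=Deployment
}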
Member

Is this the only way we can fix this behavior? Having an extra reader seems a bit overkill, given that it's a reader on a reader, but I also can't think of any other options.


Labels

  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • cncf-cla: yes: Indicates the PR's author has signed the CNCF CLA.
  • lgtm: "Looks good to me", indicates that a PR is ready to be merged.
  • size/L: Denotes a PR that changes 100-499 lines, ignoring generated files.

4 participants