Platform-Specific Notes
Depending on either the application used to install and manage Kyverno or the Kubernetes platform on which the cluster is built, there are some specific considerations of which to be aware. These notes assume the Helm chart is the installation artifact used.
Notes for ArgoCD Users
ArgoCD v2.10 introduced support for ServerSideDiff, leveraging Kubernetes' Server-Side Apply feature to resolve OutOfSync issues. This strategy ensures comparisons are handled on the server side, respecting fields Kubernetes sets by default as well as fields set by mutating admission controllers like Kyverno, thereby preventing unnecessary OutOfSync errors caused by local manifest discrepancies.
Configuration Best Practices
Server-Side Configuration
- Enable ServerSideDiff in one of two ways:
  - Per Application: add the argocd.argoproj.io/compare-options annotation
  - Globally: configure it in the argocd-cmd-params-cm ConfigMap

For example, per Application:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  annotations:
    argocd.argoproj.io/compare-options: ServerSideDiff=true,IncludeMutationWebhook=true
  ...
```
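For the global option, a minimal sketch of the corresponding argocd-cmd-params-cm entry follows; the controller.diff.server.side key is the one Argo CD documents for enabling Server-Side Diff, but verify it against your Argo CD version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  # Enable Server-Side Diff for all Applications handled by this controller
  controller.diff.server.side: "true"
```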
RBAC and CRD Management
- Enable ServerSideApply in the syncOptions to handle metadata properly
- Configure ArgoCD to ignore differences in aggregated ClusterRoles (see the sketch after this list)
- Ensure proper RBAC permissions for ArgoCD to manage Kyverno CRDs
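One way to ignore those differences is an ignoreDifferences stanza in the Application spec. This is a sketch rather than the only option: it assumes the drift comes from Kubernetes' aggregation controller rewriting the rules of aggregated ClusterRoles:

```yaml
spec:
  ignoreDifferences:
    - group: rbac.authorization.k8s.io
      kind: ClusterRole
      # The aggregation controller rewrites .rules on aggregated roles,
      # so exclude that field from the diff
      jsonPointers:
        - /rules
```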
Sync Options Configuration
- Avoid using Replace=true as it may cause issues with existing resources
- Use ServerSideApply=true for smooth resource updates
- Enable CreateNamespace=true if deploying to a new namespace
Config Preservation
- By default, config.preserve=true is set in the Helm chart. This is useful for Helm-based install, upgrade, and uninstall scenarios.
- This setting enables a Helm post-delete hook, which can cause ArgoCD to show the application as out-of-sync if deployed using an App of Apps pattern.
- It may also prevent ArgoCD from cleaning up the Kyverno application when the parent application is deleted.
- Recommendation: set config.preserve=false when deploying Kyverno via ArgoCD to ensure proper resource cleanup and sync status (see the sketch after this list).
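A minimal sketch of passing this through an Argo CD Application's Helm values; the surrounding source stanza mirrors the complete example below:

```yaml
source:
  chart: kyverno
  repoURL: https://kyverno.github.io/kyverno
  helm:
    values: |
      config:
        # Disable the post-delete hook so Argo CD can manage cleanup itself
        preserve: false
```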
 
Complete Application Example
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: kyverno
  namespace: argocd
  annotations:
    argocd.argoproj.io/compare-options: ServerSideDiff=true,IncludeMutationWebhook=true
spec:
  destination:
    namespace: kyverno
    server: https://kubernetes.default.svc
  project: default
  source:
    chart: kyverno
    repoURL: https://kyverno.github.io/kyverno
    targetRevision: <my.target.version>
    helm:
      values: |
        webhookLabels:
          app.kubernetes.io/managed-by: argocd
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - ServerSideApply=true
```

Troubleshooting Guide
CRD Check Failures
- Symptom: Deployment fails during CRD validation
- Common Causes:
  - Insufficient RBAC permissions
  - CRDs not properly registered
- Resolution:
  - Verify RBAC permissions for the ArgoCD service account
  - Ensure CRDs are installed before policies
  - Check ArgoCD logs for specific permission errors
Sync Failures
- Symptom: Resources show as OutOfSync
- Common Causes:
  - Missing ServerSideDiff configuration
  - Aggregated ClusterRole differences
- Resolution:
  - Enable ServerSideDiff as shown above
  - Configure resource exclusions for aggregated roles
  - Check resource health status in the ArgoCD UI
Resource Management Issues
- Symptom: Resources not properly created or updated
- Common Causes:
  - Incorrect sync options
  - Resource ownership conflicts
- Resolution:
  - Use ServerSideApply instead of Replace
  - Configure the resource tracking method
  - Verify resource ownership labels
Performance and Scaling
- Symptom: Slow syncs or resource processing
- Common Causes:
  - Large number of resources
  - Resource-intensive operations
- Resolution:
  - Use selective sync for large deployments
  - Configure appropriate resource limits
  - Enable background processing where applicable
For considerations when using Argo CD along with Kyverno mutate policies, see the related documentation.
Resource Tracking and Ownership
ArgoCD automatically sets the app.kubernetes.io/instance label and uses it to determine which resources form the app. The Kyverno Helm chart also sets this label for the same purpose. To resolve this conflict:
- Configure ArgoCD to use a different tracking mechanism, as described in the documentation (see the sketch after this list).
- Add appropriate annotations to your Application manifest.
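A minimal sketch of switching the tracking method in the argocd-cm ConfigMap; the application.resourceTrackingMethod key is the one Argo CD documents for this, but confirm the supported values for your Argo CD version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Track resources by annotation instead of the app.kubernetes.io/instance label
  application.resourceTrackingMethod: annotation
```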
 
Argo CD users may also have Kyverno add labels to webhooks via the webhookLabels key in the Kyverno ConfigMap (shown in the complete Application example above), which is helpful when viewing the Kyverno application in Argo CD.
Notes for OpenShift Users
Red Hat OpenShift contains a feature called Security Context Constraints (SCC) which enforces certain security controls in a profile-driven manner. An OpenShift cluster contains several of these out of the box, with OpenShift 4.11 preferring restricted-v2 by default. The Kyverno Helm chart defines its own values for the Pod's securityContext object which, although it conforms to the restricted profile of the upstream Pod Security Standards, may be incompatible with your defined Security Context Constraints. Deploying the Kyverno Helm chart as-is on an OpenShift environment may therefore result in an error similar to "unable to validate against any security context constraint". To get past this, deploy the Kyverno Helm chart with the required securityContext flags/fields set to a value of null; OpenShift will then apply the defined SCC upon deployment. On OpenShift 4.11+, the restricted-v2 profile is known to allow successful deployment of the chart without modifying the Helm chart installation process.
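As an illustrative sketch only (the exact value paths depend on the Kyverno chart version, so check the chart's values.yaml for your release), nulling the security contexts for the admission controller might look like the following; Helm treats a null override as removing the chart's default value:

```yaml
# values-openshift.yaml -- hypothetical value paths; verify against your chart version
admissionController:
  # Leave both pod- and container-level security contexts unset so that
  # OpenShift's SCC admission can inject its own values
  podSecurityContext: null
  container:
    securityContext: null
```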
Notes for EKS Users
For EKS clusters built with the VPC CNI plug-in, if you wish to opt for the operability strategy as defined in the Security vs Operability section, you should exclude the kube-system Namespace from webhooks during the installation of Kyverno, as this is the Namespace where the plug-in runs. If all the cluster Nodes are "deleted" (for example, the cluster has only one node group and it is scaled to zero), which also affects where the Kyverno replicas run, and kube-system is not excluded while at least one policy in Fail mode matches on Pods, the VPC CNI plug-in's DaemonSet Pods may not be able to come online to finish the Node bootstrapping process. If this occurs, the underlying cluster network cannot return to a healthy state and Kyverno will be unable to service webhook requests. As of Kyverno 1.12, kube-system is excluded by default in webhooks.
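For chart versions that do not exclude it by default, a hedged sketch of the exclusion through Helm values follows; the exact shape of the config.webhooks value varies across chart versions, so compare it against your chart's values.yaml:

```yaml
config:
  webhooks:
    namespaceSelector:
      matchExpressions:
        # Skip admission webhook calls for kube-system so the VPC CNI
        # DaemonSet can bootstrap Nodes even when Kyverno is unavailable
        - key: kubernetes.io/metadata.name
          operator: NotIn
          values:
            - kube-system
```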
Notes for AKS Users
AKS uses an Admission Enforcer to control the webhooks in an AKS cluster and will remove those that may impact system Namespaces. Since Kyverno registers as a webhook, this Admission Enforcer may remove Kyverno's webhook, causing the two to fight over webhook reconciliation. See this Microsoft Azure FAQ for further information. When deploying Kyverno on an AKS cluster, set the Helm option config.webhookAnnotations to include the necessary annotation to disable the Admission Enforcer. Kyverno will configure its webhooks with this annotation to prevent their removal by AKS. The annotation that should be used is "admissions.enforcer/disabled": true. See the chart README for more information. As of Kyverno 1.12, this annotation is already set for you.
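Assuming a chart version where this is not yet set for you, the corresponding Helm values entry might look like the following; the option name and annotation key come from the text above, while the exact quoting is an assumption:

```yaml
config:
  # Prevent AKS's Admission Enforcer from removing Kyverno's webhooks
  webhookAnnotations:
    admissions.enforcer/disabled: "true"
```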