Custom resource definition parameters
The following lists describe the available settings for Vertica custom resource definitions (CRDs).
VerticaDB
Parameters
annotations
- Custom annotations added to all of the objects that the operator creates. Each annotation is encoded as an environment variable in the Vertica server container. Annotations accept the following characters:
- Letters
- Numbers
- Underscores
Invalid character values are converted to underscore characters. For example:
vertica.com/git-ref: 1234abcd
Is converted to:
VERTICA_COM_GIT_REF=1234abcd
Note
Enclose integer values in double quotes (""), or the admission controller returns an error.
autoRestartVertica
- Whether the operator restarts the Vertica process when the process is not running.
Set this parameter to false when performing manual maintenance that requires a DOWN database. This prevents the operator from interfering with the database state.
Default: true
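For example, to pause operator-driven restarts before manual maintenance, you might set the parameter as follows (a minimal sketch; the metadata name is illustrative):

```yaml
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: verticadb-sample      # illustrative name
spec:
  autoRestartVertica: false   # operator leaves the DOWN database alone
```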
certSecrets
- A list of Secrets for custom TLS certificates.
Each certificate is mounted in the container at /certs/cert-name/key. For example, a PEM-encoded CA bundle named root_cert.pem that is stored in a Secret named aws-cert is mounted in /certs/aws-cert/root_cert.pem.
If you update the certificate after you add it to a custom resource, the operator updates the value automatically. If you add or delete a certificate, the operator reschedules the pod with the new configuration.
For implementation details, see VerticaDB CRD.
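A certSecrets list might look like the following sketch (the Secret names are illustrative):

```yaml
spec:
  certSecrets:
    - name: aws-cert    # mounted at /certs/aws-cert/
    - name: custom-ca   # illustrative additional certificate Secret
```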
communal.additionalConfig
- Sets one or more configuration parameters in the CR:
spec:
  communal:
    additionalConfig:
      config-param: "value"
      ...
Configuration parameters are set only when the database is initialized. After the database is initialized, changes to this parameter have no effect on the server.
Important
Configuration parameters in the CR have the following requirements and behaviors:
- If you set an invalid configuration parameter, the Vertica server process does not start. For example, the server does not start if you misspell a parameter name or if the configuration parameter is not supported by the Vertica server version.
- If communal.additionalConfig sets a configuration parameter that the operator sets with a CR parameter, the operator ignores the communal.additionalConfig setting. For example, the communal.endpoint parameter sets the AWSEndpoint S3 parameter. If you set communal.endpoint and also set AWSEndpoint with communal.additionalConfig, the operator enforces the communal.endpoint setting.
communal.caFile
- The mount path in the container filesystem to a CA certificate file that validates HTTPS connections to a communal storage endpoint.
Typically, the certificate is stored in a Secret and included in certSecrets. For details, see VerticaDB CRD.
communal.credentialSecret
- The name of the Secret that stores the credentials for the communal storage endpoint.
For implementation details for each supported communal storage location, see Configuring communal storage.
This parameter is optional when you authenticate to an S3-compatible endpoint with an Identity and Access Management (IAM) profile.
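For an S3-compatible endpoint, the credentials Secret and its CR reference might look like the following sketch (the Secret name and key values are placeholders; the accesskey/secretkey key names follow the convention described in Configuring communal storage):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-creds          # illustrative name
stringData:
  accesskey: placeholder-access-key
  secretkey: placeholder-secret-key
---
# Reference the Secret in the VerticaDB spec:
# spec:
#   communal:
#     credentialSecret: s3-creds
```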
communal.endpoint
- A communal storage endpoint URL. The endpoint must begin with either the http:// or https:// protocol. For example: https://path/to/endpoint
You cannot change this value after you create the custom resource instance.
This setting is required when initPolicy is set to Create or Revive.
communal.s3ServerSideEncryption
- Server-side encryption type used when reading from or writing to S3. The value depends on which type of encryption at rest is configured for S3.
This parameter accepts the following values:
- SSE-S3
- SSE-KMS: Requires that you pass the key identifier with the communal.additionalConfig parameter.
- SSE-C: Requires that you pass the client key with the communal.s3SSECustomerKeySecret parameter.
You cannot change this value after you create the custom resource instance.
For implementation examples of all encryption types, see Configuring communal storage.
For details about each encryption type, see S3 object store.
Default: Empty string (""), no encryption
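For example, an SSE-KMS configuration might pass the key identifier through communal.additionalConfig as in this sketch (the key identifier value is a placeholder; see Configuring communal storage for the exact parameter your environment requires):

```yaml
spec:
  communal:
    s3ServerSideEncryption: SSE-KMS
    additionalConfig:
      S3SseKmsKeyId: "kms-key-id"   # placeholder KMS key identifier
```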
communal.s3SSECustomerKeySecret
- If s3ServerSideEncryption is set to SSE-C, a Secret containing the client key for S3 access with the following requirements:
- The Secret must be in the same namespace as the CR.
- You must set the client key contents with the clientKey field.
The client key must use one of the following formats:
- 32-character plaintext
- 44-character base64-encoded
For additional implementation details, see Configuring communal storage.
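A Secret that satisfies these requirements might be defined as follows (a sketch; the name and key value are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sse-c-key   # illustrative name; must be in the CR's namespace
stringData:
  clientKey: "01234567890123456789012345678901"   # 32-character plaintext placeholder
```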
communal.path
- The path to the communal storage bucket. For example: s3://bucket-name/key-name
You must create this bucket before you create the Vertica database.
The following initPolicy values determine how to set this value:
- Create: The path must be empty.
- Revive: The path cannot be empty.
You cannot change this value after you create the custom resource.
communal.region
- The geographic location where the communal storage resources are located.
If you do not set the correct region, the configuration fails. You might experience a delay because Vertica retries several times before failing.
This setting is valid for Amazon Web Services (AWS) and Google Cloud Platform (GCP) only. Vertica ignores this setting for other communal storage providers.
Default:
- AWS: us-east-1
- GCP: US-EAST1
dbName
- The database name. When initPolicy is set to Revive or ScheduleOnly, this must match the name of the source database.
Default: vertdb
encryptSpreadComm
- Sets the EncryptSpreadComm security parameter to configure Spread encryption for a new Vertica database. The VerticaDB operator ignores this parameter unless you set initPolicy to Create.
Spread encryption is enabled by default. This parameter accepts the following values:
- vertica or empty string (""): Enables Spread encryption. Vertica generates the Spread encryption key for the database cluster.
- disabled: Clears encryption.
Default: Empty string ("")
Important
If you use the deprecated v1beta1 API version with server version 24.1.0, encryptSpreadComm behaves differently. Spread encryption is disabled by default, and the parameter accepts the following values:
- vertica: Enables Spread encryption. Vertica generates the Spread encryption key for the database cluster.
- Empty string (""): Default setting. Clears encryption.
hadoopConfig
- A ConfigMap that contains the contents of the /etc/hadoop directory.
This is mounted in the container to configure connections to a Hadoop Distributed File System (HDFS) communal path.
image
- The image that defines the Vertica server container's runtime environment. If the container is hosted in a private container repository, this name must include the path to the repository.
When you update the image, the operator stops and restarts the cluster.
Default: vertica/vertica-k8s:latest
imagePullPolicy
- How often Kubernetes pulls the image for an object. For details, see Updating Images in the Kubernetes documentation.
Default: If the image tag is latest, the default is Always. Otherwise, the default is IfNotPresent.
imagePullSecrets
- List of Secrets that store credentials for authentication to a private container repository. For details, see Specifying imagePullSecrets in the Kubernetes documentation.
initPolicy
- How to initialize the Vertica database in Kubernetes. This parameter accepts the following values:
- Create: Forces the creation of a new database for the custom resource.
- CreateSkipPackageInstall: Same as Create, but does not install any default packages, so the database is created more quickly. To install default packages, see Reinstalling packages.
Note
CreateSkipPackageInstall is available in Vertica version 12.0.1 and later.
- Revive: Initializes an existing Eon Mode database as a StatefulSet with the revive command. For information about Revive, see Generating a custom resource from an existing Eon Mode database.
- ScheduleOnly: Schedules a subcluster for a Hybrid Kubernetes cluster.
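Combining several of the preceding parameters, a minimal CR that creates a new database might look like this sketch (names, paths, and the endpoint are illustrative):

```yaml
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: verticadb-sample        # illustrative name
spec:
  initPolicy: Create            # requires an empty communal.path
  communal:
    path: s3://bucket-name/key-name
    endpoint: https://s3.amazonaws.com
    credentialSecret: s3-creds  # illustrative Secret name
```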
kerberosSecret
- The Secret that stores the following values for Kerberos authentication to Hadoop Distributed File System (HDFS):
-
krb5.conf: Contains Kerberos configuration information.
-
krb5.keytab: Contains credentials for the Vertica Kerberos principal. This file must be readable by the file owner that is running the process.
The default location for each of these files is the
/etc
directory. -
labels
- Custom labels added to all of the objects that the operator creates.
licenseSecret
- The Secret that contains the contents of license files. The Secret must share a namespace with the custom resource (CR). Each of the keys in the Secret is mounted as a file in /home/dbadmin/licensing/mnt.
If this value is set when the CR is created, the operator installs one of the licenses automatically, choosing the first one alphabetically.
If you update this value after you create the custom resource, you must manually install the Secret in each Vertica pod.
livenessProbeOverride
- Overrides default livenessProbe settings that indicate whether the container is running. The VerticaDB operator sets or updates the liveness probe in the StatefulSet.
For example, the following object overrides the default initialDelaySeconds, periodSeconds, and failureThreshold settings:
spec:
  ...
  livenessProbeOverride:
    initialDelaySeconds: 120
    periodSeconds: 15
    failureThreshold: 8
For a detailed list of the available probe settings, see the Kubernetes documentation.
local.catalogPath
- Optional parameter that sets a custom path in the container filesystem for the catalog, if your environment requires that the catalog is stored in a location separate from the local data.
If initPolicy is set to Revive or ScheduleOnly, local.catalogPath for the new database must match local.catalogPath for the source database.
local.dataPath
- The path in the container filesystem for the local data. If local.catalogPath is not set, the catalog is stored in this location.
If initPolicy is set to Revive or ScheduleOnly, the dataPath for the new database must match the dataPath for the source database.
Default: /data
local.depotPath
- The path in the container filesystem that stores the depot.
If initPolicy is set to Revive or ScheduleOnly, the depotPath for the new database must match the depotPath for the source database.
Default: /depot
local.depotVolume
- The type of volume to use for the depot. This parameter accepts the following values:
- PersistentVolume: A PersistentVolume is used to store the depot data. This volume type persists depot data between pod lifecycles.
- EmptyDir: A volume of type emptyDir is used to store the depot data. When the pod is removed from a node, the contents of the volume are deleted. If a container crashes, the depot data is unaffected.
Important
You cannot change the depot volume type on an existing database. To change this setting, you must create a new custom resource.
For details about each volume type, see the Kubernetes documentation.
Default: PersistentVolume
local.requestSize
- The minimum size of the local data volume when selecting a PersistentVolume (PV).
If local.storageClass allows volume expansion, the operator automatically increases the size of the PV when you change this setting. It expands the size of the depot if the following conditions are met:
- local.depotVolume is set to PersistentVolume.
- Depot storage is allocated using a percentage of the total disk space rather than a unit, such as a gigabyte.
If you decrease this value, the operator does not decrease the size of the PV or the depot.
Default: 500 Gi
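A typical local storage configuration might combine these settings as follows (a sketch; the StorageClass name is illustrative):

```yaml
spec:
  local:
    dataPath: /data
    depotPath: /depot
    depotVolume: PersistentVolume
    requestSize: 500Gi
    storageClass: managed-csi   # illustrative StorageClass name
```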
local.storageClass
- The StorageClass for the PersistentVolumes that persist local data between pod lifecycles. Select this value when defining the persistent volume claim (PVC).
By default, this parameter is not set. The PVC in the default configuration uses the default storage class set by Kubernetes.
nmaTLSSecret
- Adds custom Node Management Agent (NMA) certificates to the CR. The value must include the tls.key, tls.crt, and ca.crt encoded in base64 format.
If you omit this setting, the operator generates self-signed certificates for the NMA.
podSecurityContext
- Overrides any pod-level security context. This setting is merged with the default context for the pods in the cluster.
vclusterops deployments can use this parameter to set a custom UID or GID:
spec:
  ...
  podSecurityContext:
    runAsUser: 3500
    runAsGroup: 3500
For details about the available settings for this parameter, see the Kubernetes documentation.
readinessProbeOverride
- Overrides default readinessProbe settings that indicate whether the Vertica pod is ready to accept traffic. The VerticaDB operator sets or updates the readiness probe in the StatefulSet.
For example, the following object overrides the default initialDelaySeconds, periodSeconds, and failureThreshold settings:
spec:
  ...
  readinessProbeOverride:
    initialDelaySeconds: 0
    periodSeconds: 10
    failureThreshold: 3
For a detailed list of the available probe settings, see the Kubernetes documentation.
reviveOrder
- The order of nodes during a revive operation. Each entry contains the subcluster index and the number of pods to include from the subcluster.
For example, consider a database with the following setup:
- v_db_node0001: subcluster A
- v_db_node0002: subcluster A
- v_db_node0003: subcluster B
- v_db_node0004: subcluster A
- v_db_node0005: subcluster B
- v_db_node0006: subcluster B
If the subclusters[] list is defined as {'A', 'B'}, the revive order is as follows:
- {subclusterIndex: 0, podCount: 2} # 2 pods from subcluster A
- {subclusterIndex: 1, podCount: 1} # 1 pod from subcluster B
- {subclusterIndex: 0, podCount: 1} # 1 pod from subcluster A
- {subclusterIndex: 1, podCount: 2} # 2 pods from subcluster B
This parameter is used only when initPolicy is set to Revive.
securityContext
- Sets any additional security context for the Vertica server container. This setting is merged with the security context value set for the VerticaDB Operator.
For example, if you need a core file for the Vertica server process, you can set the privileged property to true to elevate the server privileges on the host node:
spec:
  ...
  securityContext:
    privileged: true
For additional information about generating a core file, see Metrics gathering. For details about this parameter, see the Kubernetes documentation.
serviceAccountName
- Sets the name of the ServiceAccount. This lets you create a service account independently of an operator or VerticaDB instance so that you can add it to the CR as needed.
If you omit this setting, the operator uses the default service account. If you specify a service account that does not exist, the operator creates that service account and then uses it.
shardCount
- The number of shards in the database. You cannot update this value after you create the custom resource.
For more information about database shards and Eon Mode, see Configuring your Vertica cluster for Eon Mode.
sidecars[]
- One or more optional utility containers that complete tasks for the Vertica server container. Each sidecar entry is a fully-formed container spec, similar to the container that you add to a Pod spec.
The following example adds a sidecar named vlogger to the custom resource:
spec:
  ...
  sidecars:
    - name: vlogger
      image: vertica/vertica-logger:1.0.0
      volumeMounts:
        - name: my-custom-vol
          mountPath: /path/to/custom-volume
volumeMounts.name is the name of a custom volume. This value must match volumes.name to mount the custom volume in the sidecar container filesystem. See volumes for additional details.
For implementation details, see VerticaDB CRD.
sidecars[i].volumeMounts
- List of custom volumes and mount paths that persist sidecar container data. Each volume element requires a name value and a mountPath.
To mount a volume in the Vertica sidecar container filesystem, volumeMounts.name must match the volumes.name value for the corresponding sidecar definition, or the webhook returns an error.
For implementation details, see VerticaDB CRD.
startupProbeOverride
- Overrides the default startupProbe settings that indicate whether the Vertica process is started in the container. The VerticaDB operator sets or updates the startup probe in the StatefulSet.
For example, the following object overrides the default initialDelaySeconds, periodSeconds, failureThreshold, and timeoutSeconds settings:
spec:
  ...
  startupProbeOverride:
    initialDelaySeconds: 30
    periodSeconds: 10
    failureThreshold: 117
    timeoutSeconds: 5
For a detailed list of the available probe settings, see the Kubernetes documentation.
subclusters[i].affinity
- Applies rules that constrain the Vertica server pod to specific nodes. It is more expressive than nodeSelector. If this parameter is not set, then the pods use no affinity setting.
In production settings, it is a best practice to configure affinity to run one server pod per host node. For configuration details, see VerticaDB CRD.
subclusters[i].externalIPs
- Enables the service object to attach to a specified external IP.
If not set, the external IP is empty in the service object.
subclusters[i].verticaHTTPNodePort
- When subclusters[i].serviceType is set to NodePort, sets the port on each node that listens for external connections to the HTTPS service. The port must be within the defined range allocated by the control plane (ports 30000-32767).
If you do not manually define a port number, Kubernetes chooses the port automatically.
subclusters[i].type
- Indicates the subcluster type. Valid values include the following:
- primary
- secondary
The admission controller's webhook verifies that each database has at least one primary subcluster.
Default: primary
subclusters[i].loadBalancerIP
- When subclusters[i].serviceType is set to LoadBalancer, assigns a static IP to the load balancing service.
Default: Empty string ("")
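For example, a subcluster definition that requests a static address might look like this sketch (the subcluster name and IP are placeholders):

```yaml
spec:
  subclusters:
    - name: primary-subcluster
      size: 3
      serviceType: LoadBalancer
      loadBalancerIP: 203.0.113.10   # placeholder static IP
```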
subclusters[i].name
- The subcluster name. This is a required setting. If you change the name of an existing subcluster, the operator deletes the old subcluster and creates a new one with the new name.
Kubernetes derives names for the subcluster Statefulset, service object, and pod from the subcluster name. For additional details about Kubernetes and subcluster naming conventions, see Subclusters on Kubernetes.
subclusters[i].clientNodePort
- When subclusters[i].serviceType is set to NodePort, sets the port on each node that listens for external client connections. The port must be within the defined range allocated by the control plane (ports 30000-32767).
If you do not manually define a port number, Kubernetes chooses the port automatically.
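A NodePort configuration with explicit ports might look like the following sketch (the subcluster name and port numbers are illustrative, but the ports fall in the allowed 30000-32767 range):

```yaml
spec:
  subclusters:
    - name: primary-subcluster
      size: 3
      serviceType: NodePort
      clientNodePort: 30100        # client connections
      verticaHTTPNodePort: 30101   # HTTPS service
```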
subclusters[i].nodeSelector
- Provides control over which nodes are used to schedule each pod. If this is not set, the node selector is left off the pod when it is created. To set this parameter, provide a list of key/value pairs.
The following example schedules server pods only on nodes that have the disktype=ssd and region=us-east labels:
subclusters:
  - name: defaultsubcluster
    nodeSelector:
      disktype: ssd
      region: us-east
subclusters[i].priorityClassName
- The PriorityClass name assigned to pods in the StatefulSet. This affects where the pod gets scheduled.
subclusters[i].resources.limits
- The resource limits for pods in the StatefulSet, which sets the maximum amount of CPU and memory that each server pod can consume.
Vertica recommends that you set these values equal to subclusters[i].resources.requests to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out-of-memory (OOM) killer.
For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.
subclusters[i].resources.requests
- The resource requests for pods in the StatefulSet, which sets the amount of CPU and memory that each server pod requests from the scheduler.
Vertica recommends that you set these values equal to subclusters[i].resources.limits to ensure that the pods are assigned to the guaranteed QoS class. This reduces the possibility that pods are chosen by the out-of-memory (OOM) killer.
For more information, see Recommendations for Sizing Vertica Nodes and Clusters in the Vertica Knowledge Base.
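Setting requests equal to limits, as recommended, might look like this sketch (the subcluster name and sizes are illustrative):

```yaml
spec:
  subclusters:
    - name: primary-subcluster
      size: 3
      resources:
        requests:
          cpu: "8"        # equal requests and limits yield the Guaranteed QoS class
          memory: 32Gi
        limits:
          cpu: "8"
          memory: 32Gi
```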
subclusters[i].serviceAnnotations
- Custom annotations added to implementation-specific services. Managed Kubernetes offerings use service annotations to configure services such as network load balancers, virtual private cloud (VPC) subnets, and loggers.
subclusters[i].serviceName
- Identifies the service object that directs client traffic to the subcluster. Assign a single service object to multiple subclusters to process client data with one or more subclusters. For example:
spec:
  ...
  subclusters:
    - name: subcluster-1
      size: 3
      serviceName: connections
    - name: subcluster-2
      size: 3
      serviceName: connections
The previous example creates a service object named metadata.name-connections that load balances client traffic among its assigned subclusters.
For implementation details, see VerticaDB CRD.
subclusters[i].serviceType
- Identifies the type of Kubernetes service to use for external client connectivity. The default type is ClusterIP, which sets a stable IP and port that is accessible only from within Kubernetes itself.
Depending on the service type, you might need to set nodePort or externalIPs in addition to this configuration parameter.
Default: ClusterIP
subclusters[i].size
- The number of pods in the subcluster. This determines the number of Vertica nodes in the subcluster. Changing this number deletes or schedules new pods.
The minimum size of a subcluster is 1. The subclusters kSafety setting determines the minimum and maximum size of the cluster.
Note
By default, the Vertica container uses the Vertica community edition (CE) license. The CE license limits subclusters to 3 Vertica nodes and a maximum of 1TB of data. Use the licenseSecret parameter to add your Vertica license.
For instructions about how to create the license Secret, see VerticaDB CRD.
subclusters[i].tolerations
- Any taints and tolerations used to influence where a pod is scheduled.
passwordSecret
- The Secret that contains the database superuser password. Create this Secret before deployment.
If you do not create this Secret before deployment, there is no password authentication for the database.
The Secret must use a key named password:
$ kubectl create secret generic su-passwd --from-literal=password=secret-password
Add this Secret to the custom resource:
spec:
  passwordSecret: su-passwd
temporarySubclusterRouting.names
- The existing subcluster that accepts traffic during an online upgrade. The operator routes traffic to the first subcluster that is online. For example:
spec:
  ...
  temporarySubclusterRouting:
    names:
      - subcluster-2
      - subcluster-1
In the previous example, the operator selects subcluster-2 during the upgrade, and then routes traffic to subcluster-1 when subcluster-2 is down. As a best practice, use secondary subclusters when rerouting traffic.
Note
By default, the operator selects an existing subcluster to receive rerouted client traffic even if you do not specify a subcluster with this parameter.
temporarySubclusterRouting.template
- Instructs the operator to create a new secondary subcluster during an online upgrade. The operator creates the subcluster when the upgrade begins and deletes it when the upgrade completes.
To define a temporary subcluster, provide a name and size value. For example:
spec:
  ...
  temporarySubclusterRouting:
    template:
      name: transient
      size: 1
upgradePolicy
- Determines how the operator upgrades Vertica server versions. Accepts the following values:
- Offline: The operator stops the cluster to prevent multiple versions from running simultaneously.
- Online: The cluster continues to operate during a rolling update. The data is in read-only mode while the operator upgrades the image for the primary subcluster.
The Online setting has the following restrictions:
- The cluster must currently run Vertica server version 11.1.0 or higher.
- If you have only one subcluster, you must configure temporarySubclusterRouting.template to create a new secondary subcluster during the Online upgrade. Otherwise, the operator performs an Offline upgrade, regardless of the setting.
- Auto: The operator selects either Offline or Online depending on the configuration. The operator selects Online if all of the following are true:
- A license Secret exists.
- K-Safety is 1.
- The cluster is currently running Vertica version 11.1.0 or higher.
Default: Auto
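For example, forcing an online upgrade with a temporary routing subcluster might be sketched as follows (the template name is illustrative):

```yaml
spec:
  upgradePolicy: Online
  temporarySubclusterRouting:
    template:
      name: transient   # illustrative temporary subcluster
      size: 1
```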
volumeMounts
- List of custom volumes and mount paths that persist Vertica server container data. Each volume element requires a name value and a mountPath.
To mount a volume in the Vertica server container filesystem, volumeMounts.name must match the volumes.name value defined in the spec definition, or the webhook returns an error.
For implementation details, see VerticaDB CRD.
volumes
- List of custom volumes that persist Vertica server container data. Each volume element requires a name value and a volume type. volumes accepts any Kubernetes volume type.
To mount a volume in a filesystem, volumes.name must match the volumeMounts.name value for the corresponding volume mount, or the webhook returns an error.
For implementation details, see VerticaDB CRD.
Annotations
Apply each of the following annotations to the metadata.annotations section in the CR:
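For example (a sketch; the metadata name and annotation values are illustrative):

```yaml
apiVersion: vertica.com/v1
kind: VerticaDB
metadata:
  name: verticadb-sample   # illustrative name
  annotations:
    vertica.com/ignore-cluster-lease: "false"
    vertica.com/include-uid-in-path: "true"
    vertica.com/restart-timeout: "1800"   # illustrative 30-minute timeout
```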
vertica.com/https-tls-conf-generation
- Determines whether the Vertica pod stores a plain text configuration file used to generate default certificates for the HTTPS service.
Set this to false to hide the configuration file when you are certain that the HTTPS service can start in its current configuration. For example:
- You altered the TLS configuration with custom certificates.
- The VerticaDB CR was created on server version 24.1.0 or later.
The presence of this configuration file does not interfere with either of these certificate configurations.
Default: true
vertica.com/ignore-cluster-lease
- Ignore the cluster lease when starting or reviving the database.
Default: false
Caution
If another system is using the same communal storage, setting ignore-cluster-lease to true results in data corruption.
vertica.com/ignore-upgrade-path
- When set to false, the operator ensures that you do not downgrade to an earlier release.
Default: true
vertica.com/include-uid-in-path
- When set to true, the operator includes in the path the unique identifier (UID) that Kubernetes assigns to the VerticaDB object. Including the UID creates a unique database path so that you can reuse the communal path in the same endpoint.
Default: false
vertica.com/restart-timeout
- When restarting pods, the number of seconds before the operation times out.
Default: 0. With this setting, the operator uses a 20-minute timeout.
vertica.com/run-nma-in-sidecar
- You must set this to false for API version v1 to prevent the Node Management Agent (NMA) from running in a sidecar container.
vertica.com/superuser-name
- For vclusterops deployments, sets a custom superuser name. All admintools deployments use the default superuser name, dbadmin.
vertica.com/vcluster-ops
- Determines whether the VerticaDB CR installs the vclusterops library to manage the cluster. When omitted, API version v1 assumes this annotation is set to true, and the v1beta1 annotation is set to false.
API version v1 must install vclusterops. You can omit this setting to use the default empty string, or explicitly set this to true.
For deprecated API version v1beta1, you must set this to true for vclusterops deployments. For admintools deployments, you can omit this setting or set it to false.
Default: Empty string ("")
VerticaAutoScaler
verticaDBName
- Required. Name of the VerticaDB CR that the VerticaAutoscaler CR scales resources for.
scalingGranularity
- Required. The scaling strategy. This parameter accepts one of the following values:
- Subcluster: Create or delete entire subclusters. To create a new subcluster, the operator uses a template or an existing subcluster with the same serviceName.
- Pod: Increase or decrease the size of an existing subcluster.
Default: Subcluster
serviceName
- Required. Refers to the subclusters[i].serviceName for the VerticaDB CR.
VerticaAutoscaler uses this value as a selector when scaling subclusters together.
template
- When scalingGranularity is set to Subcluster, you can use this parameter to define how VerticaAutoscaler scales the new subcluster. The following is an example:
spec:
  verticaDBName: dbname
  scalingGranularity: Subcluster
  serviceName: service-name
  template:
    name: autoscaler-name
    size: 2
    serviceName: service-name
    isPrimary: false
If you set template.size to 0, VerticaAutoscaler selects as a template an existing subcluster that uses service-name.
This setting is ignored when scalingGranularity is set to Pod.
EventTrigger
matches[].condition.status
- The status portion of the status condition match. The operator watches the condition specified by matches[].condition.type on the EventTrigger reference object. When that condition changes to the status specified in this parameter, the operator runs the task defined in the EventTrigger.
matches[].condition.type
- The condition portion of the status condition match. The operator watches this condition on the EventTrigger reference object. When this condition changes to the status specified with matches[].condition.status, the operator runs the task defined in the EventTrigger.
references[].object.apiVersion
- Kubernetes API version of the object that the EventTrigger watches.
references[].object.kind
- The type of object that the EventTrigger watches.
references[].object.name
- The name of the object that the EventTrigger watches.
references[].object.namespace
- Optional. The namespace of the object that the EventTrigger watches. The object and the EventTrigger CR must exist within the same namespace.
If omitted, the operator uses the same namespace as the EventTrigger.
template
- Full spec for the Job that EventTrigger runs when references[].condition.type and references[].condition.status are found for a reference object.
For implementation details, see EventTrigger CRD.
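Putting these parameters together, a complete EventTrigger might look like the following sketch (the names, condition type, image, and command are illustrative; consult EventTrigger CRD for working values):

```yaml
apiVersion: vertica.com/v1beta1
kind: EventTrigger
metadata:
  name: event-trigger-example       # illustrative name
spec:
  references:
    - object:
        apiVersion: vertica.com/v1beta1
        kind: VerticaDB
        name: verticadb-sample      # illustrative VerticaDB name
  matches:
    - condition:
        type: DBInitialized         # illustrative status condition type
        status: "True"
  template:                         # Job spec run when the condition matches
    metadata:
      name: create-schema           # illustrative Job name
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: main
              image: example/image:latest   # illustrative image
              command: ["sh", "-c", "echo database initialized"]
```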