<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – vkconfig script options</title>
    <link>/en/kafka-integration/vkconfig-script-options/</link>
    <description>Recent content in vkconfig script options on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/kafka-integration/vkconfig-script-options/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Kafka-Integration: Common vkconfig script options</title>
      <link>/en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/</guid>
      <description>
        
        
        &lt;p&gt;These options are available in all of the tools in the vkconfig script.&lt;/p&gt;
&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--conf&lt;/code&gt; &lt;em&gt;&lt;code&gt;filename&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A text file containing configuration options for the vkconfig script. See &lt;a href=&#34;#Configur&#34;&gt;Configuration File Format&lt;/a&gt; below.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--config-schema&lt;/code&gt; &lt;em&gt;&lt;code&gt;schema_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of the scheduler&#39;s OpenText™ Analytics Database schema. This value is the same as the name of the scheduler. You use this name to identify the scheduler during configuration.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;stream_config&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dbhost&lt;/code&gt; &lt;em&gt;&lt;code&gt;host_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The host name or IP address of the database node acting as the initiator node for the scheduler.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;localhost&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dbport&lt;/code&gt; &lt;em&gt;&lt;code&gt;port_number&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The port to use to connect to the database.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;5433&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--enable-ssl&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Enables the vkconfig script to use SSL to connect to the database or between the database and Kafka. See &lt;a href=&#34;../../../en/kafka-integration/tlsssl-encryption-with-kafka/configuring-your-scheduler-tls-connections/#&#34;&gt;Configuring your scheduler for TLS connections&lt;/a&gt; for more information.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--help&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Prints out a help menu listing available options with a description.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--jdbc-opt&lt;/code&gt; &lt;em&gt;&lt;code&gt;option&lt;/code&gt;&lt;/em&gt;&lt;code&gt;=&lt;/code&gt;&lt;em&gt;&lt;code&gt;value&lt;/code&gt;&lt;/em&gt;&lt;code&gt;[&amp;amp;&lt;/code&gt;&lt;em&gt;&lt;code&gt;option2&lt;/code&gt;&lt;/em&gt;&lt;code&gt;=&lt;/code&gt;&lt;em&gt;&lt;code&gt;value2&lt;/code&gt;&lt;/em&gt;&lt;code&gt;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;One or more options to add to the standard JDBC URL that vkconfig uses to connect to the database. Cannot be combined with &lt;code&gt;--jdbc-url&lt;/code&gt;.&lt;/dd&gt;
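&lt;dd&gt;&lt;p&gt;For example, the following call appends two connection properties to the generated JDBC URL. The property names here are illustrative; use whichever options your JDBC driver supports:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --read \
                                           --jdbc-opt &amp;#39;Label=vkconfig&amp;amp;LoginTimeout=30&amp;#39;
&lt;/code&gt;&lt;/pre&gt;&lt;/dd&gt;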
&lt;dt&gt;&lt;code&gt;--jdbc-url&lt;/code&gt; &lt;em&gt;&lt;code&gt;url&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A complete JDBC URL that vkconfig uses instead of the standard JDBC URL string to connect to the database.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--password&lt;/code&gt; &lt;em&gt;&lt;code&gt;password&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Password for the database user.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--ssl-ca-alias&lt;/code&gt; &lt;em&gt;&lt;code&gt;alias_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The alias of the root certificate authority in the truststore. When set, the scheduler loads only the certificate associated with the specified alias. When omitted, the scheduler loads all certificates into the truststore.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--ssl-key-alias&lt;/code&gt; &lt;em&gt;&lt;code&gt;alias_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The alias of the key and certificate pairs within the keystore. Must be set when the database uses SSL to connect to Kafka.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--ssl-key-password&lt;/code&gt; &lt;em&gt;&lt;code&gt;password&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The password for the SSL key. Must be set when the database uses SSL to connect to Kafka.

&lt;div class=&#34;admonition caution&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Caution&lt;/h4&gt;

Specifying this option on the command line can expose it to other users logged into the host. Always use a configuration file to set this option.

&lt;/div&gt;&lt;/dd&gt;
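&lt;dd&gt;&lt;p&gt;For example, you might place the SSL options in a configuration file that only the scheduler&#39;s owner can read. The alias and password shown here are placeholders:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;# ssl.properties -- restrict access with: chmod 600 ssl.properties
enable-ssl=true
ssl-key-alias=my_key_alias
ssl-key-password=my_key_password
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Then pass the file to vkconfig with &lt;code&gt;--conf ssl.properties&lt;/code&gt;.&lt;/p&gt;&lt;/dd&gt;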
&lt;dt&gt;&lt;code&gt;--username&lt;/code&gt; &lt;em&gt;&lt;code&gt;username&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The database user used to alter the configuration of the scheduler. This user must have CREATE privileges on the scheduler&#39;s schema.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Current user&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--version&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Displays the version number of the scheduler.&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;&lt;a name=&#34;Configur&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;configuration-file-format&#34;&gt;Configuration file format&lt;/h2&gt;
&lt;p&gt;You can use a configuration file to store common parameters you use in your calls to the vkconfig utility. The configuration file is a text file containing one option setting per line in the format:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;option&lt;/span&gt;=&lt;span class=&#34;code-variable&#34;&gt;value&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can also include comments in the option file by prefixing them with a hash mark (#).&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;#config.properties:
username=myuser
password=mypassword
dbhost=localhost
dbport=5433
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You tell vkconfig to use the configuration file using the --conf option:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig source --update --conf config.properties
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can override any stored parameter from the command line:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig source --update --conf config.properties --dbhost otherVerticaHost
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;These examples show how you can use the shared utility options.&lt;/p&gt;
&lt;p&gt;Display help for the scheduler utility:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig scheduler --help
This command configures a Scheduler, which can run and load data from configured
sources and clusters into database tables. It provides options for changing the
&amp;#39;frame duration&amp;#39; (time given per set of batches to resolve), as well as the
dedicated database resource pool the Scheduler will use while running.

Available Options:
PARAMETER               #ARGS    DESCRIPTION
conf                    1        Allow the use of a properties file to associate
                                 parameter keys and values. This file enables
                                 command string reuse and cleaner command strings.
help                    0        Outputs a help context for the given subutility.
version                 0        Outputs the current version of the scheduler.
skip-validation         0        [Deprecated] Use --validation-type.
validation-type         1        Determine what happens when there are
                                 configuration errors. Accepts: ERROR - errors
                                 out, WARN - prints out a message and continues,
                                 SKIP - skip running validations
dbhost                  1        The database hostname that contains
                                 metadata and configuration information. The
                                 default value is &amp;#39;localhost&amp;#39;.
dbport                  1        The port at the hostname to connect to the
                                 database. The default value is &amp;#39;5433&amp;#39;.
username                1        The user to connect to the database. The default
                                 value is the current system user.
password                1        The password for the user connecting to the database.
                                 The default value is empty.
jdbc-url                1        A JDBC URL that can override database connection
                                 parameters and provide additional JDBC options.
jdbc-opt                1        Options to add to the JDBC URL used to connect
                                 to the database (&amp;#39;&amp;amp;&amp;#39;-separated key=value list).
                                 Used with generated URL (i.e. not with
                                 &amp;#39;--jdbc-url&amp;#39; set).
enable-ssl              1        Enable SSL between JDBC and the database and/or
                                 database and Kafka.
ssl-ca-alias            1        The alias of the root CA within the provided
                                 truststore used when connecting between
                                 the database and Kafka.
ssl-key-alias           1        The alias of the key and certificate pair
                                 within the provided keystore used when
                                 connecting between the database and Kafka.
ssl-key-password        1        The password for the key used when connecting
                                 between the database and Kafka. Should be hidden
                                 with file access (see --conf).
config-schema           1        The schema containing the configuration details
                                 to be used, created or edited. This parameter
                                 defines the scheduler. The default value is
                                 &amp;#39;stream_config&amp;#39;.
create                  0        Create a new instance of the supplied type.
read                    0        Read an instance of the supplied type.
update                  0        Update an instance of the supplied type.
delete                  0        Delete an instance of the supplied type.
drop                    0        Drops the specified configuration schema.
                                 CAUTION: this command will completely delete
                                 and remove all configuration and monitoring
                                 data for the specified scheduler.
dump                    0        Dump the config schema query string used to
                                 answer this command in the output.
operator                1        Specifies a user designated as an operator for
                                 the created configuration. Used with --create.
add-operator            1        Add a user designated as an operator for the
                                 specified configuration. Used with --update.
remove-operator         1        Removes a user designated as an operator for
                                 the specified configuration. Used with
                                 --update.
upgrade                 0        Upgrade the current scheduler configuration
                                 schema to the current version of this
                                 scheduler. WARNING: if upgrading between
                                 EXCAVATOR and FRONTLOADER be aware that the
                                 Scheduler is not backwards compatible. The
                                 upgrade procedure will translate your kafka
                                 model into the new stream model.
upgrade-to-schema       1        Used with upgrade: will upgrade the
                                 configuration to a new given schema instead of
                                 upgrading within the same schema.
fix-config              0        Attempts to fix the configuration (ex: dropped
                                 tables) before doing any other updates. Used
                                 with --update.
frame-duration          1        The duration of the Scheduler&amp;#39;s frame, in
                                 which every configured Microbatch runs. Default
                                 is 300 seconds: &amp;#39;00:05:00&amp;#39;
resource-pool           1        The database resource pool to run the Scheduler
                                 on. Default is &amp;#39;general&amp;#39;.
config-refresh          1        The interval of time between Scheduler
                                 configuration refreshes. Default is 5 minutes:
                                 &amp;#39;00:05&amp;#39;
new-source-policy       1        The policy for new Sources to be scheduled
                                 during a frame. Options are: START, END, and
                                 FAIR. Default is &amp;#39;FAIR&amp;#39;.
pushback-policy         1
pushback-max-count      1
auto-sync               1        Automatically update configuration based on
                                 metadata from the Kafka cluster
consumer-group-id       1        The Kafka consumer group id to report offsets
                                 to.
eof-timeout-ms          1        [DEPRECATED] This option has no effect.
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Scheduler tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/scheduler-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/scheduler-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;The vkconfig script&#39;s scheduler tool lets you configure schedulers that continuously load data from Kafka into OpenText™ Analytics Database. Use the scheduler tool to create, update, or delete a scheduler, defined by &lt;code&gt;config-schema&lt;/code&gt;. If you do not specify a scheduler, commands apply to the default stream_config scheduler.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig scheduler {--create | --read | --update | --drop} &lt;span class=&#34;code-variable&#34;&gt;other_options...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--create&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Creates a new scheduler. Cannot be used with &lt;code&gt;--drop&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--read&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Outputs the current settings of the scheduler in JSON format. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--drop&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--update&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the settings of the scheduler. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--drop&lt;/code&gt;, or &lt;code&gt;--read&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--drop&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Drops the scheduler&#39;s schema. Dropping its schema deletes the scheduler. After you drop the scheduler&#39;s schema, you cannot recover it.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--add-operator&lt;/code&gt; &lt;em&gt;&lt;code&gt;user_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Grants a database user account or role access to use and alter the scheduler. Requires the &lt;code&gt;--update&lt;/code&gt; shared utility option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--auto-sync&lt;/code&gt; &lt;code&gt;{TRUE|FALSE}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;When TRUE, the database automatically synchronizes scheduler source information at the interval specified in &lt;code&gt;--config-refresh&lt;/code&gt;.
&lt;p&gt;For details about what the scheduler synchronizes at each interval, see the &amp;quot;Validating Schedulers&amp;quot; and &amp;quot;Synchronizing Schedulers&amp;quot; sections in &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/automatically-consume-data-from-kafka-with-scheduler/#&#34;&gt;Automatically consume data from Kafka with a scheduler&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; TRUE&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--config-refresh&lt;/code&gt; &lt;em&gt;&lt;code&gt;HH:MM:SS &lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The interval of time that the scheduler runs before synchronizing its settings and updating its cached metadata (such as changes made by using the &lt;code&gt;--update&lt;/code&gt; option).
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 00:05:00&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--consumer-group-id&lt;/code&gt; &lt;em&gt;&lt;code&gt;id_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;The name of the Kafka consumer group to which OpenText™ Analytics Database reports its progress consuming messages. Set this value to an empty string to disable progress reports to a Kafka consumer group. For details, see &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/monitoring-message-consumption-with-consumer-groups/#&#34;&gt;Monitoring OpenText Analytics Database message consumption with consumer groups&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &lt;code&gt;vertica_&lt;/code&gt;&lt;em&gt;&lt;code&gt;database-name&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/dd&gt;
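&lt;dd&gt;&lt;p&gt;For example, to report consumption progress to a consumer group named &lt;code&gt;vertica-weblog&lt;/code&gt; (a placeholder name):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --update --consumer-group-id vertica-weblog
&lt;/code&gt;&lt;/pre&gt;&lt;/dd&gt;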
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;When you use this option along with the &lt;code&gt;--read&lt;/code&gt; option, vkconfig outputs the OpenText™ Analytics Database query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within the database without having to go through vkconfig. This option has no effect if not used with &lt;code&gt;--read&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--eof-timeout-ms&lt;/code&gt; &lt;em&gt;&lt;code&gt;number of milliseconds &lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;If a COPY command does not receive any messages within the eof-timeout-ms interval, the database responds by ending that COPY statement.
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/manually-consume-data-from-kafka/#&#34;&gt;Manually consume data from Kafka&lt;/a&gt; for more information.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 1 second&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--fix-config&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Repairs the configuration and re-creates any missing tables. Valid only with the &lt;code&gt;--update&lt;/code&gt; shared configuration option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--frame-duration&lt;/code&gt; &lt;em&gt;&lt;code&gt;HH:MM:SS &lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The duration of each frame for this scheduler. The scheduler must have enough time to run each microbatch (each of which executes a COPY statement). You can approximate the average available time per microbatch using the following equation:
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;&lt;span class=&#34;code-variable&#34;&gt;TimePerMicrobatch&lt;/span&gt;=(&lt;span class=&#34;code-variable&#34;&gt;FrameDuration&lt;/span&gt;*&lt;span class=&#34;code-variable&#34;&gt;Parallelism&lt;/span&gt;)/&lt;span class=&#34;code-variable&#34;&gt;Microbatches&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This is just a rough estimate as there are many factors that impact the amount of time that each microbatch will be able to run.&lt;/p&gt;
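&lt;p&gt;For example, with the default 300-second frame duration, a parallelism of 2, and 10 microbatches, each microbatch gets roughly one minute to run:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;TimePerMicrobatch = (300 * 2) / 10 = 60 seconds
&lt;/code&gt;&lt;/pre&gt;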
&lt;p&gt;The vkconfig utility warns you if the time allocated per microbatch is below 2 seconds. You should usually allocate more than two seconds per microbatch so that the scheduler can load all of the data in the data stream.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

In versions of Vertica earlier than 10.0, the default frame duration was 10 seconds. In version 10.0, this default value was increased to 5 minutes in part to compensate for the removal of WOS. If you created your scheduler with the default frame duration in a version prior to 10.0, the frame duration is not updated to the new default value. In this case, consider adjusting the frame duration manually. See &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/automatically-consume-data-from-kafka-with-scheduler/choosing-frame-duration/#&#34;&gt;Choosing a frame duration&lt;/a&gt; for more information.

&lt;/div&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 00:05:00&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--message_max_bytes&lt;/code&gt; &lt;em&gt;&lt;code&gt;max_message_size&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;Specifies the maximum size, in bytes, of a Kafka protocol batch message.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 25165824&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-source-policy&lt;/code&gt; &lt;code&gt;{FAIR|START|END}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Determines how the database allocates resources to the newly added source, one of the following:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;FAIR: Takes the average length of time from the previous batches and schedules itself appropriately.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;START: All new sources start at the beginning of the frame. The batch receives the minimal amount of time to run.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;END: All new sources start at the end of the frame. The batch receives the maximum amount of time to run.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; FAIR&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--operator&lt;/code&gt; &lt;em&gt;&lt;code&gt;username&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Allows the dbadmin to grant privileges to a previously created database user or role.
&lt;p&gt;This option gives the specified user all privileges on the scheduler instance and EXECUTE privileges on the libkafka library and all its UDxs.&lt;/p&gt;
&lt;p&gt;Granting operator privileges gives the user the right to read data off any source in any cluster that can be reached from the database node.&lt;/p&gt;
&lt;p&gt;The dbadmin must grant the user separate permission for them to have write privileges on the target tables.&lt;/p&gt;
&lt;p&gt;Requires the &lt;code&gt;--create&lt;/code&gt; shared utility option. Use the &lt;code&gt;--add-operator&lt;/code&gt; option to grant operator privileges after the scheduler has been created.&lt;/p&gt;
&lt;p&gt;To revoke privileges, use the &lt;code&gt;--remove-operator&lt;/code&gt; option.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--remove-operator&lt;/code&gt; &lt;em&gt;&lt;code&gt;user_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Removes access to the scheduler from a database user account. Requires the &lt;code&gt;--update&lt;/code&gt; shared utility option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--resource-pool&lt;/code&gt; &lt;em&gt;&lt;code&gt;pool_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The resource pool to be used by all queries executed by this scheduler. You must create this pool in advance.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &lt;a href=&#34;../../../en/admin/managing-db/managing-workloads/resource-pool-architecture/built-pools/&#34;&gt;GENERAL&lt;/a&gt; pool

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The scheduler can use only one-fourth of GENERAL pool&#39;s &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/resource-pools/&#34;&gt;PLANNEDCONCURRENCY&lt;/a&gt;.

&lt;/div&gt;&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--upgrade&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Upgrades the existing scheduler and configuration schema to the current database version. The upgraded version of the scheduler is not backwards compatible with earlier versions. To upgrade a scheduler to an alternate schema, use the &lt;code&gt;upgrade-to-schema&lt;/code&gt; parameter. See &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/automatically-consume-data-from-kafka-with-scheduler/updating-schedulers-after-upgrades/#&#34;&gt;Updating schedulers after OpenText Analytics Database upgrades&lt;/a&gt; for more information.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--upgrade-to-schema&lt;/code&gt; &lt;em&gt;&lt;code&gt;schema name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Copies the scheduler&#39;s schema to a new schema specified by &lt;em&gt;&lt;code&gt;schema name&lt;/code&gt;&lt;/em&gt; and then upgrades it to be compatible with the current version of the database. The database does not modify the old schema. Requires the &lt;code&gt;--upgrade&lt;/code&gt; scheduler utility option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--validation-type&lt;/code&gt; &lt;code&gt;{ERROR|WARN|SKIP}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Specifies the level of validation performed on the scheduler. This option replaces the deprecated &lt;code&gt;--skip-validation&lt;/code&gt; option. Invalid SQL syntax and other errors can cause invalid microbatches. The database supports the following validation types:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ERROR: Cancel configuration or creation if validation fails.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;WARN: Proceed with task if validation fails, but display a warning.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SKIP: Perform no validation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For more information on validation, refer to &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/automatically-consume-data-from-kafka-with-scheduler/#&#34;&gt;Automatically consume data from Kafka with a scheduler&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; ERROR&lt;/p&gt;
&lt;/dd&gt;
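&lt;dd&gt;&lt;p&gt;For example, to create a scheduler that warns about configuration problems instead of failing (the schema name is a placeholder):&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --create --config-schema web_config \
                                           --validation-type WARN
&lt;/code&gt;&lt;/pre&gt;&lt;/dd&gt;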
&lt;/dl&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;These examples show how you can use the scheduler utility options.&lt;/p&gt;
&lt;p&gt;Give a user, Jim, privileges on the stream_config scheduler. Specify that you are editing the stream_config scheduler with the &lt;code&gt;--config-schema&lt;/code&gt; option:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --update --config-schema stream_config --add-operator Jim
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Edit the default stream_config scheduler so that every microbatch waits for data for one second before ending:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --update --eof-timeout-ms 1000
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Upgrade the scheduler named iot_scheduler_8.1 to a new scheduler named iot_scheduler_9.0 that is compatible with the current version of the database:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --upgrade --config-schema iot_scheduler_8.1 \
                                           --upgrade-to-schema iot_scheduler_9.0
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Drop the schema scheduler219a:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig scheduler --drop --config-schema  scheduler219a --username dbadmin
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Read the current settings of the options you can set using the scheduler tool for the scheduler defined in weblog.conf:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig scheduler --read --conf weblog.conf
{&amp;#34;version&amp;#34;:&amp;#34;v9.2.0&amp;#34;, &amp;#34;frame_duration&amp;#34;:&amp;#34;00:00:10&amp;#34;, &amp;#34;resource_pool&amp;#34;:&amp;#34;weblog_pool&amp;#34;,
&amp;#34;config_refresh&amp;#34;:&amp;#34;00:05:00&amp;#34;, &amp;#34;new_source_policy&amp;#34;:&amp;#34;FAIR&amp;#34;,
&amp;#34;pushback_policy&amp;#34;:&amp;#34;LINEAR&amp;#34;, &amp;#34;pushback_max_count&amp;#34;:5, &amp;#34;auto_sync&amp;#34;:true,
&amp;#34;consumer_group_id&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Cluster tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/cluster-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/cluster-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;The vkconfig script&#39;s cluster tool lets you define the streaming hosts your scheduler connects to.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig cluster {--create | --read | --update | --delete} \ 
         [--cluster &lt;span class=&#34;code-variable&#34;&gt;cluster_name&lt;/span&gt;] [&lt;span class=&#34;code-variable&#34;&gt;other_options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--create&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Creates a new cluster. Cannot be used with &lt;code&gt;--delete&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--read&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Outputs the settings of all clusters defined in the scheduler. This output is in JSON format. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.
&lt;p&gt;You can limit the output to specific clusters by supplying one or more cluster names in the &lt;code&gt;--cluster&lt;/code&gt; option. You can also limit the output to clusters that contain one or more specific hosts using the &lt;code&gt;--hosts&lt;/code&gt; option. Use commas to separate multiple values.&lt;/p&gt;
&lt;p&gt;You can use LIKE wildcards in these options. See &lt;a href=&#34;../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;

&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--update&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the settings of &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--read&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--delete&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Deletes the cluster &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;.  Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;When you use this option along with the &lt;code&gt;--read&lt;/code&gt; option, vkconfig outputs the OpenText™ Analytics Database query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within the database without having to go through vkconfig. This option has no effect if not used with &lt;code&gt;--read&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--cluster&lt;/code&gt; &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A unique, case-insensitive name for the cluster to operate on. This option is required for &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--update&lt;/code&gt;, and &lt;code&gt;--delete&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--hosts&lt;/code&gt; &lt;em&gt;b1&lt;/em&gt;:&lt;em&gt;port&lt;/em&gt;[,&lt;em&gt;b2&lt;/em&gt;:&lt;em&gt;port&lt;/em&gt;...]&lt;/dt&gt;
&lt;dd&gt;Identifies the broker hosts that you want to add, edit, or remove from a Kafka cluster. To identify multiple hosts, use a comma delimiter.&lt;/dd&gt;
&lt;dt&gt;&lt;p&gt;&lt;code&gt;--kafka_conf &#39;&lt;/code&gt;&lt;em&gt;&lt;code&gt;kafka_configuration_setting&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&#39;&lt;/code&gt;&lt;/p&gt;
&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;A JSON string of property/value pairs to pass directly to rdkafka, the library that OpenText™ Analytics Database uses to communicate with Kafka. This parameter directly sets global configuration properties that are not available through the database integration with Kafka.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;../../../en/kafka-integration/configuring-and-kafka/directly-setting-kafka-library-options/#&#34;&gt;Directly setting Kafka library options&lt;/a&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--kafka_conf_secret &#39;&lt;/code&gt;&lt;em&gt;&lt;code&gt;kafka_configuration_setting&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&#39;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;Conceals sensitive configuration data that you must pass directly to the rdkafka library, such as passwords. This parameter accepts settings in the same format as &lt;code&gt;kafka_conf&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Values passed to this parameter are not logged or stored in system tables.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-cluster&lt;/code&gt; &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The updated name for the cluster. Requires the &lt;code&gt;--update&lt;/code&gt; shared utility option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--validation-type {ERROR|WARN|SKIP}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Specifies the level of validation performed on a created or updated cluster:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ERROR - Cancel configuration or creation if vkconfig cannot validate that the cluster exists. This is the default setting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;WARN - Proceed with task if validation fails, but display a warning.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SKIP - Perform no validation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Renamed from &lt;code&gt;--skip-validation&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;This example shows how you can create the cluster, StreamCluster1, and assign two hosts:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig cluster --create --cluster StreamCluster1 \
                                           --hosts 10.10.10.10:9092,10.10.10.11:9092 \
                                           --conf myscheduler.config
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example shows how you can list all of the clusters associated with the scheduler defined in the weblog.conf file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig cluster --read --conf weblog.conf
{&amp;#34;cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;,
&amp;#34;hosts&amp;#34;:&amp;#34;kafka01.example.com:9092,kafka02.example.com:9092&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
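The examples above pass a `--conf` file so that connection settings do not have to be repeated on every command line. A configuration file holds one `option=value` pair per line. The following is a minimal sketch only; the credentials, host, and schema name are placeholders you would replace with your own values:

```
username=dbadmin
password=example_password
dbhost=10.20.110.5
dbport=5433
config-schema=stream_config
```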
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Source tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/source-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/source-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;Use the vkconfig script&#39;s source tool to create, update, or delete a source.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig source {--create | --read | --update | --delete} \
         --source &lt;span class=&#34;code-variable&#34;&gt;source_name&lt;/span&gt; [&lt;span class=&#34;code-variable&#34;&gt;other_options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--create&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Creates a new source. Cannot be used with &lt;code&gt;--delete&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--read&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Outputs the current settings of the sources defined in the scheduler. The output is in JSON format. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.
&lt;p&gt;By default this option outputs all of the sources defined in the scheduler. You can limit the output by using the &lt;code&gt;--cluster&lt;/code&gt;, &lt;code&gt;--enabled&lt;/code&gt;, &lt;code&gt;--partitions&lt;/code&gt;, and &lt;code&gt;--source&lt;/code&gt; options. The output will only contain sources that match the values in these options. The &lt;code&gt;--enabled&lt;/code&gt; option can only have a true or false value. The &lt;code&gt;--source&lt;/code&gt; option is case-sensitive.&lt;/p&gt;
&lt;p&gt;You can use LIKE wildcards in these options. See &lt;a href=&#34;../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;

&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--update&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the settings of &lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--read&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--delete&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Deletes the source named &lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--source&lt;/code&gt; &lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Identifies the source to create or alter in the scheduler&#39;s configuration. This option is case-sensitive. You can use any name you like for a new source. Most people use the name of the Kafka topic the scheduler loads its data from. This option is required for &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--update&lt;/code&gt;, and &lt;code&gt;--delete&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--cluster&lt;/code&gt; &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Identifies the cluster containing the source that you want to create or edit. You must have already defined this cluster in the scheduler.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;When you use this option along with the &lt;code&gt;--read&lt;/code&gt; option, vkconfig outputs the OpenText™ Analytics Database query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within the database without having to go through vkconfig. This option has no effect if not used with &lt;code&gt;--read&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--enabled&lt;/code&gt; &lt;code&gt;TRUE|FALSE&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;When TRUE, the source is available for use.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-cluster&lt;/code&gt; &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Changes the cluster this source belongs to.
&lt;p&gt;All sources referencing the old cluster source now target this cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; &lt;code&gt;--update&lt;/code&gt; and &lt;code&gt;--source&lt;/code&gt; options&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-source&lt;/code&gt; &lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the name of an existing source to the name specified by this parameter.
&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; &lt;code&gt;--update&lt;/code&gt; shared utility option&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--partitions&lt;/code&gt; &lt;em&gt;&lt;code&gt;count&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Sets the number of partitions in the source.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The number of partitions defined in the cluster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; &lt;code&gt;--create&lt;/code&gt; and &lt;code&gt;--source&lt;/code&gt; options&lt;/p&gt;
&lt;p&gt;You must keep this consistent with the number of partitions in the Kafka topic.&lt;/p&gt;
&lt;p&gt;Renamed from &lt;code&gt;--num-partitions&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--validation-type {ERROR|WARN|SKIP}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Controls the validation performed on a created or updated source:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ERROR - Cancel configuration or creation if vkconfig cannot validate the source. This is the default setting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;WARN - Proceed with task if validation fails, but display a warning.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SKIP - Perform no validation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Renamed from &lt;code&gt;--skip-validation&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;The following examples show how you can create or update SourceFeed.&lt;/p&gt;
&lt;p&gt;Create the source SourceFeed and assign it to the cluster StreamCluster1 in the scheduler defined by the myscheduler.conf config file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig source --create --source SourceFeed \
                                           --cluster StreamCluster1 --partitions 3 \
                                           --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Update the existing source SourceFeed to use the existing cluster StreamCluster2 in the scheduler defined by the myscheduler.conf config file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig source --update --source SourceFeed \
                                           --new-cluster StreamCluster2 \
                                           --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The following example reads the sources from the scheduler defined by the weblog.conf file.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig source --read --conf weblog.conf
{&amp;#34;source&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;partitions&amp;#34;:1, &amp;#34;src_enabled&amp;#34;:true,
&amp;#34;cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;,
&amp;#34;hosts&amp;#34;:&amp;#34;kafka01.example.com:9092,kafka02.example.com:9092&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
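Because `--read` emits JSON, you can post-process its output with standard tools rather than reading it by eye. The following sketch assumes `python3` is available on the host; it uses the sample output shown above, captured here into a shell variable where it would normally come from the vkconfig invocation itself:

```shell
# Sample output of: vkconfig source --read --conf weblog.conf
# (copied from the example above; normally you would capture it with $(...))
json='{"source":"web_hits", "partitions":1, "src_enabled":true,
"cluster":"kafka_weblog",
"hosts":"kafka01.example.com:9092,kafka02.example.com:9092"}'

# Extract the partition count and enabled flag with python3's json module.
echo "$json" | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d["partitions"], d["src_enabled"])'
```

This prints `1 True`, which you can compare against the partition count of the Kafka topic itself.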
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Target tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/target-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/target-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;Use the target tool to configure an OpenText™ Analytics Database table to receive data from your streaming data application.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig target {--create | --read | --update | --delete} \ 
                [--target-table &lt;span class=&#34;code-variable&#34;&gt;table&lt;/span&gt; --target-schema &lt;span class=&#34;code-variable&#34;&gt;schema&lt;/span&gt;] \
                [&lt;span class=&#34;code-variable&#34;&gt;other_options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--create&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Adds a new target table for the scheduler. Cannot be used with &lt;code&gt;--delete&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--read&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Outputs the targets defined in the scheduler. This output is in JSON format. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.
&lt;p&gt;By default this option outputs all of the targets defined in the configuration schema. You can limit the output to specific targets by using the &lt;code&gt;--target-schema&lt;/code&gt; and &lt;code&gt;--target-table&lt;/code&gt; options. The vkconfig script only outputs targets that match the values you set in these options.&lt;/p&gt;
&lt;p&gt;You can use LIKE wildcards in these options. See &lt;a href=&#34;../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;

&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--update&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the settings for the targeted table. Use with the &lt;code&gt;--new-target-schema&lt;/code&gt; and &lt;code&gt;--new-target-table&lt;/code&gt; options.  Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--read&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--delete&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Removes the scheduler&#39;s association with the target table &lt;em&gt;&lt;code&gt;table&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-table&lt;/code&gt; &lt;em&gt;&lt;code&gt;table&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The existing database table for the scheduler to target. This option is required for &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--update&lt;/code&gt;, and &lt;code&gt;--delete&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-schema&lt;/code&gt; &lt;em&gt;&lt;code&gt;schema&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The existing database schema containing the target table. This option is required for &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--update&lt;/code&gt;, and &lt;code&gt;--delete&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;When you use this option along with the &lt;code&gt;--read&lt;/code&gt; option, vkconfig outputs the OpenText™ Analytics Database query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within the database without having to go through vkconfig. This option has no effect if not used with &lt;code&gt;--read&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-target-schema&lt;/code&gt; &lt;em&gt;&lt;code&gt;schema_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Changes the schema containing the target table to another existing schema.
&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; &lt;code&gt;--update&lt;/code&gt; option.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-target-table&lt;/code&gt; &lt;em&gt;&lt;code&gt;table_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Changes the database target table associated with this schema to another existing table.
&lt;p&gt;&lt;strong&gt;Requires:&lt;/strong&gt; &lt;code&gt;--update&lt;/code&gt; option.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--validation-type {ERROR|WARN|SKIP}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Controls validation performed on a created or updated target:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ERROR - Cancel configuration or creation if vkconfig cannot validate that the table exists. This is the default setting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;WARN - Creates or updates the target if validation fails, but displays a warning.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SKIP - Perform no validation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Renamed from &lt;code&gt;--skip-validation&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
Avoid having columns with primary key restrictions in your target table. The scheduler stops loading data if it encounters a row that has a value that violates this restriction. If you must have a primary key restricted column, try to filter out any redundant values for that column in the streamed data before it is loaded by the scheduler.
&lt;/div&gt;

&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;This example shows how you can create a target from the public.streamtarget table for the scheduler defined in the myscheduler.conf configuration file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig target --create \
            --target-table streamtarget --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example lists all of the targets in the scheduler defined in the weblog.conf configuration file.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig target --read --conf weblog.conf
{&amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Load spec tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/load-spec-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/load-spec-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;The vkconfig script&#39;s load spec tool lets you provide parameters for a COPY statement that loads streaming data.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig load-spec {--create | --read | --update | --delete} \
           [--load-spec &lt;span class=&#34;code-variable&#34;&gt;spec-name&lt;/span&gt;] [&lt;span class=&#34;code-variable&#34;&gt;other-options&lt;/span&gt;...]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--create&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Creates a new load spec. Cannot be used with &lt;code&gt;--delete&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--read&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Outputs the current settings of the load specs defined in the scheduler. This output is in JSON format. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.
&lt;p&gt;By default, this option outputs all load specs defined in the scheduler. You can limit the output by supplying a single value or a comma-separated list of values to these options:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--load-spec&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--filters&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--uds-kv-parameters&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--parser&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--message-max-bytes&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;--parser-parameters&lt;/code&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The vkconfig script only outputs the configuration of load specs that match the values you supply.&lt;/p&gt;
&lt;p&gt;You can use LIKE wildcards in these options. See &lt;a href=&#34;../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;

&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--update&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the settings of &lt;em&gt;&lt;code&gt;spec-name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--read&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--delete&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Deletes the load spec named &lt;em&gt;&lt;code&gt;spec-name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--load-spec &lt;/code&gt;&lt;em&gt;&lt;code&gt;spec-name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A unique name for the load spec to operate on. This option is required for &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--update&lt;/code&gt;, and &lt;code&gt;--delete&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;When you use this option along with the &lt;code&gt;--read&lt;/code&gt; option, vkconfig outputs the OpenText™ Analytics Database query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within the database without having to go through vkconfig. This option has no effect if not used with &lt;code&gt;--read&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--filters &amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;filter-name&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;An OpenText™ Analytics Database FILTER chain containing all of the UDFilters to use in the COPY statement. For more information about filters, see &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/parsing-custom-formats/#&#34;&gt;Parsing custom formats&lt;/a&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--message-max-bytes &lt;/code&gt;&lt;em&gt;&lt;code&gt;max-size&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;Specifies the maximum size, in bytes, of a Kafka protocol batch message.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; 25165824&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-load-spec &lt;/code&gt;&lt;em&gt;&lt;code&gt;new-name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A new, unique name for an existing load spec. Requires the &lt;code&gt;--update&lt;/code&gt; parameter.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--parser-parameters &#34;&lt;span class=&#34;code-variable&#34;&gt;key&lt;/span&gt;=&lt;span class=&#34;code-variable&#34;&gt;value&lt;/span&gt;[,...]&#34;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A list of parameters to provide to the parser specified in the &lt;code&gt;--parser&lt;/code&gt; parameter. When you use a database native parser, the scheduler passes these parameters to the COPY statement where they are in turn passed to the parser.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--parser &lt;/code&gt;&lt;em&gt;&lt;code&gt;parser-name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Identifies a database UDParser to use with a specified target. This parser is used within the COPY statement that the scheduler runs to load data. If you are using a database native parser, the values supplied to the &lt;code&gt;--parser-parameters&lt;/code&gt; option are passed through to the COPY statement.
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; KafkaParser&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--uds-kv-parameters &lt;/code&gt;&lt;em&gt;&lt;code&gt;key&lt;/code&gt;&lt;/em&gt;&lt;code&gt;=&lt;/code&gt;&lt;em&gt;&lt;code&gt;value&lt;/code&gt;&lt;/em&gt;&lt;code&gt;[,...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;A comma-separated list of key/value pairs for the user-defined source.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--validation-type {ERROR|WARN|SKIP}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Specifies the validation performed on a created or updated load spec:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;ERROR&lt;/code&gt;: Cancel configuration or creation if vkconfig cannot validate the load spec. This is the default setting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;WARN&lt;/code&gt;: Proceed with task if validation fails, but display a warning.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;code&gt;SKIP&lt;/code&gt;: Perform no validation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Renamed from &lt;code&gt;--skip-validation&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;These examples show how you can use the Load Spec utility options.&lt;/p&gt;
&lt;p&gt;Create load spec &lt;code&gt;Streamspec1&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig load-spec --create --load-spec Streamspec1 --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Rename load spec &lt;code&gt;Streamspec1&lt;/code&gt; to &lt;code&gt;Streamspec2&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig load-spec --update --load-spec Streamspec1 \
                                                     --new-load-spec Streamspec2 \
                                                     --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Update load spec &lt;code&gt;Filterspec&lt;/code&gt; to use the &lt;code&gt;KafkaInsertLengths&lt;/code&gt; filter and a custom decryption filter:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig load-spec --update --load-spec Filterspec \
                                                     --filters &amp;#34;KafkaInsertLengths() DecryptFilter(parameter=Key)&amp;#34; \
                                                     --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Read the current settings for load spec &lt;code&gt;streamspec1&lt;/code&gt;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig load-spec --read --load-spec streamspec1 --conf weblog.conf
{&amp;#34;load_spec&amp;#34;:&amp;#34;streamspec1&amp;#34;, &amp;#34;filters&amp;#34;:null, &amp;#34;parser&amp;#34;:&amp;#34;KafkaParser&amp;#34;,
&amp;#34;parser_parameters&amp;#34;:null, &amp;#34;load_method&amp;#34;:&amp;#34;TRICKLE&amp;#34;, &amp;#34;message_max_bytes&amp;#34;:null,
&amp;#34;uds_kv_parameters&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;
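Options such as `--filters` and `--parser-parameters` take a single argument that contains commas, equals signs, and sometimes spaces, so quote the whole value to ensure the shell passes it to vkconfig as one argument. A small sketch of the quoting only; the parameter names below are illustrative placeholders, not parameters of any particular parser:

```shell
# Quote the whole key=value list so it reaches vkconfig as one argument.
# flatten_arrays/flatten_maps are placeholder names for illustration.
PARSER_PARAMS="flatten_arrays=true,flatten_maps=true"
echo --parser-parameters "\"$PARSER_PARAMS\""
```

The `echo` stands in for the vkconfig invocation and prints the argument exactly as the tool would receive it: `--parser-parameters "flatten_arrays=true,flatten_maps=true"`.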
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Microbatch tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/microbatch-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/microbatch-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;The vkconfig script&#39;s microbatch tool lets you configure a scheduler&#39;s microbatches.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig microbatch {--create | --read | --update | --delete} \
         [--microbatch&lt;span class=&#34;code-variable&#34;&gt; microbatch_name&lt;/span&gt;] [&lt;span class=&#34;code-variable&#34;&gt;other_options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--create&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Creates a new microbatch. Cannot be used with &lt;code&gt;--delete&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--read&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Outputs the current settings of all microbatches defined in the scheduler. This output is in JSON format. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.
&lt;p&gt;You can limit the output to specific microbatches by using the &lt;code&gt;--consumer-group-id&lt;/code&gt;, &lt;code&gt;--enabled&lt;/code&gt;, &lt;code&gt;--load-spec&lt;/code&gt;, &lt;code&gt;--microbatch&lt;/code&gt;, &lt;code&gt;--rejection-schema&lt;/code&gt;, &lt;code&gt;--rejection-table&lt;/code&gt;, &lt;code&gt;--target-schema&lt;/code&gt;, &lt;code&gt;--target-table&lt;/code&gt;, and &lt;code&gt;--target-columns&lt;/code&gt; options. The &lt;code&gt;--enabled&lt;/code&gt; option only accepts a true or false value.&lt;/p&gt;
&lt;p&gt;You can use LIKE wildcards in these options. See &lt;a href=&#34;../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;

&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--update&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Updates the settings of &lt;em&gt;&lt;code&gt;microbatch_name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--delete&lt;/code&gt;, or &lt;code&gt;--read&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--delete&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Deletes the microbatch named &lt;em&gt;&lt;code&gt;microbatch_name&lt;/code&gt;&lt;/em&gt;. Cannot be used with &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--read&lt;/code&gt;, or &lt;code&gt;--update&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--microbatch &lt;/code&gt;&lt;em&gt;&lt;code&gt;microbatch_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A unique, case-insensitive name for the microbatch. This option is required for &lt;code&gt;--create&lt;/code&gt;, &lt;code&gt;--update&lt;/code&gt;, and &lt;code&gt;--delete&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--add-source-cluster &lt;/code&gt;&lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of a cluster to assign to the microbatch you specify with the &lt;code&gt;--microbatch&lt;/code&gt; option. You can use this parameter once per command. You can also use it with &lt;code&gt;--update&lt;/code&gt; to add sources to a microbatch. You can only add sources from the same cluster to a single microbatch. Requires &lt;code&gt;--add-source&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--add-source &lt;/code&gt;&lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of a source to assign to this microbatch. You can use this parameter once per command. You can also use it with &lt;code&gt;--update&lt;/code&gt; to add sources to a microbatch. Requires &lt;code&gt;--add-source-cluster&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--cluster &lt;/code&gt;&lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of the cluster to which the &lt;code&gt;--offset&lt;/code&gt; option applies. Only required if the microbatch defines more than one cluster or the &lt;code&gt;--source&lt;/code&gt; parameter is supplied. Requires the &lt;code&gt;--offset&lt;/code&gt; option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--consumer-group-id &lt;/code&gt;&lt;em&gt;&lt;code&gt;id_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;The name of the Kafka consumer group to which OpenText™ Analytics Database reports its progress consuming messages. Set this value to an empty string to disable progress reports to a Kafka consumer group. For details, see &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/monitoring-message-consumption-with-consumer-groups/#&#34;&gt;Monitoring OpenText Analytics Database message consumption with consumer groups&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Default:&lt;/strong&gt; &lt;code&gt;vertica_&lt;/code&gt;&lt;em&gt;&lt;code&gt;database-name&lt;/code&gt;&lt;/em&gt;&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;When you use this option along with the &lt;code&gt;--read&lt;/code&gt; option, vkconfig outputs the OpenText™ Analytics Database query it would use to retrieve the data, rather than outputting the data itself. This option is useful if you want to access the data from within the database without having to go through vkconfig. This option has no effect if not used with &lt;code&gt;--read&lt;/code&gt;.&lt;/p&gt;
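&lt;p&gt;For example, a command like the following (the configuration file name is a placeholder) prints the query rather than the microbatch data:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --read --dump --conf weblog.conf
&lt;/code&gt;&lt;/pre&gt;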
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--enabled TRUE|FALSE&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;When TRUE, allows the microbatch to execute.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--load-spec &lt;/code&gt;&lt;em&gt;&lt;code&gt;loadspec_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The load spec to use while processing this microbatch.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--max-parallelism &lt;/code&gt;&lt;em&gt;&lt;code&gt;max_num_loads&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The maximum number of simultaneous COPY statements created for the microbatch. The scheduler dynamically splits a single microbatch with multiple partitions into &lt;em&gt;&lt;code&gt;max_num_loads&lt;/code&gt;&lt;/em&gt; COPY statements with fewer partitions.
&lt;p&gt;This option allows you to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Control the transaction size.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optimize your loads according to your scheduler&#39;s &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/automatically-consume-data-from-kafka-with-scheduler/managing-scheduler-resources-and-performance/&#34;&gt;resource pool&lt;/a&gt; settings, such as &lt;a href=&#34;../../../en/sql-reference/system-tables/v-catalog-schema/resource-pools/&#34;&gt;PLANNEDCONCURRENCY&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
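&lt;p&gt;For example, a command like the following (the microbatch and configuration file names are placeholders) caps the microbatch at four simultaneous COPY statements:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --update --microbatch mbatch1 \
                                                      --max-parallelism 4 \
                                                      --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;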
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--new-microbatch &lt;/code&gt;&lt;em&gt;&lt;code&gt;updated_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The updated name for the microbatch. Requires the &lt;code&gt;--update&lt;/code&gt; option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--offset &lt;/code&gt;&lt;em&gt;&lt;code&gt;partition_offset&lt;/code&gt;&lt;/em&gt;&lt;code&gt;[,...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;The offset of the message in the source where the microbatch starts its load. If you use this parameter, you must supply an offset value for each partition in the source or each partition you list in the &lt;code&gt;--partition&lt;/code&gt; option.
&lt;p&gt;You can use this option to skip some messages in the source or reload previously read messages.&lt;/p&gt;
&lt;p&gt;See &lt;a href=&#34;#Special&#34;&gt;Special Starting Offset Values&lt;/a&gt; below for more information.

&lt;div class=&#34;admonition important&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Important&lt;/h4&gt;
You cannot set an offset for a microbatch while the scheduler is running. If you attempt to do so, the vkconfig utility returns an error. Use the shutdown utility to shut the scheduler down before setting an offset for a microbatch.
&lt;/div&gt;&lt;/p&gt;
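&lt;p&gt;For example, commands like the following (the names and offset values are placeholders) shut the scheduler down and then set the starting offsets for a three-partition source:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig shutdown --conf myscheduler.conf
$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --update --microbatch mbatch1 \
                                                      --offset 1000,1000,1000 \
                                                      --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;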
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--partition &lt;/code&gt;&lt;em&gt;&lt;code&gt;partition&lt;/code&gt;&lt;/em&gt;&lt;code&gt;[,...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;One or more partitions to which the offsets given in the &lt;code&gt;--offset&lt;/code&gt; option apply. If you supply this option, then the offset values given in the &lt;code&gt;--offset&lt;/code&gt; option apply to the partitions you specify. Requires the &lt;code&gt;--offset&lt;/code&gt; option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--rejection-schema &lt;/code&gt;&lt;em&gt;&lt;code&gt;schema_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The existing OpenText™ Analytics Database schema that contains a table for storing rejected messages.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--rejection-table &lt;/code&gt;&lt;em&gt;&lt;code&gt;table_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The existing database table that stores rejected messages.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--remove-source-cluster &lt;/code&gt;&lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of a cluster to remove from this microbatch. You can use this parameter once per command. Requires &lt;code&gt;--remove-source&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--remove-source &lt;/code&gt;&lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of a source to remove from this microbatch. You can use this parameter once per command. You can also use it with &lt;code&gt;--update&lt;/code&gt; to remove multiple sources from a microbatch. Requires &lt;code&gt;--remove-source-cluster&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--source &lt;/code&gt;&lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of the source to which the offset in the &lt;code&gt;--offset&lt;/code&gt; option applies. Required when the microbatch defines more than one source or the &lt;code&gt;--cluster&lt;/code&gt; parameter is given. Requires the &lt;code&gt;--offset&lt;/code&gt; option.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-columns &lt;/code&gt;&lt;em&gt;&lt;code&gt;column_expression&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;A column expression for the target table, where &lt;em&gt;&lt;code&gt;column_expression&lt;/code&gt;&lt;/em&gt; can be a comma-delimited list of columns or a complete expression.
&lt;p&gt;See the COPY statement &lt;a href=&#34;../../../en/sql-reference/statements/copy/parameters/#&#34;&gt;Parameters&lt;/a&gt; for a description of column expressions.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-schema &lt;/code&gt;&lt;em&gt;&lt;code&gt;schema_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The existing database target schema associated with this microbatch.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-table &lt;/code&gt;&lt;em&gt;&lt;code&gt;table_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of a database table corresponding to the target. This table must belong to the target schema.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--validation-type {ERROR|WARN|SKIP}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Controls the validation performed on a created or updated microbatch:
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;ERROR - Cancel configuration or creation if vkconfig cannot validate the microbatch. This is the default setting.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;WARN - Proceed with task if validation fails, but display a warning.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;SKIP - Perform no validation.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Renamed from &lt;code&gt;--skip-validation&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;p&gt;&lt;a name=&#34;Special&#34;&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id=&#34;special-starting-offset-values&#34;&gt;Special starting offset values&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;--offset&lt;/code&gt; option lets you start loading messages from a specific point in the topic&#39;s partition. It also accepts one of two special offset values:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;-2 tells the scheduler to start loading at the earliest available message in the topic&#39;s partition. This value is useful when you want to load as many messages as you can from the Kafka topic&#39;s partition.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;-3 tells the scheduler to start loading from the consumer group&#39;s saved offset. If the consumer group does not have a saved offset, it starts loading from the earliest available message in the topic partition. See &lt;a href=&#34;../../../en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/monitoring-message-consumption-with-consumer-groups/#&#34;&gt;Monitoring OpenText Analytics Database message consumption with consumer groups&lt;/a&gt; for more information.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
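&lt;p&gt;For example, a command like the following (the names are placeholders) tells the scheduler to resume a single-partition microbatch from the consumer group&#39;s saved offset:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --update --microbatch mbatch1 \
                                                      --offset -3 \
                                                      --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;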
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;This example creates the microbatch mbatch1, identifying the target schema, target table, load spec, and source for the microbatch:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --create --microbatch mbatch1 \
                                                    --target-schema public \
                                                    --target-table BatchTarget \
                                                    --load-spec Filterspec \
                                                    --add-source SourceFeed \
                                                    --add-source-cluster StreamCluster1 \
                                                    --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example demonstrates listing the current settings for the microbatches in the scheduler defined in the weblog.conf configuration file.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig microbatch --read --conf weblog.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_columns&amp;#34;:null, &amp;#34;rejection_schema&amp;#34;:null,
&amp;#34;rejection_table&amp;#34;:null, &amp;#34;enabled&amp;#34;:true, &amp;#34;consumer_group_id&amp;#34;:null,
&amp;#34;load_spec&amp;#34;:&amp;#34;weblog_load&amp;#34;, &amp;#34;filters&amp;#34;:null, &amp;#34;parser&amp;#34;:&amp;#34;KafkaJSONParser&amp;#34;,
&amp;#34;parser_parameters&amp;#34;:null, &amp;#34;load_method&amp;#34;:&amp;#34;TRICKLE&amp;#34;, &amp;#34;message_max_bytes&amp;#34;:null,
&amp;#34;uds_kv_parameters&amp;#34;:null, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
&amp;#34;source&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;partitions&amp;#34;:1, &amp;#34;src_enabled&amp;#34;:true, &amp;#34;cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;,
&amp;#34;hosts&amp;#34;:&amp;#34;kafka01.example.com:9092,kafka02.example.com:9092&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Launch tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/launch-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/launch-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;Use the vkconfig script&#39;s launch tool to start a scheduler instance.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig launch [&lt;span class=&#34;code-variable&#34;&gt;options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--enable-ssl&lt;/code&gt; &lt;code&gt;{true|false}&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;(Optional) Enables SSL authentication
between Kafka and OpenText™ Analytics Database. For more information, see &lt;a href=&#34;../../../en/kafka-integration/tlsssl-encryption-with-kafka/#&#34;&gt;TLS/SSL encryption with Kafka&lt;/a&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--ssl-ca-alias&lt;/code&gt; &lt;em&gt;&lt;code&gt;alias&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The user-defined alias of the root certifying authority you are using to authenticate communication between the database and Kafka. This parameter is used only when SSL is enabled.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--ssl-key-alias&lt;/code&gt; &lt;em&gt;&lt;code&gt;alias&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The user-defined alias of the key/certificate pair you are using to authenticate communication between the database and Kafka. This parameter is used only when SSL is enabled.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--ssl-key-password&lt;/code&gt; &lt;em&gt;&lt;code&gt;password&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The password used to create your SSL key. This parameter is used only when SSL is enabled.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--instance-name&lt;/code&gt; &lt;em&gt;&lt;code&gt;name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;(Optional) Allows you to name the process running the scheduler. You can use this name when viewing the scheduler_history table to find which instance is currently running.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--refresh-interval&lt;/code&gt; &lt;em&gt;&lt;code&gt;hours&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;(Optional) The time interval at which the connection between the database and Kafka is refreshed (24 hours by default).&lt;/dd&gt;
&lt;dt&gt;&lt;p&gt;&lt;code&gt;--kafka_conf &#39;&lt;/code&gt;&lt;em&gt;&lt;code&gt;kafka_configuration_setting&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&#39;&lt;/code&gt;&lt;/p&gt;
&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;A JSON string of property/value pairs to pass directly to rdkafka, the library that OpenText™ Analytics Database uses to communicate with Kafka. This parameter directly sets global configuration properties that are not available through the database integration with Kafka.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;../../../en/kafka-integration/configuring-and-kafka/directly-setting-kafka-library-options/#&#34;&gt;Directly setting Kafka library options&lt;/a&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--kafka_conf_secret &#39;&lt;/code&gt;&lt;em&gt;&lt;code&gt;kafka_configuration_setting&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&#39;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;Conceals sensitive configuration data that you must pass directly to the rdkafka library, such as passwords. This parameter accepts settings in the same format as &lt;code&gt;kafka_conf&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Values passed to this parameter are not logged or stored in system tables.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;This example shows how you can launch the scheduler defined in the myscheduler.conf config file and give it the instance name PrimaryScheduler:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ nohup /opt/vertica/packages/kafka/bin/vkconfig launch --instance-name PrimaryScheduler \
  --conf myscheduler.conf &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example shows how you can launch an instance named SecureScheduler with SSL enabled:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ nohup /opt/vertica/packages/kafka/bin/vkconfig launch --instance-name SecureScheduler --enable-ssl true \
                                                  --ssl-ca-alias authenticcert --ssl-key-alias ourkey \
                                                  --ssl-key-password secret \
                                                  --conf myscheduler.conf \
                                                  &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;
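&lt;p&gt;This example sketches passing an rdkafka global property directly at launch time; the instance name and property value shown are illustrative:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ nohup /opt/vertica/packages/kafka/bin/vkconfig launch --instance-name TunedScheduler \
                                                  --kafka_conf &#39;{&amp;#34;client.id&amp;#34;:&amp;#34;vertica-scheduler&amp;#34;}&#39; \
                                                  --conf myscheduler.conf &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;amp;
&lt;/code&gt;&lt;/pre&gt;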
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Shutdown tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/shutdown-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/shutdown-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;Use the vkconfig script&#39;s shutdown tool to terminate one or all OpenText™ Analytics Database schedulers running on a host. Always run this command before restarting a scheduler to ensure the scheduler has shut down correctly.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig shutdown [&lt;span class=&#34;code-variable&#34;&gt;options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;To terminate all schedulers running on a host, use the &lt;code&gt;shutdown&lt;/code&gt; command with no options:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig shutdown
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Use the &lt;code&gt;--conf&lt;/code&gt; or &lt;code&gt;--config-schema&lt;/code&gt; option to specify a scheduler to shut down. The following command terminates the scheduler that was launched with the same &lt;code&gt;--conf myscheduler.conf&lt;/code&gt; option:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig shutdown --conf myscheduler.conf
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Statistics tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/statistics-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/statistics-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;The statistics tool lets you access the history of microbatches that your scheduler has run. This tool outputs the log of the microbatches in JSON format to the standard output. You can use its options to filter the list of microbatches to get just the microbatches that interest you.

&lt;div class=&#34;alert admonition note&#34; role=&#34;alert&#34;&gt;
&lt;h4 class=&#34;admonition-head&#34;&gt;Note&lt;/h4&gt;

The statistics tool can sometimes produce confusing output if you have altered the scheduler configuration over time. For example, suppose microbatch-a targets a table. Later, you change the scheduler&#39;s configuration so that microbatch-b targets the same table. If you then run the statistics tool and filter the microbatch log by target table, the output shows entries from both microbatch-a and microbatch-b.

&lt;/div&gt;&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig statistics [&lt;span class=&#34;code-variable&#34;&gt;options&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--cluster &amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;cluster&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;[,&amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;cluster2&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that retrieved data from a cluster whose name matches one in the list you supply.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--dump&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Instead of returning microbatch data, return the SQL query that vkconfig would execute to extract the data from the scheduler tables. You can use this option if you want to use an OpenText™ Analytics Database client application to get the microbatch log instead of using vkconfig&#39;s JSON output.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--from-timestamp &amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that began after &lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;. The timestamp value is in &lt;em&gt;yyyy&lt;/em&gt;-[&lt;em&gt;m&lt;/em&gt;]&lt;em&gt;m&lt;/em&gt;-[&lt;em&gt;d&lt;/em&gt;]&lt;em&gt;d&lt;/em&gt; &lt;em&gt;hh&lt;/em&gt;:&lt;em&gt;mm&lt;/em&gt;:&lt;em&gt;ss&lt;/em&gt; format.
&lt;p&gt;Cannot be used in conjunction with &lt;code&gt;--last&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--last &lt;/code&gt;&lt;em&gt;&lt;code&gt;number&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Returns the &lt;em&gt;number&lt;/em&gt; most recent microbatches that meet all other filters. Cannot be used in conjunction with &lt;code&gt;--from-timestamp&lt;/code&gt; or &lt;code&gt;--to-timestamp&lt;/code&gt;.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--microbatch &amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;name&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;[,&amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;name2&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches whose name matches one of the names in the comma-separated list.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--partition &lt;/code&gt;&lt;em&gt;&lt;code&gt;partition#&lt;/code&gt;&lt;/em&gt;&lt;code&gt;[,&lt;/code&gt;&lt;em&gt;&lt;code&gt;partition#2&lt;/code&gt;&lt;/em&gt;&lt;code&gt;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that accessed data from the topic partition that matches one of the values in the partition list.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--source &amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;source&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;[,&amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;source2&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that accessed data from a source whose name matches one of the names in the list you supply to this argument.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-schema &amp;quot;schema&amp;quot;[,&amp;quot;schema2&amp;quot;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that wrote data to the database schemas whose name matches one of the names in the target schema list argument.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--target-table &amp;quot;table&amp;quot;[,&amp;quot;table2&amp;quot;...]&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that wrote data to database tables whose names match one of the names in the target table list argument.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--to-timestamp &amp;quot;&lt;/code&gt;&lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&amp;quot;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;Only return microbatches that began before &lt;em&gt;&lt;code&gt;timestamp&lt;/code&gt;&lt;/em&gt;. The timestamp value is in &lt;em&gt;yyyy&lt;/em&gt;-[&lt;em&gt;m&lt;/em&gt;]&lt;em&gt;m&lt;/em&gt;-[&lt;em&gt;d&lt;/em&gt;]&lt;em&gt;d&lt;/em&gt; &lt;em&gt;hh&lt;/em&gt;:&lt;em&gt;mm&lt;/em&gt;:&lt;em&gt;ss&lt;/em&gt; format.
&lt;p&gt;Cannot be used in conjunction with &lt;code&gt;--last&lt;/code&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
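&lt;p&gt;For example, to limit the log to microbatches that began within a specific time range (the dates and configuration file name are placeholders), you can combine the timestamp filters:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics --from-timestamp &amp;#34;2018-11-06 00:00:00&amp;#34; \
                                                      --to-timestamp &amp;#34;2018-11-06 23:59:59&amp;#34; \
                                                      --conf weblog.conf
&lt;/code&gt;&lt;/pre&gt;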
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options that are available in all of the vkconfig tools.&lt;/p&gt;
&lt;h2 id=&#34;usage-considerations&#34;&gt;Usage considerations&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;You can use LIKE wildcards in the values you supply to the &lt;code&gt;--cluster&lt;/code&gt;, &lt;code&gt;--microbatch&lt;/code&gt;, &lt;code&gt;--source&lt;/code&gt;, &lt;code&gt;--target-schema&lt;/code&gt;, and &lt;code&gt;--target-table&lt;/code&gt; arguments. This feature lets you match partial strings in the microbatch data. See &lt;a href=&#34;../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The string comparisons for the &lt;code&gt;--cluster&lt;/code&gt;, &lt;code&gt;--microbatch&lt;/code&gt;, &lt;code&gt;--source&lt;/code&gt;, &lt;code&gt;--target-schema&lt;/code&gt;, and &lt;code&gt;--target-table&lt;/code&gt; arguments are case-insensitive.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The date and time values you supply to the &lt;code&gt;--from-timestamp&lt;/code&gt; and &lt;code&gt;--to-timestamp&lt;/code&gt; arguments are parsed using the &lt;a href=&#34;https://docs.oracle.com/javase/8/docs/api/java/sql/Timestamp.html&#34;&gt;java.sql.Timestamp&lt;/a&gt; format. This parser accepts values that you might expect it to reject. For example, if you supply a timestamp of 01-01-2018 24:99:99, the Java timestamp parser silently converts the date to 2018-01-02 01:40:39 instead of returning an error.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;examples&#34;&gt;Examples&lt;/h2&gt;
&lt;p&gt;This example gets the last microbatch that the scheduler defined in the weblog.conf file ran:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics --last 1 --conf weblog.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
&amp;#34;start_offset&amp;#34;:80000, &amp;#34;end_offset&amp;#34;:79999, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:0, &amp;#34;partition_messages&amp;#34;:0,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.793000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 09:42:00.176747&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 09:42:00.437787&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.214314&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274513069,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 09:41:59.949&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If your scheduler is reading from more than one partition, the &lt;code&gt;--last 1&lt;/code&gt; option lists the last microbatch from each partition:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics --last 1 --conf iot.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
&amp;#34;start_offset&amp;#34;:-2, &amp;#34;end_offset&amp;#34;:-2, &amp;#34;end_reason&amp;#34;:&amp;#34;DEADLINE&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:0, &amp;#34;partition_messages&amp;#34;:0,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.842000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.387567&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:59.400219&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:09.950127&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274537015,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.213&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:1,
&amp;#34;start_offset&amp;#34;:1604, &amp;#34;end_offset&amp;#34;:1653, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:4387, &amp;#34;partition_messages&amp;#34;:50,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.842000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.387567&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:59.400219&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.220329&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274537015,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.213&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:2,
&amp;#34;start_offset&amp;#34;:1603, &amp;#34;end_offset&amp;#34;:1652, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:4383, &amp;#34;partition_messages&amp;#34;:50,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.842000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.387567&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:59.400219&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.318997&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274537015,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.213&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:3,
&amp;#34;start_offset&amp;#34;:1604, &amp;#34;end_offset&amp;#34;:1653, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:4375, &amp;#34;partition_messages&amp;#34;:50,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.842000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.387567&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:59.400219&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.219543&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274537015,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.213&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can use the &lt;code&gt;--partition&lt;/code&gt; argument to get just the partitions you want:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics --last 1 --partition 2 --conf iot.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:2,
&amp;#34;start_offset&amp;#34;:1603, &amp;#34;end_offset&amp;#34;:1652, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:4383, &amp;#34;partition_messages&amp;#34;:50,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.842000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.387567&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:59.400219&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.318997&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274537015,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.213&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;If your scheduler reads from more than one source, the &lt;code&gt;--last 1&lt;/code&gt; option outputs the last microbatch from each source:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics --last 1 --conf weblog.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;weberrors&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_errors&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;web_errors&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;,
&amp;#34;source_partition&amp;#34;:0, &amp;#34;start_offset&amp;#34;:10000, &amp;#34;end_offset&amp;#34;:9999,
&amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;, &amp;#34;end_reason_message&amp;#34;:null,
&amp;#34;partition_bytes&amp;#34;:0, &amp;#34;partition_messages&amp;#34;:0, &amp;#34;timeslice&amp;#34;:&amp;#34;00:00:04.909000&amp;#34;,
&amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 10:58:02.632624&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 10:58:03.058663&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.220618&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274523991,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 10:58:02.394&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
&amp;#34;start_offset&amp;#34;:80000, &amp;#34;end_offset&amp;#34;:79999, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:0, &amp;#34;partition_messages&amp;#34;:0,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.128000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 10:58:03.322852&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 10:58:03.63047&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.226493&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274524004,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 10:58:02.394&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can use wildcards to enable partial matches. This example demonstrates getting the last microbatch for all microbatches whose names end with &amp;quot;log&amp;quot;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;~$ /opt/vertica/packages/kafka/bin/vkconfig statistics --microbatch &amp;#34;%log&amp;#34; \
                                            --last 1 --conf weblog.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
&amp;#34;start_offset&amp;#34;:80000, &amp;#34;end_offset&amp;#34;:79999, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:0, &amp;#34;partition_messages&amp;#34;:0,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:04.874000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 11:37:16.17198&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 11:37:16.460844&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.213129&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274529932,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 11:37:15.877&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;To get microbatches from a specific period of time, use the &lt;code&gt;--from-timestamp&lt;/code&gt; and &lt;code&gt;--to-timestamp&lt;/code&gt; arguments. This example gets the microbatches that read from partition #1 between 12:52:30 and 12:53:00 on 2018-11-06 for the scheduler defined in iot.conf.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics  --partition 1 \
                        --from-timestamp &amp;#34;2018-11-06 12:52:30&amp;#34; \
                        --to-timestamp &amp;#34;2018-11-06 12:53:00&amp;#34; --conf iot.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:1,
&amp;#34;start_offset&amp;#34;:1604, &amp;#34;end_offset&amp;#34;:1653, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:4387, &amp;#34;partition_messages&amp;#34;:50,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.842000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.387567&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:59.400219&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.220329&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274537015,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:49.213&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;iotlog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;,
&amp;#34;source_name&amp;#34;:&amp;#34;iot_data&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_iot&amp;#34;, &amp;#34;source_partition&amp;#34;:1,
&amp;#34;start_offset&amp;#34;:1554, &amp;#34;end_offset&amp;#34;:1603, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
&amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:4371, &amp;#34;partition_messages&amp;#34;:50,
&amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.788000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-06 12:52:38.930428&amp;#34;,
&amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-06 12:52:48.932604&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.231709&amp;#34;,
&amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996274536981,
&amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-06 12:52:38.685&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example demonstrates using the &lt;code&gt;--dump&lt;/code&gt; argument to get the SQL statement vkconfig executed to retrieve the output from the previous example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig statistics  --dump --partition 1 \
                       --from-timestamp &amp;#34;2018-11-06 12:52:30&amp;#34; \
                       --to-timestamp &amp;#34;2018-11-06 12:53:00&amp;#34; --conf iot.conf
SELECT microbatch, target_schema, target_table, source_name, source_cluster,
source_partition, start_offset, end_offset, end_reason, end_reason_message,
partition_bytes, partition_messages, timeslice, batch_start, batch_end,
last_batch_duration AS source_duration, consecutive_error_count, transaction_id,
frame_start, frame_end FROM &amp;#34;iot_sched&amp;#34;.stream_microbatch_history WHERE
(source_partition = &amp;#39;1&amp;#39;) AND (frame_start &amp;gt;= &amp;#39;2018-11-06 12:52:30.0&amp;#39;) AND
(frame_start &amp;lt; &amp;#39;2018-11-06 12:53:00.0&amp;#39;) ORDER BY frame_start DESC, microbatch,
source_cluster, source_name, source_partition;
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Sync tool options</title>
      <link>/en/kafka-integration/vkconfig-script-options/sync-tool-options/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/vkconfig-script-options/sync-tool-options/</guid>
      <description>
        
        
        &lt;p&gt;The sync utility immediately updates all source definitions by querying the brokers of the Kafka cluster defined by each source. By default, it updates all of the sources defined in the target schema. To update only specific sources, use the &lt;code&gt;--source&lt;/code&gt; and &lt;code&gt;--cluster&lt;/code&gt; options to specify which sources to update.&lt;/p&gt;
&lt;h2 id=&#34;syntax&#34;&gt;Syntax&lt;/h2&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;vkconfig sync [&lt;span class=&#34;code-variable&#34;&gt;options...&lt;/span&gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;dl&gt;
&lt;dt&gt;&lt;code&gt;--source&lt;/code&gt; &lt;em&gt;&lt;code&gt;source_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;The name of the source to sync. This source must already exist in the target schema.&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--cluster&lt;/code&gt; &lt;em&gt;&lt;code&gt;cluster_name&lt;/code&gt;&lt;/em&gt;&lt;/dt&gt;
&lt;dd&gt;Identifies the cluster containing the source that you want to sync. You must have already defined this cluster in the scheduler.&lt;/dd&gt;
&lt;dt&gt;&lt;p&gt;&lt;code&gt;--kafka_conf &#39;&lt;/code&gt;&lt;em&gt;&lt;code&gt;kafka_configuration_setting&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&#39;&lt;/code&gt;&lt;/p&gt;
&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;A JSON string of property/value pairs to pass directly to rdkafka, the library that OpenText™ Analytics Database uses to communicate with Kafka. This parameter directly sets global configuration properties that are not available through the database integration with Kafka.&lt;/p&gt;
&lt;p&gt;For details, see &lt;a href=&#34;../../../en/kafka-integration/configuring-and-kafka/directly-setting-kafka-library-options/#&#34;&gt;Directly setting Kafka library options&lt;/a&gt;.&lt;/p&gt;
&lt;/dd&gt;
&lt;dt&gt;&lt;code&gt;--kafka_conf_secret &#39;&lt;/code&gt;&lt;em&gt;&lt;code&gt;kafka_configuration_setting&lt;/code&gt;&lt;/em&gt;&lt;code&gt;&#39;&lt;/code&gt;&lt;/dt&gt;
&lt;dd&gt;&lt;p&gt;Conceals sensitive configuration data that you must pass directly to the rdkafka library, such as passwords. This parameter accepts settings in the same format as &lt;code&gt;kafka_conf&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Values passed to this parameter are not logged or stored in system tables.&lt;/p&gt;
&lt;/dd&gt;
&lt;/dl&gt;
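&lt;p&gt;For example, the following command syncs a single source rather than all sources in the target schema. The source, cluster, and configuration file names shown here are placeholders for your own values:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig sync --source web_hits \
                                               --cluster kafka_weblog \
                                               --conf weblog.conf
&lt;/code&gt;&lt;/pre&gt;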
&lt;p&gt;See &lt;a href=&#34;../../../en/kafka-integration/vkconfig-script-options/common-vkconfig-script-options/#&#34;&gt;Common vkconfig script options&lt;/a&gt; for options available in all of the vkconfig tools.&lt;/p&gt;

      </description>
    </item>
    
  </channel>
</rss>
