<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>OpenText Analytics Database 26.2.x – Monitoring message consumption</title>
    <link>/en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/</link>
    <description>Recent content in Monitoring message consumption on OpenText Analytics Database 26.2.x</description>
    <generator>Hugo -- gohugo.io</generator>
    
	  <atom:link href="/en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/index.xml" rel="self" type="application/rss+xml" />
    
    
      
        
      
    
    
    <item>
      <title>Kafka-Integration: Monitoring OpenText Analytics Database message consumption with consumer groups</title>
      <link>/en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/monitoring-message-consumption-with-consumer-groups/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/monitoring-message-consumption-with-consumer-groups/</guid>
      <description>
        
        
        &lt;p&gt;Apache Kafka has a feature named consumer groups that helps distribute the message consumption load across sets of consumers. When you use consumer groups, Kafka divides messages evenly based on the number of consumers in the group. Consumers report back to the Kafka broker which messages they read successfully. This reporting helps Kafka manage message offsets in the topic&#39;s partitions, so that no consumer in the group is sent the same message twice.&lt;/p&gt;
&lt;p&gt;OpenText™ Analytics Database does not rely on Kafka&#39;s consumer groups to distribute load or to prevent duplicate message loads. The streaming job scheduler manages topic partition offsets on its own.&lt;/p&gt;
&lt;p&gt;Even though the database does not need consumer groups to manage offsets, it does report back to the Kafka brokers which messages it consumed, so third-party tools can query the brokers to monitor the database cluster&#39;s progress as it loads messages. By default, the database reports its progress to a consumer group named vertica-&lt;em&gt;databaseName&lt;/em&gt;, where &lt;em&gt;databaseName&lt;/em&gt; is the name of the database. You can change the name of the consumer group to which the database reports its progress when you define a scheduler or during manual data loads.&lt;/p&gt;
&lt;p&gt;For example, you can use Kafka&#39;s &lt;code&gt;kafka-consumer-groups.sh&lt;/code&gt; script (located in the &lt;code&gt;bin&lt;/code&gt; directory of your Kafka installation) to view the status of the database consumer group. The following example demonstrates listing the consumer groups defined in the Kafka cluster and showing the details of the database consumer group:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cd /&lt;span class=&#34;code-variable&#34;&gt;path&lt;/span&gt;/&lt;span class=&#34;code-variable&#34;&gt;to&lt;/span&gt;/kafka/bin
$ ./kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
Note: This will not show information about old Zookeeper-based consumers.

vertica-vmart
$ ./kafka-consumer-groups.sh --describe --group vertica-vmart \
   --bootstrap-server localhost:9092
Note: This will not show information about old Zookeeper-based consumers.

Consumer group &amp;#39;vertica-vmart&amp;#39; has no active members.

TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
web_hits                       0          24500           30000           5500       -                                                 -                              -
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;From the output, you can see that the database reports its consumption of messages back to the vertica-vmart consumer group, the default consumer group name when the example VMart database is loaded. The second command lists the topics being consumed by the vertica-vmart consumer group; the database cluster has read 24500 of the 30000 messages in the topic&#39;s only partition. Running the same command later shows the database cluster&#39;s progress:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ cd /&lt;span class=&#34;code-variable&#34;&gt;path&lt;/span&gt;/&lt;span class=&#34;code-variable&#34;&gt;to&lt;/span&gt;/kafka/bin
$ ./kafka-consumer-groups.sh --describe --group vertica-vmart \
    --bootstrap-server localhost:9092
Note: This will not show information about old Zookeeper-based consumers.

Consumer group &amp;#39;vertica-vmart&amp;#39; has no active members.

TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
web_hits                       0          30000           30000           0          -                                                 -                              -
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;changing-the-consumer-group-where-opentexttrade-analytics-database-reports-its-progress&#34;&gt;Changing the consumer group where OpenText™ Analytics Database reports its progress&lt;/h2&gt;
&lt;p&gt;You can change the consumer group that OpenText™ Analytics Database reports its progress to when consuming messages.&lt;/p&gt;
&lt;h3 id=&#34;changing-for-automatic-loads-with-the-scheduler&#34;&gt;Changing for automatic loads with the scheduler&lt;/h3&gt;
&lt;p&gt;When using a scheduler, you set the consumer group by passing the &lt;code&gt;--consumer-group-id&lt;/code&gt; argument to the vkconfig script&#39;s &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/scheduler-tool-options/&#34;&gt;scheduler&lt;/a&gt; or &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/microbatch-tool-options/&#34;&gt;microbatch&lt;/a&gt; utilities. For example, if you want the example scheduler shown in &lt;a href=&#34;../../../../en/kafka-integration/consuming-data-from-kafka/automatically-consume-data-from-kafka-with-scheduler/setting-up-scheduler/#&#34;&gt;Setting up a scheduler&lt;/a&gt; to report its consumption to a consumer group named vertica-database, you could use the command:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --update \
    --conf weblog.conf --microbatch weblog --consumer-group-id vertica-database
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;When the scheduler begins loading data, it starts reporting its progress to the new consumer group. You can see this on a Kafka node using &lt;code&gt;kafka-consumer-groups.sh&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Use the &lt;code&gt;--list&lt;/code&gt; option to return the consumer groups:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /&lt;span class=&#34;code-variable&#34;&gt;path&lt;/span&gt;/&lt;span class=&#34;code-variable&#34;&gt;to&lt;/span&gt;/kafka/bin/kafka-consumer-groups.sh --list --bootstrap-server localhost:9092
Note: This will not show information about old Zookeeper-based consumers.

vertica-database
vertica-vmart
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;Use the &lt;code&gt;--describe&lt;/code&gt; and &lt;code&gt;--group&lt;/code&gt; options to return details about a specific consumer group:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /&lt;span class=&#34;code-variable&#34;&gt;path&lt;/span&gt;/&lt;span class=&#34;code-variable&#34;&gt;to&lt;/span&gt;/kafka/bin/kafka-consumer-groups.sh --describe --group vertica-database \
                                          --bootstrap-server localhost:9092
Note: This will not show information about old Zookeeper-based consumers.

Consumer group &amp;#39;vertica-database&amp;#39; has no active members.

TOPIC                          PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG        CONSUMER-ID                                       HOST                           CLIENT-ID
web_hits                       0          30300           30300           0          -                                                 -                              -
&lt;/code&gt;&lt;/pre&gt;&lt;h3 id=&#34;changing-for-manual-loads&#34;&gt;Changing for manual loads&lt;/h3&gt;
&lt;p&gt;To change the consumer group when manually loading data, use the &lt;code&gt;group_id&lt;/code&gt; parameter of the KafkaSource function:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY web_hits SOURCE KafkaSource(stream=&amp;#39;web_hits|0|-2&amp;#39;,
                                    brokers=&amp;#39;kafka01.example.com:9092&amp;#39;,
                                    stop_on_eof=True,
                                    group_id=&amp;#39;vertica_database&amp;#39;)
                 PARSER KafkaJSONParser();
 Rows Loaded
-------------
       50000
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;h2 id=&#34;using-consumer-group-offsets-when-loading-messages&#34;&gt;Using consumer group offsets when loading messages&lt;/h2&gt;
&lt;p&gt;Your scheduler, manual load, or custom loading script can start loading messages from the consumer group&#39;s saved offset. To load messages from the last offset stored in the consumer group, use the special -3 offset.&lt;/p&gt;
&lt;h3 id=&#34;automatic-load-with-the-scheduler-example&#34;&gt;Automatic load with the scheduler example&lt;/h3&gt;
&lt;p&gt;To instruct your scheduler to load messages from the consumer group&#39;s saved offset, use the vkconfig script &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/microbatch-tool-options/&#34;&gt;microbatch tool&#39;s&lt;/a&gt; &lt;code&gt;--offset&lt;/code&gt; argument.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Stop the scheduler using the shutdown command and the configuration file that you used to create the scheduler:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig shutdown --conf weblog.conf
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Set the microbatch tool&#39;s &lt;code&gt;--offset&lt;/code&gt; option to -3:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ /opt/vertica/packages/kafka/bin/vkconfig microbatch --update --conf weblog.conf --microbatch weblog --offset -3
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This sets the offset to -3 for all topic partitions that your scheduler reads from. The scheduler begins the next load with the consumer group&#39;s saved offset, and all subsequent loads use the offset saved in &lt;a href=&#34;../../../../en/kafka-integration/data-streaming-schema-tables/stream-microbatch-history/#&#34;&gt;stream_microbatch_history&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id=&#34;manual-load-example&#34;&gt;Manual load example&lt;/h3&gt;
&lt;p&gt;This example loads messages from the web_hits topic, which has a single partition containing 51,000 messages. For details about manual loads with &lt;a href=&#34;../../../../en/kafka-integration/kafka-function-reference/kafkasource/&#34;&gt;KafkaSource&lt;/a&gt;, see &lt;a href=&#34;../../../../en/kafka-integration/consuming-data-from-kafka/manually-consume-data-from-kafka/#&#34;&gt;Manually consume data from Kafka&lt;/a&gt;.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;The first COPY statement creates a consumer group named vertica_manual, and loads the first 50,000 messages from the first partition in the web_hits topic:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY web_hits
   SOURCE KafkaSource(stream=&amp;#39;web_hits|0|0|50000&amp;#39;,
                              brokers=&amp;#39;kafka01.example.com:9092&amp;#39;,
                              stop_on_eof=True,
                              group_id=&amp;#39;vertica_manual&amp;#39;)
   PARSER KafkaJSONParser()
   REJECTED DATA AS TABLE public.web_hits_rejections;
 Rows Loaded
-------------
       50000
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The next COPY statement passes -3 as the start_offset stream parameter to load from the consumer group&#39;s saved offset:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY web_hits
   SOURCE KafkaSource(stream=&amp;#39;web_hits|0|-3&amp;#39;,
                              brokers=&amp;#39;kafka01.example.com:9092&amp;#39;,
                              stop_on_eof=True,
                              group_id=&amp;#39;vertica_manual&amp;#39;)
   PARSER KafkaJSONParser()
   REJECTED DATA AS TABLE public.web_hits_rejections;
 Rows Loaded
-------------
        1000
(1 row)
&lt;/code&gt;&lt;/pre&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;h2 id=&#34;disabling-consumer-group-reporting&#34;&gt;Disabling consumer group reporting&lt;/h2&gt;
&lt;p&gt;The database reports the offsets of the messages it consumes to Kafka by default. If you do not specifically configure a consumer group for the database, it reports its offsets to a consumer group named vertica-&lt;em&gt;databaseName&lt;/em&gt;, where &lt;em&gt;databaseName&lt;/em&gt; is the name of the currently running database.&lt;/p&gt;
&lt;p&gt;To completely disable the database&#39;s reporting of its message consumption back to Kafka, set the consumer group to an empty string or NULL. For example:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;=&amp;gt; COPY web_hits SOURCE KafkaSource(stream=&amp;#39;web_hits|0|-2&amp;#39;,
                                    brokers=&amp;#39;kafka01.example.com:9092&amp;#39;,
                                    stop_on_eof=True,
                                    group_id=NULL)
                 PARSER KafkaJSONParser();
 Rows Loaded
-------------
       60000
(1 row)
&lt;/code&gt;&lt;/pre&gt;
      </description>
    </item>
    
    <item>
      <title>Kafka-Integration: Getting configuration and statistics information from vkconfig</title>
      <link>/en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/getting-config-and-statistics-information-from-vkconfig/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>/en/kafka-integration/consuming-data-from-kafka/monitoring-message-consumption/getting-config-and-statistics-information-from-vkconfig/</guid>
      <description>
        
        
        &lt;p&gt;The vkconfig tool has two features that help you examine your scheduler&#39;s configuration and monitor your data load:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The vkconfig tools that configure your scheduler (scheduler, cluster, source, target, load-spec, and microbatch) have a &lt;code&gt;--read&lt;/code&gt; argument that outputs the tool&#39;s current settings in the scheduler.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The vkconfig statistics tool lets you get statistics on your microbatches. You can filter the microbatch records based on a date and time range, cluster, partition, and other criteria.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both of these features output their data in JSON format. You can use third-party tools that consume JSON data or write your own scripts to process the configuration and statistics data.&lt;/p&gt;
&lt;p&gt;You can also access the data provided by these vkconfig options by querying the configuration tables in the scheduler&#39;s schema. However, you may find these options easier to use because they do not require you to connect to the database.&lt;/p&gt;
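&lt;p&gt;For example, the following minimal Python sketch (a hypothetical helper script, not part of vkconfig) parses a captured stream of these JSON objects, tolerating records that are wrapped across multiple lines as in the output shown below:&lt;/p&gt;

```python
import json

def parse_json_stream(text):
    """Yield each JSON object from a string containing one or more
    concatenated objects, regardless of line wrapping."""
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # Skip whitespace between objects.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, end = decoder.raw_decode(text, idx)
        yield obj
        idx = end

# Hypothetical captured output of `vkconfig cluster --read`.
sample = '''
{"cluster":"iot_cluster",
 "hosts":"kafka-iot01.example.com:9092,kafka-iot02.example.com:9092"}
{"cluster":"kafka_weblog", "hosts":"kafka01.example.com:9092,kafka02.example.com:9092"}
'''

for rec in parse_json_stream(sample):
    print(rec["cluster"], "->", rec["hosts"].split(","))
```

&lt;p&gt;The same loop works for the statistics tool&#39;s output, since each record is a standalone JSON object.&lt;/p&gt;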
&lt;h2 id=&#34;getting-configuration-information&#34;&gt;Getting configuration information&lt;/h2&gt;
&lt;p&gt;You pass the &lt;code&gt;--read&lt;/code&gt; option to vkconfig&#39;s configuration tools to get the current settings for the options that the tool can set. This output is in JSON format. This example demonstrates getting the configuration information from the scheduler and cluster tools for the scheduler defined in the weblog.conf configuration file:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig scheduler --read --conf weblog.conf
{&amp;#34;version&amp;#34;:&amp;#34;v9.2.0&amp;#34;, &amp;#34;frame_duration&amp;#34;:&amp;#34;00:00:10&amp;#34;, &amp;#34;resource_pool&amp;#34;:&amp;#34;weblog_pool&amp;#34;,
 &amp;#34;config_refresh&amp;#34;:&amp;#34;00:05:00&amp;#34;, &amp;#34;new_source_policy&amp;#34;:&amp;#34;FAIR&amp;#34;,
 &amp;#34;pushback_policy&amp;#34;:&amp;#34;LINEAR&amp;#34;, &amp;#34;pushback_max_count&amp;#34;:5, &amp;#34;auto_sync&amp;#34;:true,
 &amp;#34;consumer_group_id&amp;#34;:null}

$ vkconfig cluster --read --conf weblog.conf
{&amp;#34;cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;hosts&amp;#34;:&amp;#34;kafka01.example.com:9092,kafka02.example.com:9092&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;The &lt;code&gt;--read&lt;/code&gt; option lists all of values created by the tool in the scheduler schema. For example, if you have defined multiple targets in your scheduler, the &lt;code&gt;--read&lt;/code&gt; option lists all of them.&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig target --read --conf weblog.conf
{&amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;health_data&amp;#34;}
{&amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;iot_data&amp;#34;}
{&amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;You can filter the &lt;code&gt;--read&lt;/code&gt; option output using the other arguments that the vkconfig tools accept. For example, in the cluster tool, you can use the &lt;code&gt;--hosts&lt;/code&gt; argument to limit the output to clusters that contain a specific host. These arguments support LIKE-predicate wildcards, so you can match partial values. See &lt;a href=&#34;../../../../en/sql-reference/language-elements/predicates/like/#&#34;&gt;LIKE&lt;/a&gt; for more information about using wildcards.&lt;/p&gt;
&lt;p&gt;The following example demonstrates how you can filter the output of the &lt;code&gt;--read&lt;/code&gt; option of the cluster tool using the &lt;code&gt;--host&lt;/code&gt; argument. The first call shows the unfiltered output. The second call filters the output to show only those clusters that start with &amp;quot;kafka&amp;quot;:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig cluster --read --conf weblog.conf
{&amp;#34;cluster&amp;#34;:&amp;#34;some_cluster&amp;#34;, &amp;#34;hosts&amp;#34;:&amp;#34;host01.example.com&amp;#34;}
{&amp;#34;cluster&amp;#34;:&amp;#34;iot_cluster&amp;#34;,
 &amp;#34;hosts&amp;#34;:&amp;#34;kafka-iot01.example.com:9092,kafka-iot02.example.com:9092&amp;#34;}
{&amp;#34;cluster&amp;#34;:&amp;#34;weblog&amp;#34;,
 &amp;#34;hosts&amp;#34;:&amp;#34;web01.example.com:9092,web02.example.com:9092&amp;#34;}
{&amp;#34;cluster&amp;#34;:&amp;#34;streamcluster1&amp;#34;,
 &amp;#34;hosts&amp;#34;:&amp;#34;kafka-a-01.example.com:9092,kafka-a-02.example.com:9092&amp;#34;}
{&amp;#34;cluster&amp;#34;:&amp;#34;test_cluster&amp;#34;,
 &amp;#34;hosts&amp;#34;:&amp;#34;test01.example.com:9092,test02.example.com:9092&amp;#34;}

$ vkconfig cluster --read --conf weblog.conf --hosts kafka%
{&amp;#34;cluster&amp;#34;:&amp;#34;iot_cluster&amp;#34;,
 &amp;#34;hosts&amp;#34;:&amp;#34;kafka-iot01.example.com:9092,kafka-iot02.example.com:9092&amp;#34;}
{&amp;#34;cluster&amp;#34;:&amp;#34;streamcluster1&amp;#34;,
 &amp;#34;hosts&amp;#34;:&amp;#34;kafka-a-01.example.com:9092,kafka-a-02.example.com:9092&amp;#34;}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See the &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/cluster-tool-options/#&#34;&gt;Cluster tool options&lt;/a&gt;, &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/load-spec-tool-options/#&#34;&gt;Load spec tool options&lt;/a&gt;, &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/microbatch-tool-options/#&#34;&gt;Microbatch tool options&lt;/a&gt;, &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/scheduler-tool-options/#&#34;&gt;Scheduler tool options&lt;/a&gt;, &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/target-tool-options/#&#34;&gt;Target tool options&lt;/a&gt;, and &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/source-tool-options/#&#34;&gt;Source tool options&lt;/a&gt; for more information.&lt;/p&gt;
&lt;h2 id=&#34;getting-streaming-data-load-statistics&#34;&gt;Getting streaming data load statistics&lt;/h2&gt;
&lt;p&gt;The vkconfig script&#39;s statistics tool lets you view the history of your scheduler&#39;s microbatches. You can filter the results using any combination of the following criteria:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;The name of the microbatch&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The Kafka cluster that was the source of the data load&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The name of the topic&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The partition within the topic&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The OpenText™ Analytics Database schema and table targeted by the data load&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;A date and time range&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The latest microbatches&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;See &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/statistics-tool-options/#&#34;&gt;Statistics tool options&lt;/a&gt; for all of the options available in this tool.&lt;/p&gt;
&lt;p&gt;This example gets the last two microbatches that the scheduler ran:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig statistics --last 2 --conf weblog.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
 &amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
 &amp;#34;start_offset&amp;#34;:73300, &amp;#34;end_offset&amp;#34;:73399, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
 &amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:19588, &amp;#34;partition_messages&amp;#34;:100,
 &amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.807000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-02 13:22:07.825295&amp;#34;,
 &amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-02 13:22:08.135299&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.219619&amp;#34;,
 &amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996273976123,
 &amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-02 13:22:07.601&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
 &amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
 &amp;#34;start_offset&amp;#34;:73200, &amp;#34;end_offset&amp;#34;:73299, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
 &amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:19781, &amp;#34;partition_messages&amp;#34;:100,
 &amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.561000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-02 13:21:58.044698&amp;#34;,
 &amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-02 13:21:58.335431&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:00.214868&amp;#34;,
 &amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996273976095,
 &amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-02 13:21:57.561&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;This example gets the microbatches from the source named web_hits between 13:21:00 and 13:21:20 on November 2, 2018:&lt;/p&gt;
&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;$ vkconfig statistics --source &amp;#34;web_hits&amp;#34; --from-timestamp \
           &amp;#34;2018-11-02 13:21:00&amp;#34; --to-timestamp &amp;#34;2018-11-02 13:21:20&amp;#34;  \
           --conf weblog.conf
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
 &amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
 &amp;#34;start_offset&amp;#34;:72800, &amp;#34;end_offset&amp;#34;:72899, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
 &amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:19989, &amp;#34;partition_messages&amp;#34;:100,
 &amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.778000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-02 13:21:17.581606&amp;#34;,
 &amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-02 13:21:18.850705&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:01.215751&amp;#34;,
 &amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996273975997,
 &amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-02 13:21:17.34&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
{&amp;#34;microbatch&amp;#34;:&amp;#34;weblog&amp;#34;, &amp;#34;target_schema&amp;#34;:&amp;#34;public&amp;#34;, &amp;#34;target_table&amp;#34;:&amp;#34;web_hits&amp;#34;,
 &amp;#34;source_name&amp;#34;:&amp;#34;web_hits&amp;#34;, &amp;#34;source_cluster&amp;#34;:&amp;#34;kafka_weblog&amp;#34;, &amp;#34;source_partition&amp;#34;:0,
 &amp;#34;start_offset&amp;#34;:72700, &amp;#34;end_offset&amp;#34;:72799, &amp;#34;end_reason&amp;#34;:&amp;#34;END_OF_STREAM&amp;#34;,
 &amp;#34;end_reason_message&amp;#34;:null, &amp;#34;partition_bytes&amp;#34;:19640, &amp;#34;partition_messages&amp;#34;:100,
 &amp;#34;timeslice&amp;#34;:&amp;#34;00:00:09.857000&amp;#34;, &amp;#34;batch_start&amp;#34;:&amp;#34;2018-11-02 13:21:07.470834&amp;#34;,
 &amp;#34;batch_end&amp;#34;:&amp;#34;2018-11-02 13:21:08.737255&amp;#34;, &amp;#34;source_duration&amp;#34;:&amp;#34;00:00:01.218932&amp;#34;,
 &amp;#34;consecutive_error_count&amp;#34;:null, &amp;#34;transaction_id&amp;#34;:45035996273975978,
 &amp;#34;frame_start&amp;#34;:&amp;#34;2018-11-02 13:21:07.309&amp;#34;, &amp;#34;frame_end&amp;#34;:null}
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;See &lt;a href=&#34;../../../../en/kafka-integration/vkconfig-script-options/statistics-tool-options/#&#34;&gt;Statistics tool options&lt;/a&gt; for more examples of using this tool.&lt;/p&gt;
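&lt;p&gt;Because each statistics record is plain JSON, a short script can post-process the history. The following Python sketch computes per-batch throughput from the &lt;code&gt;batch_start&lt;/code&gt;, &lt;code&gt;batch_end&lt;/code&gt;, and &lt;code&gt;partition_messages&lt;/code&gt; fields. The two records and the timestamp format are taken from the sample output above; other fields are omitted for brevity:&lt;/p&gt;

```python
from datetime import datetime

# Two records reproduced from the statistics output above (other fields omitted).
batches = [
    {"microbatch": "weblog", "partition_messages": 100,
     "batch_start": "2018-11-02 13:22:07.825295",
     "batch_end":   "2018-11-02 13:22:08.135299"},
    {"microbatch": "weblog", "partition_messages": 100,
     "batch_start": "2018-11-02 13:21:58.044698",
     "batch_end":   "2018-11-02 13:21:58.335431"},
]

# Timestamp layout assumed to match the sample output shown above.
FMT = "%Y-%m-%d %H:%M:%S.%f"

def throughput(batch):
    """Messages per second loaded by one microbatch execution."""
    start = datetime.strptime(batch["batch_start"], FMT)
    end = datetime.strptime(batch["batch_end"], FMT)
    return batch["partition_messages"] / (end - start).total_seconds()

for b in batches:
    print(f'{b["microbatch"]}: {throughput(b):.0f} messages/sec')
```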

      </description>
    </item>
    
  </channel>
</rss>
