KafkaOffsets
The KafkaOffsets user-defined transform function returns statistics about the load operation performed by the most recent invocation of KafkaSource. Query KafkaOffsets after each KafkaSource call to view the metadata for that load. If you are using the scheduler, you can also view historical load information in the stream_microbatch_history table.
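To review past loads handled by the scheduler, query that table directly. The following is a minimal sketch; the schema name stream_config is an assumption here, so substitute the configuration schema you created with the vkconfig tool:
=> SELECT * FROM stream_config.stream_microbatch_history;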
For each load operation, KafkaOffsets returns the following:
- Source Kafka topic
- Source Kafka partition
- Starting offset
- Ending offset
- Number of messages loaded
- Number of bytes read
- Duration of the load operation
- End message
- End reason
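Because KafkaOffsets reports on the most recent KafkaSource invocation in your session, a common pattern is to call it immediately after the load completes. The following sketch assumes a target table and Kafka topic both named web_hits and a broker at kafka01.example.com:9092; these names are illustrative only:
=> COPY web_hits
      SOURCE KafkaSource(stream='web_hits|0|-2',
                         brokers='kafka01.example.com:9092',
                         stop_on_eof=true)
      PARSER KafkaJSONParser();
=> SELECT KafkaOffsets() OVER();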
The following example demonstrates calling KafkaOffsets to show partition information that was loaded using KafkaSource:
=> SELECT kpartition, start_offset, end_offset, msg_count, ending
     FROM (SELECT KafkaOffsets() OVER()) AS stats ORDER BY kpartition;
 kpartition | start_offset | end_offset | msg_count |   ending
------------+--------------+------------+-----------+------------
          0 |           -2 |       9999 |      1068 | END_OFFSET
(1 row)
The output shows that KafkaSource loaded 1068 messages (rows) from a single partition of the Kafka topic. KafkaSource ended the data load because it reached the ending offset.
Note
The values shown in the start_offset column are exclusive (the message with the shown offset was not loaded), while the values in the end_offset column are inclusive (the message with the shown offset was loaded). This is the opposite of the offsets specified in KafkaSource's stream parameter. The difference in inclusiveness between KafkaSource's and KafkaOffsets' start and end offsets is based on the needs of the job scheduler. KafkaOffsets is primarily intended for the job scheduler's use, so its start and end offset values are defined so that the scheduler can easily resume streaming from where the previous load left off.
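If you are not using the scheduler but still want to keep your own per-load history, one possible pattern is to copy the KafkaOffsets output into a regular table after each load. This is only a sketch; the kafka_load_stats table name and its column definitions are assumptions, not part of the Kafka integration:
=> CREATE TABLE IF NOT EXISTS kafka_load_stats (
      loaded_at    TIMESTAMPTZ DEFAULT NOW(),  -- when the statistics were recorded
      kpartition   INTEGER,
      start_offset INTEGER,
      end_offset   INTEGER,
      msg_count    INTEGER,
      ending       VARCHAR(128)
   );
=> INSERT INTO kafka_load_stats (kpartition, start_offset, end_offset, msg_count, ending)
   SELECT kpartition, start_offset, end_offset, msg_count, ending
     FROM (SELECT KafkaOffsets() OVER()) AS stats;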