
Language elements

The following topics provide detailed descriptions of the language elements and conventions of Vertica SQL.

1 - Keywords

Keywords are words that have a specific meaning in the SQL language. Every SQL statement contains one or more keywords. Although SQL is not case-sensitive with respect to keywords, they are generally shown in uppercase letters throughout this documentation for readability purposes.

Reserved words and keywords

Many keywords are also reserved words.

Vertica recommends that you not use reserved words as object names or identifiers: they can make your SQL statements confusing. If you do use a reserved word as an object name or identifier, you must enclose it in double quotes.
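
For example, a minimal sketch using the reserved keyword ALL as a quoted table name (the table itself is hypothetical and shown only for illustration):

=> CREATE TABLE "all" (c1 INT);
CREATE TABLE
=> SELECT c1 FROM "all";
 c1
----
(0 rows)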

Non-reserved keywords

Non-reserved keywords have a special meaning in some contexts, but can be used as identifiers in others. You can use non-reserved keywords as aliases—for example, SOURCE:

=> SELECT my_node AS SOURCE FROM nodes;

Viewing the list of reserved and non-reserved keywords

To view the current list of Vertica reserved and non-reserved words, query system table KEYWORDS. Vertica lists keywords alphabetically and identifies them as reserved (R) or non-reserved (N).

For example, the following query gets all reserved keywords that begin with B:

=> SELECT * FROM keywords WHERE reserved = 'R' AND keyword ilike 'B%';
 keyword | reserved
---------+----------
 BETWEEN | R
 BIGINT  | R
 BINARY  | R
 BIT     | R
 BOOLEAN | R
 BOTH    | R
(6 rows)

2 - Identifiers

Identifiers (names) of objects such as schema, table, projection, column names, and so on, can be up to 128 bytes in length.

Unquoted identifiers

Unquoted SQL identifiers must begin with one of the following:

  • Non-Unicode letters: A–Z or a–z

  • Underscore (_)

Subsequent characters in an identifier can be any combination of the following:

  • Non-Unicode letters: A–Z or a–z

  • Underscore (_)

  • Digits (0–9)

  • Unicode letters (letters with diacriticals or not in the Latin alphabet), unsupported for model names

  • Dollar sign ($), unsupported for model names

Quoted identifiers

Identifiers enclosed in double quote (") characters can contain any character. If you want to include a double quote, you need a pair of them; for example """". You can use names that would otherwise be invalid—for example, names that include only numeric characters ("123") or contain space characters, punctuation marks, and SQL or Vertica-reserved keywords. For example:

CREATE SEQUENCE "my sequence!";

Double quotes are required for non-alphanumerics and SQL keywords such as "1time", "Next week" and "Select".

Case sensitivity

Identifiers are not case-sensitive. Thus, identifiers "ABC", "ABc", and "aBc" are synonymous, as are ABC, ABc, and aBc.

Non-ASCII characters

Vertica accepts non-ASCII UTF-8 Unicode characters for table names, column names, and other identifiers, extending the cases where upper/lower case distinctions are ignored (case-folded) to all alphabets, including Latin, Cyrillic, and Greek.

For example, the following CREATE TABLE statement uses the ß (German eszett) in the table name:

=> CREATE TABLE straße(x int, y int);
   CREATE TABLE

Identifiers are stored as created

SQL identifiers, such as table and column names, are not converted to lowercase. They are stored as created, and references to them are resolved using case-insensitive compares. For example, the following statement creates table ALLCAPS.

=> CREATE TABLE ALLCAPS(c1 varchar(30));
=> INSERT INTO ALLCAPS values('upper case');

The following statements are variations of the same query:

=> SELECT * FROM ALLCAPS;
=> SELECT * FROM allcaps;
=> SELECT * FROM "allcaps";

The three queries all return the same result:

     c1
------------
 upper case
(1 row)

Note that Vertica returns an error if you try to create table AllCaps:

=> CREATE TABLE AllCaps(c1 varchar(30));
   ROLLBACK:  table "AllCaps" already exists

See QUOTE_IDENT for additional information.

3 - Literals

Literals are numbers or strings used in SQL as constants. They can appear in the select list alongside expressions and built-in function calls.

Vertica provides support for number-type literals (integers and numerics), string literals, VARBINARY string literals, and date/time literals. The various string literal formats are discussed in this section.

3.1 - Number-type literals

Vertica supports three types of numbers: integers, numerics, and floats.

  • Integers are whole numbers less than 2^63 and must be digits.

  • Numerics are whole numbers larger than 2^63 or that include a decimal point with a precision and a scale. Numerics can contain exponents. Numbers that begin with 0x are hexadecimal numerics.

Numeric-type values can also be generated using casts from character strings. This is a more general syntax. See the Examples section below, as well as Data type coercion operators (CAST).

Syntax

digits
digits.[digits] | [digits].digits
digits e[+-]digits | [digits].digits e[+-]digits | digits.[digits] e[+-]digits

Parameters

digits
One or more numeric characters, 0 through 9
e
Exponent marker

Notes

  • At least one digit must follow the exponent marker (e), if e is present.
  • There cannot be any spaces or other characters embedded in the constant.
  • Leading plus (+) or minus (–) signs are not considered part of the constant; they are unary operators applied to the constant.

  • In most cases a numeric-type constant is automatically coerced to the most appropriate type depending on context. When necessary, you can force a numeric value to be interpreted as a specific data type by casting it as described in Data type coercion operators (CAST).

  • Floating point literals are not supported. If you specifically need to specify a float, you can cast as described in Data type coercion operators (CAST).

  • Vertica follows the IEEE specification for floating point, including NaN (not a number) and Infinity (Inf).

  • A NaN is not greater than and at the same time not less than anything, even itself. In other words, comparisons always return false whenever a NaN is involved.

  • Dividing INTEGERS (x / y) yields a NUMERIC result. You can use the // operator to truncate the result to a whole number.

Examples

The following are examples of number-type literals:

42
3.5
4.
.001
5e2
1.925e-3

Scientific notation:

=> SELECT NUMERIC '1e10';
 ?column?
-------------
 10000000000
(1 row)

BINARY scaling:

=> SELECT NUMERIC '1p10';
 ?column?
----------
     1024
(1 row)
=> SELECT FLOAT 'Infinity';
  ?column?
----------
 Infinity
(1 row)
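
Hexadecimal input (as noted above, numbers that begin with 0x are hexadecimal numerics); a brief sketch, with illustrative output:

=> SELECT 0xFF;
 ?column?
----------
      255
(1 row)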

The following examples illustrate using the / and // operators to divide integers:

=> SELECT 40/25;
 ?column?
----------------------
 1.600000000000000000
(1 row)
=> SELECT 40//25;
 ?column?
----------
        1
(1 row)

See also

Data type coercion

3.2 - String literals

String literals are string values surrounded by single or double quotes. Double-quoted strings are subject to backslash processing, but single-quoted strings do not require a backslash, except for \' and \\.

You can embed single quotes and backslashes into single-quoted strings.

To include other backslash (escape) sequences, such as \t (tab), you must use the double-quoted form.

Separate a single-quoted string from the preceding word with a space, because single quotes are allowed in identifiers.

3.2.1 - Character string literals

Character string literals are a sequence of characters from a predefined character set, enclosed by single quotes.

Syntax

'character-seq'

Parameters

character-seq
Arbitrary sequence of characters

Embedded single quotes

If a character string literal includes a single quote, it must be doubled. For example:

=> SELECT 'Chester''s gorilla';
  ?column?
-------------------
Chester's gorilla
(1 row)

Standard-conforming strings and escape characters

Vertica uses standard-conforming strings as specified in the SQL standard, so backslashes are treated as string literals and not escape characters.

Examples

=> SELECT 'This is a string';
  ?column?
------------------
 This is a string
(1 row)
=> SELECT 'This \is a string';
   WARNING:  nonstandard use of escape in a string literal at character 8
   HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
     ?column?
------------------
 This is a string
(1 row)
=> SELECT E'This \is a string';
     ?column?
------------------
 This is a string
(1 row)
=> SELECT E'This is a \n new line';
       ?column?
----------------------
 This is a
 new line
(1 row)
=> SELECT 'String''s characters';
      ?column?
--------------------
 String's characters
(1 row)

3.2.2 - Dollar-quoted string literals

Dollar-quoted string literals are rarely used, but are provided here for your convenience.

The standard syntax for specifying string literals can be difficult to understand. To allow more readable queries in such situations, Vertica SQL provides dollar quoting. Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write complicated string literals than the standard-compliant single quote syntax.

Syntax

$$characters$$

Parameters

characters
Arbitrary sequence of characters bounded by paired dollar signs ($$)

Dollar-quoted string content is treated as a literal. Single quote, backslash, and dollar sign characters have no special meaning within a dollar-quoted string.

Notes

A dollar-quoted string that follows a keyword or identifier must be separated from the preceding word by whitespace; otherwise, the dollar-quoting delimiter is taken as part of the preceding identifier.

Examples

=> SELECT $$Fred's\n car$$;
     ?column?
-------------------
 Fred's\n car
(1 row)

=> SELECT 'SELECT 'fact';';
ERROR:  syntax error at or near "';'" at character 21
LINE 1: SELECT 'SELECT 'fact';';

=> SELECT $$SELECT 'fact';$$;
    ?column?
----------------
 SELECT 'fact';
(1 row)

=> SELECT 'SELECT ''fact'';';
    ?column?
----------------
 SELECT 'fact';
(1 row)

3.2.3 - Unicode string literals

In Unicode string literals, a hexit is a hexadecimal digit (0-9, a-f).

Syntax

U&'characters' [ UESCAPE '<Unicode escape character>' ]

Parameters

characters
Arbitrary sequence of UTF-8 characters bounded by single quotes (')
Unicode escape character
A single character from the source language character set other than a hexit, plus sign (+), quote ('), double quote ("), or white space

Using standard conforming strings

With StandardConformingStrings enabled, Vertica supports SQL standard Unicode character string literals (the character set is UTF-8 only).

Before you enter a Unicode character string literal, enable standard conforming strings, either for the session (SET STANDARD_CONFORMING_STRINGS TO ON) or for the database (configuration parameter StandardConformingStrings).

See also Extended String Literals.

Examples

To enter a Unicode character in hexadecimal, such as the Russian phrase for "thank you," use the following syntax:

=> SET STANDARD_CONFORMING_STRINGS TO ON;
=> SELECT U&'\0441\043F\0430\0441\0438\0431\043E' as 'thank you';
 thank you
-----------
 спасибо
(1 row)
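
The UESCAPE clause shown in the syntax substitutes a different escape character for the backslash; a sketch that reuses the same phrase with ! as the escape character (output is illustrative):

=> SELECT U&'!0441!043F!0430!0441!0438!0431!043E' UESCAPE '!' AS 'thank you';
 thank you
-----------
 спасибо
(1 row)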

To enter the German word müde, with the u-umlaut specified in hexadecimal:

=> SELECT U&'m\00fcde';
?column?
----------
müde
(1 row)
=> SELECT 'ü';
?column?
----------
ü
(1 row)

To enter the LINEAR B IDEOGRAM B240 WHEELED CHARIOT in hexadecimal:

=> SELECT E'\xF0\x90\x83\x8C';
?column?
----------
(wheeled chariot character)
(1 row)

3.2.4 - VARBINARY string literals

VARBINARY string literals allow you to specify hexadecimal or binary digits in a string literal.

Syntax

X'hexadecimal-digits'
B'binary-digits'

Parameters

X or x
Specifies hexadecimal digits. The hexadecimal-digits string must be enclosed in single quotes (').
B or b
Specifies binary digits. The binary-digits string must be enclosed in single quotes (').

Examples

=> SELECT X'abcd';
 ?column?
----------
 \253\315
(1 row)

=> SELECT B'101100';
 ?column?
----------
 ,
(1 row)

3.2.5 - Extended string literals

Syntax

E'characters'

Parameters

characters
Arbitrary sequence of characters bounded by single quotes (')

You can use C-style backslash sequences in extended string literals, which are an extension to the SQL standard. You specify an extended string literal by writing the letter E as a prefix (before the opening single quote); for example:

E'extended character string\n'

Within an extended string, the backslash character (\) starts a C-style backslash sequence, in which the combination of backslash and following character or numbers represent a special byte value, as shown in the following list. Any other character following a backslash is taken literally; for example, to include a backslash character, write two backslashes (\\).

  • \\ is a backslash

  • \b is a backspace

  • \f is a form feed

  • \n is a newline

  • \r is a carriage return

  • \t is a tab

  • \x##, where ## is a 1 or 2-digit hexadecimal number; for example, \x09 is a tab

  • \###, where ### is a 1, 2, or 3-digit octal number representing a byte with the corresponding code.

When an extended string literal is concatenated across lines, write only E before the first opening quote:

=> SELECT E'first part o'
    'f a long line';
         ?column?
---------------------------
 first part of a long line
(1 row)

Two adjacent single quotes are used as one single quote:

=> SELECT 'Aren''t string literals fun?';
          ?column?
-----------------------------
 Aren't string literals fun?
(1 row)

Standard conforming strings and escape characters

When interpreting commands, such as those entered in vsql or in queries passed via JDBC or ODBC, Vertica uses standard conforming strings as specified in the SQL standard. In standard conforming strings, backslashes are treated as string literals (ordinary characters), not escape characters.

The following options are available, but Vertica recommends that you migrate your application to use standard conforming strings at your earliest convenience, after warnings have been addressed.

  • To treat backslashes as escape characters, set configuration parameter StandardConformingStrings to 0.

  • To enable standard conforming strings permanently, set the StandardConformingStrings parameter to '1', as described below.

  • To enable standard conforming strings per session, use SET STANDARD_CONFORMING_STRINGS TO ON, which treats backslashes as ordinary (literal) characters for the current session only.

Identifying strings that are not standard conforming

The following procedure can be used to identify nonstandard conforming strings in your application so that you can convert them into standard conforming strings:

  1. Be sure the StandardConformingStrings parameter is off, as described in Internationalization parameters.

    => ALTER DATABASE DEFAULT SET StandardConformingStrings = 0;
    
  2. If necessary, turn on the EscapeStringWarning parameter.

    => ALTER DATABASE DEFAULT SET EscapeStringWarning = 1;
    

    Vertica now returns a warning each time it encounters an escape string within a string literal. For example, Vertica interprets the \n in the following example as a new line:

    => SELECT 'a\nb';
       WARNING:  nonstandard use of escape in a string literal at character 8
       HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
     ?column?
    ----------
     a
    b
    (1 row)
    

    When StandardConformingStrings is ON, the string is interpreted as four characters: a \ n b.

    Modify each string that Vertica flags by extending it as in the following example:

    E'a\nb'
    

    Or if the string has quoted single quotes, double them; for example, 'one'' double'.

  3. Turn on the StandardConformingStrings parameter for all sessions:

    => ALTER DATABASE DEFAULT SET StandardConformingStrings = 1;
    

Doubled single quotes

This section discusses vsql inputs that are not passed on to the server. Vertica recognizes two consecutive single quotes within a string literal as one single quote character. For example, given the input 'You''re here!', Vertica ignores the second consecutive quote and returns the following:

=> SELECT 'You''re here!';
   ?column?
--------------
 You're here!
(1 row)

This is the SQL standard representation and is preferred over the form 'You\'re here!', because backslashes are no longer parsed as escape characters. To use a backslash escape, write an extended string literal:

=> SELECT (E'You\'re here!');
   ?column?
--------------
 You're here!
(1 row)

This behavior change introduces a potential incompatibility in the use of the vsql meta-command \set, which automatically concatenates its arguments. For example:

\set file  '\''  `pwd`  '/file.txt'  '\''
\echo :file

vsql takes the four arguments and outputs the following:

'/home/vertica/file.txt'

Vertica parses the adjacent single quotes as follows:

\set file  '\'''pwd''/file.txt''\''
\echo :file
'/home/vertica/file.txt''

Note the extra single quote at the end. This is due to the pair of adjacent single quotes together with the backslash-quoted single quote.

The extra quote can be resolved either as in the first example above, or by combining the literals as follows:

\set file '\''`pwd`'/file.txt'''
\echo :file
'/home/vertica/file.txt'

In either case the backslash-quoted single quotes should be changed to doubled single quotes as follows:

\set file '''' `pwd` '/file.txt'''

Additional examples

=> SELECT 'This \is a string';
     ?column?
------------------
 This \is a string
(1 row)

=> SELECT E'This \is a string';
     ?column?
------------------
 This is a string
(1 row)

=> SELECT E'This is a \n new line';
       ?column?
----------------------
 This is a
 new line
(1 row)

=> SELECT 'String''s characters';
      ?column?
--------------------
 String's characters
(1 row)

3.3 - Date/time literals

Date or time literal input must be enclosed in single quotes. Input is accepted in almost any reasonable format, including ISO 8601, SQL-compatible, traditional POSTGRES, and others.

Vertica handles date/time input more flexibly than the SQL standard requires. The exact parsing rules of date/time input and for the recognized text fields including months, days of the week, and time zones are described in Date/time expressions.
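
For example, the following sketch uses typed date/time literals (the same forms appear in Date/time operators later in this section); output is illustrative:

=> SELECT DATE '2001-09-28', TIMESTAMP '2001-09-28 23:00:00';
  ?column?  |      ?column?
------------+---------------------
 2001-09-28 | 2001-09-28 23:00:00
(1 row)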

3.3.1 - Time zone values

Vertica attempts to be compatible with the SQL standard definitions for time zones. However, the SQL standard has an odd mix of date and time types and capabilities. Obvious problems are:

  • Although the DATE type does not have an associated time zone, the TIME type can. Time zones in the real world have little meaning unless associated with a date as well as a time, since the offset can vary through the year with daylight-saving time boundaries.

  • Vertica assumes your local time zone for any data type containing only date or time.

  • The default time zone is specified as a constant numeric offset from UTC. It is therefore not possible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries.

To address these difficulties, OpenText recommends using date/time types that contain both date and time when you use time zones. OpenText recommends that you do not use the type TIME WITH TIME ZONE, even though it is supported for legacy applications and for compliance with the SQL standard.

Time zones and time-zone conventions are influenced by political decisions, not just earth geometry. Time zones around the world became somewhat standardized during the 1900s, but continue to be prone to arbitrary changes, particularly with respect to daylight-savings rules.

Vertica currently supports daylight-savings rules over the time period 1902 through 2038, corresponding to the full range of conventional UNIX system time. Times outside that range are taken to be in "standard time" for the selected time zone, no matter what part of the year in which they occur.

Example Description
PST Pacific Standard Time
-8:00 ISO-8601 offset for PST
-800 ISO-8601 offset for PST
-8 ISO-8601 offset for PST
zulu Military abbreviation for UTC
z Short form of zulu
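
To illustrate, the following sketch uses the PST abbreviation from the table above; AT TIME ZONE is applied only so that the output does not depend on the session time zone, and the output shown is illustrative:

=> SELECT TIMESTAMP WITH TIME ZONE '2001-09-28 01:00 PST' AT TIME ZONE 'UTC' AS utc_time;
       utc_time
---------------------
 2001-09-28 09:00:00
(1 row)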

3.3.2 - Day of the week names

The following tokens are recognized as names of days of the week:

Day Abbreviations
SUNDAY SUN
MONDAY MON
TUESDAY TUE, TUES
WEDNESDAY WED, WEDS
THURSDAY THU, THUR, THURS
FRIDAY FRI
SATURDAY SAT

3.3.3 - Month names

The following tokens are recognized as names of months:

Month Abbreviations
JANUARY JAN
FEBRUARY FEB
MARCH MAR
APRIL APR
MAY MAY
JUNE JUN
JULY JUL
AUGUST AUG
SEPTEMBER SEP, SEPT
OCTOBER OCT
NOVEMBER NOV
DECEMBER DEC

3.3.4 - Interval literal

A literal that represents a time span.

Syntax

[ @ ] [-] { quantity subtype-unit }[...] [ AGO ]

Parameters

@
Ignored
- (minus)
Specifies a negative interval value.
quantity
Integer numeric constant
subtype-unit
See Interval subtype units for valid values. Subtype units must be specified for year-month intervals; they are optional for day-time intervals.
AGO
Specifies a negative interval value. AGO and - (minus) are synonymous.

Notes

  • The amounts of different units are implicitly added up with appropriate sign accounting.

  • The boundaries of an interval constant are:

    • 9223372036854775807 usec to -9223372036854775807 usec

    • 296533 years 3 mons 21 days 04:00:54.775807 to -296533 years -3 mons -21 days -04:00:54.775807

  • The range of an interval constant is +/- (2^63 - 1) microseconds.

  • In Vertica, interval fields are additive and accept large floating-point numbers.

Examples

See Specifying interval input.
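
For a quick illustration, the following sketch uses day-time subtype units; the exact rendering of the result depends on the session INTERVALSTYLE:

=> SELECT INTERVAL '1 day 2 hours 3 minutes';
 ?column?
----------
 1 02:03
(1 row)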

3.3.4.1 - Interval subtype units

The following tables lists subtype units that you can specify in an interval literal, divided into major categories:

Year-month subtype units

Subtypes Units Notes
Millennium mil, millennium, millennia, mils, millenniums
Century c, cent, century, centuries
Decade dec, decs, decade, decades
Year a Julian year: 365.25 days
ka Julian kilo-year: 365250 days
y, yr, yrs, year, years Calendar year: 365 days
Quarter q, qtr, qtrs, quarter, quarters
Month m, mon, mons, months, month Vertica can interpret m as minute or month, depending on context. See Processing m Input below.
Week w, wk, week, wks, weeks

Day-time subtype units

Subtypes Units Notes
Day d, day, days
Hour h, hr, hrs, hour, hours
Minute m, min, mins, minute, minutes Vertica can interpret input unit m as minute or month, depending on context. See Processing m Input below.
Second s, sec, secs, second, seconds
Millisecond ms, msec, msecs, msecond, mseconds, millisecond, milliseconds
Microsecond us, usec, usecs, usecond, useconds, microseconds, microsecond

Processing m input

Vertica uses context to interpret the input unit m as months or minutes. For example, the following command creates a one-column table with an interval value:

=> CREATE TABLE int_test(i INTERVAL YEAR TO MONTH);

Given the following INSERT statement, Vertica interprets the interval literal 1y 6m as 1 year 6 months:

=> INSERT INTO int_test VALUES('1y 6m');
 OUTPUT
--------
      1
(1 row)
=> COMMIT;
COMMIT
=> SET INTERVALSTYLE TO UNITS;
SET
=> SELECT * FROM int_test;
        i
-----------------
 1 year 6 months
(1 row)

The following ALTER TABLE statement adds a DAY TO MINUTE interval column to table int_test:

=>  ALTER TABLE int_test ADD COLUMN x INTERVAL DAY TO MINUTE;
ALTER TABLE

The next INSERT statement sets the first and second columns to 3y 20m and 1y 6m, respectively. In this case, Vertica interprets the m input literals in two ways:

  • For column i, Vertica interprets the m input as months, and displays 4 years 8 months.

  • For column x, Vertica interprets the m input as minutes. Because the interval is defined as DAY TO MINUTE, it converts the inserted input value 1y 6m to 365 days 6 minutes:

=> INSERT INTO int_test VALUES ('3y 20m', '1y 6m');
 OUTPUT
--------
      1
(1 row)

=> SELECT * FROM int_test;
        i         |        x
------------------+-----------------
 1 year 6 months  |
 4 years 8 months | 365 days 6 mins
(2 rows)

3.3.4.2 - Interval qualifier

Specifies how to interpret and format an interval literal for output, and optionally sets precision. Interval qualifiers are composed of one or two units:

unit [ TO unit ] [ (p) ]

where:

  • unit specifies a day-time or year-month subtype.

  • p specifies precision, an integer between 0 and 6.

If an interval omits an interval qualifier, the default is DAY TO SECOND(6).

Interval qualifiers are divided into two categories:

Day-time interval qualifiers

Qualifier Description
DAY Unconstrained
DAY TO HOUR Span of days and hours
DAY TO MINUTE Span of days and minutes
DAY TO SECOND [(p)] Span of days, hours, minutes, seconds, and fractions of a second.
HOUR Hours within days
HOUR TO MINUTE Span of hours and minutes
HOUR TO SECOND [(p)] Span of hours and seconds
MINUTE Minutes within hours
MINUTE TO SECOND [(p)] Span of minutes and seconds
SECOND [(p)] Seconds within minutes

Year-month interval qualifiers

YEAR
Unconstrained
MONTH
Months within year
YEAR TO MONTH
Span of years and months

Examples

See Controlling interval format.
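
For instance, the following sketch applies a DAY TO SECOND qualifier with a precision of 2 to an interval literal; the exact rendering depends on the session INTERVALSTYLE:

=> SELECT INTERVAL '1 02:03:04.123456' DAY TO SECOND(2);
   ?column?
---------------
 1 02:03:04.12
(1 row)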

4 - Operators

Operators are logical, mathematical, and equality symbols used in SQL to evaluate, compare, or calculate values.

4.1 - Bitwise operators

Bitwise operators perform bit manipulations on INTEGER and BINARY/VARBINARY data types:

Operator Description Example Result
& AND 12 & 4 4
| OR 32 | 3 35
# XOR 17 # 5 20
~ NOT ~1 -2
<< Bitwise shift left 1 << 4 16
>> Bitwise shift right 8 >> 2 2

The bitwise shift operators (<< and >>) are invalid for BINARY/VARBINARY data types.

String argument handling

String arguments must be explicitly cast as BINARY or VARBINARY data types for all bitwise operators. For example:

=> SELECT 'xyz'::VARBINARY & 'zyx'::VARBINARY AS AND;
 AND
-----
 xyx
(1 row)

=> SELECT 'xyz'::VARBINARY | 'zyx'::VARBINARY AS OR;
 OR
-----
 zyz
(1 row)

Bitwise operators treat all string arguments as equal in length. If the arguments have different lengths, the operator function right-pads the smaller string with one or more zero bytes to equal the length of the larger string.

For example, the following statement ANDs unequal strings xyz and zy. Vertica right-pads string zy with one zero byte. The last character in the result is represented accordingly, as \000:

=> SELECT 'xyz'::VARBINARY & 'zy'::VARBINARY AS AND;
  AND
--------
 xy\000
(1 row)

4.2 - Logical operators

Vertica supports the logical operators AND, OR, and NOT:

  • AND evaluates to true when both of the conditions joined by AND are true.

  • OR evaluates to true when either condition is true.

  • NOT negates the result of any Boolean expression.

AND and OR are commutative—that is, you can switch left and right operands without affecting the result. However, the order of evaluation of sub-expressions is not defined. To force evaluation order, use a CASE construct.

Logic

SQL uses a three-valued Boolean logic where NULL represents "unknown":

  • true AND NULL = NULL

  • true OR NULL = true

  • false AND NULL = false

  • false OR NULL = NULL

  • NULL AND NULL = NULL

  • NULL OR NULL = NULL

  • NOT NULL = NULL
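
The following sketch illustrates two of these rules; vsql displays a Boolean NULL as a blank and true as t:

=> SELECT (true AND NULL) AS and_null, (true OR NULL) AS or_null;
 and_null | or_null
----------+---------
          | t
(1 row)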

4.3 - Comparison operators

Comparison operators are available for all data types where comparison makes sense. All comparison operators are binary operators that return values of true, false, or NULL (unknown).

Operator Description Binary function
< less than binary_lt
> greater than binary_gt
<= less than or equal to binary_le
>= greater than or equal to binary_ge
=, <=> equal binary_eq
!=, <> not equal (unsupported for correlated subqueries) binary_ne

NULL handling

Comparison operators return NULL (unknown) if either or both operands are null. One exception applies: <=> returns true if both operands are NULL, and false if one operand is NULL.
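
For example, the following sketch shows both behaviors; the NULL result displays as a blank in vsql:

=> SELECT (1 = NULL) AS equals_null, (NULL::INT <=> NULL::INT) AS null_safe_equals;
 equals_null | null_safe_equals
-------------+------------------
             | t
(1 row)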

Collections

When comparing collections, null collections are ordered last. Otherwise, collections are compared element by element until there is a mismatch, and then they are ordered based on the non-matching elements. If all elements are equal up to the length of the shorter one, then the shorter one is ordered first.

4.4 - Data type coercion operators (CAST)

Data type coercion (casting) passes an expression value to an input conversion routine for a specified data type, resulting in a constant of the indicated type. In Vertica, data type coercion can be invoked by an explicit cast request that uses one of the following constructs:

Syntax

SELECT CAST ( expression AS data-type )
SELECT expression::data-type
SELECT data-type 'string'

Parameters

expression An expression of any type
data-type A SQL data type supported by Vertica to which expression is converted.

Truncation

If a binary value is cast (implicitly or explicitly) to a binary type with a smaller length, the value is silently truncated. For example:

=> SELECT 'abcd'::BINARY(2);
 ?column?
----------
 ab
(1 row)

Similarly, if a character value is cast (implicitly or explicitly) to a character value with a smaller length, the value is silently truncated. For example:

=> SELECT 'abcd'::CHAR(3);
 ?column?
----------
 abc
(1 row)

Binary casting and resizing

Vertica supports only casts and resize operations as follows:

  • BINARY to and from VARBINARY

  • VARBINARY to and from LONG VARBINARY

  • BINARY to and from LONG VARBINARY

On binary data that contains a value with fewer bytes than the target column, values are right-extended with the zero byte '\0' to the full width of the column. Trailing zeros on variable-length binary values are not right-extended:

=> SELECT 'ab'::BINARY(4), 'ab'::VARBINARY(4), 'ab'::LONG VARBINARY(4);
  ?column?  | ?column? | ?column?
------------+----------+----------
 ab\000\000 | ab       | ab
(1 row)

Automatic coercion

The explicit type cast can be omitted if there is no ambiguity as to the type the constant must be. For example, when a constant is assigned directly to a column, it is automatically coerced to the column's data type.
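
For example, a minimal sketch with a hypothetical table t_coerce: the string constant '5' is coerced to the column's INTEGER type on insert:

=> CREATE TABLE t_coerce (n INT);
CREATE TABLE
=> INSERT INTO t_coerce VALUES ('5');
 OUTPUT
--------
      1
(1 row)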

Examples

=> SELECT CAST((2 + 2) AS VARCHAR);
 ?column?
----------
 4
(1 row)

=> SELECT (2 + 2)::VARCHAR;
 ?column?
----------
 4
(1 row)

=> SELECT INTEGER '123';
 ?column?
----------
      123
(1 row)

=> SELECT (2 + 2)::LONG VARCHAR;
 ?column?
----------
 4
(1 row)

=> SELECT '2.2' + 2;
   ERROR:  invalid input syntax for integer: "2.2"

=> SELECT FLOAT '2.2' + 2;
 ?column?
----------
      4.2
(1 row)

4.4.1 - Cast failures

When you invoke data type coercion (casting) by an explicit cast and the cast fails, the result returns either an error or NULL. Cast failures commonly occur when you try to cast conflicting conversions, such as coercing a VARCHAR expression that contains letters to an integer.

When a cast fails, the result returned depends on the data type.

Data type Cast failure default
date, time NULL
literals error
all other types error

Enabling strict time casts

You can force all cast failures to return an error, including those for date/time data types. Doing so lets you see why a cast failed. To return an error instead of NULL, set the configuration parameter EnableStrictTimeCasts to 1:

ALTER SESSION SET EnableStrictTimeCasts=1;

By default, EnableStrictTimeCasts is set to 0. Thus, the following attempt to cast a VARCHAR to a TIME data type returns NULL:


=> SELECT current_value from configuration_parameters WHERE parameter_name ilike '%EnableStrictTimeCasts%';
 current_value
---------------
 0
(1 row)

=> CREATE TABLE mytable (a VARCHAR);
CREATE TABLE
=> INSERT INTO mytable VALUES('one');
OUTPUT
--------
1
(1 row)

=> INSERT INTO mytable VALUES('1');
OUTPUT
--------
1
(1 row)

=> COMMIT;
COMMIT
=> SELECT a::time FROM mytable;
 a
---


(2 rows)

If EnableStrictTimeCasts is enabled, the cast failure returns an error:


=> ALTER SESSION SET EnableStrictTimeCasts=1;
ALTER SESSION
=> SELECT a::time FROM mytable;
ERROR 3679:  Invalid input syntax for time: "1"

Returning all cast failures as NULL

To explicitly cast an expression to a requested data type, use the following construct:

SELECT expression::data-type

Using this construct to cast a value to a conflicting data type returns an error:

=> SELECT 'one'::time;
ERROR 3679:  Invalid input syntax for time: "one"

Vertica also supports the use of the coercion operator ::!, which is useful when you want to return:

  • NULL instead of an error for any non-date/time data types

  • NULL instead of an error after setting EnableStrictTimeCasts

Returning all cast failures as NULL allows those expressions that succeed during the cast to appear in the result. Those expressions that fail during the cast, however, have a NULL value.

The following example queries mytable using the coercion operator ::!. The query returns NULL where column a contains the string one, and returns 1 where the column contains 1. Query results are identical no matter how EnableStrictTimeCasts is set:

=> SELECT current_value FROM configuration_parameters WHERE parameter_name ilike '%EnableStrictTimeCasts%';
 current_value
---------------
 0
(1 row)

=> SELECT a::!int FROM mytable;
a
---

1
(2 rows)

=> ALTER SESSION SET EnableStrictTimeCasts=1;
ALTER SESSION
=> SELECT a::!int FROM mytable;
 a
---

 1
(2 rows)

You can use ::! for casts of arrays and sets. The cast resolves each element individually, producing NULL for elements that cannot be cast.

Note that this functionality only applies to table data. It does not work on inline constant casts and in expressions automatically reduced to such. For example,

SELECT constant ::! FLOAT from (select 'some string' as constant) a;

results in ERROR 2826: Could not convert "some string" to a float8. However, the following returns cast failures as NULL as described:

SELECT string_field ::! float FROM (SELECT 'a string' as string_field UNION ALL SELECT 'another string' ) a;

4.5 - Date/time operators

Vertica supports usage of arithmetic operators on DATE/TIME operands:

  • + (addition)
  • - (subtraction)
  • * (multiplication)
  • / (division)

Examples

The operators described below that take TIME or TIMESTAMP input have two variants:

  • Operators that take TIME WITH TIME ZONE or TIMESTAMP WITH TIME ZONE input.

  • Operators that take TIME WITHOUT TIME ZONE or TIMESTAMP WITHOUT TIME ZONE input.

For brevity, these variants are not shown separately.

The + and * operators come in commutative pairs—for example, both DATE + INTEGER and INTEGER + DATE. Only one of each pair is shown.

Example Result Type Result
DATE '2001-09-28' + INTEGER '7' DATE '2001-10-05'
DATE '2001-09-28' + INTERVAL '1 HOUR' TIMESTAMP '2001-09-28 01:00:00'
DATE '2001-09-28' + TIME '03:00' TIMESTAMP '2001-09-28 03:00:00'
INTERVAL '1 DAY' + INTERVAL '1 HOUR' INTERVAL '1 DAY 01:00:00'
TIMESTAMP '2001-09-28 01:00' + INTERVAL '23 HOURS' TIMESTAMP '2001-09-29 00:00:00'
TIME '01:00' + INTERVAL '3 HOURS' TIME '04:00:00'
- INTERVAL '23 HOURS' INTERVAL '-23:00:00'
DATE '2001-10-01' - DATE '2001-09-28' INTEGER '3'
DATE '2001-10-01' - INTEGER '7' DATE '2001-09-24'
DATE '2001-09-28' - INTERVAL '1 HOUR' TIMESTAMP '2001-09-27 23:00:00'
TIME '05:00' - TIME '03:00' INTERVAL '02:00:00'
TIME '05:00' - INTERVAL '2 HOURS' TIME '03:00:00'
TIMESTAMP '2001-09-28 23:00' - INTERVAL '23 HOURS' TIMESTAMP '2001-09-28 00:00:00'
INTERVAL '1 DAY' - INTERVAL '1 HOUR' INTERVAL '1 DAY -01:00:00'
TIMESTAMP '2001-09-29 03:00' - TIMESTAMP '2001-09-27 12:00' INTERVAL '1 DAY 15:00:00'
900 * INTERVAL '1 SECOND' INTERVAL '00:15:00'
21 * INTERVAL '1 DAY' INTERVAL '21 DAYS'
DOUBLE PRECISION '3.5' * INTERVAL '1 HOUR' INTERVAL '03:30:00'
INTERVAL '1 HOUR' / DOUBLE PRECISION '1.5' INTERVAL '00:40:00'

4.6 - Mathematical operators

Mathematical operators are provided for many data types.

Operator Description Example Result
! Factorial 5 ! 120
+ Addition 2 + 3 5
- Subtraction 2 - 3 -1
* Multiplication 2 * 3 6
/ Division (integer division produces NUMERIC results). 4 / 2 2.00...
// With integer division, returns an INTEGER rather than a NUMERIC. 117.32 // 2.5 46
% Modulo (remainder). For details, see MOD. 5 % 4 1
^ Exponentiation 2.0 ^ 3.0 8
|/ Square root |/ 25.0 5
||/ Cube root ||/ 27.0 3
!! Factorial (prefix operator) !! 5 120
@ Absolute value @ -5.0 5

Factorial operator support

Vertica supports use of factorial operators on positive and negative floating point (DOUBLE PRECISION) numbers and integers. For example:

=> SELECT 4.98!;
   ?column?
------------------
 115.978600750905
(1 row)

Factorial is defined in terms of the gamma function, where (-1)! = Infinity and the factorials of other negative integers are undefined. For example:

(-4)! = NaN
-(4!) = -24

Factorial is defined as follows for all complex numbers z:

z! = gamma(z+1)

For details, see Abramowitz and Stegun: Handbook of Mathematical Functions.

4.7 - NULL operators

To check whether a value is or is not NULL, use the following equivalent constructs:

Standard:

[expression IS NULL | expression IS NOT NULL]

Non-standard:

[expression ISNULL | expression NOTNULL]

Do not write expression = NULL: NULL represents an unknown value, and two unknown values are not necessarily equal. This behavior conforms to the SQL standard.
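
For example, with a hypothetical table t that has a nullable column x:

=> SELECT x FROM t WHERE x IS NOT NULL;  -- returns the rows whose value of x is known
=> SELECT x FROM t WHERE x = NULL;       -- the comparison yields NULL, so no rows are returned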

4.8 - String concatenation operators

To concatenate two strings on a single line, use the concatenation operator (two consecutive vertical bars).

Syntax

string || string

Parameters

string Expression of type CHAR or VARCHAR

Notes

  • || is used to concatenate expressions and constants. The expressions are cast to VARCHAR if possible, otherwise to VARBINARY, and must both be one or the other.
  • Two consecutive strings within a single SQL statement on separate lines are automatically concatenated

Examples

The following example is a single string written on two lines:

=> SELECT E'xx'
-> '\\';
 ?column?
----------
 xx\
(1 row)

The following examples show two strings concatenated:

=> SELECT E'xx' ||
-> '\\';
 ?column?
----------
 xx\\
(1 row)

=> SELECT 'auto' || 'mobile';
 ?column?
----------
 automobile
(1 row)

=> SELECT 'auto'
-> 'mobile';
 ?column?
----------
 automobile
(1 row)

=> SELECT 1 || 2;
 ?column?
----------
 12
(1 row)


=> SELECT '1' || '2';
 ?column?
----------
 12
(1 row)
=> SELECT '1'
-> '2';
 ?column?
----------
 12
(1 row)

5 - Expressions

SQL expressions are the components of a query that compare a value or values against other values. They can also perform calculations. An expression found inside a SQL statement is usually in the form of a conditional statement.

Some functions also use Lambda functions.

Operator precedence

The following table shows operator precedence in decreasing (high to low) order.

When an expression includes more than one operator, specify the order of operation using parentheses, rather than relying on operator precedence.

Operator/Element Associativity Description
. left table/column name separator
:: left typecast
[ ] left array element selection
- right unary minus
^ left exponentiation
* / % left multiplication, division, modulo
+ - left addition, subtraction
IS IS TRUE, IS FALSE, IS UNKNOWN, IS NULL
IN set membership
BETWEEN range containment
OVERLAPS time interval overlap
LIKE string pattern matching
< > less than, greater than
= right equality, assignment
NOT right logical negation
AND left logical conjunction
OR left logical disjunction
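
For example, multiplication binds more tightly than addition, so parentheses change the result (a brief sketch):

=> SELECT 2 + 3 * 4 AS no_parens, (2 + 3) * 4 AS with_parens;
 no_parens | with_parens
-----------+-------------
        14 |          20
(1 row)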

Expression evaluation rules

The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. To force evaluation in a specific order, use a CASE construct. For example, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause:

=> SELECT x, y WHERE x <> 0 AND y/x > 1.5; --- unsafe

But this is safe:

=> SELECT x, y
   WHERE
     CASE
       WHEN x <> 0 THEN y/x > 1.5
       ELSE false
     END;

A CASE construct used in this fashion defeats optimization attempts, so use it only when necessary. (In this particular example, it would be best to avoid the issue by writing y > 1.5*x instead.)

Limits to SQL expressions

Expressions are limited by the available stack. Vertica requires at least 100KB of free stack. If this limit is exceeded then the error "The query contains an expression that is too complex to analyze" might be thrown. Adding physical memory and/or increasing the value of ulimit -s can increase the available stack and prevent the error.

Analytic expressions have a maximum recursion depth of 2000. If this limit is exceeded then the error "The query contains an expression that is too complex to analyze" might be thrown. This limit cannot be increased.

5.1 - Aggregate expressions

An aggregate expression applies an aggregate function across the rows or groups of rows selected by a query.

An aggregate expression only can appear in the select list or HAVING clause of a SELECT statement. It is invalid in other clauses such as WHERE, because those clauses are evaluated before the results of aggregates are formed.

Syntax

An aggregate expression has the following format:

aggregate-function ( [ * ] [ ALL | DISTINCT ] expression )

Parameters

aggregate-function A Vertica function that aggregates data over groups of rows from a query result set.
ALL | DISTINCT

Specifies which input rows to process:

  • ALL (default): Invokes aggregate-function across all input rows where expression evaluates to a non-null value.

  • DISTINCT: Invokes aggregate-function across all input rows where expression evaluates to a unique non-null value.

expression A value expression that does not itself contain an aggregate expression.

Examples

The AVG aggregate function returns the average income from the customer_dimension table:

=> SELECT AVG(annual_income) FROM customer_dimension;
 AVG
--------------
 2104270.6485
(1 row)

The following example shows how to use the COUNT aggregate function with the DISTINCT keyword to count the distinct values of the expression date_key + product_key across all inventory_fact records.

=> SELECT COUNT (DISTINCT date_key + product_key) FROM inventory_fact;
COUNT
-------
21560
(1 row)

5.2 - CASE expressions

The CASE expression is a generic conditional expression that can be used wherever an expression is valid. It is similar to CASE and IF/THEN/ELSE statements in other languages.

Syntax (form 1)

CASE
  WHEN condition THEN result
  [ WHEN condition THEN result ]
  ...
  [ ELSE result ]
END

Parameters

condition An expression that returns a Boolean (true/false) result. If the result is false, subsequent WHEN clauses are evaluated in the same way.
result Specifies the value to return when the associated condition is true.
ELSE result If no condition is true, the value of the CASE expression is the result in the ELSE clause. If the ELSE clause is omitted and no condition matches, the result is NULL.

Syntax (form 2)

CASE expression
 WHEN value THEN result
 [ WHEN value THEN result ]
 ...
 [ ELSE result ]
END

Parameters

expression An expression that is evaluated and compared to all the value specifications in the WHEN clauses until one is found that is equal.
value Specifies a value to compare to the expression.
result Specifies the value to return when the expression is equal to the specified value.
ELSE result Specifies the value to return when the expression is not equal to any value; if no ELSE clause is specified, the value returned is NULL.

Notes

The data types of all result expressions must be convertible to a single output type.

Examples

The following examples show two forms of the CASE expression.

=> SELECT * FROM test;
 a
---
 1
 2
 3
=> SELECT a,
     CASE WHEN a=1 THEN 'one'
          WHEN a=2 THEN 'two'
          ELSE 'other'
     END
   FROM test;
 a | case
---+-------
 1 | one
 2 | two
 3 | other
=> SELECT a,
     CASE a WHEN 1 THEN 'one'
            WHEN 2 THEN 'two'
            ELSE 'other'
     END
   FROM test;
 a | case
---+-------
 1 | one
 2 | two
 3 | other

Special example

A CASE expression does not evaluate subexpressions that are not needed to determine the result. You can use this behavior to avoid division-by-zero errors:

=> SELECT x FROM T1 WHERE
      CASE WHEN x <> 0 THEN y/x > 1.5
      ELSE false
    END;

5.3 - Column references

Syntax

[[[database.]schema.]table-name.]column-name

Parameters

schema Database and schema. The default schema is public. If you specify a database, it must be the current database.
table-name

One of the following:

  • Name of a table

  • Table alias defined in the query's FROM clause

column-name A column name that is unique among all queried tables.

Restrictions

A column reference cannot contain any spaces.
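
For example, using the ALLCAPS table created in Identifiers above, a fully qualified column reference looks like this (a sketch assuming the table was created in the default public schema):

=> SELECT public.allcaps.c1 FROM public.allcaps;
      c1
------------
 upper case
(1 row)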

5.4 - Comments

A comment is an arbitrary sequence of characters beginning with two consecutive hyphen characters and extending to the end of the line. For example:

   -- This is a standard SQL comment

A comment is removed from the input stream before further syntax analysis and is effectively replaced by white space.

Alternatively, C-style block comments can be used where the comment begins with /* and extends to the matching occurrence of */.

   /* multiline comment
    * with nesting: /* nested block comment */
    */

These block comments nest, as specified in the SQL standard. Unlike C, you can comment out larger blocks of code that might contain existing block comments.

5.5 - Date/time expressions

Vertica uses an internal heuristic parser for all date/time input support. Dates and times are input as strings, and are broken up into distinct fields with a preliminary determination of what kind of information might be in the field. Each field is interpreted and either assigned a numeric value, ignored, or rejected. The parser contains internal lookup tables for all textual fields, including months, days of the week, and time zones.

Vertica parses date/time type inputs as follows:

  1. Break the input string into tokens and categorize each token as a string, time, time zone, or number.

  2. Numeric token contains:

    • colon (:) — Parse as a time string, include all subsequent digits and colons.

    • dash (-), slash (/), or two or more dots (.) — Parse as a date string which might have a text month.

    • Numeric only — Parse as a single field or an ISO 8601 concatenated date (19990113 for January 13, 1999) or time (141516 for 14:15:16).

  3. Token starts with a plus (+) or minus (–): Parse as a time zone or a special field.

  4. Token is a text string: match up with possible strings.

    • Perform a binary-search table lookup for the token as either a special string (for example, today), day (for example, Thursday), month (for example, January), or noise word (for example, at, on).

    • Set field values and bit mask for fields. For example, set year, month, day for today, and additionally hour, minute, second for now.

    • If not found, do a similar binary-search table lookup to match the token with a time zone.

    • If still not found, throw an error.

  5. Token is a number or number field:

    • If eight or six digits, and if no other date fields were previously read, interpret as a "concatenated date" (19990118 or 990118). The interpretation is YYYYMMDD or YYMMDD.

    • If token is three digits and a year was already read, interpret as day of year.

    • If four or six digits and a year was already read, interpret as a time (HHMM or HHMMSS).

    • If three or more digits and no date fields were found yet, interpret as a year (this forces yy-mm-dd ordering of the remaining date fields).

    • Otherwise the date field ordering is assumed to follow the DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range.

  6. If BC is specified: negate the year and add one for internal storage. (In the Vertica implementation, 1 BC = year zero.)

  7. If BC is not specified, and year field is two digits in length: adjust the year to four digits. If field is less than 70, add 2000, otherwise add 1900.
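
For example, per the rules above, an eight-digit numeric token with no other date fields is read as a concatenated date (a brief sketch):

=> SELECT '19990113'::DATE;
  ?column?
------------
 1999-01-13
(1 row)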

Month day year ordering

For some formats, ordering of month, day, and year in date input is ambiguous and there is support for specifying the expected ordering of these fields.

Special date/time values

Vertica supports several special date/time values for convenience, as shown below. All of these values need to be written in single quotes when used as constants in SQL statements.

The values INFINITY and -INFINITY are specially represented inside the system and are displayed the same way. The others are simply notational shorthands that are converted to ordinary date/time values when read. (In particular, NOW and related strings are converted to a specific time value as soon as they are read.)

String Valid Data Types Description
epoch DATE, TIMESTAMP 1970-01-01 00:00:00+00 (UNIX SYSTEM TIME ZERO)
INFINITY TIMESTAMP Later than all other time stamps
-INFINITY TIMESTAMP Earlier than all other time stamps
NOW DATE, TIME, TIMESTAMP Current transaction's start time (Note: NOW is not the same as the NOW function.)
TODAY DATE, TIMESTAMP Midnight today
TOMORROW DATE, TIMESTAMP Midnight tomorrow
YESTERDAY DATE, TIMESTAMP Midnight yesterday
ALLBALLS TIME 00:00:00.00 UTC
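
For example, the following sketch casts two of these special values to valid data types; output is illustrative:

=> SELECT 'infinity'::TIMESTAMP, 'epoch'::DATE;
 ?column? |  ?column?
----------+------------
 infinity | 1970-01-01
(1 row)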

The following SQL-compatible functions can also be used to obtain the current time value for the corresponding data type: CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, LOCALTIME, and LOCALTIMESTAMP.

The latter four accept an optional precision specification. (See Date/time functions.) However, these functions are SQL functions and are not recognized as data input strings.

5.6 - NULL value

NULL is a reserved keyword used to indicate that a data value is unknown. It is the ASCII abbreviation for NULL characters (\0).

Usage in expressions

Vertica does not treat an empty string as a NULL value. An expression must specify NULL to indicate that a column value is unknown.

The following considerations apply to using NULL in expressions:

  • NULL is not greater than, less than, equal to, or not equal to any other expression. Use the IS NULL and IS NOT NULL predicates to determine whether an expression value is NULL.

  • You can write queries with expressions that contain the <=> operator for NULL=NULL joins. See Equi-joins and non equi-joins.

  • Vertica accepts NULL characters ('\0') in constant strings and does not remove null characters from VARCHAR fields on input or output.

Projection ordering of NULL data

Vertica sorts NULL values in projection columns as follows:

Column data type NULL values placed at...
NUMERIC INTEGER DATE TIME TIMESTAMP INTERVAL Beginning of sorted column (NULLS FIRST)
FLOAT STRING BOOLEAN End of sorted column (NULLS LAST)

See also

NULL-handling functions

6 - Lambda functions

Some SQL functions have arguments that are lambda functions. A lambda function is an unnamed inline function that is evaluated by the containing SQL function and returns a value.

Syntax

Lambda with one argument:

argument -> expression

Lambda with more than one argument:

(argument, ...) -> expression

Arguments

argument Name to use for an input value for the expression. The name cannot be a reserved keyword, the name of an argument to a parent or nested lambda, or a column name or alias.
expression Expression that uses the input arguments and returns a result to the containing SQL function. See the documentation of individual SQL functions for restrictions on return values. For example, some functions require a Boolean result.

Examples

The ARRAY_FIND function returns the first index that matches the element being searched for. Instead of a literal element, you can write a lambda function that returns a Boolean. The lambda function is applied to each element in the array until a match is found or all elements have been tested. In the following example, each person in the table has an array of email addresses, and the function locates fake addresses:

=> CREATE TABLE people (id INT, name VARCHAR, email ARRAY[VARCHAR,5]);

=> SELECT name, ARRAY_FIND(email, e -> REGEXP_LIKE(e,'example.com','i'))
   AS 'example.com'
   FROM people;
     name      | example.com
---------------+-------------
 Alice Adams   |           1
 Bob Adams     |           1
 Carol Collins |           0
 Dave Jones    |           0
(4 rows)

The argument e represents the individual element, and the body of the lambda expression is the regular-expression comparison. The input table has four rows; in each row, the lambda function is called once per array element.

In the following example, a schedules table includes an array of events, where each event is a ROW with several fields:

=> CREATE TABLE schedules
       (guest VARCHAR,
       events ARRAY[ROW(e_date DATE, e_name VARCHAR, price NUMERIC(8,2))]);

You can use the CONTAINS function with a lambda expression to find people who have more than one event on the same day. The second argument, idx, is the index of the current element:

=> SELECT guest FROM schedules
WHERE CONTAINS(events, (e, idx) ->
                       (idx < ARRAY_LENGTH(events) - 1)
                       AND (e.e_date = events[idx + 1].e_date));
    guest
-------------
 Alice Adams
(1 row)

7 - Predicates

Predicates are truth-tests. If the predicate test is true, it returns a value. Each predicate is evaluated per row, so that when the predicate is part of an entire table SELECT statement, the statement can return multiple results.

Predicates consist of a set of parameters and arguments. For example, in the following WHERE clause:

WHERE name = 'Smith'
  • name = 'Smith' is the predicate

  • 'Smith' is an expression

7.1 - ANY and ALL

ANY and ALL are logical operators that let you make comparisons on subqueries that return one or more rows. Both operators must be preceded by a comparison operator and followed by a subquery:

expression comparison-operator { ANY | ALL } (subquery)
  • ANY returns true if the comparison between expression and any value returned by subquery evaluates to true.
  • ALL returns true only if the comparison between expression and all values returned by subquery evaluates to true.

Equivalent operators

You can use the following operators instead of ANY or ALL:

This operator... Is equivalent to:
SOME ANY
IN = ANY
NOT IN <> ALL
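
For example, IN and = ANY are interchangeable: the following two queries (a minimal sketch) return the same rows from the t1 and t2 tables shown in the Examples below:

=> SELECT c1 FROM t1 WHERE c1 IN (SELECT c1 FROM t2) ORDER BY c1;
=> SELECT c1 FROM t1 WHERE c1 = ANY (SELECT c1 FROM t2) ORDER BY c1;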

NULL handling

Vertica supports multicolumn <> ALL subqueries where the columns are not marked NOT NULL. If any column contains a NULL value, Vertica returns a run-time error.

Vertica does not support ANY subqueries that are nested in another expression if any column values are NULL.

Examples

Examples below use the following tables and data:

=> SELECT * FROM t1 ORDER BY c1;
 c1 | c2
----+-----
  1 | cab
  1 | abc
  2 | fed
  2 | def
  3 | ihg
  3 | ghi
  4 | jkl
  5 | mno
(8 rows)
=> SELECT * FROM t2 ORDER BY c1;
 c1 | c2
----+-----
  1 | abc
  2 | fed
  3 | jkl
  3 | stu
  3 | zzz
(5 rows)

ANY subqueries

Subqueries that use the ANY keyword return true when the comparison between the left-hand expression and any value retrieved by the subquery evaluates to true.

ANY subquery within an expression:

=> SELECT c1, c2 FROM t1 WHERE COALESCE((t1.c1 > ANY (SELECT c1 FROM t2)));
 c1 | c2
----+-----
  2 | fed
  2 | def
  3 | ihg
  3 | ghi
  4 | jkl
  5 | mno
(6 rows)

ANY noncorrelated subqueries without aggregates:

=> SELECT c1 FROM t1 WHERE c1 = ANY (SELECT c1 FROM t2) ORDER BY c1;
 c1
----
  1
  1
  2
  2
  3
  3
(6 rows)

ANY noncorrelated subqueries with aggregates:

=> SELECT c1, c2 FROM t1 WHERE c1 <> ANY (SELECT MAX(c1) FROM t2) ORDER BY c1;
 c1 | c2
----+-----
  1 | cab
  1 | abc
  2 | fed
  2 | def
  4 | jkl
  5 | mno
(6 rows)

=> SELECT c1 FROM t1 GROUP BY c1 HAVING c1 <> ANY (SELECT MAX(c1) FROM t2) ORDER BY c1;
 c1
----
  1
  2
  4
  5
(4 rows)

ANY noncorrelated subqueries with aggregates and a GROUP BY clause:

=> SELECT c1, c2 FROM t1 WHERE c1 <> ANY (SELECT MAX(c1) FROM t2 GROUP BY c2) ORDER BY c1;
 c1 | c2
----+-----
  1 | cab
  1 | abc
  2 | fed
  2 | def
  3 | ihg
  3 | ghi
  4 | jkl
  5 | mno
(8 rows)

ANY noncorrelated subqueries with a GROUP BY clause:

=> SELECT c1, c2 FROM t1 WHERE c1 <=> ANY (SELECT c1 FROM t2 GROUP BY c1) ORDER BY c1;
 c1 | c2
----+-----
  1 | cab
  1 | abc
  2 | fed
  2 | def
  3 | ihg
  3 | ghi
(6 rows)

ANY correlated subqueries with no aggregates or GROUP BY clause:

=> SELECT c1, c2 FROM t1 WHERE c1 >= ANY (SELECT c1 FROM t2 WHERE t2.c2 = t1.c2) ORDER BY c1;
 c1 | c2
----+-----
  1 | abc
  2 | fed
  4 | jkl
(3 rows)

ALL subqueries

A subquery that uses the ALL keyword returns true only when the comparison between the left-hand expression and every value retrieved by the subquery evaluates to true; otherwise, it returns false.

ALL noncorrelated subqueries without aggregates:

=> SELECT c1, c2 FROM t1 WHERE c1 >= ALL (SELECT c1 FROM t2) ORDER BY c1;
 c1 | c2
----+-----
  3 | ihg
  3 | ghi
  4 | jkl
  5 | mno
(4 rows)

ALL noncorrelated subqueries with aggregates:

=> SELECT c1, c2 FROM t1 WHERE c1 = ALL (SELECT MAX(c1) FROM t2) ORDER BY c1;
 c1 | c2
----+-----
  3 | ihg
  3 | ghi
(2 rows)

=> SELECT c1 FROM t1 GROUP BY c1 HAVING c1 <> ALL (SELECT MAX(c1) FROM t2) ORDER BY c1;
 c1
----
  1
  2
  4
  5
(4 rows)

ALL noncorrelated subqueries with aggregates and a GROUP BY clause:

=> SELECT c1, c2 FROM t1 WHERE c1 <= ALL (SELECT MAX(c1) FROM t2 GROUP BY c2) ORDER BY c1;
 c1 | c2
----+-----
  1 | cab
  1 | abc
(2 rows)

ALL noncorrelated subqueries with a GROUP BY clause:

=> SELECT c1, c2 FROM t1 WHERE c1 <> ALL (SELECT c1 FROM t2 GROUP BY c1) ORDER BY c1;
 c1 | c2
----+-----
  4 | jkl
  5 | mno
(2 rows)

7.2 - BETWEEN

Checks whether an expression is within the range of two other expressions, inclusive.

Checks whether an expression is within the range of two other expressions, inclusive. All expressions must be of the same or compatible data types.

Syntax

WHERE a BETWEEN x AND y

Equivalent predicates

The following BETWEEN predicates can be rewritten in conventional SQL with logical operators AND and OR.

This BETWEEN predicate...        Is equivalent to...
WHERE a BETWEEN x AND y          WHERE a >= x AND a <= y
WHERE a NOT BETWEEN x AND y      WHERE a < x OR a > y
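
As a quick check of this equivalence, the following two queries (a minimal sketch that uses literal values) both return true:

=> SELECT 3 BETWEEN 1 AND 5;
 ?column?
----------
 t
(1 row)

=> SELECT 3 >= 1 AND 3 <= 5;
 ?column?
----------
 t
(1 row)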

Examples

The BETWEEN predicate can be especially useful for querying date ranges, as shown in the following examples:

=> SELECT NOW()::DATE;
    NOW
------------
 2022-12-15
(1 row)

=> CREATE TABLE t1 (a INT, b varchar(12), c DATE);
CREATE TABLE
=> INSERT INTO t1 VALUES
    (0,'today',NOW()),
    (1,'today+1',NOW()+1),
    (2,'today+2',NOW()+2),
    (3,'today+3',NOW()+3),
    (4,'today+4',NOW()+4),
    (5,'today+5',NOW()+5),
    (6,'today+6',NOW()+6);
 OUTPUT
--------
      7
(1 row)

=> COMMIT;
COMMIT
=> SELECT * FROM t1;
 a |    b    |     c
---+---------+------------
 0 | today   | 2022-12-15
 1 | today+1 | 2022-12-16
 2 | today+2 | 2022-12-17
 3 | today+3 | 2022-12-18
 4 | today+4 | 2022-12-19
 5 | today+5 | 2022-12-20
 6 | today+6 | 2022-12-21
(7 rows)

=> SELECT * FROM t1 WHERE c BETWEEN '2022-12-17' AND '2022-12-20';
 a |    b    |     c
---+---------+------------
 2 | today+2 | 2022-12-17
 3 | today+3 | 2022-12-18
 4 | today+4 | 2022-12-19
 5 | today+5 | 2022-12-20
(4 rows)

Use the NOW and INTERVAL keywords to query a date range:

=> SELECT * FROM t1 WHERE c BETWEEN NOW()::DATE AND NOW()::DATE + INTERVAL '2 days';
 a |    b    |     c
---+---------+------------
 0 | today   | 2022-12-15
 1 | today+1 | 2022-12-16
 2 | today+2 | 2022-12-17
(3 rows)

7.3 - Boolean

Retrieves rows where the value of an expression is true, false, or unknown (NULL).

Retrieves rows where the value of an expression is true, false, or unknown (NULL).

Syntax

expression IS [NOT] TRUE
expression IS [NOT] FALSE
expression IS [NOT] UNKNOWN

Notes

  • NULL input is treated as the value UNKNOWN.
  • IS UNKNOWN and IS NOT UNKNOWN are effectively the same as the NULL predicate, except that the input expression does not have to be a single column value. To check a single column value for NULL, use the NULL predicate.
  • Do not confuse the Boolean predicate with Boolean operators or the Boolean data type, which can have only two values: true and false.
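
Examples

The following query is a minimal sketch that shows how a NULL comparison result is treated: IS UNKNOWN returns true for it, while IS TRUE returns false:

=> SELECT (NULL = 1) IS UNKNOWN, (NULL = 1) IS TRUE, (1 < 2) IS TRUE;
 ?column? | ?column? | ?column?
----------+----------+----------
 t        | f        | t
(1 row)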

7.4 - EXISTS

EXISTS and NOT EXISTS predicates compare an expression against a subquery.

EXISTS and NOT EXISTS predicates compare an expression against a subquery:

  • EXISTS returns true if the subquery returns one or more rows.
  • NOT EXISTS returns true if the subquery returns no rows.

Syntax

expression [ NOT ] EXISTS ( subquery )

Usage

EXISTS results depend only on whether any records are returned, not on the contents of those records. Because the subquery output is usually of no interest, EXISTS tests are commonly written in one of the following ways:

EXISTS (SELECT 1 WHERE...)
EXISTS (SELECT * WHERE...)

In the first case, the subquery returns 1 for every record that it finds. For example, the following query retrieves a list of all Massachusetts customers whose store purchases were greater than 550 dollars:

=> SELECT customer_key, customer_name, customer_state
   FROM public.customer_dimension WHERE EXISTS
     (SELECT 1 FROM store.store_sales_fact
      WHERE customer_key = public.customer_dimension.customer_key
      AND sales_dollar_amount > 550)
   AND customer_state = 'MA' ORDER BY customer_key;
 customer_key |     customer_name      | customer_state
--------------+------------------------+----------------
            2 | Anna G. Li              | MA
            4 | Daniel I. Fortin        | MA
            7 | David H. Greenwood      | MA
            8 | Wendy S. Young          | MA
            9 | Theodore X. Brown       | MA
      ...
        49902 | Amy Q. Pavlov          | MA
        49922 | Doug C. Carcetti       | MA
        49930 | Theodore G. McNulty    | MA
        49979 | Ben Z. Miller          | MA
(1058 rows)

EXISTS versus IN

Whether you use EXISTS or IN subqueries depends on which predicates you select in outer and inner query blocks. For example, the following query gets a list of all the orders placed by all stores on January 2, 2007 for vendors with records in the vendor table:

=> SELECT store_key, order_number, date_ordered
   FROM store.store_orders_fact WHERE EXISTS
     (SELECT 1 FROM public.vendor_dimension vd JOIN store.store_orders_fact ord ON vd.vendor_key = ord.vendor_key)
   AND date_ordered = '2007-01-02';
 store_key | order_number | date_ordered
-----------+--------------+--------------
       114 |       271071 | 2007-01-02
        19 |       290888 | 2007-01-02
       132 |        58942 | 2007-01-02
       232 |         9286 | 2007-01-02
       126 |       224474 | 2007-01-02
       196 |        63482 | 2007-01-02
  ...
       196 |        83327 | 2007-01-02
       138 |       278373 | 2007-01-02
       179 |       293586 | 2007-01-02
       155 |       213413 | 2007-01-02
(506 rows)

The above query looks for existence of the vendor and the order date. To return a particular value, rather than test simple existence, the following query looks for orders placed by the vendor who got the best deal on January 2, 2007:

=> SELECT store_key, order_number, date_ordered, vendor_name
   FROM store.store_orders_fact ord JOIN public.vendor_dimension vd ON ord.vendor_key = vd.vendor_key
   WHERE vd.deal_size IN (SELECT MAX(deal_size) FROM public.vendor_dimension) AND date_ordered = '2007-01-02';
 store_key | order_number | date_ordered |     vendor_name
-----------+--------------+--------------+----------------------
        50 |        99234 | 2007-01-02   | Everything Wholesale
        81 |       200802 | 2007-01-02   | Everything Wholesale
       115 |        13793 | 2007-01-02   | Everything Wholesale
       204 |        41842 | 2007-01-02   | Everything Wholesale
       133 |       169025 | 2007-01-02   | Everything Wholesale
       163 |       208580 | 2007-01-02   | Everything Wholesale
        29 |       154972 | 2007-01-02   | Everything Wholesale
       145 |       236790 | 2007-01-02   | Everything Wholesale
       249 |        54838 | 2007-01-02   | Everything Wholesale
         7 |       161536 | 2007-01-02   | Everything Wholesale
(10 rows)

See also

IN

7.5 - IN

Checks whether a single value is found (or not found) within a set of values.

Checks whether a single value is found (or not found) within a set of values.

Syntax

(column-list) [ NOT ] IN ( values-list )

Arguments

column-list
One or more comma-delimited columns in the queried tables.
values-list
Comma-delimited list of constant values to find in the column-list columns. Each values-list value maps to a column-list column according to their order in values-list and column-list, respectively. Column/value pairs must have compatible data types.

You can specify multiple sets of values as follows:

( (values-list), (values-list)[,...] )

Null handling

Vertica supports multicolumn NOT IN subqueries where the columns are not marked NOT NULL. If one of the columns is found to contain a NULL value during query execution, Vertica returns a run-time error.

Similarly, IN subqueries nested within another expression are not supported if any column values are NULL. For example, in the following statement, if column x in either table contains a NULL value, Vertica returns a run-time error:

=> SELECT * FROM t1 WHERE (x IN (SELECT x FROM t2)) IS FALSE;
   ERROR: NULL value found in a column used by a subquery

EXISTS versus IN

Whether you use EXISTS or IN subqueries depends on which predicates you select in outer and inner query blocks. For example, the following query gets a list of all the orders placed by all stores on January 2, 2007 for vendors with records in the vendor table:

=> SELECT store_key, order_number, date_ordered
   FROM store.store_orders_fact WHERE EXISTS
     (SELECT 1 FROM public.vendor_dimension vd JOIN store.store_orders_fact ord ON vd.vendor_key = ord.vendor_key)
   AND date_ordered = '2007-01-02';
 store_key | order_number | date_ordered
-----------+--------------+--------------
       114 |       271071 | 2007-01-02
        19 |       290888 | 2007-01-02
       132 |        58942 | 2007-01-02
       232 |         9286 | 2007-01-02
       126 |       224474 | 2007-01-02
       196 |        63482 | 2007-01-02
  ...
       196 |        83327 | 2007-01-02
       138 |       278373 | 2007-01-02
       179 |       293586 | 2007-01-02
       155 |       213413 | 2007-01-02
(506 rows)

The above query looks for existence of the vendor and the order date. To return a particular value, rather than test simple existence, the following query looks for orders placed by the vendor who got the best deal on January 2, 2007:

=> SELECT store_key, order_number, date_ordered, vendor_name
   FROM store.store_orders_fact ord JOIN public.vendor_dimension vd ON ord.vendor_key = vd.vendor_key
   WHERE vd.deal_size IN (SELECT MAX(deal_size) FROM public.vendor_dimension) AND date_ordered = '2007-01-02';
 store_key | order_number | date_ordered |     vendor_name
-----------+--------------+--------------+----------------------
        50 |        99234 | 2007-01-02   | Everything Wholesale
        81 |       200802 | 2007-01-02   | Everything Wholesale
       115 |        13793 | 2007-01-02   | Everything Wholesale
       204 |        41842 | 2007-01-02   | Everything Wholesale
       133 |       169025 | 2007-01-02   | Everything Wholesale
       163 |       208580 | 2007-01-02   | Everything Wholesale
        29 |       154972 | 2007-01-02   | Everything Wholesale
       145 |       236790 | 2007-01-02   | Everything Wholesale
       249 |        54838 | 2007-01-02   | Everything Wholesale
         7 |       161536 | 2007-01-02   | Everything Wholesale
(10 rows)

Examples

The following SELECT statement queries all data in table t11.

=> SELECT * FROM t11 ORDER BY pk;
 pk | col1 | col2 | SKIP_ME_FLAG
----+------+------+--------------
  1 |    2 |    3 | t
  2 |    3 |    4 | t
  3 |    4 |    5 | f
  4 |    5 |    6 | f
  5 |    6 |    7 | t
  6 |      |    8 | f
  7 |    8 |      | t
(7 rows)

The following query specifies an IN predicate, to find all rows in t11 where columns col1 and col2 contain values of (2,3) or (6,7):

=> SELECT * FROM t11 WHERE (col1, col2) IN ((2,3), (6,7)) ORDER BY pk;
 pk | col1 | col2 | SKIP_ME_FLAG
----+------+------+--------------
  1 |    2 |    3 | t
  5 |    6 |    7 | t
(2 rows)

The following query uses the VMart schema to illustrate the use of outer expressions referring to different inner expressions:

=> SELECT product_description, product_price FROM product_dimension
   WHERE (product_dimension.product_key, product_dimension.product_key) IN
      (SELECT store.store_orders_fact.order_number,
         store.store_orders_fact.quantity_ordered
       FROM store.store_orders_fact);
     product_description     | product_price
-----------------------------+---------------
 Brand #73 wheelchair        |           454
 Brand #72 box of candy      |           326
 Brand #71 vanilla ice cream |           270
(3 rows)

7.6 - INTERPOLATE

Joins two event series using some ordered attribute.

Joins two event series using some ordered attribute. Event series joins let you compare values from two series directly, rather than having to normalize the series to the same measurement interval.

An event series join is an extension of a regular outer join. The difference between expressing a regular outer join and an event series join is the INTERPOLATE predicate, which is used in the ON clause (see Examples below). Instead of padding the non-preserved side with null values when there is no match, the event series join pads the non-preserved side with the previous/next values from the table.

Interpolated values come from the table that contains the null, not from the other table. Vertica does not guarantee that the output contains no null values. If there is no previous/next value for a mismatched row, that row is padded with nulls.

Syntax

expression1 INTERPOLATE { PREVIOUS | NEXT } VALUE expression2

Arguments

expression1,expression2

A column reference from one of the tables specified in the FROM clause.

The columns can be of any data type. Because event series are time-based, the type is typically DATE/TIME or TIMESTAMP.

{ PREVIOUS | NEXT } VALUE

Pads the non-preserved side with the previous/next values when there is no match. If previous is called on the first row (or next on the last row), the row is padded with null values.

Input rows are sorted in ascending logical order of the join column.

Notes

  • Data is logically partitioned on the table in which it resides, based on other ON clause equality predicates.

  • Event series join requires that the joined tables are both sorted on columns in equality predicates, in any order, followed by the INTERPOLATED column. If data is already sorted in this order, then an explicit sort is avoided, which can improve query performance. For example, given the following tables:

    ask: exchange, stock, ts, price
    bid: exchange, stock, ts, price

    In the query that follows:

    • ask is sorted on exchange, stock (or the reverse), ts

    • bid is sorted on exchange, stock (or the reverse), ts

    SELECT ask.price - bid.price, ask.ts, ask.stock, ask.exchange
    FROM ask FULL OUTER JOIN bid
       ON ask.stock = bid.stock AND ask.exchange =
       bid.exchange AND ask.ts INTERPOLATE PREVIOUS
       VALUE bid.ts;
    

Restrictions

  • Only one INTERPOLATE expression is allowed per join.
  • INTERPOLATE expressions are used only with ANSI SQL-99 syntax (the ON clause), which is already true for full outer joins.
  • INTERPOLATE can be used with equality predicates only.
  • The AND operator is supported but not the OR and NOT operators.
  • Expressions and implicit or explicit casts are not supported, but subqueries are allowed.

Semantics

When you write an event series join in place of normal join, values are evaluated as follows (using the schema in the examples below):

  • t is the outer, preserved table.
  • t1 is the inner, non-preserved table.
  • For each row in outer table t, the ON clause predicates are evaluated for each combination of each row in the inner table t1.
  • If the ON clause predicates evaluate to true for any combination of rows, those combination rows are produced at the output.
  • If the ON clause is false for all combinations, a single output row is produced with the values of the row from t, along with the columns of t1 chosen from the row in t1 with the greatest t1.y value such that t1.y < t.x. If no such row is found, the t1 columns are padded with nulls.

In the case of a full outer join, all values from both tables are preserved.

Examples

The examples that follow use this simple schema.

CREATE TABLE t(x TIME);
CREATE TABLE t1(y TIME);
INSERT INTO t VALUES('12:40:23');
INSERT INTO t VALUES('13:40:25');
INSERT INTO t VALUES('13:45:00');
INSERT INTO t VALUES('14:49:55');
INSERT INTO t1 VALUES('12:40:23');
INSERT INTO t1 VALUES('14:00:00');
COMMIT;

Normal full outer join

=> SELECT * FROM t FULL OUTER JOIN t1 ON t.x = t1.y;

Notice the null rows from the non-preserved table:

    x     |    y
----------+----------
 12:40:23 | 12:40:23
 13:40:25 |
 13:45:00 |
 14:49:55 |
          | 14:00:00
(5 rows)

Full outer join with interpolation

=> SELECT * FROM t FULL OUTER JOIN t1 ON t.x INTERPOLATE PREVIOUS VALUE t1.y;

In this case, the rows with no entry point are padded with values from the previous row.

    x     |    y
----------+----------
 12:40:23 | 12:40:23
 13:40:25 | 12:40:23
 13:45:00 | 12:40:23
 14:49:55 | 14:00:00
 13:45:00 | 14:00:00
(5 rows)

Likewise, interpolate next is also supported:

=> SELECT * FROM t FULL OUTER JOIN t1 ON t.x INTERPOLATE NEXT VALUE t1.y;

In this case, the rows with no entry point are padded with values from the next row.

    x     |    y
----------+----------
 12:40:23 | 12:40:23
 13:40:25 | 14:00:00
 13:45:00 | 14:00:00
 14:49:55 |
 14:49:55 | 14:00:00
(5 rows)

Normal left outer join

=> SELECT * FROM t LEFT OUTER JOIN t1 ON t.x = t1.y;

Again, there are nulls in the non-preserved table:

    x     |    y
----------+----------
 12:40:23 | 12:40:23
 13:40:25 |
 13:45:00 |
 14:49:55 |
(4 rows)

Left outer join with interpolation

=> SELECT * FROM t LEFT OUTER JOIN t1 ON t.x INTERPOLATE PREVIOUS VALUE t1.y;

Nulls have been padded with interpolated values.

    x     |    y
----------+----------
 12:40:23 | 12:40:23
 13:40:25 | 12:40:23
 13:45:00 | 12:40:23
 14:49:55 | 14:00:00
(4 rows)

Likewise, interpolate next is also supported:

=> SELECT * FROM t LEFT OUTER JOIN t1 ON t.x INTERPOLATE NEXT VALUE t1.y;

Nulls have been padded with interpolated values here as well.

    x     |    y
----------+----------
 12:40:23 | 12:40:23
 13:40:25 | 14:00:00
 13:45:00 | 14:00:00
 14:49:55 |
 (4 rows)

Inner joins

For inner joins, there is no difference between a regular inner join and an event series inner join. Since null values are eliminated from the result set, there is nothing to interpolate.

A regular inner join returns only the single matching row at 12:40:23:

=> SELECT * FROM t INNER JOIN t1 ON t.x = t1.y;
    x     |    y
----------+----------
 12:40:23 | 12:40:23
(1 row)

An event series inner join finds the same single-matching row at 12:40:23:

=> SELECT * FROM t INNER JOIN t1 ON t.x INTERPOLATE PREVIOUS VALUE t1.y;
    x     |    y
----------+----------
 12:40:23 | 12:40:23
(1 row)

See also

Event series joins

7.6.1 - Join predicate

Specifies the columns on which records from two or more tables are joined.

Specifies the columns on which records from two or more tables are joined. You can connect multiple join predicates with logical operators AND, OR, and NOT.

Syntax

ON column-ref = column-ref [ {AND | OR | NOT } column-ref = column-ref ]...

Parameters

column-ref Specifies a column in a queried table. For best performance, do not join on LONG VARBINARY and LONG VARCHAR columns.
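
Examples

The following query is a minimal sketch that assumes the VMart schema used elsewhere in this section; it joins employees to stores whose regions match:

=> SELECT e.employee_last_name, s.store_name
   FROM public.employee_dimension e
   JOIN store.store_dimension s ON e.employee_region = s.store_region;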

See also

Joins

7.7 - LIKE

Retrieves rows where a string expression—typically a column—matches the specified pattern or, if qualified by ANY or ALL, set of patterns.

Retrieves rows where a string expression—typically a column—matches the specified pattern or, if qualified by ANY or ALL, set of patterns. Patterns can contain one or more wildcard characters.

If an ANY or ALL pattern is qualified with NOT, the negation is pushed down to each clause. NOT LIKE ANY (a, b) is equivalent to NOT LIKE a OR NOT LIKE b. See the examples.

Syntax

string-expression [ NOT ] { LIKE | ILIKE | LIKEB | ILIKEB }
   { pattern | { ANY | SOME | ALL } ( pattern,... ) } [ ESCAPE 'char' ]

Arguments

string-expression
String expression, typically a column, to test for instances of the specified pattern or patterns.
NOT
Returns true if the LIKE predicate returns false and vice-versa. When used with ANY or ALL, applies to each value individually.
LIKE | ILIKE | LIKEB | ILIKEB
Type of comparison:
  • LIKE: Complies with the SQL standard, case-sensitive, operates on UTF-8 character strings, exact behavior depends on collation parameters such as strength. LIKE is stable for character strings, but immutable for binary strings
  • ILIKE: Same as LIKE but case-insensitive.
  • LIKEB: Performs case-sensitive byte-at-a-time ASCII comparisons, immutable for character and binary strings.
  • ILIKEB: Same as LIKEB but case-insensitive.
pattern
A pattern to test against the expression. Pattern strings can contain the following wildcard characters:
  • _ (underscore): Match any single character.

  • % (percent): Match any string of zero or more characters.

ANY | SOME | ALL
Apply a comma-delimited list of patterns, where:
  • ANY and SOME return true if any pattern matches, equivalent to logical OR. These options are synonyms.

  • ALL returns true only if all patterns match, equivalent to logical AND.

ESCAPE char
Escape character, by default backslash (\), used to escape reserved characters: wildcard characters (underscore and percent), and the escape character itself.

This option is enforced only for non-default collations; it is currently unsupported with ANY/ALL pattern matching.

Substitute symbols

You can substitute the following symbols for LIKE and its variants:

Symbol Equivalent to:
~~ LIKE
~# LIKEB
~~* ILIKE
~#* ILIKEB
!~~ NOT LIKE
!~# NOT LIKEB
!~~* NOT ILIKE
!~#* NOT ILIKEB
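
For example, the following minimal sketch shows that the ~~ symbol behaves the same as the LIKE keyword (the column aliases are illustrative only):

=> SELECT 'Vertica' ~~ 'Vert%' AS with_symbol, 'Vertica' LIKE 'Vert%' AS with_keyword;
 with_symbol | with_keyword
-------------+--------------
 t           | t
(1 row)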

Pattern matching

LIKE and its variants require that the entire string expression match the specified patterns. To match a sequence of characters anywhere within a string, the pattern must start and end with a percent sign.

LIKE does not ignore trailing white space characters. If the data values to match end with an indeterminate amount of white space, append the wildcard character % to pattern.
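
For example, in the following minimal sketch, the first comparison returns false because LIKE must match the entire string, while the second returns true because the pattern is bracketed by percent signs (the column aliases are illustrative only):

=> SELECT 'database' LIKE 'base' AS exact_match, 'database' LIKE '%base%' AS anywhere;
 exact_match | anywhere
-------------+----------
 f           | t
(1 row)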

Locale dependencies

In the default locale, LIKE and ILIKE perform UTF-8 character-at-a-time, locale-insensitive comparisons. ILIKE handles language-independent case-folding.

In non-default locales, LIKE and ILIKE perform locale-sensitive string comparisons, including some automatic normalization, using the same algorithm as the = operator on VARCHAR types.

ESCAPE expressions evaluate to exactly one octet—or one UTF-8 character for non-default locales.

Examples

Basic pattern matching

The following query searches for names with a common prefix:

=> SELECT name FROM people WHERE name LIKE 'Ann%';
   name
-----------
 Ann
 Ann Marie
 Anna
(3 rows)

LIKE ANY/ALL

LIKE operators support the keywords ANY and ALL, which let you specify multiple patterns to test against a string expression. For example, the following query finds all names that begin or end with the letter 'A':

=> SELECT name FROM people WHERE name LIKE ANY ('A%', '%a');
   name
-----------
 Alice
 Ann
 Ann Marie
 Anna
 Roberta
(5 rows)

LIKE ANY usage is equivalent to individual LIKE conditions combined with OR:

=> SELECT name FROM people WHERE name LIKE 'A%' OR name LIKE '%a';
   name
-----------
 Alice
 Ann
 Ann Marie
 Anna
 Roberta
(5 rows)

Similarly, LIKE ALL is equivalent to individual LIKE conditions combined with AND.

NOT LIKE ANY/ALL

You can use NOT with LIKE ANY or LIKE ALL. NOT does not negate the LIKE expression; instead it negates each clause.

Consider a table with the following contents:

=> SELECT name FROM people;
   name
-----------
 Alice
 Ann
 Ann Marie
 Anna
 Richard
 Rob
 Robert
 Roberta
(8 rows)

In the following query, NOT LIKE ANY ('A%', '%a') is equivalent to NOT LIKE 'A%' OR NOT LIKE '%a', so the only result that is eliminated is Anna, which matches both patterns:

=> SELECT name FROM people WHERE name NOT LIKE ANY ('A%', '%a');
   name
-----------
 Alice
 Ann
 Ann Marie
 Richard
 Rob
 Robert
 Roberta
(7 rows)

--- same results:
=> SELECT name FROM people WHERE name NOT LIKE 'A%' OR name NOT LIKE '%a';

NOT LIKE ALL eliminates results that satisfy any pattern:

=> SELECT name FROM people WHERE name NOT LIKE ALL ('A%', '%a');
  name
---------
 Richard
 Rob
 Robert
(3 rows)

--- same results:
=> SELECT name FROM people WHERE name NOT LIKE 'A%' AND name NOT LIKE '%a';

Pattern matching in locales

The following example illustrates pattern matching in locales.

=> \locale default
INFO 2567:  Canonical locale: 'en_US'
Standard collation: 'LEN_KBINARY'
English (United States)
=> CREATE TABLE src(c1 VARCHAR(100));
=> INSERT INTO src VALUES (U&'\00DF'); --The sharp s (ß)
=> INSERT INTO src VALUES ('ss');
=> COMMIT;

Querying the src table in the default locale returns both ss and sharp s.

=> SELECT * FROM src;
 c1
----
 ß
 ss
(2 rows)

The following query combines pattern-matching predicates to return the results from column c1:

=> SELECT c1, c1 = 'ss' AS equality, c1 LIKE 'ss'
   AS LIKE, c1 ILIKE 'ss' AS ILIKE FROM src;
 c1 | equality | LIKE | ILIKE
----+----------+------+-------
 ß  | f        | f    | f
 ss | t        | t    | t
(2 rows)

The next query specifies unicode format for c1:

=> SELECT c1, c1 = U&'\00DF' AS equality,
   c1 LIKE U&'\00DF' AS LIKE,
   c1 ILIKE U&'\00DF' AS ILIKE from src;
 c1 | equality | LIKE | ILIKE
----+----------+------+-------
 ß  | t        | t    | t
 ss | f        | f    | f
(2 rows)

Now change the locale to German with a strength of 1 (ignore case and accents):

=> \locale LDE_S1
INFO 2567:  Canonical locale: 'de'
Standard collation: 'LDE_S1'
German  Deutsch
=> SELECT c1, c1 = 'ss' AS equality,
c1 LIKE 'ss' as LIKE, c1 ILIKE 'ss' AS ILIKE from src;
 c1 | equality | LIKE | ILIKE
----+----------+------+-------
 ß  | t        | t    | t
 ss | t        | t    | t
(2 rows)

This example illustrates binary data types with pattern-matching predicates:

=> CREATE TABLE t (c BINARY(1));
CREATE TABLE
=> INSERT INTO t VALUES (HEX_TO_BINARY('0x00')), (HEX_TO_BINARY('0xFF'));
 OUTPUT
--------
      2
(1 row)

=> COMMIT;
COMMIT
=> SELECT TO_HEX(c) FROM t;
 TO_HEX
--------
 00
 ff
(2 rows)

=> SELECT * FROM t;
  c
------
 \000
 \377
(2 rows)

=> SELECT c, c = '\000', c LIKE '\000', c ILIKE '\000' from t;
  c   | ?column? | ?column? | ?column?
------+----------+----------+----------
 \000 | t        | t        | t
 \377 | f        | f        | f
(2 rows)

=> SELECT c, c = '\377', c LIKE '\377', c ILIKE '\377' FROM t;
  c   | ?column? | ?column? | ?column?
------+----------+----------+----------
 \000 | f        | f        | f
 \377 | t        | t        | t
(2 rows)

7.8 - NULL

Tests for null values.

Tests for null values. The expression can be a column name, literal, or function.

Syntax

value-expression IS [ NOT ] NULL

Examples

Column name:

=> SELECT date_key FROM date_dimension WHERE date_key IS NOT NULL;
 date_key
----------
        1
      366
     1462
     1097
        2
        3
        6
        7
        8
...

Function:

=> SELECT MAX(household_id) IS NULL FROM customer_dimension;
 ?column?
----------
 f
(1 row)

Literal:

=> SELECT 'a' IS NOT NULL;
 ?column?
----------
 t
(1 row)

8 - Hints

Hints are directives that you embed within a query or directed query.

Hints are directives that you embed within a query or directed query. They conform to the following syntax:

/*+hint-name[, hint-name]...*/

Hints are bracketed by comment characters /*+ and */, which can enclose multiple comma-delimited hints. For example:

SELECT /*+syntactic_join,verbatim*/

Restrictions

When embedding hints in a query, be aware of the following restrictions:

  • Do not embed spaces in the comment characters /*+ and */.

  • In general, spaces are allowed before and after the plus (+) character and hint-name; however, some third-party tools do not support spaces embedded inside /*+.

Supported hints

Vertica supports the following hints:

General hints

Hint Description
ALLNODES Qualifies an EXPLAIN statement to request a query plan that assumes all nodes are active.
EARLY_MATERIALIZATION Specifies early materialization of a table for the current query.
ENABLE_WITH_CLAUSE_MATERIALIZATION Enables and disables WITH clause materialization for a specific query.
LABEL Labels a query so you can identify it for profiling and debugging.
SKIP_STATISTICS Directs the optimizer to produce a query plan that incorporates only minimal statistics.

Eon Mode hints

Hint Description
DEPOT_FETCH Specifies whether a query fetches data to the depot from communal storage when the depot lacks data for this query.
ECSMODE Specifies the elastic crunch scaling (ECS) strategy for dividing shard data among its subscribers.

Join hints

Hint Description
SYNTACTIC_JOIN Enforces join order and enables other join hints.
DISTRIB Sets the input operations for a distributed join to broadcast, resegment, local, or filter.
GBYTYPE Specifies which algorithm—GROUPBY HASH or GROUPBY PIPELINED—the Vertica query optimizer should use to implement a GROUP BY clause.
JTYPE Enforces the join type: merge or hash join.
UTYPE Specifies how to combine UNION ALL input.

Projection hints

Hint Description
PROJS Specifies one or more projections to use for a queried table.
SKIP_PROJS Specifies which projections to avoid using for a queried table.

Directed query hints

The following hints are only supported by directed queries:

Hint Description
:c Marks a query constant that must be included in an input query; otherwise, that input query is disqualified from using the directed query.
:v Maps an input query constant to one or more annotated query constants.
VERBATIM Enforces execution of an annotated query exactly as written.

8.1 - :c

In a directed query, marks a query constant that must be included in an input query; otherwise, that input query is disqualified from using the directed query.

In a directed query, marks a query constant that must be included in an input query; otherwise, that input query is disqualified from using the directed query.

Syntax

/*+:c*/

Usage

By default, optimizer-generated directed queries set ignore constant (:v) hints on predicate constants. You can override this behavior by setting the :c hint on input query constants that must not be ignored. For example, the following statement creates a directed query that can be used only for input queries where the join predicate constant is the same as in the original input query—8:

=> CREATE DIRECTED QUERY OPTIMIZER simpleJoin_KeepPredicateConstant SELECT * FROM S JOIN T ON S.a = T.b WHERE S.a = 8 /*+:c*/;
CREATE DIRECTED QUERY
=> ACTIVATE DIRECTED QUERY simpleJoin_KeepPredicateConstant;

See also

Conserving Predicate Constants in Directed Queries

8.2 - :v

In a directed query, marks an input query constant that the optimizer ignores when it considers whether to use the directed query for a given query.

In a directed query, marks an input query constant that the optimizer ignores when it considers whether to use the directed query for a given query. Use this hint to create a directed query that can be used for multiple variants of an input query.

Vertica also supports IGNORECONST as an alias of :v. Optimizer-generated directed queries automatically mark predicate constants in input and annotated queries with :v hints.

For details, see Ignoring constants in directed queries.

Syntax

/*+:v(arg)*/
/*+IGNORECONST(arg)*/
arg
Integer argument that is used in the directed query to pair each input query :v hint with one or more annotated query :v hints.

Examples

See Ignoring constants in directed queries.

8.3 - ALLNODES

Qualifies an EXPLAIN statement to request a query plan that assumes all nodes are active.

Qualifies an EXPLAIN statement to request a query plan that assumes all nodes are active. If you omit this hint, the EXPLAIN statement produces a query plan that takes into account any nodes that are currently down.

Syntax

EXPLAIN /*+ALLNODES*/

Examples

In the following example, the ALLNODES hint requests a query plan that assumes all nodes are active.

QUERY PLAN DESCRIPTION:
 ------------------------------

 Opt Vertica Options
 --------------------
 PLAN_ALL_NODES_ACTIVE


 EXPLAIN /*+ALLNODES*/ select * from Emp_Dimension;

 Access Path:
 +-STORAGE ACCESS for Emp_Dimension [Cost: 125, Rows: 10K (NO STATISTICS)] (PATH ID: 1)
 |  Projection: public.Emp_Dimension_b0
 |  Materialize: Emp_Dimension.Employee_key, Emp_Dimension.Employee_gender, Emp_Dimension.Courtesy_title, Emp_Dimension.Employee_first_name, Emp_Dimension.Employee_middle_initial, Emp_Dimension.Employee_last_name, Emp_Dimension.Employee_age, Emp_Dimension.Employee_birthdate, Emp_Dimension.Employee_street, Emp_Dimension.Employee_city, Emp_Dimension.Employee_state, Emp_Dimension.Employee_region, Emp_Dimension.Employee_position
 |  Execute on: All Nodes

8.4 - DEPOT_FETCH

Specifies whether a query fetches data to the depot from communal storage when the depot lacks data for this query.

Eon Mode only

Specifies whether a query fetches data to the depot from communal storage when the depot lacks data for this query. This hint overrides configuration parameter DepotOperationsForQuery.

Syntax

SELECT /*+DEPOT_FETCH (option)*/

Arguments

option
Specifies behavior when the depot does not contain queried file data, one of the following:
  • ALL (default): Fetch file data from communal storage; if necessary, make room by evicting existing files from the depot.

  • FETCHES: Fetch file data from communal storage only if space is available; otherwise, read the queried data directly from communal storage.

  • NONE: Do not fetch file data to the depot; read the queried data directly from communal storage.

Examples

SELECT /*+DEPOT_FETCH(All)*/ count(*) FROM bar;
SELECT /*+DEPOT_FETCH(FETCHES)*/ count(*) FROM bar;
SELECT /*+DEPOT_FETCH(NONE)*/ count(*) FROM bar;

8.5 - DISTRIB

The DISTRIB hint specifies to the optimizer how to distribute join key data in order to implement a join.

The DISTRIB hint specifies to the optimizer how to distribute join key data in order to implement a join. If a specified distribution is not feasible, the optimizer ignores the hint and throws a warning.

The following requirements apply:

  • Queries that include the DISTRIB hint must also include the SYNTACTIC_JOIN hint. Otherwise, the optimizer ignores the DISTRIB hint and throws a warning.

  • Join syntax must conform with ANSI SQL-92 join conventions.

Syntax

JOIN /*+DISTRIB(outer-join, inner-join)*/

Arguments

outer-join
inner-join
Specifies how to distribute data on the outer and inner joins:
  • L (local): Inner and outer join keys are identically segmented on each node, join locally.

  • R (resegment): Inner and outer join keys are not identically segmented. Resegment join-key data before implementing the join.

  • B (broadcast): Inner and outer join keys are not identically segmented. Broadcast data of this join key to other nodes before implementing the join.

  • F (filter): Join table is unsegmented. Filter data as needed by the other join key before implementing the join.

  • A (any): Let the optimizer choose the distribution method that it considers to be most cost-effective.

Examples

In the following query, the join is qualified with a DISTRIB hint of /*+DISTRIB(L,R)*/. This hint tells the optimizer to resegment data of join key stores.store_key before joining it to the sales.store_key data:

SELECT /*+SYNTACTIC_JOIN*/ sales.store_key, stores.store_name, sales.product_description, sales.sales_quantity, sales.sale_date
FROM (store.storeSales AS sales JOIN /*+DISTRIB(L,R),JTYPE(H)*/ store.store_dimension AS stores ON (sales.store_key = stores.store_key))
WHERE (sales.sale_date = '2014-12-01'::date) ORDER BY sales.store_key, sales.sale_date;

8.6 - EARLY_MATERIALIZATION

Specifies early materialization of a table for the current query.

Specifies early materialization of a table for the current query. A query can include this hint for any number of tables. Typically, the query optimizer delays materialization until late in the query execution process. This hint overrides any choices that the optimizer otherwise would make.

This hint can be useful in cases where late materialization of join inputs precludes other optimizations—for example, pushing aggregation down the joins, or using live aggregate projections. In these cases, qualifying a join input with EARLY_MATERIALIZATION can enable the optimizations.

Syntax

table-name [ [AS] alias ] /*+EARLY_MATERIALIZATION*/
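
Example

The following query is a minimal sketch that assumes the VMart store schema used elsewhere in this section; the hint requests early materialization of the fact table input to the join:

=> SELECT s.store_key, SUM(s.sales_quantity)
   FROM store.store_sales_fact s /*+EARLY_MATERIALIZATION*/
   JOIN store.store_dimension d ON s.store_key = d.store_key
   GROUP BY s.store_key;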

8.7 - ECSMODE

Sets the ECS strategy that the optimizer uses when it divides responsibility for processing shard data among subscriber nodes.

Eon Mode only

Sets the ECS strategy that the optimizer uses when it divides responsibility for processing shard data among subscriber nodes. This hint is applied only if the subcluster uses elastic crunch scaling (ECS).

Syntax

SELECT /*+ECSMODE(option)*/

Arguments

option
Specifies the strategy to use when dividing shard data among its subscribing nodes, one of the following:
  • AUTO: The optimizer chooses the strategy to use. This option is useful only if ECS mode is set at the session level (see Setting the ECS Strategy for the Session or Database).

  • IO_OPTIMIZED: Use I/O-optimized strategy.

  • COMPUTE_OPTIMIZED: Use compute-optimized strategy.

  • NONE: Disable use of ECS for this query. Only participating nodes are involved in query execution; collaborating nodes are not.

Example

The following example shows the query plan for a simple single-table query that is forced to use the compute-optimized strategy:

=> EXPLAIN SELECT /*+ECSMode(COMPUTE_OPTIMIZED)*/ employee_last_name,
             employee_first_name,employee_age
             FROM employee_dimension
             ORDER BY employee_age DESC;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 ------------------------------
 QUERY PLAN DESCRIPTION:
 The execution of this query involves non-participating nodes.
 Crunch scaling strategy preserves data segmentation
 ------------------------------
. . .

8.8 - ENABLE_WITH_CLAUSE_MATERIALIZATION

Enables materialization of all queries in the current WITH clause. Otherwise, materialization is set by configuration parameter WithClauseMaterialization, by default set to 0 (disabled). If WithClauseMaterialization is disabled, materialization is automatically cleared when the primary query of the WITH clause returns. For details, see Materialization of WITH clause.

Syntax

WITH /*+ENABLE_WITH_CLAUSE_MATERIALIZATION*/
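
Example

The following query is a minimal sketch that assumes the VMart store schema; the hint materializes the WITH clause query for the duration of the primary query (the CTE name store_totals and alias total_qty are illustrative only):

=> WITH /*+ENABLE_WITH_CLAUSE_MATERIALIZATION*/
   store_totals AS (
       SELECT store_key, SUM(sales_quantity) AS total_qty  -- store_totals is an illustrative name
       FROM store.store_sales_fact
       GROUP BY store_key)
   SELECT store_key, total_qty FROM store_totals WHERE total_qty > 1000;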

8.9 - GBYTYPE

Specifies which algorithm—GROUPBY HASH or GROUPBY PIPELINED —the Vertica query optimizer should use to implement a GROUP BY clause.

Specifies which algorithm—GROUPBY HASH or GROUPBY PIPELINED —the Vertica query optimizer should use to implement a GROUP BY clause. If both algorithms are valid for this query, the query optimizer chooses the specified algorithm over the algorithm that the query optimizer might otherwise choose in its query plan.

For more information about both algorithms, see GROUP BY implementation options.

Syntax

GROUP BY /*+GBYTYPE( HASH | PIPE )*/

Arguments

HASH
Use the GROUPBY HASH algorithm.
PIPE
Use the GROUPBY PIPELINED algorithm.

Examples

See Controlling GROUPBY Algorithm Choice.
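
For instance, the following query is a minimal sketch that assumes the VMart employee_dimension table; the hint asks the optimizer to use the GROUPBY HASH algorithm if it is valid for this query:

=> SELECT employee_region, COUNT(*)
   FROM public.employee_dimension
   GROUP BY /*+GBYTYPE(HASH)*/ employee_region;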

8.10 - JFMT

Specifies how to size VARCHAR column data when joining tables on those columns, and buffer that data accordingly.

Specifies how to size VARCHAR column data when joining tables on those columns, and buffer that data accordingly. The JFMT hint overrides the default behavior that is set by configuration parameter JoinDefaultTupleFormat, which can be set at database and session levels.

For more information, see Joining variable length string data.

Syntax

JOIN /*+JFMT(format-type)*/

Arguments

format-type
Specifies how to format VARCHAR column data when joining tables on those columns, and buffers the data accordingly. Set to one of the following:
  • f (fixed): Use join column metadata to size column data to a fixed length, and buffer accordingly.

  • v (variable): Use the actual length of join column data, so buffer size varies for each join.

For example:

SELECT /*+SYNTACTIC_JOIN*/ s.store_region, SUM(e.vacation_days) TotalVacationDays
   FROM public.employee_dimension e
   JOIN /*+JFMT(f)*/ store.store_dimension s ON s.store_region=e.employee_region
   GROUP BY s.store_region ORDER BY TotalVacationDays;

Requirements

  • Queries that include the JFMT hint must also include the SYNTACTIC_JOIN hint. Otherwise, the optimizer ignores the JFMT hint and throws a warning.

  • Join syntax must conform with ANSI SQL-92 join conventions.

8.11 - JTYPE

Specifies the join algorithm as hash or merge.

Specifies the join algorithm as hash or merge.

Use the JTYPE hint to specify the algorithm the optimizer uses to join table data. If the specified algorithm is not feasible, the optimizer ignores the hint and throws a warning.

Syntax

JOIN /*+JTYPE(join-type)*/

Arguments

join-type
One of the following:
  • H: Hash join

  • M: Merge join, valid only if both join inputs are already sorted on the join columns, otherwise Vertica ignores it and throws a warning. The optimizer relies upon the query or DDL to verify whether input data is sorted, rather than the actual runtime order of the data.

  • FM: Forced merge join. Before performing the merge, the optimizer re-sorts the join inputs. Join columns must be of the same type and precision or scale, except that string columns can have different lengths.

A value of FM is valid only for simple join conditions. For example:

=> SELECT /*+SYNTACTIC_JOIN*/ * FROM x JOIN /*+JTYPE(FM)*/ y ON x.c1 = y.c1;

Requirements

  • Queries that include the JTYPE hint must also include the SYNTACTIC_JOIN hint. Otherwise, the optimizer ignores the JTYPE hint and throws a warning.

  • Join syntax must conform with ANSI SQL-92 join conventions.

8.12 - LABEL

Assigns a label to a statement so it can easily be identified to evaluate performance and debug problems.

Assigns a label to a statement so it can easily be identified to evaluate performance and debug problems.

LABEL hints are valid in statements such as SELECT, INSERT, UPDATE, and DELETE.

Syntax

statement-name /*+LABEL (label-string)*/

Arguments

label-string
A string that is up to 128 octets long. If enclosed with single quotes, label-string can contain embedded spaces.

Examples

See Labeling statements.
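
For instance, the following query is a minimal sketch that attaches a label to a query on the VMart customer_dimension table (the label string is illustrative only):

=> SELECT /*+LABEL('count_customers')*/ COUNT(*)  -- 'count_customers' is an illustrative label
   FROM public.customer_dimension;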

8.13 - PROJS

Specifies one or more projections to use for a queried table.

Specifies one or more projections to use for a queried table.

The PROJS hint can specify multiple projections; the optimizer determines which ones are valid and uses the one that is most cost-effective for the queried table. If no hinted projection is valid, the query returns a warning and ignores projection hints.

Syntax

FROM table-name /*+PROJS( [[database.]schema.]projection[,...] )*/

Arguments

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

projection
The projection to use. You can specify a list of comma-delimited projections.

Examples

The employee_dimension table has two projections: segmented superprojection public.employee_dimension, which includes all table columns; and the unsegmented projection public.employee_dimension_rep, which includes a subset of the columns:


=> SELECT export_objects('','employee_dimension');
                                                      export_objects
--------------------------------------------------------------------------------------------------------------------------

CREATE TABLE public.employee_dimension
(
    employee_key int NOT NULL,
    employee_gender varchar(8),
    courtesy_title varchar(8),
    employee_first_name varchar(64),
    employee_middle_initial varchar(8),
    employee_last_name varchar(64),
    employee_age int,
    hire_date date,
    employee_street_address varchar(256),
    employee_city varchar(64),
    employee_state char(2),
    employee_region char(32),
    job_title varchar(64),
    reports_to int,
    salaried_flag int,
    annual_salary int,
    hourly_rate float,
    vacation_days int,
    CONSTRAINT C_PRIMARY PRIMARY KEY (employee_key) DISABLED
);

CREATE PROJECTION public.employee_dimension
...
AS
 SELECT employee_dimension.employee_key,
        employee_dimension.employee_gender,
        employee_dimension.courtesy_title,
        employee_dimension.employee_first_name,
        employee_dimension.employee_middle_initial,
        employee_dimension.employee_last_name,
        employee_dimension.employee_age,
        employee_dimension.hire_date,
        employee_dimension.employee_street_address,
        employee_dimension.employee_city,
        employee_dimension.employee_state,
        employee_dimension.employee_region,
        employee_dimension.job_title,
        employee_dimension.reports_to,
        employee_dimension.salaried_flag,
        employee_dimension.annual_salary,
        employee_dimension.hourly_rate,
        employee_dimension.vacation_days
 FROM public.employee_dimension
 ORDER BY employee_dimension.employee_key
SEGMENTED BY hash(employee_dimension.employee_key) ALL NODES KSAFE 1;

CREATE PROJECTION public.employee_dimension_rep
...
AS
 SELECT employee_dimension.employee_key,
        employee_dimension.employee_gender,
        employee_dimension.employee_first_name,
        employee_dimension.employee_middle_initial,
        employee_dimension.employee_last_name,
        employee_dimension.employee_age,
        employee_dimension.employee_street_address,
        employee_dimension.employee_city,
        employee_dimension.employee_state,
        employee_dimension.employee_region
 FROM public.employee_dimension
 ORDER BY employee_dimension.employee_key
UNSEGMENTED ALL NODES;

SELECT MARK_DESIGN_KSAFE(1);

(1 row)

The following query selects all table columns from employee_dimension and includes the PROJS hint, which specifies both projections. public.employee_dimension_rep does not include all columns in the queried table, so the optimizer cannot use it. The segmented projection includes all table columns, so the optimizer uses it, as verified by the following query plan:

=> EXPLAIN SELECT * FROM employee_dimension /*+PROJS('public.employee_dimension_rep', 'public.employee_dimension')*/;

QUERY PLAN DESCRIPTION:
------------------------------
EXPLAIN SELECT * FROM employee_dimension /*+PROJS('public.employee_dimension_rep', 'public.employee_dimension')*/;

 Access Path:
 +-STORAGE ACCESS for employee_dimension [Cost: 177, Rows: 10K (NO STATISTICS)] (PATH ID: 1)
 |  Projection: public.employee_dimension_b0

8.14 - SKIP_PROJS

Specifies which projections to avoid using for a queried table.

Specifies which projections to avoid using for a queried table. If SKIP_PROJS excludes all available projections that are valid for the query, the optimizer issues a warning and ignores the projection hints.

Syntax

FROM table-name /*+SKIP_PROJS( [[database.]schema.]projection[,...] )*/

Arguments

[database.]schema

Database and schema. The default schema is public. If you specify a database, it must be the current database.

projection
A projection to skip. You can specify a list of comma-delimited projections.

Examples

In this example, the EXPLAIN output shows that the optimizer uses the projection public.employee_dimension_b0 for a given query:


QUERY PLAN DESCRIPTION:
------------------------------

 EXPLAIN SELECT employee_last_name, employee_first_name, employee_city, job_title FROM employee_dimension;

 Access Path:
 +-STORAGE ACCESS for employee_dimension [Cost: 59, Rows: 10K (NO STATISTICS)] (PATH ID: 1)
 |  Projection: public.employee_dimension_b0
 |  Materialize: employee_dimension.employee_first_name, employee_dimension.employee_last_name, employee_dimension.employee_city, employee_dimension.job_title
 |  Execute on: All Nodes

You can use the SKIP_PROJS hint to avoid using this projection. If another projection is available that is valid for this query, the optimizer uses it instead:


QUERY PLAN DESCRIPTION:
------------------------------

EXPLAIN SELECT employee_last_name, employee_first_name, employee_city, job_title FROM employee_dimension /*+SKIP_PROJS('public.employee_dimension')*/;

 Access Path:
 +-STORAGE ACCESS for employee_dimension [Cost: 156, Rows: 10K (NO STATISTICS)] (PATH ID: 1)
 |  Projection: public.employee_dimension_super
 |  Materialize: employee_dimension.employee_first_name, employee_dimension.employee_last_name, employee_dimension.emplo
yee_city, employee_dimension.job_title
 |  Execute on: Query Initiator

8.15 - SKIP_STATISTICS

Directs the optimizer to produce a query plan that incorporates only the minimal statistics that are collected by ANALYZE_ROW_COUNT.

Directs the optimizer to produce a query plan that incorporates only the minimal statistics that are collected by ANALYZE_ROW_COUNT. The optimizer ignores other statistics that it would otherwise use, such as those generated by ANALYZE_STATISTICS and ANALYZE_STATISTICS_PARTITION. This hint is especially useful in queries on small tables, where the amount of time required to collect full statistics is often greater than the actual execution time.

Syntax

SELECT /*+SKIP_STAT[ISTIC]S*/

EXPLAIN output

EXPLAIN returns the following output for a query that includes SKIP_STATISTICS (using its shortened form SKIP_STATS):


=> EXPLAIN SELECT /*+ SKIP_STATS*/ customer_key, customer_name, customer_gender, customer_city||', '||customer_state, customer_age
    FROM customer_dimension WHERE customer_region = 'East' AND customer_age > 60;

 QUERY PLAN DESCRIPTION:
 ------------------------------

 EXPLAIN SELECT /*+ SKIP_STATS*/ customer_key, customer_name, customer_gender, customer_city||', '||customer_state,
 customer_age FROM customer_dimension WHERE customer_region = 'East' AND customer_age > 60;

 Access Path:
 +-STORAGE ACCESS for customer_dimension [Cost: 2K, Rows: 10K (STATISTICS SKIPPED)] (PATH ID: 1)
 |  Projection: public.customer_dimension_b0
 |  Materialize: public.customer_dimension.customer_age, public.customer_dimension.customer_key, public.customer_dimensi
on.customer_name, public.customer_dimension.customer_gender, public.customer_dimension.customer_city, public.customer_di
mension.customer_state
 |  Filter: (public.customer_dimension.customer_region = 'East')
 |  Filter: (public.customer_dimension.customer_age > 60)
 |  Execute on: All Nodes
...

8.16 - SYNTACTIC_JOIN

Enforces join order and enables other join hints.

Enforces join order and enables other join hints.

In order to achieve optimal performance, the optimizer often overrides a query's specified join order. By including the SYNTACTIC_JOIN hint, you can ensure that the optimizer enforces the query's join order exactly as specified. One requirement applies: the join syntax must conform with ANSI SQL-92 conventions.

The SYNTACTIC_JOIN hint must immediately follow SELECT. If the annotated query includes another hint that must also follow SELECT, such as VERBATIM, combine the two hints together. For example:

SELECT /*+ syntactic_join,verbatim*/

Syntax

SELECT /*+SYN[TACTIC]_JOIN*/

Examples

In the following examples, the optimizer produces different plans for two queries that differ only by including or excluding the SYNTACTIC_JOIN hint.

Excludes SYNTACTIC_JOIN:

EXPLAIN SELECT sales.store_key, stores.store_name, products.product_description, sales.sales_quantity, sales.sale_date
FROM (store.store_sales sales JOIN products ON sales.product_key=products.product_key)
JOIN store.store_dimension stores ON sales.store_key=stores.store_key
WHERE sales.sale_date='2014-12-01' order by sales.store_key, sales.sale_date;

 Access Path:
 +-SORT [Cost: 14K, Rows: 100K (NO STATISTICS)] (PATH ID: 1)
 |  Order: sales.store_key ASC, sales.sale_date ASC
 |  Execute on: All Nodes
 | +---> JOIN HASH [Cost: 11K, Rows: 100K (NO STATISTICS)] (PATH ID: 2) Outer (RESEGMENT)(LOCAL ROUND ROBIN) Inner (RESEGMENT)
 | |      Join Cond: (sales.product_key = products.product_key)
 | |      Materialize at Input: sales.store_key, sales.product_key, sales.sale_date, sales.sales_quantity
 | |      Execute on: All Nodes
 | | +-- Outer -> JOIN HASH [Cost: 1K, Rows: 100K (NO STATISTICS)] (PATH ID: 3)
 | | |      Join Cond: (sales.store_key = stores.store_key)
 | | |      Execute on: All Nodes
 | | | +-- Outer -> STORAGE ACCESS for sales [Cost: 1K, Rows: 100K (NO STATISTICS)] (PATH ID: 4)
 | | | |      Projection: store.store_sales_b0
 | | | |      Materialize: sales.store_key
 | | | |      Filter: (sales.sale_date = '2014-12-01'::date)
 | | | |      Execute on: All Nodes
 | | | |      Runtime Filter: (SIP1(HashJoin): sales.store_key)
 | | | +-- Inner -> STORAGE ACCESS for stores [Cost: 34, Rows: 250] (PATH ID: 5)
 | | | |      Projection: store.store_dimension_DBD_10_rep_VMartDesign_node0001
 | | | |      Materialize: stores.store_key, stores.store_name
 | | | |      Execute on: All Nodes
 | | +-- Inner -> STORAGE ACCESS for products [Cost: 3K, Rows: 60K (NO STATISTICS)] (PATH ID: 6)
 | | |      Projection: public.products_b0
 | | |      Materialize: products.product_key, products.product_description
 | | |      Execute on: All Nodes

Includes SYNTACTIC_JOIN:

EXPLAIN SELECT /*+SYNTACTIC_JOIN*/ sales.store_key, stores.store_name, products.product_description, sales.sales_quantity, sales.sale_date
FROM (store.store_sales sales JOIN products ON sales.product_key=products.product_key)
JOIN store.store_dimension stores ON sales.store_key=stores.store_key
WHERE sales.sale_date='2014-12-01' order by sales.store_key, sales.sale_date;

 Access Path:
 +-SORT [Cost: 11K, Rows: 100K (NO STATISTICS)] (PATH ID: 1)
 |  Order: sales.store_key ASC, sales.sale_date ASC
 |  Execute on: All Nodes
 | +---> JOIN HASH [Cost: 8K, Rows: 100K (NO STATISTICS)] (PATH ID: 2)
 | |      Join Cond: (sales.store_key = stores.store_key)
 | |      Execute on: All Nodes
 | | +-- Outer -> JOIN HASH [Cost: 7K, Rows: 100K (NO STATISTICS)] (PATH ID: 3) Outer (BROADCAST)(LOCAL ROUND ROBIN)
 | | |      Join Cond: (sales.product_key = products.product_key)
 | | |      Execute on: All Nodes
 | | |      Runtime Filter: (SIP1(HashJoin): sales.store_key)
 | | | +-- Outer -> STORAGE ACCESS for sales [Cost: 2K, Rows: 100K (NO STATISTICS)] (PATH ID: 4)
 | | | |      Projection: store.store_sales_b0
 | | | |      Materialize: sales.sale_date, sales.store_key, sales.product_key, sales.sales_quantity
 | | | |      Filter: (sales.sale_date = '2014-12-01'::date)
 | | | |      Execute on: All Nodes
 | | | +-- Inner -> STORAGE ACCESS for products [Cost: 3K, Rows: 60K (NO STATISTICS)] (PATH ID: 5)
 | | | |      Projection: public.products_b0
 | | | |      Materialize: products.product_key, products.product_description
 | | | |      Execute on: All Nodes
 | | +-- Inner -> STORAGE ACCESS for stores [Cost: 34, Rows: 250] (PATH ID: 6)
 | | |      Projection: store.store_dimension_DBD_10_rep_VMartDesign_node0001
 | | |      Materialize: stores.store_key, stores.store_name
 | | |      Execute on: All Nodes

8.17 - UTYPE

Specifies how to combine UNION ALL input.

Specifies how to combine UNION ALL input.

Syntax

UNION ALL /*+UTYPE(union-type)*/

Arguments

union-type
One of the following values:
  • U: Concatenates UNION ALL input (default).

  • M: Merges UNION ALL input in the same sort order as the source query results. This option requires all input from the source queries to use the same sort order; otherwise, Vertica throws a warning and concatenates the UNION ALL input.

Requirements

Queries that include the UTYPE hint must also include the SYNTACTIC_JOIN hint. Otherwise, the optimizer ignores the UTYPE hint and throws a warning.
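For example, the following sketch makes the default concatenation behavior explicit. The table names sales_2016 and sales_2017 are hypothetical placeholders; the relevant details are the SYNTACTIC_JOIN hint immediately after SELECT and the UTYPE hint immediately after UNION ALL:

=> SELECT /*+SYNTACTIC_JOIN*/ store_key, sale_date, sales_quantity FROM sales_2016
   UNION ALL /*+UTYPE(U)*/
   SELECT store_key, sale_date, sales_quantity FROM sales_2017;

To merge rather than concatenate, replace UTYPE(U) with UTYPE(M); in that case, the results of both source queries must already share the same sort order.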

8.18 - VERBATIM

Enforces execution of an annotated query exactly as written.

Enforces execution of an annotated query exactly as written.

VERBATIM directs the optimizer to create a query plan that incorporates all hints in an annotated query. Furthermore, it directs the optimizer not to apply its own plan development processing on query plan components that pertain to those hints.

Usage of this hint varies between optimizer-generated and custom directed queries, as described below.

Syntax

SELECT /*+ VERBATIM*/

Requirements

The VERBATIM hint must immediately follow SELECT. If the annotated query includes another hint that must also follow SELECT, such as SYNTACTIC_JOIN, combine the two hints. For example:

SELECT /*+ syntactic_join,verbatim*/

Optimizer-generated directed queries

The optimizer always includes the VERBATIM hint in the annotated queries that it generates for directed queries. For example, given the following CREATE DIRECTED QUERY OPTIMIZER statement:


=> CREATE DIRECTED QUERY OPTIMIZER getStoreSales SELECT sales.store_key, stores.store_name, sales.product_description, sales.sales_quantity, sales.sale_date FROM store.storesales sales JOIN store.store_dimension stores ON sales.store_key=stores.store_key WHERE sales.sale_date='2014-12-01' /*+IGNORECONST(1)*/ AND stores.store_name='Store1' /*+IGNORECONST(2)*/ ORDER BY sales.store_key, sales.sale_date;
CREATE DIRECTED QUERY

The optimizer generates an annotated query that includes the VERBATIM hint:

=> SELECT query_name, annotated_query FROM V_CATALOG.DIRECTED_QUERIES WHERE query_name = 'getStoreSales';
-[ RECORD 1 ]---+------
query_name      | getStoreSales
annotated_query | SELECT /*+ syntactic_join,verbatim*/ sales.store_key AS store_key, stores.store_name AS store_name, sales.product_description AS product_description, sales.sales_quantity AS sales_quantity, sales.sale_date AS sale_date
FROM (store.storeSales AS sales/*+projs('store.storeSales')*/ JOIN /*+Distrib(L,L),JType(H)*/ store.store_dimension AS stores/*+projs('store.store_dimension_DBD_10_rep_VMartDesign')*/  ON (sales.store_key = stores.store_key))
WHERE (sales.sale_date = '2014-12-01'::date /*+IgnoreConst(1)*/) AND (stores.store_name = 'Store1'::varchar(6) /*+IgnoreConst(2)*/)
ORDER BY 1 ASC, 5 ASC

When the optimizer uses this directed query, it produces a query plan that is equivalent to the query plan that it used when it created the directed query:

=> ACTIVATE DIRECTED QUERY getStoreSales;
ACTIVATE DIRECTED QUERY

=> EXPLAIN SELECT sales.store_key, stores.store_name, sales.product_description, sales.sales_quantity, sales.sale_date FROM store.storesales sales JOIN store.store_dimension stores ON sales.store_key=stores.store_key WHERE sales.sale_date='2014-12-04' AND stores.store_name='Store14' ORDER BY sales.store_key, sales.sale_date;

 QUERY PLAN DESCRIPTION:
 ------------------------------

 EXPLAIN SELECT sales.store_key, stores.store_name, sales.product_description, sales.sales_quantity, sales.sale_date FROM store.storesales sales JOIN store.store_dimension stores ON sales.store_key=stores.store_key WHERE sales.sale_date='2014-12-04' AND stores.store_name='Store14' ORDER BY sales.store_key, sales.sale_date;

 The following active directed query(query name: getStoreSales) is being executed:
 SELECT /*+syntactic_join,verbatim*/ sales.store_key, stores.store_name, sales.product_description, sales.sales_quantity, sales.sale_date
FROM (store.storeSales sales/*+projs('store.storeSales')*/ JOIN /*+Distrib('L', 'L'), JType('H')*/store.store_dimension stores
/*+projs('store.store_dimension_DBD_10_rep_VMartDesign')*/ ON ((sales.store_key = stores.store_key))) WHERE ((sales.sale_date = '2014-12-04'::date)
AND (stores.store_name = 'Store14'::varchar(7))) ORDER BY sales.store_key, sales.sale_date

 Access Path:
 +-JOIN HASH [Cost: 463, Rows: 622 (NO STATISTICS)] (PATH ID: 2)
 |  Join Cond: (sales.store_key = stores.store_key)
 |  Materialize at Output: sales.sale_date, sales.sales_quantity, sales.product_description
 |  Execute on: All Nodes
 | +-- Outer -> STORAGE ACCESS for sales [Cost: 150, Rows: 155K (NO STATISTICS)] (PATH ID: 3)
 | |      Projection: store.storeSales_b0
 | |      Materialize: sales.store_key
 | |      Filter: (sales.sale_date = '2014-12-04'::date)
 | |      Execute on: All Nodes
 | |      Runtime Filter: (SIP1(HashJoin): sales.store_key)
 | +-- Inner -> STORAGE ACCESS for stores [Cost: 35, Rows: 2] (PATH ID: 4)
 | |      Projection: store.store_dimension_DBD_10_rep_VMartDesign_node0001
 | |      Materialize: stores.store_name, stores.store_key
 | |      Filter: (stores.store_name = 'Store14')
 | |      Execute on: All Nodes

Custom directed queries

The VERBATIM hint is included in a custom directed query only if you explicitly include it in the annotated query that you write for that directed query. When the optimizer uses that directed query, it respects the VERBATIM hint and creates a query plan accordingly.

If you omit the VERBATIM hint when you create a custom directed query, the hint is not stored with the annotated query. When the optimizer uses that directed query, it applies its own plan development processing on the annotated query before it generates a query plan. This query plan might not be equivalent to the query plan that the optimizer would have generated for the Vertica version in which the directed query was created.
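As a rough sketch of how you might include the hint explicitly—the query name getStoreSalesCustom is a hypothetical placeholder and the annotated query is deliberately minimal; see CREATE DIRECTED QUERY for the complete syntax and requirements:

=> SAVE QUERY SELECT sales.store_key, stores.store_name
   FROM store.store_sales sales JOIN store.store_dimension stores
   ON sales.store_key = stores.store_key;

=> CREATE DIRECTED QUERY CUSTOM getStoreSalesCustom
   SELECT /*+ syntactic_join,verbatim */ sales.store_key, stores.store_name
   FROM (store.store_sales sales JOIN /*+JType(H)*/ store.store_dimension stores
   ON sales.store_key = stores.store_key);

Because this annotated query includes the VERBATIM hint, the optimizer executes it exactly as written whenever the directed query is active; omit the hint to let the optimizer rework the plan.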

9 - Window clauses

When used with an analytic function, window clauses specify how to partition and sort function input, as well as how to frame input with respect to the current row.

When used with an analytic function, window clauses specify how to partition and sort function input, as well as how to frame input with respect to the current row. When used with a single-phase transform function, the PARTITION ROW and PARTITION LEFT JOIN window clauses support single-row partitions; these clauses do not apply to analytic functions.

9.1 - Window partition clause

When specified, a window partition clause divides the rows of the function input based on user-provided expressions.

When specified, a window partition clause divides the rows of the function input based on user-provided expressions. If no expression is provided, the partition clause can improve query performance by using parallelism. If you do not specify a window partition clause, all input rows are treated as a single partition.

Window partitioning is similar to the GROUP BY clause. However, PARTITION BEST and PARTITION NODES may only be used with analytic functions and return only one result per input row, while PARTITION ROW and PARTITION LEFT JOIN can be used for single-phase transform functions and return multiple values per input row.

When used with analytic functions, results are computed per partition and start over again (reset) at the beginning of each subsequent partition.

Syntax

{ PARTITION BY expression[,...] 
  | PARTITION BEST 
  | PARTITION NODES 
  | PARTITION ROW 
  | PARTITION LEFT JOIN }

Arguments

PARTITION BY expression
Expression on which to partition input rows, where expression can be a column, constant, or an arbitrary expression formed on columns. Use PARTITION BY for functions with specific partitioning requirements.
PARTITION BEST
Use parallelism to improve performance for multi-threaded queries across multiple nodes.

OVER(PARTITION BEST) provides the best performance on multi-threaded queries across multiple nodes.

The following considerations apply to using PARTITION BEST:

  • Use PARTITION BEST for analytic functions that have no partitioning requirements and are thread safe—for example, a one-to-many transform.

  • Do not use PARTITION BEST on user-defined transform functions (UDTFs) that are not thread safe. Doing so can produce an error or incorrect results. If a UDTF is not thread safe, use PARTITION NODES.

PARTITION NODES
Use parallelism to improve performance for single-threaded queries across multiple nodes.

OVER(PARTITION NODES) provides the best performance on single-threaded queries across multiple nodes.

PARTITION ROW, PARTITION LEFT JOIN
Partition the input so that each partition contains exactly one input row. When either clause is used, arbitrary expressions can appear in the query target list alongside the UDTF. PARTITION LEFT JOIN returns a row of NULLs if an input row would otherwise produce no output.

These clauses cannot be used with analytic functions or multi-phase transform functions. Only one PARTITION ROW transform function is allowed in the target list at each level of the query.

Examples

See Window partitioning.
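As a quick additional illustration, the following sketch uses PARTITION BY with an analytic function on the store.store_sales table from the earlier hint examples (column names as shown in those query plans); the sum is computed separately for each store_key partition:

=> SELECT store_key, sale_date, sales_quantity,
          SUM(sales_quantity) OVER (PARTITION BY store_key) AS store_total
   FROM store.store_sales;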

9.2 - Window order clause

Specifies how to sort rows that are supplied to an analytic function.

Specifies how to sort rows that are supplied to an analytic function. If the OVER clause also includes a window partition clause, rows are sorted within each partition.

The window order clause only specifies order within a window result set. The query can have its own ORDER BY clause outside the OVER clause; that clause takes precedence over the window order clause and orders the final result set.

A window order clause also creates a default window frame if none is explicitly specified.

Syntax

ORDER BY { expression [ ASC | DESC [ NULLS { FIRST | LAST | AUTO } ] ] }[,...]

Arguments

expression
A column, constant, or arbitrary expression formed on columns on which to sort input rows.
ASC | DESC
Sort order: ascending (default) or descending.
NULLS {FIRST | LAST | AUTO}
How to position nulls. NULLS AUTO means to choose the positioning that is most efficient for this query.

ASC defaults to NULLS LAST and DESC defaults to NULLS FIRST.

If you omit all sort qualifiers, Vertica uses ASC NULLS LAST.

For more information, see NULL sort order and Runtime sorting of NULL values in analytic functions.

Examples

See Window ordering.
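As a quick additional illustration, the following sketch ranks each store's sales by date, newest first, with NULL dates positioned last (again using the store.store_sales table from the earlier examples):

=> SELECT store_key, sale_date, sales_quantity,
          ROW_NUMBER() OVER (PARTITION BY store_key ORDER BY sale_date DESC NULLS LAST) AS recency_rank
   FROM store.store_sales;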

9.3 - Window frame clause

Specifies a window frame, which comprises a set of rows relative to the row that is currently being evaluated by an analytic function.

Specifies a window frame, which comprises a set of rows relative to the row that is currently being evaluated by an analytic function. After the function processes that row and its window, Vertica advances the current row and adjusts the window boundaries accordingly. If the OVER clause also specifies a partition, Vertica also checks that window boundaries do not cross partition boundaries. This process repeats until the function evaluates the last row of the last partition.

Syntax

{ ROWS | RANGE } { BETWEEN start-point AND end-point | start-point }

start-point | end-point:

  { UNBOUNDED {PRECEDING | FOLLOWING}
    | CURRENT ROW
    | constant-value  {PRECEDING | FOLLOWING}}

Arguments

ROWS | RANGE
Whether to interpret window frame dimensions as physical (ROWS) or logical (RANGE) offsets from the current row. See ROWS versus RANGE below for details.
BETWEEN start-point AND end-point
First and last rows of the window, where start-point and end-point can be one of the following:
  • UNBOUNDED {PRECEDING | FOLLOWING}: The current partition's first (PRECEDING) or last (FOLLOWING) row.

  • CURRENT ROW: The current row or value.

  • constant-value {PRECEDING | FOLLOWING}: A constant value or expression that evaluates to a constant value. This value is interpreted as either a physical or logical offset from the current row, depending on whether you use ROWS or RANGE. See ROWS versus RANGE for other restrictions.

start-point must resolve to a row or value that is less than or equal to end-point.

start-point
If ROWS or RANGE specifies only a start point, Vertica uses the current row as the end point and creates the window frame accordingly. In this case, start-point must resolve to a row that is less than or equal to the current row.

Requirements

To specify a window frame, the OVER clause must also include a window order (ORDER BY) clause. If the OVER clause includes a window order clause but omits a window frame clause, the function creates a default window that extends from the first row in the current partition to the current row. This is equivalent to the following clause:

RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
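For example, the following two cumulative sums are equivalent: the first relies on the default frame, and the second spells it out (a sketch based on the store.store_sales table used in earlier examples):

=> SELECT store_key, sale_date, sales_quantity,
          SUM(sales_quantity) OVER (PARTITION BY store_key ORDER BY sale_date) AS running_total,
          SUM(sales_quantity) OVER (PARTITION BY store_key ORDER BY sale_date
              RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total_explicit
   FROM store.store_sales;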

ROWS versus RANGE

The window frame's offset from the current row can be physical or logical:

  • ROWS (physical): the start and end points are relative to the current row. If either is a constant value, it must evaluate to a positive integer.

  • RANGE (logical): the start and end points represent a logical offset, such as time. The range value must match the window order (ORDER BY) clause data type: NUMERIC, DATE/TIME, FLOAT or INTEGER.

When setting constant values for ROWS, the constant must evaluate to a positive INTEGER.

When setting constant values for RANGE, the following requirements apply:

  • The constant must evaluate to a positive numeric value or INTERVAL literal.

  • If the constant evaluates to a NUMERIC value, the ORDER BY column type must be a NUMERIC data type.

  • If the constant evaluates to an INTERVAL DAY TO SECOND subtype, the ORDER BY column type must be one of the following: TIMESTAMP, TIME, DATE, or INTERVAL DAY TO SECOND.

  • If the constant evaluates to an INTERVAL YEAR TO MONTH, the ORDER BY column type must be one of the following: TIMESTAMP, DATE, or INTERVAL YEAR TO MONTH.

  • The window order clause can specify only one expression.

Examples

See Window framing.
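The following sketch contrasts the two offset types on the store.store_sales table from earlier examples: a physical three-row moving average (ROWS) and a logical seven-day window (RANGE). The interval literal is illustrative and must satisfy the RANGE requirements listed above (here, a DATE ORDER BY column with an INTERVAL DAY TO SECOND constant):

=> SELECT store_key, sale_date, sales_quantity,
          AVG(sales_quantity) OVER (PARTITION BY store_key ORDER BY sale_date
              ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS avg_3_rows,
          SUM(sales_quantity) OVER (PARTITION BY store_key ORDER BY sale_date
              RANGE BETWEEN INTERVAL '6 days' PRECEDING AND CURRENT ROW) AS sum_7_days
   FROM store.store_sales;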

9.4 - Window name clause

Defines a named window that specifies window partition and order clauses for an analytic function.

Defines a named window that specifies window partition and order clauses for an analytic function. This window is specified in the function's OVER clause. Named windows can be useful when you write queries that invoke multiple analytic functions with similar OVER clauses, such as functions that use the same partition (PARTITION BY) clauses.

Syntax

WINDOW window-name AS ( window-partition-clause [window-order-clause] )

Arguments

WINDOW window-name
A window name that is unique within the same query.
window-partition-clause [window-order-clause]
Clauses to invoke when an OVER clause references this window.

If the window definition omits a window order clause, the OVER clause can specify its own order clause.

Requirements

  • A WINDOW clause cannot include a window frame clause.

  • Each WINDOW clause within the same query must have a unique name.

  • A WINDOW clause can reference another window that is already named. For example, the following query names window w1 before w2. Thus, the WINDOW clause that defines w2 can reference w1:

    => SELECT RANK() OVER(w1 ORDER BY sal DESC), RANK() OVER w2
       FROM EMP WINDOW w1 AS (PARTITION BY deptno), w2 AS (w1 ORDER BY sal);
    

Examples

See Named windows.

See also

Analytic functions