OpenText Core Analytics Database

This topic provides an overview of OTCAD.

OpenText™ Core Analytics Database (OTCAD) is a managed database service that delivers a unified, high-performance advanced analytics and machine learning platform in the cloud. The database is automatically provisioned in an OpenText AWS account, and OpenText manages the database hardware and environment. OTCAD is optimized for efficiency and query performance, helping you meet your service level agreements within your budget.

OTCAD enhances the speed and scalability of a database without the need to manage hardware. OTCAD is a data warehouse and data lake platform that enables organizations to deploy databases securely. To get access to OTCAD, contact your sales representative or contact us.

OTCAD UI tour

You can view and manage the provisioned database. After logging in to the OTCAD application, the home page appears. This page displays the three-dot menu and other SQL editor options as shown in this image:

OTCAD user interface

Option Description
a More options: Displays the More options button, which provides access to the SQL editor, Settings, Data pipelines, User-defined extensions, Database designer, Dashboards, Restore points, and the Integrations page.
b Stop: Displays the option to stop the database.
c History view: Displays the queries run previously.
d Favorite view: Displays the bookmarked SQL queries.
e Create table: Displays options to create the table and schema.
f Notifications: Displays message alert notifications.
g Help: Displays OpenText™ Analytics Database user documentation.
h View profile: Displays the user profile and option to sign out of the application.

More options

When you log in to the application, the More options button is displayed with these options:

  1. SQL editor: Select this option to work with the SQL editor.
  2. Settings: Select this option to configure the database settings.
  3. Data pipelines: Select this option to work with data pipelines.
  4. User-defined extensions: Select this option to work with user-defined extensions.
  5. Database designer: Select this option to work with the database designer.
  6. Dashboards: Select this option to work with dashboards.
  7. Restore points: Select this option to work with restore points.
  8. Integrations: Select this option to display the Integrations page.

Database connection options

The Home page displays the option to stop an active database.

  • To stop OTCAD, select Stop.
  • To restart an inactive database, select Start.

SQL Editor

Use OTCAD SQL editor to write, edit, and execute SQL statements in a query view. You can write and execute analytical queries without using any external tools.

You can use the SQL editor to:

  • gain valuable insights from your data without needing expertise to set up database connections.
  • write, execute, and save queries that need to be repeated.
  • view sample data returned from your queries.
  • validate the execution of SQL queries.
  • view data about the execution of SQL queries.
  • export query results to a spreadsheet.
  • execute SQL commands for OTCAD operations, such as creation of schemas, tables, and so on.

Working with SQL editor

After logging in to the OTCAD application, the home page appears.

To launch the SQL editor, select SQL Editor in the three-dot menu. The Schema view appears with a list of available schemas and tables. You can either select a table or create an SQL query.

Create an SQL query

You create SQL queries either in the Home page or the Schema view.

  1. Select Create a new query in the Home page.

  2. Enter your query in the SQL editor.

  3. From the Run list, do one of the following:

    • Select Run last query to run the SQL query that was last executed. This option is helpful when you have multiple SQL queries but need to run only the latest SQL query.
    • Select Run from cursor to run an SQL query using a database cursor to manage and process the results of a statement one row at a time, rather than as a single set of data.
    • Select Run selection to run only the selected SQL query. This option is helpful when you have multiple SQL queries but need to run only one SQL query.
  4. Select the SQL query in the editor and click Format to arrange and format the SQL query for better readability.

  5. Enter the name of the query at the top.

  6. To view the schema details of a table, select View Schema. The table columns display along with the datatypes.

  7. To filter the SQL query results, select the filter icon. Add a filter, search the results by column name and match type, and select Apply.

  8. To save the SQL query results, do the following:

    a. Select Export CSV.
    b. Enter the file name and select Export.

    You can export up to a maximum of 10,000 records. To export data that exceeds 10,000 records, modify the SQL query and export again.
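One way around the 10,000-record export cap is to modify the query so each export stays under the limit, for example by paging with LIMIT and OFFSET. A minimal sketch of that idea, using Python's sqlite3 as a stand-in engine (the table, data, and the paging helper are illustrative assumptions, not OTCAD internals):

```python
import sqlite3

# Stand-in database with more rows than the export cap.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [("row",)] * 25000)

EXPORT_CAP = 10000  # maximum records per export

def paged_queries(base_query, total_rows, cap=EXPORT_CAP):
    """Yield LIMIT/OFFSET variants of the query, each under the export cap."""
    for offset in range(0, total_rows, cap):
        yield f"{base_query} LIMIT {cap} OFFSET {offset}"

total = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
pages = list(paged_queries("SELECT * FROM events ORDER BY id", total))
print(len(pages))                               # 3 exports cover 25,000 rows
print(len(conn.execute(pages[0]).fetchall()))   # first page holds 10,000 rows
```

Each generated query can then be run and exported separately, with a stable ORDER BY so pages do not overlap.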

History view

The History view tab stores previously run queries. When you run a query, it appears in the Query History list. Only queries from the last 2 weeks are displayed.

To run a SQL query:

  1. Select .
  2. Select Run.

To remove a SQL query:

  1. Select .
  2. Select Remove.

Favorite view

You can bookmark SQL queries and run them again later in the Favorite view tab. Saved SQL queries display as a list of favorites next to the History view tab.

To bookmark a SQL query as a favorite:

  1. Select the SQL query.
  2. Select Run.
  3. Select Favorite.
  4. Enter a name and select Add.

To run a SQL query from the Favorites view:

  1. Select .
  2. Select Run. After successful execution of the query, the results are displayed in the SQL editor.

To rename a favorite SQL query:

  1. Select the SQL query.
  2. Select .
  3. Select Rename.
  4. Enter the new name and select Update.

To remove a favorite SQL query:

  1. Select the query.
  2. Select .
  3. Select Remove.
  4. When prompted for confirmation, select Remove.

Upload and download scripts

You can create SQL queries offline and upload them to the system.

  1. In the SQL query page, select and Upload script. The Upload script dialog appears.
  2. Select Browse or +.
  3. Select Upload. The file content is loaded to the SQL editor.

You can also download SQL queries in the system.

  1. In the SQL query page, select and Download script. The SQL query is downloaded in a file with .sql extension in the Downloads folder.

Create a table

You can create tables in the database logical schema.

  1. In the Schema view, do one of the following:

    • Select +Create table.

      OR

    • Expand a schema in the Schema view and click + for Tables. The Create table page appears.

  2. In the Table name field, enter the name of the table to create. This name must be unique among names of all sequences, tables, and projections within the schema, and must comply with keyword restrictions and rules for identifiers.

  3. From the Schema list, select an existing schema for which the table needs to be created. If no schema is specified, the table is created in the public schema.

  4. In the Column name field, enter the name of the column to be created. A table can have up to 9800 columns.

  5. From the Type list, select an internal data type.

  6. In the Length field, enter the maximum number of characters that the column can store. It is a crucial aspect of database design that helps enforce data integrity, optimize storage, and ensure data consistency. For more information about the data types and their corresponding length or size, see data type.

  7. Select the Nullable option to permit this column to store NULL values, representing missing, unknown, or optional data. Selecting this option allows rows to be inserted without providing a value for that specific column.

  8. Select the Primary option to designate this column as the primary key. The primary key is a column in a database table that uniquely identifies each row, ensuring data integrity by prohibiting duplicate or NULL values. It enables efficient data retrieval and acts as a target for foreign keys to establish table relationships. Every table must have one primary key.

  9. Select the Unique option to specify a unique constraint on a column so that each value in that column is unique among all other values.

  10. Select the Order by option to sort the result set of a query in a specific order based on the values in one or more columns. This allows you to control the presentation of the data for better analysis and readability, as database rows are not guaranteed to be in any specific order by default.

  11. Select More. The Advanced configuration page appears. In this page, you can set foreign key references to link one table with another. This page is enabled only after you enter a column name.

  12. In the Reference tab, do the following:

    • From the Reference schema list, select a reference schema which is a specification that defines the structure, data types, and relationships for data, often including mechanisms to link to or reuse other schemas or subschemas.
    • From the Reference table list, select a table that links to the primary key of another table. It ensures that the data in both tables stays connected and valid. It acts as a primary source for validating input, mapping values, and normalizing data, allowing other tables to reference it through foreign keys. If you omit this column, the database uses the table's primary key. Only those columns that have at least one primary key in the reference table are listed.
    • From the Reference column list, select a reference column. Updates to the referenced data are automatically reflected in all locations. Primary key columns only are listed.
  13. In the Sequence tab, from the Sequence list, select the name of the sequence. Use sequences to set the default values of columns to sequential integer values.

  14. In the Column constraints tab, enter the boolean expression that evaluates a column's value on each row in the Check constraints field.

  15. Select Save.

  16. To import data from a CSV file, do the following:

    • Select Import from CSV.
    • Select Browse.
    • Navigate to the folder containing the CSV file and click Open.
    • Select Save.
  17. Select SQL preview. A real-time preview of the SQL statement is displayed. The SQL syntax is validated and errors are displayed. This helps you identify errors and fix them before running the SQL queries.

  18. Click the Download icon to download the SQL query to a file with .sql extension.

  19. In the Partition expression field, specify a partition key that is derived from one or more table columns. Creating partitions decreases the SQL query execution time and improves table efficiency. For information about partitioning tables, see Partition clause.

  20. Select Create. The table is created and displayed under the schema where you created it.
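The form fields above map directly onto a CREATE TABLE statement, which is what the SQL preview in step 17 renders and validates. A rough sketch of that mapping (the column names, types, and constraint flags are illustrative assumptions; Python's sqlite3 stands in for the OTCAD engine to exercise the generated statement):

```python
import sqlite3

# Illustrative form input: name, type, length, and the Nullable /
# Primary options from steps 4-8 (all values are assumptions).
columns = [
    {"name": "order_id", "type": "INTEGER", "primary": True},
    {"name": "customer", "type": "VARCHAR", "length": 80, "nullable": False},
    {"name": "note",     "type": "VARCHAR", "length": 200, "nullable": True},
]

def build_create_table(schema, table, cols):
    """Assemble the CREATE TABLE statement the SQL preview would show."""
    parts = []
    for c in cols:
        t = c["type"] + (f"({c['length']})" if "length" in c else "")
        clause = f"{c['name']} {t}"
        if c.get("primary"):
            clause += " PRIMARY KEY"
        elif not c.get("nullable", True):
            clause += " NOT NULL"
        parts.append(clause)
    return f"CREATE TABLE {schema}.{table} ({', '.join(parts)})"

sql = build_create_table("public", "orders", columns)
print(sql)

# Validate the generated statement, as the preview's syntax check does.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS public")  # emulate a schema namespace
conn.execute(sql)
```

The same assembled statement is what the Download icon in step 18 would save to a .sql file.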

Create a schema

You can create as many schemas as necessary for your database. For example, you can create a schema for each database user. By default, only an administrator can create a schema or give a user the right to create a schema.

The database logical schema comprises a set of tables and referential integrity constraints in the database. The objects in the logical schema are visible to SQL users. The logical schema does not include projections, which make up the physical schema. The physical schema comprises a set of projections used to store data on the disk. The projections in the physical schema are based on the objects in the logical schema.

  1. In the Schema view, expand the +Create table list and select Schema. The Create schema page appears.

  2. In the Schema name field, enter the name of the schema. Ensure that the schema name meets the following requirements:

    • Must be unique among all other schema names in the namespace under which the schema is created.
    • Must comply with keyword restrictions and rules for Identifiers.
    • Cannot begin with v_; this prefix is reserved for OTCAD system tables.
    • If the schema name contains a period, the part before the period cannot be the same as the name of an existing namespace.

    Inheritance of privileges specifies whether to enable or disable default inheritance of privileges for new tables in the schema.

  3. Do one of the following:

    • Select Include to grant tables in the specified schema the same privileges granted to that schema. This option has no effect on existing tables in the schema.

    OR

    • Select Exclude to disable inheritance of schema privileges. By default, inheritance of schema privileges is disabled.
  4. Select Create. The schema is created and displayed in the Schema view.
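The schema-name rules in step 2 can be checked mechanically before submitting the form. A sketch under the rules listed above (the identifier regex is a deliberate simplification; OTCAD's real keyword and identifier grammar is richer):

```python
import re

def validate_schema_name(name, existing_schemas, namespaces):
    """Apply the naming rules from the Create schema step (simplified)."""
    if name in existing_schemas:
        return "must be unique among schema names in the namespace"
    if name.lower().startswith("v_"):
        return "cannot begin with v_ (reserved for OTCAD system tables)"
    if "." in name and name.split(".", 1)[0] in namespaces:
        return "part before the period matches an existing namespace"
    # Simplified identifier check, not the full keyword-restriction rules.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_.]*", name):
        return "must comply with identifier rules"
    return None  # valid

print(validate_schema_name("v_stats", [], []))
print(validate_schema_name("sales_2024", ["public"], ["default"]))
```

A None result means the name passes every rule; any other return value names the first rule it breaks.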

1 - OpenText Core Analytics Database Settings Overview

This topic provides an overview of database configuration settings in OTCAD.

You can configure the default database configuration settings in OpenText™ Core Analytics Database (OTCAD).

  1. Log in to OTCAD.
  2. Select Settings from More options. The Settings Overview page opens. In this page, you can configure the following OTCAD settings:
    • Connection data: You can configure the default connection data settings in OTCAD.
    • CIDR: CIDR is fundamental to subnetting, the process of dividing a large network into smaller, more manageable subnetworks. By using different prefix lengths, you can create subnets with varying numbers of host addresses, enabling efficient use of IP address space.
    • Scaling: Database scaling is the ability of a database to obtain or reduce resources depending on the system's load, establishing a direct correlation between the allocation of resources and their demand. Scaling ensures consistency in the database server quality while maximizing resource usage.
  3. Select Connection Data.
  4. Copy this information to connect your database with other applications or clients:
    • Host DNS: The DNS host provides the authoritative nameservers that answer or resolve queries for your website. This is how your site is found on the web.
    • Host Port: The host port is a network port on a computer or device that is used to establish a connection with another device or service, often for communication or data transfer.
    • Database Name: The unique name of the database.
    • Refresh Token: The refresh token is a special type of token used in authentication flows, particularly in OAuth 2.0, to obtain new access tokens when the current ones expire.
    • Access Token: An access token is a security credential that allows an application or user to access protected resources on a server. It acts as a digital key, verifying the user's or application's identity and authorization level, and is used to grant access without requiring the user to repeatedly provide their credentials.
  5. Select Close to close the Connection Details page.
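Clients typically combine these values into a connection descriptor and present the access token on each request, falling back to the refresh token when the access token expires. A schematic sketch of that flow (the field names, placeholder values, and refresh mechanics are assumptions for illustration, not OTCAD's documented client API):

```python
import time

# Values copied from the Connection Data page (placeholders).
connection = {
    "host": "example-db.hostname.example.com",   # Host DNS (placeholder)
    "port": 5433,                                # Host Port (placeholder)
    "database": "mydb",                          # Database Name (placeholder)
}
tokens = {"access": "ACCESS_TOKEN", "refresh": "REFRESH_TOKEN",
          "expires_at": time.time() - 1}         # already expired

def current_access_token(tokens, refresh_fn):
    """Return a valid access token, refreshing it when expired."""
    if time.time() >= tokens["expires_at"]:
        tokens["access"], tokens["expires_at"] = refresh_fn(tokens["refresh"])
    return tokens["access"]

# Stand-in for a real OAuth 2.0 token-endpoint exchange.
def fake_refresh(refresh_token):
    return "NEW_ACCESS_TOKEN", time.time() + 3600

token = current_access_token(tokens, fake_refresh)
headers = {"Authorization": f"Bearer {token}"}
print(headers)
```

The point of the pattern is that the long-lived refresh token never travels with ordinary requests; only the short-lived access token does.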

1.1 - Connection Data Settings

This topic provides an overview of connection data settings in OTCAD.

You can configure the default connection data settings in OTCAD. You can manage IP addresses with Classless Inter-Domain Routing (CIDR) and scale your plan to suit your requirements.

1.2 - CIDR

This topic provides an overview of managing IP addresses with Classless Inter-Domain Routing (CIDR) in OTCAD.

CIDR

Cloud infrastructure engineers use CIDR notation to manage IP address ranges for cloud-based networks. IP addresses enable resources in your Virtual Private Cloud (VPC) to communicate with each other and with resources over the Internet. CIDR notation is a way to represent an IP address and its network mask.

OpenText supports ranges of IP addresses using Classless Inter-Domain Routing (CIDR) notation. For example, 192.168.1.0/24 represents all IPv4 addresses in the range of 192.168.1.0 to 192.168.1.255.

The format of these addresses is as follows:

  • An individual IPv4 address is 32 bits, with 4 groups of up to 3 decimal digits. For example, 10.0.1.0.
  • An individual IPv6 address is 128 bits, with 8 groups of 4 hexadecimal digits. For example, 2001:0db8:85a3:0000:0000:8a2e:0370:7334.
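Python's standard ipaddress module implements exactly this notation, which makes it a convenient way to check what a given CIDR block covers before entering it on the CIDR page:

```python
import ipaddress

# The example range from the text: 192.168.1.0/24.
net = ipaddress.ip_network("192.168.1.0/24")
print(net.num_addresses)        # 256 addresses, .0 through .255
print(net.network_address)      # 192.168.1.0
print(net.broadcast_address)    # 192.168.1.255
print(ipaddress.ip_address("192.168.1.17") in net)   # membership test

# IPv6 prefixes work the same way.
net6 = ipaddress.ip_network("2001:db8:85a3::/48")
print(ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334") in net6)
```

The /24 suffix is the prefix length: the number of leading bits fixed by the network mask, with the remaining bits free for host addresses.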

Public IPv4 and IPv6 addresses

All subnets have an attribute that determines whether a network interface created in the subnet automatically receives a public IPv4 address. Therefore, when you launch an instance into a subnet that has this attribute enabled, a public IP address is assigned to the primary network interface that's created for the instance. You can control whether your instance receives a public IP address by doing the following:

  • Modifying the public IP addressing attribute of your subnet.
  • Enabling or disabling the public IP addressing feature during instance launch, which overrides the subnet's public IP addressing attribute.
  • Unassigning a public IP address from your instance after launch by managing the IP addresses associated with a network interface.

Public IPv6 addresses are IPv6 addresses that can be configured to remain private or configured to be reachable over the Internet.

Private IPv4 and IPv6 addresses

Private IPv4 addresses are not reachable over the Internet and can be used for communication between the instances in your VPC. When you launch an instance into a VPC, a primary private IP address from the IPv4 address range of the subnet is assigned to the primary network interface (for example, eth0) of the instance. Each instance is also given a private (internal) DNS hostname that resolves to the private IP address of the instance.

You cannot share private IPv6 addresses on the Internet. You can use a private IPv6 address if you want your private networks to support IPv6 and you have no intention of routing traffic from these addresses to the Internet. You can connect to the internet from a resource that has a private IPv6 address. But you must route traffic through a resource in another subnet with a public IPv6 address.
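The private ranges described here correspond to the IANA special-purpose registries, and ipaddress can classify an address directly, which is useful when deciding whether an address belongs in the private CIDR list:

```python
import ipaddress

# VPC-internal addresses are private; is_private / is_global reflect
# the IANA special-purpose address registries.
print(ipaddress.ip_address("10.0.1.5").is_private)      # RFC 1918 block
print(ipaddress.ip_address("192.168.1.20").is_private)  # RFC 1918 block
print(ipaddress.ip_address("8.8.8.8").is_global)        # publicly routable
print(ipaddress.ip_address("fd12:3456::1").is_private)  # IPv6 ULA (fc00::/7)
```

An address that is not globally routable can still reach the Internet, but only via a resource that does have a public address, as noted above.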

Configure CIDR settings

To configure CIDR settings:

  1. In the Settings Overview page, select View CIDR. The CIDR page displays.
  2. Select Private.
  3. To add an IP address, select Add IP.
  4. From the Value drop-down list, select either IPv4 or IPv6 depending on the IP address you want to add. Do one of the following:

    a. Enter a valid IPv4 or IPv6 address.
    b. Copy a valid IPv4 or IPv6 address from another source and select Paste.
  5. Select .

1.3 - Scaling

This topic provides an overview of scaling your database in OTCAD.

Scaling

Workload scaling enhances application performance and resource efficiency in dynamic environments. Scaling ensures that applications can handle varying workloads without performance degradation. OpenText provides the ability to automatically scale resources up or down based on real-time metrics, allowing organizations to respond quickly to changes in traffic. This elasticity not only improves user experience but also optimizes resource utilization, helping to minimize costs associated with underused or overprovisioned resources.

You can add compute to your database to improve the performance of complex analytic queries that run for a long time. Adding compute helps these queries run faster.

Configure the compute of your database to suit your query requirements and workload performance needs.

To manage compute for your database:

  1. In the Settings Overview page, select Scaling.

  2. Select View Scaling. The horizontal database scaling options are displayed. You can scale the database either manually or automatically. Do one of the following:

    a. To manually scale the database, select Manual.
    b. Slide the slider horizontally to scale the database by size: small, small-medium, medium, medium-large, or large.
    c. Select Apply changes.
    d. In the View scaling log area, click the Audit log and Scaling log tabs to view the log details.

    OR

    a. To automatically scale the database, select Auto. Select this option only if you need to automatically scale the database based on defined parameters.
    b. From the Scale up to drop-down list, select either Large or Medium depending on the size of your database.
    c. Use the CPU usage toggle button to scale out the database when the CPU usage exceeds a value in the range 1% to 100%.
    d. Use the Memory usage toggle button to scale out the database when the memory usage exceeds a value in the range 1% to 100%.
    e. Select Apply changes.
    f. In the View scaling log area, click the Audit log and Scaling log tabs to view the log details.

    The Audit log provides information about the following:

    • Date & time
    • Event
    • Performed by

    The Scaling log provides information about the following:

    • Date & time
    • Scale event
    • Compute (units)
    • Triggered
    • Performed by
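The Auto mode described above amounts to a threshold rule: scale out one step when CPU or memory usage crosses its configured percentage, never past the chosen maximum size. A simplified sketch of that decision (the size names come from the manual slider; the thresholds and single-step behavior are illustrative assumptions):

```python
SIZES = ["small", "small-medium", "medium", "medium-large", "large"]

def autoscale(current, cpu_pct, mem_pct, cpu_limit, mem_limit, scale_up_to):
    """Return the next database size under a simple threshold rule."""
    ceiling = SIZES.index(scale_up_to)
    idx = SIZES.index(current)
    if (cpu_pct > cpu_limit or mem_pct > mem_limit) and idx < ceiling:
        return SIZES[idx + 1]     # scale out one step
    return current                # within limits, or already at the cap

# CPU above its 80% limit: medium -> medium-large.
print(autoscale("medium", 92, 40, 80, 80, "large"))
# Both metrics under their limits: size unchanged.
print(autoscale("medium", 50, 40, 80, 80, "large"))
```

Each scale event produced by a rule like this is what the Scaling log records, along with what triggered it.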

1.4 - Alerts

This topic provides an overview of alert settings in OTCAD.

You can configure the default alert settings in OpenText™ Core Analytics Database (OTCAD).

Monitoring alerts

OTCAD periodically checks system health and performance. When an important event occurs or system performance fluctuates beyond user or system-defined limits, OTCAD generates a message to alert you about the state of your system. You can view and manage these alerts.

OTCAD sends alerts to you to ensure that you are immediately aware of any changes to your database that might require attention. You receive notifications by email and can also view them from the Bell icon in the upper‑right corner of the application.

Alert categories

Select Alerts in the Settings Overview page.

  • The All categories tab displays all alerts in the system by default.
  • The Queries tab displays pre-configured alerts to monitor queries.
  • The Resource pool tab displays pre-configured alerts to monitor resource pools.
  • The Custom tab displays custom alerts in the system.

1.4.1 - Monitoring alerts

This topic provides an overview of alerts in OTCAD.

Pre-configured alerts

The system provides pre-configured alerts to provide system monitoring capabilities without manual setup. Each alert has a pre-configured threshold that defines the acceptable performance limit. The system then sends a message notification when the database exceeds that threshold.

You can monitor the system using preconfigured alerts without setting them up manually.

Queries

Query performance alerts notify you when queries exceed execution time thresholds. The system provides the following pre-configured alerts to monitor queries:

  • Failed query number - The total number of queries that failed during execution due to errors.
  • Query running time - A query that is neither in the queue nor completed is a running query. The Query Running Time reflects the elapsed time since the query started executing.
  • Queued query number - The number of queries currently waiting in the queue for resources before execution begins.
  • Retried query number - The number of queries that are placed on hold for different reasons, such as limited resources.
  • Spilled query number - The number of queries that were written to disk when the system did not have enough memory.

Resource pool

The system sends alerts when an individual resource pool reaches a specified state or usage level. For details about resource pool configuration parameters, see Built-in resource pools configuration. You can configure the following resource pool alerts:

  • Ended query with queue time exceeding limit - The number of queries that were not executed because their queue time exceeded the allowed limit.
  • Ended query with runtime exceeding limit - The number of queries that were stopped because they ran longer than the time limit set by the resource pool.
  • Maximum allowed resource value - The maximum resource limits (such as memory, CPU) allowed for a query defined in the resource pool.
  • Minimum starting resource value - The minimum resources required for a query to start running.
  • Queries with resource rejections - The number of queries that were rejected due to insufficient resources.
  • Queries reaching the max allowed execution time - Set a time threshold to see how many queries reached that limit during execution.

Custom alerts

Create custom alerts to measure system performance metrics that are not monitored by the pre-configured alerts. Create a dynamic SQL query that triggers an alert when it returns any results. You can configure how often an alert is generated, the alert criticality level, and who receives email alerts. For example, you can create custom alerts that monitor the following:

  • Failed logins within a configurable time period
  • Idle database user sessions using a configurable time limit

Create a custom alert

Create custom alerts to track abnormalities and performance fluctuations for queries and resource pools using your own database schemas and tables. When the database triggers an active alert, you receive notifications according to the alert priority.

  1. In the Settings Overview page, select Alerts. The Alerts page appears.

  2. Choose Custom alert.

  3. In the Alert name field, enter a unique name for the custom alert.

  4. In the Description field, enter an optional description for the custom alert.

  5. In the Trigger condition query field, enter the following SQL query and choose Run:

    SELECT
      login_timestamp,
      user_name,
      node_name,
      client_hostname,
      reason
    FROM
      login_failures
    WHERE
      reason in ('INVALID USER', 'FAILED', 'INVALID DATABASE')
      AND login_timestamp > sysdate - INTERVAL '{{Time_Interval}}'
    

    The AND clause of the preceding query encloses the variable of type String named Time_Interval in double curly braces to represent a valid SQL syntax element.

    A variable is a dynamic value that is evaluated at runtime that you can configure after you save the alert. You can add up to 6 variables per custom alert. Variable names may consist of a maximum of 14 alpha-numeric characters, including underscores. Verify that the variable uses the correct data type and format. Variables of type String require single quotes around the curly braces.

    A SQL query triggers an alert if it returns one or more rows.

  6. From the Interval list, choose the time interval at which the SQL query is executed.

  7. In the Criticality level area, choose the alert priority. By default, the alert criticality is Low.

  8. Choose the option Send an email copy. In the Add new user field, enter the email address of the recipient. For multiple recipients, enter the email addresses separated by a comma.

  9. Choose Create.

  10. Turn the alert on or off using the switch at the right of the alert row.
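The {{Time_Interval}} placeholder in step 5 is resolved at runtime, and the alert fires when the rendered query returns rows. A sketch of that substitution and trigger check (the rendering helper is an assumption about how such templates behave, not OTCAD internals; note the single quotes kept around the braces for a String variable):

```python
import re

TEMPLATE = (
    "SELECT user_name FROM login_failures "
    "WHERE reason IN ('INVALID USER', 'FAILED', 'INVALID DATABASE') "
    "AND login_timestamp > sysdate - INTERVAL '{{Time_Interval}}'"
)

def render(template, variables):
    """Replace each {{name}} placeholder with its configured value."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables[m.group(1)]), template)

sql = render(TEMPLATE, {"Time_Interval": "15 minutes"})
print(sql)

# The alert fires when the rendered query returns one or more rows.
def should_alert(rows):
    return len(rows) > 0

print(should_alert([("alice",)]))   # at least one failed login -> alert
print(should_alert([]))             # no rows -> no alert
```

Because the variable is evaluated each time the query runs, changing its configured value after saving the alert changes the window the alert monitors without editing the SQL.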

Edit a custom alert

After creating a custom alert, you can edit the details to suit your requirements.

  1. In the Alerts page, select a custom alert.
  2. Hover over and select Edit. All the details of the custom alert are populated. You cannot edit the name of the custom alert. For more information about editing a custom alert, see Create a custom alert.

Clone a custom alert

You can create a clone or replica of an existing custom alert. The settings of the existing custom alert are copied to the cloned custom alert. You can edit these settings in the cloned custom alert.

  1. In the Alerts page, select a custom alert.
  2. Hover over and select Clone. Information in all fields is pre-populated. You can edit this information. For more information about editing a custom alert, see Create a custom alert.

Delete a custom alert

You can delete a custom alert that is no longer in use or required.

  1. In the Alerts page, select a custom alert.

  2. Hover over and select Delete.

  3. In the Confirmation dialog, select Delete.

    The custom alert is deleted.

1.4.2 - Alert configuration

This topic provides information about configuring alerts in OTCAD.

Alert configuration

Enable and customize settings for pre-configured or custom alerts for each database. Pre-configured and custom alerts generate messages when the component they measure exceeds the specified threshold.

Configurable settings

Select Alerts in the Settings Overview page. To configure an alert, turn the switch on the right of the row to the on position. By default, custom alerts are turned off. Pre-configured and custom alerts have the following settings:

  • Trigger condition - Add lower and upper limits on acceptable operating values for the component. Move the arrow up or down to either increase or decrease the query count.
  • Interval - Choose how often the system executes the underlying SQL for the alert.
  • Resource pool - Choose the resource pool to which you need to add this alert.
  • Criticality level - Choose the alert priority. By default, the alert criticality is Low.
  • Send an email copy - Choose this option to enter the email address of the recipient. For multiple recipients, enter the email addresses separated by a comma.
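The trigger condition above reduces to a bounds check on the measured value: the alert generates a message when the component leaves its acceptable range. A minimal sketch of that check (the metric name and limits are illustrative):

```python
def breaches(value, lower=None, upper=None):
    """True when the measured component leaves its acceptable range."""
    if lower is not None and value < lower:
        return True
    if upper is not None and value > upper:
        return True
    return False

# A "Queued query number" alert with an upper limit of 50 queries:
print(breaches(72, upper=50))          # above the upper limit -> alert
print(breaches(12, upper=50))          # within limits -> no alert
print(breaches(3, lower=5, upper=50))  # below the lower limit -> alert
```

The Interval setting controls how often a check like this runs; Criticality level and Send an email copy control what happens when it returns True.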

Configuring resource pool alerts

Resource pool alerts use the same configurable settings as the other alerts, but you can set alerts per resource pool.

  1. Log in to OTCAD.
  2. In the Settings Overview page, select Alerts. The Alerts page appears.
  3. In the Resource pools tab, select the resource pool to which you need to add this alert.
  4. Edit the trigger condition, interval, resource pool, criticality level, and recipient user email address. For more information, see Configurable settings.

Configuring query alerts

You can configure alerts that monitor queries.

  1. Log in to OTCAD.
  2. In the Settings Overview page, select Alerts. The Alerts page appears.
  3. In the Queries tab, select the query to which you need to add this alert.
  4. Edit the trigger condition, interval, criticality level, and recipient user email address. For more information, see Configurable settings.

View alerts

The Alerts page organizes alerts to help you effectively monitor the state of your database. Pre-configured and custom alerts generate messages when the component they measure exceeds the specified threshold.

  1. Log in to OTCAD.
  2. Click the Bell icon in the upper‑right corner of the application. The Alerts page appears.
  3. From the Date range list, choose the period to view the alert. Do one of the following:
    • Choose Last 15 min to view only those alerts that were created in the last 15 minutes.
    • Choose Last 1 hour to view only those alerts that were created in the last one hour.
    • Choose Last 7 days to view only those alerts that were created in the last 7 days.
    • Choose Last 14 days to view only those alerts that were created in the last 14 days.
  4. From the Notification type list, choose the type of alert. Do one of the following:
    • Choose All to view both resource pool and query alerts.
    • Choose Queries to view only query alerts.
    • Choose Resource pool to view only resource pool alerts.
  5. In the Severity area, choose the option to view the alerts by severity. Do one of the following:
    • Choose Low to view alerts with low priority.
    • Choose High to view alerts with high priority.
  6. Select Mark as read after you view an alert and do not wish to view the same alert again.

1.5 - Resource pools

This topic provides an overview of resource pools in OTCAD.

Resource pools

A resource pool comprises a pre-allocated subset of the system resources with an associated queue. Super users can view information about resource pool parameters and make changes to existing parameters through the Resource pools page.

View resource pool details

You can view information about the resource pools available in the system. To view resource pool details:

  1. In the Settings page, select Resource pools. The Resource pool page lists the following information:
    • Name - Name of the resource pool.
    • Memory size - Total per-node memory available to the database resource manager that is allocated to this resource pool.
    • Runtime cap (seconds) - Maximum time (in seconds) for which a query in the resource pool can execute. If a query exceeds this setting, it cascades to a secondary pool.
    • Priority - Priority of queries in this resource pool when they compete for resources in the GENERAL pool.
    • Runtime priority - Specifies how the resource manager prioritizes run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool.
    • Queue timeout - The interval that the request waits for resources to become available before being rejected.
    • Max concurrency - Maximum number of concurrent execution slots available to the resource pool across the cluster.
  2. In the Actions column, select the three-dot menu and then View details. The Pool details page lists pool-specific information.
  3. Select Close.

Create a resource pool

Only super users can create and configure resource pools.

  1. In the Settings page, select Resource pools > Create. The Create resource pools page is displayed.

  2. In the Pool name field, enter a unique name for the resource pool. If you specify a resource pool name with uppercase letters, OTCAD converts them to lowercase letters.

    The following naming requirements apply:

    • Built-in pool names cannot be used for user-defined pools.
    • All user-defined global resource pools must have unique names.
  3. From the Cascade to list, select an existing resource pool that is either a user-defined pool or the GENERAL pool.

  4. From the User list, assign existing users to the resource pool. This resource pool processes all queries from its assigned users, allocating resources as set by the resource pool parameters. Each selected user has this pool set as their default resource pool.

  5. In the Minimum memory field, enter the total per-node memory available to the database resource manager that is allocated to this resource pool. The default is 0%. From the list, select the percentage of total memory, or amount of memory in kilobytes, megabytes, gigabytes, or terabytes.

  6. In the Maximum memory field, enter the maximum size per node the resource pool can grow by using memory from the GENERAL pool. From the list, select the percentage of total memory, or amount of memory in kilobytes, megabytes, gigabytes, or terabytes.

  7. In the Maximum query memory field, enter the maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, OTCAD stops query execution and returns an error. From the list, select the percentage of maximum memory, or an amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of Maximum memory.

  8. The CPU affinity mode specifies whether the resource pool has exclusive or shared use of the CPUs specified in the Affinity set field below. Select one of these values:

    • Shared - Queries that run in this resource pool share its Affinity set CPUs with other resource pools.
    • Exclusive - Dedicates the Affinity set CPUs to this resource pool only, excluding other resource pools. If Affinity set is specified as a percentage, that percentage of available CPU resources is reserved exclusively for this resource pool.
    • Any - Queries in this resource pool can run on any CPU.
  9. Affinity set - CPUs available to this resource pool. All nodes must have the same number of CPUs. The CPU resources assigned to this set are not available to the GENERAL pool. From the list, do one of the following:

    • Select Range to dedicate a range of contiguous CPU indexes in the From and To fields to this resource pool.
    • Select Single to dedicate one or more comma-delimited CPUs to this resource pool.
    • Select Percentage to dedicate the percentage of all available CPUs to use for this resource pool. The database rounds this percentage down to include whole CPU units.
  10. In the Minimum concurrency field, enter the preferred number of queries to execute concurrently in the resource pool. Select the Auto option to calculate the value automatically at query runtime. The system sets this parameter to the lower of these two calculations, but never less than 4:

    • Number of logical cores
    • Memory divided by 2GB

    Change this parameter only after evaluating performance over a period of time.

  11. In the Maximum concurrency field, enter the maximum number of concurrent execution slots available to the resource pool.

  12. In the Execution parallelism field, enter the number of threads used to process any single query issued in this resource pool. Select the Auto option to calculate the setting from the number of cores, available memory, and amount of data in the system. Unless memory is limited, or the amount of data is very small, the system sets this parameter to the number of cores on the node.

  13. In the Pool priority field, enter the priority of queries in this resource pool when they compete for resources in the GENERAL pool. Enter either negative or positive integer values, where higher numbers denote higher priority.

    • For user-defined resource pools, enter a value in the range -100 to 100.
    • For built-in resource pools such as SYSQUERY, RECOVERY, and TM, enter a value in the range of -110 to 110.
  14. From the Runtime priority list, select how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool. Select one of Low, Medium, or High. The default is Medium.

  15. In the Runtime priority threshold field, enter the time limit (in seconds) within which a query must finish before the resource manager assigns the resource pool's Runtime priority to it. Queries begin running with HIGH run-time priority; a query that is still running after this threshold is reassigned the pool's Runtime priority. The default value is 2.

  16. In the Queue timeout (seconds) field, enter the maximum time (in seconds, up to one year) a request can wait for pool resources before it is rejected.

  17. In the Runtime cap (seconds) field, enter the maximum run time (up to one year) allowed for queries in this resource pool. If a query exceeds this setting, it cascades to a secondary pool.
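The UI fields above correspond approximately to resource pool parameters in the underlying Analytics Database SQL. The following is a sketch only: the pool name, user name, and values are illustrative, and the exact DDL accepted by OTCAD's SQL editor may differ.

```sql
-- Sketch only: pool name, user name, and values are illustrative.
CREATE RESOURCE POOL reporting_pool
    MEMORYSIZE '4G'                -- Minimum memory
    MAXMEMORYSIZE '8G'             -- Maximum memory the pool can borrow from GENERAL
    MAXQUERYMEMORYSIZE '2G'        -- Maximum query memory
    PLANNEDCONCURRENCY AUTO        -- Minimum concurrency
    MAXCONCURRENCY 8               -- Maximum concurrency
    EXECUTIONPARALLELISM AUTO      -- Execution parallelism
    PRIORITY 10                    -- Pool priority (-100 to 100 for user-defined pools)
    RUNTIMEPRIORITY MEDIUM         -- Runtime priority
    RUNTIMEPRIORITYTHRESHOLD 2     -- Runtime priority threshold (seconds)
    QUEUETIMEOUT 300               -- Queue timeout (seconds)
    RUNTIMECAP '1 hour'            -- Runtime cap
    CASCADE TO general;            -- Cascade to

-- Assign an existing user so that the user's queries run in this pool:
ALTER USER alice RESOURCE POOL reporting_pool;
```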

Edit a resource pool

After creating a resource pool, you can edit the details to suit your requirements. Only database administrators can edit resource pools.

  1. In the Resource pools page, under the Actions column, select the three-dot menu and then Edit. All the details about the resource pool are populated. You cannot edit the following fields:

    • Pool name
    • Affinity set
    • Minimum concurrency
    • Execution parallelism
  2. From the Cascade to list, choose the secondary resource pool for executing queries that exceed the RUNTIMECAP setting of their assigned resource pool.

  3. From the User list, assign existing users to the resource pool. This resource pool processes all queries from its assigned users, allocating resources as set by the resource pool parameters. Each selected user has this pool set as their default resource pool.

  4. In the Minimum memory field, enter the total per-node memory available to the database resource manager that is allocated to this resource pool. The default is 0%. From the list, select the percentage of total memory, or amount of memory in kilobytes, megabytes, gigabytes, or terabytes.

  5. In the Maximum memory field, edit the maximum size per node the resource pool can grow by using memory from the GENERAL pool. From the list, select the percentage of total memory, or amount of memory in kilobytes, megabytes, gigabytes, or terabytes.

  6. In the Maximum query memory field, edit the maximum amount of memory this resource pool can allocate at runtime to process a query. If the query requires more memory than this setting, OTCAD stops query execution and returns an error. From the list, select the percentage of maximum memory, or an amount of memory in kilobytes, megabytes, gigabytes, or terabytes, up to the value of Maximum memory.

  7. The CPU affinity mode specifies whether the resource pool has exclusive or shared use of the CPUs specified in the Affinity set field below. Select one of these values:

    • Shared - Queries that run in this resource pool share its Affinity set CPUs with other resource pools.
    • Exclusive - Dedicates the Affinity set CPUs to this resource pool only, excluding other resource pools. If Affinity set is specified as a percentage, that percentage of available CPU resources is reserved exclusively for this resource pool.
  8. Affinity set - Edit the CPUs available to this resource pool. All nodes must have the same number of CPUs. The CPU resources assigned to this set are not available to the GENERAL pool. From the list, do one of the following:

    • Select Range to dedicate a range of contiguous CPU indexes in the From and To fields to this resource pool.
    • Select Single to dedicate one or more comma-delimited CPUs to this resource pool.
    • Select Percentage to dedicate the percentage of all available CPUs to use for this resource pool. The database rounds this percentage down to include whole CPU units.
  9. In the Maximum concurrency field, edit the maximum number of concurrent execution slots available to the resource pool.

  10. In the Pool priority field, edit the priority of queries in this resource pool when they compete for resources in the GENERAL pool. Enter either a negative or positive integer value, where higher numbers denote higher priority.

    • For user-defined resource pools, enter a value in the range -100 to 100.
    • For built-in resource pools such as SYSQUERY, RECOVERY, and TM, enter a value in the range of -110 to 110.
  11. From the Runtime priority list, select how the resource manager should prioritize dedication of run-time resources (CPU, I/O bandwidth) to queries already running in this resource pool. Select one of Low, Medium, or High. The default is Medium.

  12. In the Runtime priority threshold field, edit the time limit (in seconds) within which a query must finish before the resource manager assigns the resource pool's Runtime priority to it. Queries begin running with HIGH run-time priority; a query that is still running after this threshold is reassigned the pool's Runtime priority. The default value is 2.

  13. In the Queue timeout (seconds) field, edit the maximum time (in seconds, up to one year) a request can wait for pool resources before it is rejected.

  14. In the Runtime cap (seconds) field, edit the maximum run time (up to one year) allowed for queries in this resource pool. If a query exceeds this setting, it cascades to a secondary pool.

    For more information about editing a resource pool, see Create a resource pool.
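The same edits can be expressed in Vertica-style SQL. This is a sketch with an illustrative pool name and values; the exact DDL available in OTCAD may differ.

```sql
-- Sketch only: pool name and values are illustrative.
ALTER RESOURCE POOL reporting_pool
    MAXCONCURRENCY 12            -- Maximum concurrency
    QUEUETIMEOUT 600             -- Queue timeout (seconds)
    RUNTIMECAP '30 minutes';     -- Runtime cap
```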

Delete a resource pool

You can delete a resource pool that is no longer in use or required.

  1. In the Resource pools page, under the Actions column, select the three-dot menu and then Delete.
  2. In the Confirmation dialog, select Delete. Requests that are queued against the deleted pool are transferred to the GENERAL pool according to the priority of the pool compared to the GENERAL pool. If the pool’s priority is higher than the GENERAL pool, the requests are placed at the head of the queue; otherwise, transferred requests are placed at the end of the queue.
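In Vertica-style SQL, this corresponds to dropping the pool; queued requests then transfer to the GENERAL pool as described above. A sketch, with an illustrative pool name:

```sql
-- Sketch only: the pool name is illustrative.
DROP RESOURCE POOL reporting_pool;
```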

2 - Overview of database designer

This topic provides an overview of database designer in OTCAD.

Database designer is a tool that analyzes a logical schema definition, sample queries, and sample data, and creates a physical schema (projections) that you can deploy manually.

2.1 - Database designer

This topic provides an overview of database designer in OTCAD.

About database designer

Use database designer to create designs that provide excellent performance for specific queries while using disk space efficiently. A design is a physical storage plan that optimizes query performance.

The Database designer:

  • Analyzes your logical schema, sample data, and sample queries.
  • Creates a physical schema design (a set of projections) that can be deployed manually.
  • Does not require specialized database knowledge.
  • Can be run and rerun any time for additional optimization without stopping the database.
  • Uses strategies to provide optimal query performance and data compression.

The database designer recommends projection statements for all tables referenced in the selected queries to ensure optimal query performance. Up to 100 queries can be provided as input.

Select the More options button on the Home page. Choose Database designer. The Database designer page appears with the Projections and Designs tabs.

Only administrators have the privilege to execute the database designer.

View projection details

The Projections tab lists the following information:

  • Projection name - The name of the projection.
  • Type - The method in which the projection was created. By default, projections of type DESIGNER that are created by the database designer are displayed.
  • Table name - The name of the projection's anchor table.
  • Schema - The name of the schema that contains the projection.
  • Creator - The name of the projection's owner.
  • Segmented - Specifies whether the projection is segmented.
  • Up to date - Specifies whether projection data is up to date. Projections that are up to date are available to participate in query execution.

In the Projections tab, you can view the projection details.

  1. Select the Details icon.

    The projection statement is listed.

  2. Select Copy to copy the projection statement to the clipboard and analyze it in the SQL editor.

  3. Select Download to download the projection statement to a .sql file and analyze the projection statements outside the application.

  4. Select >> to close the projection details.
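If you prefer the SQL editor, the same information can typically be read from the projections system table. This is a sketch that assumes Vertica-style system tables; table and column names in OTCAD may differ.

```sql
-- Sketch only: assumes Vertica-style system tables are exposed.
SELECT projection_name, anchor_table_name, projection_schema,
       owner_name, is_segmented, is_up_to_date
FROM v_catalog.projections
WHERE create_type = 'DESIGNER';   -- projections created by the database designer
```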

Create a design

To create a design, do the following in the Database designer page:

  1. Select +Create a design.

  2. In the Design name field, enter a unique name for the design. Design names must be unique and no more than 32 characters long. Identifiers (names) of objects such as schemas, tables, projections, and columns can be up to 128 bytes in length.

    The sample SQL queries are listed.

  3. Select one of the options - Last 10, Last 50, or Last 100 to load the last 10, last 50, or last 100 SQL queries respectively.

    Select the required SQL query to view the query details.

  4. Select Create.

    The design is created and displayed in the Designs tab of the database designer page.

View design details

The Designs tab lists the following information:

  • Design name - The unique name of the design.
  • Last updated on - The timestamp at which the design was last updated.
  • Type - The type of the design.
  • Status - The current status of the design. Valid values are Building, Built, and Error.

In the Designs tab, you can view the design details.

  1. Select the Details icon.

    The projection statement is listed in the Projection statement tab.

  2. Select Copy to copy the projection statement to the clipboard and analyze it in the SQL editor.

  3. Select Download to download the projection statement to a .sql file and analyze the projection statements outside the application.

  4. Select the Analysis tab to view information about the events and the timestamp.

  5. Select the Sample queries tab to view information about the SQL query and the query ID.

  6. Select >> to close the projection details.

Delete a design

You can delete a design that is no longer in use or required.

  1. In the Database designer page, select the Designs tab.
  2. Select the three-dot menu and then Delete.
  3. In the Confirmation dialog, select Delete.

This deletes the selected design and all its configurations. The projections are not deleted.

3 - Overview of Data Pipelines

This topic provides an overview of data pipelines in OTCAD.

Data pipelines allow you to define where, when, and how you need to load data into OpenText™ Analytics Database with minimal manual intervention.

3.1 - Data Pipelines

This topic provides an overview of data pipelines in OTCAD.

Data pipeline

A data pipeline, also known as a data loader, is a declarative, automated way to continuously process data from external sources such as Amazon S3. Data pipelines let you define where, when, and how to load data into OTCAD with minimal manual intervention.

Data pipelines UI tour

You can view and manage data pipelines. After logging in to the OTCAD application, you land on the home page. Select the More options button and then Data pipelines. The Data pipelines page appears as shown in this image:

OTCAD user interface

This page displays the following information in the Overview card:

  • Total number of pipelines - The Total Pipelines card displays the total number of pipelines configured in the system.
  • Pipelines that failed execution - Select the Failed Execution drop-down list and the period for which you need to view the pipelines that failed to execute.
  • Active pipelines - The Active Pipelines card displays the total number of pipelines that are in the Active status.

The Pipelines area displays the following information:
  • Pipeline name - The name of the data pipeline.
  • Created by - The user ID of the person who created the data pipeline.
  • Data source - The source location of the files that contain the data to be loaded.
  • Schema - The tables, columns (fields), data types, and relationships among different tables in the database.
  • Destination table - The database table where data is written after it has been processed or transformed from a source table or other data source.
  • Last run on - The timestamp at which the data pipeline was last run.
  • Last run status - Indicates the status of the data pipeline when it was last run.
  • Pipeline status - Indicates the present status of the data pipeline.
Option Description
a Create a pipeline: Select this option to create a data pipeline.
b Filter by: Select this option to filter data by different criteria.
c Search pipeline name or schema: Search for a data pipeline based on the pipeline name or schema.
d View details: Select this option to view the pipeline details.
e Edit pipeline: Select this option to edit the data pipeline.
f Clone pipeline: Select this option to clone a data pipeline.
g Remove pipeline: Select this option to delete a data pipeline.

Create a data pipeline

To create a data pipeline, do the following:

  1. In the Data Pipelines page, select +Create a pipeline. The Create a pipeline page displays.
  2. In the Pipeline name field, enter the name of the pipeline.
  3. In the Access key ID field, enter your AWS account access key ID.
  4. In the Secret access key field, enter your AWS account secret access key.
  5. From the Region drop-down list, select the region (geography) of the S3 bucket where the files are present.
  6. In the S3 Bucket/File/Folder path field, enter the name or the folder path where the files are present.
  7. Select the Data is encrypted option to specify the following parameters:
    • Select either AWS Key Management Service Key or Customer managed keys if you wish to encrypt and load data into the S3 bucket.
    • Select Encryption key ID.
  8. Select Next.
  9. In the Retry limit field, specify the number of times the system retries a failed file load.
  10. In the Parameters field, specify the copy parameters. For more information, see Parameters.
  11. Select Next.
  12. From the Destination table drop-down list, select the destination table to which you need to load the data.
  13. Select Next.
  14. Specify the schedule at which the data pipeline needs to run. Do one of the following:
    • Select Schedule.
    • From the date pickers, select the Start date and End date.
    • In the Repeat every field, specify the duration at which the data pipeline needs to run.
    • From the Unit drop-down list, select the minute, hour, day, week, or month at which the data pipeline needs to run.
    • Select the On day radio button and specify the day on which the data pipeline needs to run.
    • Select the On radio button and specify the exact day and month on which the data pipeline needs to run.
    • Select the option Trigger when something is added to run the data pipeline when a file is added to the S3 bucket.
    • Enter the SQS credentials in the Access key ID, Secret access key, and Resource URL fields.

    OR

    • Select Execute once.
  15. Select Finish. The data pipeline is created and displayed in the Data Pipelines page.
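Under the hood, a pipeline configured this way resembles a Vertica-style data loader, which records loaded files so they are not loaded twice. A sketch only: the loader name, table, bucket path, and parameters are illustrative, and the exact DDL available in OTCAD may differ.

```sql
-- Sketch only: assumes Vertica-style data loader DDL; names and paths are illustrative.
CREATE DATA LOADER sales_loader
    RETRY LIMIT 3                            -- Retry limit for failed file loads
    AS COPY public.sales
       FROM 's3://example-bucket/sales/*.csv'
       DELIMITER ',';

-- Load all files that have not already been loaded and have not reached the retry limit:
EXECUTE DATA LOADER sales_loader;
```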

Filter a data pipeline

You can filter data pipelines based on certain criteria. Filter data pipelines in one of these ways:

  1. A date range on which the data pipelines last ran.
    • Select the Filter icon in the Data Pipelines page.
    • Expand the Last run range drop-down list.
    • In the date picker, select a date range (start date and end date) for which you need to view the data pipelines. For example, select a range from 1 May to 31 May to view data pipelines that last ran in May.
  2. Schema
    • Select the Filter icon in the Data Pipelines page.
    • Expand the Schema drop-down list.
    • Select the required schema.
  3. Data source
    • Select the Filter icon in the Data Pipelines page.
    • Expand the Data source drop-down list.
    • Select the data source or data sources for which you wish to view the data pipeline.
  4. Pipeline status
    • Select the Filter icon in the Data Pipelines page.
    • Expand the Pipeline status drop-down list.
    • Select Active to view the data pipelines that are active. A data pipeline is in the Active status when the schedule end date is a future date and there are one or more ingestions that are yet to complete.
    • Select Inactive to view the data pipelines that are inactive. A data pipeline is in the Inactive status either when the end date has passed or when no ingestions remain to complete.
  5. ID of the person who created the pipeline
    • Select the Filter icon in the Data Pipelines page.
    • Expand the Created by drop-down list.
    • Select the user ID of the person who created the data pipeline.

Search a data pipeline

All data pipelines are displayed by default. You can search data pipelines using specific search criteria. To search data pipelines, do the following:

  1. In the Data pipelines page, select Search pipeline name or schema.
  2. Enter one of the following search criteria:
    • Pipeline name
    • Owner of the data pipeline (Created by)
    • Schema
    • Destination table
  3. You can sort the data pipeline by the pipeline name or the date on which it was last run. To sort the data pipelines in the ascending or descending order, select the Sort icon for the Pipeline name or Last run on column in the Data Pipelines page.

View pipeline details

You can view pipeline details in the Data pipelines page. To view the details of a data pipeline, do the following:

  1. In the Data pipelines page, mouse over the Pipeline name column and select the View details icon for a data pipeline. You can view the following details of the data pipeline in this page.
    • Overview - Displays the owner (user ID) of the data pipeline.
    • Configurations - Displays the source and destination paths of the data pipeline. In the Configurations card, click the Edit icon to edit the data source and destination paths of the data pipeline. For more information, see Creating a data pipeline.
    • Total instances - Displays the total number of jobs. Select the drop-down list to view information about the instances for the last 24 hours, last week, last month, last 6 months, last year, or all time.
    • To view the reasons for the failure of a job, mouse over an instance with the job status Failed and select the View error logs icon that appears for a file name. The error log provides information about the row that failed execution, the reason for failure, and the rejected data. You can use this information to troubleshoot the job and run it successfully.
    • Incident Overview - Displays information about the data pipeline jobs in a pie chart. Select the drop-down list to view information about the jobs for the last 24 hours, last week, last month, last 6 months, last year, or all time.
  2. Select Execute pipeline to execute the selected data pipeline. Executing a data pipeline loads all files that have not already been loaded and that have not reached the retry limit. Executing the data pipeline commits the transaction.
  3. Select Edit pipeline to edit the data pipeline. All the details about the data pipeline are populated, except the Access key ID and Secret access key.
  4. In the Access key ID field, enter your AWS account access key ID.
  5. In the Secret access key field, enter your AWS account secret access key.
    For more information about editing a data pipeline, see Creating a data pipeline.

Edit a data pipeline

After creating a data pipeline, you can edit the details to suit your requirements. To edit a data pipeline, do the following:

  1. In the Data Pipelines page, mouse over the Pipeline name column and select the Edit pipeline icon for a data pipeline. All the details about the data pipeline are populated, except the Access key ID and Secret access key.
  2. In the Access key ID field, enter your AWS account access key ID.
  3. In the Secret access key field, enter your AWS account secret access key.
    For more information about editing a data pipeline, see Creating a data pipeline.

Clone a data pipeline

You can create a clone or replica of an existing data pipeline. The configurations of the existing data pipeline are copied to the cloned data pipeline. You can edit these configuration settings in the cloned data pipeline.

  1. In the Data Pipelines page, mouse over the Pipeline name column and select the Clone pipeline icon for a data pipeline.
  2. In the Confirmation dialog, select Confirm. The Create a pipeline page displays.
  3. In the Pipeline name field, enter the name of the pipeline.
  4. In the Access key ID field, enter your AWS account access key ID.
  5. In the Secret access key field, enter your AWS account secret access key. Information in all other fields is pre-populated. You can edit this information.

Remove a data pipeline

You can delete a data pipeline that is no longer in use or required.

  1. In the Data Pipelines page, mouse over the Pipeline name column and select the Remove pipeline icon for a data pipeline.
  2. In the Confirmation dialog, select Remove.

4 - Overview of Restore points

This topic provides an overview of restore points in OTCAD.

Restore points are lightweight, copy-free backups for databases that enable you to roll back a database to the state at which the restore point was saved. These restore points are stored in the database and do not require additional data copies to be stored externally, which results in fast restore point creation and low storage overhead.

4.1 - Restore points

This topic provides an overview of restore points in OTCAD.

Restore points

Restore points are useful for cases such as the following:

  • If a user error compromises data integrity, such as unintentionally dropping a table or inserting the wrong data into a table, you can restore the database to a state before the problematic transactions.
  • In the case of a failed upgrade, you can restore the database to its pre-upgrade state.
  • You want to create a number of backups while avoiding the overhead of storing copies of the database externally.

Restore points UI tour

You can view and manage restore points. After logging in to the OTCAD application, the home page appears. Select the More options button and Restore points. The Restore points page appears as shown in this image:

Restore points UI

The Restore points page displays the following information:

Option Description
a +New folder Select this option to create a folder.
b Select this option to add, edit, and delete all restore points.
c Select this option to restore and delete a restore point.

Select the information icon (i) to view a summary of the restore point folder, such as the folder name, the number of restore points to save, and the owner details.

Create a folder

A folder contains restore points. Before saving restore points, you must first create a folder to store them. You can then save restore points to the folder and select a restore point when restoring the database.

To create a folder, do the following:

  1. In the Restore points page, select + New folder. The Create Restore points page opens.

  2. In the Folder name field, enter a unique name for the folder.

  3. In the Maximum restore points to retain field, enter the maximum number of restore points that can be saved in the folder.

  4. Save the restore point. Do one of the following:

    • Select Schedule.
    • From the date picker, select the Start date and time.
    • In the Repeat every field, specify the duration at which the restore point needs to be saved.
    • From the Unit drop-down list, select the minute, hour, day, week, or month at which the restore point needs to be saved.
    • Select the On day option and specify the day on which the restore point needs to be saved.
    • Select the On option and specify the exact day and month on which the restore point needs to be saved.

    OR

    • Select Execute once to create the restore point manually.
  5. Select Create.

Create a restore point

You can associate a restore point with an existing folder. Before you can save restore points, you must first create a folder to store them. Folders store the restore points in descending chronological order based on the times at which the restore points were saved. You can identify each restore point by its unique identifier or the index in the folder, where an index of 1 specifies the most recently saved restore point.

To create a restore point:

  1. In the Restore points page, select the three-dot menu for a folder and then Add restore point. The restore point is created and displays the following information:

    • Index - The serial number of the restore point in the folder.
    • Timestamp - The date and time of creation of the restore point.
    • Database version - The database version of the OTCAD application.
    • ID - The unique identifier of the restore point.
    • Status - The present status of the restore point. The status can be either Completed or Saving.
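In the SQL editor, these steps roughly correspond to saving a restore point into an archive (the SQL-level counterpart of a folder). This is a sketch only: it assumes Vertica-style restore point statements are exposed, and the archive name is illustrative.

```sql
-- Sketch only: assumes folders map to Vertica-style archives; the name is illustrative.
CREATE ARCHIVE nightly;                    -- create a folder (archive) for restore points
SAVE RESTORE POINT TO ARCHIVE nightly;     -- save a restore point into the folder
```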

Edit a folder

After creating a folder, you can edit the details to suit your requirements.

  1. In the Restore points page, select a folder.
  2. Hover over the folder and select Edit. All the details of the folder are populated. For more information about editing a folder, see Create a folder.
  3. Select Update.

Delete all restore points in a folder

You can delete all restore points in a folder that are no longer in use or required.

  1. In the Restore points page, select a folder.
  2. Hover over a folder and select Delete all restore points.
  3. In the Confirmation dialog, select Delete.

Restore a database from a restore point

After you have saved restore points to a folder, you can use those restore points to restore a database. The database is restored to the state at which the restore point was saved earlier.

The version of the revived database must be the same as or newer than the version of the database when the restore point was saved. After a database is revived, any restore point that was saved using a version newer than the version of the revived database will be invalidated. You can remove the invalidated restore points from the database.

The revive operation preserves existing folders, but their configuration, including restore point limits, are rolled back to their state when the restore point was saved. If a folder did not exist at the time of the restore point to which the database was revived, that folder is revived with default configuration settings—no restore point limit and dbadmin as the owner.

  1. In the Restore points page, select a folder.
  2. Hover over a restore point and select Restore.
  3. Do one of the following:

  • Select Full restore to restore the complete database, and then select Restore. In the Confirmation dialog, select Confirm. The estimated time to restore the database is displayed. After the database is successfully restored, it is displayed in the Restore points page with the status Completed.

    OR

  • Select Specific objects to restore only specific objects in the database:

    • Select Create or replace to create a missing object or replace an existing object with the restored version, or select Coexist to restore objects without replacing the existing ones. Restored objects that coexist are stored in new schemas.

    • In the Search for schema or tables field, enter the name of the schema or table to search in the current database.

    • Select Add objects to restore deleted objects. The Add deleted objects screen opens.

      • In the Object name field, enter the names of the objects (schemas or tables) to restore, separated by commas. For example, to restore a deleted schema s2, enter s2. To restore a table t1 in the schema s2, enter s2.t1.
      • Select Add to list. Each deleted object appears as a check box in Deleted items.
      • In the Data from current database area, the list of schemas and their corresponding objects is displayed. Expand one or more schemas to select the objects to restore.

    • Select Restore. The objects selected for restoration are restored.
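As an illustration of the object-naming convention above (a schema is named as `s2`, a table as `s2.t1`, and multiple entries are separated by commas), the following sketch parses such a value. The parser itself is ours, for illustration only; it is not part of OTCAD.

```python
# Split a comma-separated Object name value into (schema, table) pairs.
# A bare name such as "s2" is a schema; "s2.t1" is table t1 in schema s2.
def parse_object_names(value: str):
    objects = []
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        schema, _, table = entry.partition(".")
        objects.append((schema, table or None))
    return objects

print(parse_object_names("s2, s2.t1"))  # [('s2', None), ('s2', 't1')]
```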

Delete a restore point

You can delete a restore point that is no longer in use or required.

  1. In the Restore points page, select a folder.
  2. Hover over a restore point and select Delete restore point.
  3. In the Confirmation dialog, select Delete.

5 - Overview of UDx

This topic provides an overview of UDx in OTCAD.

A user-defined extension (UDx) is a component that expands OTCAD functionality—for example, new types of data analysis and the ability to parse and load new types of data.

5.1 - UDx

This topic provides an overview of UDx in OTCAD.

OTCAD enables you to create, list, and delete custom UDx, so that you can integrate them with existing database tables. UDx allow you to reuse and share code that extends built-in functionality. Use UDx to perform specific tasks such as complex calculations, transformations, or custom data manipulations.

You can reuse a UDx several times. A function always returns a value explicitly by specifying an expression, which makes it a good choice for calculating and returning a value. You can use UDFs to extend built-in functions or to encapsulate calculations that are standard for your organization. You call the UDFs you create in the same way as built-in functions.

You write a UDF’s logic (its handler) in Python, C++, or Java.
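As an illustration of the kind of calculation a UDF handler encapsulates, the following pure-Python sketch multiplies two values and adds a constant, applied row by row. The function name and the row-by-row driver are ours, for illustration only; they are not part of the OTCAD SDK.

```python
# Hypothetical scalar UDF logic: multiply two values and add a constant.
def multiply_and_sum(a: float, b: float, constant: float = 0.0) -> float:
    """Return a * b + constant for a single input row."""
    return a * b + constant

# Applied row by row, the way a scalar function processes input rows:
rows = [(2, 3), (4, 5)]
results = [multiply_and_sum(a, b, constant=1.0) for a, b in rows]
print(results)  # [7.0, 21.0]
```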

About UDx

You can view and manage UDx. After logging in to OTCAD, the home page appears. Select the More options button, and then select User-defined extensions. The User-defined extensions page appears with the Functions and UDx libraries tabs.

The Functions tab lists the functions that are available in the system. The Active SDK version label displays the current SDK version of the database. The following columns are displayed:

  • Schema name - The name of the schema.
  • Function name - The name of the function.
  • Function type - The type of the function: Scalar, Aggregate, Analytic, or Transform.
  • SDK version - The version of the SDK used to create the library. This is applicable only for C++ and Java. For Python, this column is empty.
  • Language - The language in which the function is developed. For example, Python.
  • Library - The name of the library to which the UDx is associated.
  • Created by - The user ID of the person who created the function.
  • Actions - Option to delete the function. For more information about deleting a function, see Delete a function.

In the Search function name field, enter the name of the function. The function is displayed in the Functions tab of the User-defined extensions page.

The UDx libraries tab lists the UDx libraries available in the system, along with their status. The following columns are displayed:

  • Library - The name of the library containing the UDx.
  • Language - The language in which the library is developed. For example, Python.
  • Functions - The number of functions associated with the UDx.
  • Created by - The user ID of the person who created the library.
  • Creation date - The date and time of creation of the library.
  • Status - The present status of the UDx library.
  • Details - Details about the UDx in the system.
  • Actions - Option to delete the UDx library. For more information about deleting a UDx library, see Delete a UDx library.

In the Search library name field, enter the name of the UDx library. The UDx library is displayed in the UDX libraries tab of the User-defined extensions page.

Create a library

To create a library, do the following in the User-defined extensions page:

  1. Select +Add new.

    The Create a library page appears.

  2. In the Library name field, enter the name of the library.

  3. In the Schema list, choose the required schema.

  4. In the Language list, choose the required programming language of the functions in the library - Python, Java, or C++.

  5. In the File name field, enter the name of the file.

  6. Click Browse and choose the file that you named in the previous step.

  7. In the Function area, for the Name field, enter the name of the function.

  8. In the Function area, for the Class field, enter the name of the class that is defined in the selected file. For example, MultiplyAndSumFactory.

  9. From the Type list, select the UDx type:

    • Scalar
    • Aggregate
    • Analytic
    • Transform

    For more information about the UDx types and the supported languages, see Developing user-defined extensions (UDx).

  10. Click +Add to add another function. The newly added functions appear in the UDx libraries tab.

  11. Click Create.

    The library is created and displayed in the UDx libraries tab of the User-defined extensions page.
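The UI steps above correspond to DDL statements in the underlying OpenText Analytics Database. The following is a sketch assuming Vertica-style syntax; the library name, file path, and factory class below are illustrative, not values from your environment:

```sql
-- Illustrative only: load a Python library file and expose one of its
-- factory classes as a scalar function.
CREATE OR REPLACE LIBRARY MyPyLib AS '/path/to/my_functions.py' LANGUAGE 'Python';
CREATE OR REPLACE FUNCTION multiply_and_sum
    AS LANGUAGE 'Python' NAME 'MultiplyAndSumFactory' LIBRARY MyPyLib;
```

After the function is created, you can call it in SQL the same way you call a built-in function.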

Delete a function

You can delete functions that are no longer used or needed.

  1. In the Functions tab of the User-defined extensions page, for the function that you want to delete, select the menu in the Actions column.
  2. Select Delete.
  3. In the Confirmation dialog, select Delete.

Delete a UDx library

You can delete UDx libraries that are no longer used or needed.

  1. In the UDx libraries tab of the User-defined extensions page, for the library that you want to delete, select the menu in the Actions column.
  2. Select Delete.
  3. In the Confirmation dialog, select Delete.

5.2 - Developing user-defined extensions (UDx)

This topic describes the strengths of user-defined extensions (UDx) and how to develop them.

The primary strengths of UDx are:

  • You can use them wherever you use internal functions.

  • They take full advantage of the distributed computing feature of the system. The extensions usually execute in parallel on each node in the cluster.

  • The complicated aspects of developing a distributed piece of analytic code are handled by the system. Your main programming task is to read in data, process it, and then write it out using the SDK APIs.

Fenced mode

Fenced mode runs the UDx code outside of the main system process. Fenced UDx crashes do not impact the core system process. There is a small performance impact when running UDx code in fenced mode. On average, using fenced mode adds about 10% more time to execution.

Fenced mode is currently available for all UDx types except user-defined aggregates. In OTCAD, all UDx run in fenced mode; runtimes such as Python cannot run directly within the system process. Using fenced mode does not affect how you develop your UDx. Fenced mode is enabled by default for all UDx.

OTCAD does not support unfenced mode.

Developing with the database SDK

Before you develop your UDx, you need to configure your development environment. The development environment must use the same database version as the production environment. For guidance on obtaining OpenText SDK, contact Technical Support. For more information about configuring your development environment, see Developing with the OpenText Analytics Database SDK.

Database upgrade

When the database is upgraded, the SDK is also upgraded. Your UDx libraries depend on that SDK. If the SDK changes, your existing libraries may no longer work. You need to recompile your library using the new SDK, delete the old library, and upload the newly compiled one. If you do not recompile your library using the new SDK, the error "Library built with incompatible SDK version. Rebuild with SDK version [string] and recreate the library" appears when loading or running the UDx.

If you make any changes to your UDx code, you must compile a new version of the library and upload it again. This also requires removing the existing library and replacing it with your updated one. The database then runs the latest version of your UDx library.

UDx types

OTCAD supports four types of user-defined extensions:

User-defined aggregate functions (UDAFs)

UDAFs allow you to create custom aggregate functions specific to your needs. They read one column of data and return one output column. UDAFs can be developed in C++. An aggregate function performs an operation on a set of values and returns one value. The system provides standard built-in aggregate functions such as AVG, MAX, and MIN. For more information about UDAFs, see Aggregate functions (UDAFs).

User-defined analytic functions (UDAnFs)

UDAnFs are similar to UDSFs, in that they read a row of data and return a single row. However, the function can read input rows independently of outputting rows, so that the output values can be calculated over several input rows. UDAnFs can be developed in C++ and Java and are used for analytics. UDAnFs must output a single value for each row of data read and can have no more than 9800 arguments. For more information about UDAnF, see Analytic functions (UDAnFs).

User-defined scalar functions (UDSFs)

UDSFs take in a single row of data and return a single value. UDSFs can be developed in C++, Python, or Java. A user-defined scalar function (UDSF) returns a single value for each row of data it reads. You can use a UDSF anywhere you can use a built-in database function. For more information about UDSFs, see Scalar functions (UDSFs).

User-defined transform functions (UDTFs)

UDTFs operate on table partitions and return zero or more rows of data. The data they return can be an entirely new table, unrelated to the schema of the input table, with its own ordering and segmentation expressions. They can only be used in the SELECT list of a query. UDTFs can be developed in C++, Python, and Java.

To optimize query performance, you can use live aggregate projections to pre-aggregate the data that a UDTF returns. A user-defined transform function (UDTF) lets you transform a table of data into another table. It reads one or more arguments (as a row of data) and returns zero or more rows of data comprising one or more columns. For more information about UDTF, see Transform functions (UDTFs).

While each UDx type has a unique base class, developing them is similar in many ways. Different UDx types can also share the same library.

UDx types and supported languages

The UDx types and their supported languages are as follows:

UDx type Java C++ Python
User-defined scalar functions Yes Yes Yes
User-defined aggregate functions No Yes No
User-defined analytic functions Yes Yes No
User-defined transform functions Yes Yes Yes

For more information, see Developing user-defined extensions (UDx).

6 - Overview of Integrations

This topic provides an overview of third-party integrations in OTCAD.

OTCAD integrates with third-party applications.

6.1 - Integrations

This topic provides an overview of OTCAD integration with third-party applications.

OTCAD is a cloud solution that enhances the speed and scalability of an OpenText™ Analytics Database without the need to manage hardware. Use OTCAD to build databases securely and scale them to large amounts of data in the cloud.

Integrations

When you log in to the application, the More options button is displayed.

  1. Expand More options.
  2. Select Integrations. The Integrations page appears, listing the supported third-party applications. OTCAD supports machine learning with VerticaPy for data science and AI-driven data science applications.
  3. For each third-party application, select View Docs to view the connection guide.
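As a sketch of what a VerticaPy connection might look like from the Python side, the following builds a connection dictionary. The host, credentials, and database name are placeholders, not real OTCAD values; obtain the actual connection details from your Integrations connection guide.

```python
# Hypothetical connection settings; every value below is a placeholder.
conn_info = {
    "host": "example.otcad.opentext.com",  # assumption: your OTCAD endpoint
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "mydb",
}

# With VerticaPy installed, the dictionary would be registered roughly
# like this (commented out so the sketch stays self-contained):
# import verticapy as vp
# vp.new_connection(conn_info, name="otcad")
# vp.connect("otcad")

print(sorted(conn_info))  # ['database', 'host', 'password', 'port', 'user']
```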

7 - Overview of Dashboards

This topic provides an overview of dashboards in OTCAD.

Dashboards help you identify system bottlenecks on specific clusters and improve the performance of the system.

7.1 - Dashboards

This topic provides an overview of dashboards in OTCAD.

Dashboards

Dashboards enable you to view health metrics for your database.

Dashboards UI tour

You can view and manage dashboards. After signing in to the OTCAD application, the home page appears. Select the More options button, and then select Dashboards. The Dashboards page appears as shown in this image:

Dashboard

The dashboard displays key metrics and events, including details about long-running queries, average query execution time, memory-intensive queries, and scaling activities.

Click the Refresh icon to view the latest data in the dashboards.

This page displays the following information:

Option Description
a More options: Select this option to view the More menu options.
b Dashboard: View the dashboards in OTCAD by different criteria.
c Active users: This card displays the total number of users who are active over the last 24 hours. Select > to view the list of active users. For more information about active users, see Active users.
d Data stored: This card displays the aggregate data storage in the database from the first day of the current month to date.
e Query overview: This card displays the total count of queries that completed and the queries that failed to execute over the last 24 hours.
f Cluster performance: This card displays information about the overall cluster performance. It provides information about average CPU usage, average memory usage, and disk I/O usage over the last 24 hours.

  • Select Average CPU usage to view the CPU percentage used by the database server across the primary cluster over the last 24 hours. Select > to view the average CPU usage and maximum CPU usage over a duration of time. For more information about CPU usage, see CPU usage.
  • Select Average Memory usage to view the memory percentage usage information over the last 24 hours. Select > to view the average memory usage and maximum usage over a duration of time. For more information about average memory usage, see Memory usage.
  • Select Disk I/O usage to view the percentage of time a disk is actively engaged in processing I/O requests over the last 24 hours. Select > to view the average disk I/O usage and maximum usage over a duration of time. For more information about average disk I/O usage, see Disk I/O usage.
  • Select Scaling activity to view the consumption of compute in units over the last 24 hours. For more information about scaling activity, see Scaling activity.
g Query insights: This card displays information about open sessions, queued queries, memory-intensive queries, and long-running queries.

  • Select Open sessions to view information about the top 5 sessions that are active over the last 1 hour. Select > to view all the sessions that are active over the last 1 hour. For more information about open sessions, see Sessions.
  • Select Queued queries to view information about the top 5 queries in the queue over the last 1 hour. Select > to view all the queries in the queue over the last 1 hour. For more information about queued queries, see Queued queries.
  • Select Memory intensive queries to view information about the top 5 queries that consume more than 25% memory over the last 24 hours. Select > to view information about all queries that consume more than 25% memory over the last 24 hours. For more information about memory intensive queries, see Memory intensive queries.
  • Select Long running queries to view information about the top 5 queries that took more than 10 minutes to execute over the last 24 hours. Select > to view information about all the queries that took more than 10 minutes to execute over the last 24 hours. For more information about long running queries, see Queries by duration.

Cluster performance

Expand Cluster performance in the Dashboard page. The following dashboards are displayed:

CPU usage

Identify CPU usage and address system bottlenecks proactively.

The Average CPU Usage card displays information about the average cluster CPU usage trends in the system over a duration of time. The Maximum Usage card displays information about the maximum CPU usage over a duration of time.

In the Visualization tab, do the following:

  1. In the Duration list, do one of the following:

    • Choose the time duration in minutes, hours, or days.

    OR

    • Choose Custom. From the date pickers, select the start date and end date. Ensure that the date range does not exceed 30 days.
  2. In the View list, choose one of the values: Average, Minimum, or Maximum. Choose Average to view the average CPU usage for every minute over the selected duration of time. Choose Minimum to view the minimum CPU usage for every minute over the selected duration of time. Choose Maximum to view the maximum CPU usage for every minute over the selected duration of time.

  3. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data. Every minute has one data point. For example, if you select 1 day, a total of 1440 data points are plotted on the graph with an axis interval of 1 minute.
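The data-point arithmetic in the example above can be sketched as follows. The helper is illustrative; the generalization to other axis intervals is our assumption, not a documented formula.

```python
# With minute-wise data, the number of plotted points equals the selected
# duration in minutes divided by the axis interval in minutes.
def data_points(duration_minutes: int, interval_minutes: int = 1) -> int:
    return duration_minutes // interval_minutes

print(data_points(24 * 60))  # 1 day at a 1-minute interval -> 1440
```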

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days to view the average CPU usage.

    OR

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the average CPU usage. Ensure that the date range does not exceed 30 days.

In the List view tab, information about the minute-wise time, average CPU usage, maximum CPU usage, and minimum CPU usage is displayed over the selected duration. Choose the Download button to export this information in a spreadsheet.

Memory usage

This helps you compare memory that OTCAD uses versus memory in use by the entire system.

The Average Memory Usage card displays information about the average memory usage trends in the system over the selected duration. The Maximum Usage card displays information about the maximum memory usage over the selected duration of time.

In the Visualization tab, do the following:

  1. In the Duration list, do one of the following:

    • Choose the time duration in minutes, hours, or days to visualize the memory usage.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the memory usage. Ensure that the date range does not exceed 30 days.
  2. In the View list, choose one of the values: Average, Minimum, or Maximum. Choose Average to view the memory usage for every minute over the selected duration. Choose Minimum to view the minimum memory usage for every minute over the selected duration of time. Choose Maximum to view the maximum memory usage for every minute over the selected duration of time.

  3. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data. Every minute has one data point. For example, if you select 1 day, a total of 1440 data points are plotted on the graph with an axis interval of 1 minute.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days to view the memory usage.

    OR

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the memory usage. Ensure that the date range does not exceed 30 days.

In the List view tab, information about the minute-wise time, average memory usage, maximum memory usage, and minimum memory usage is displayed over the selected duration. Choose the Download button to export this information in a spreadsheet.

Disk I/O usage

Identify and address disk I/O bottlenecks for resource optimization, capacity planning, and so on.

The Disk I/O Usage card displays information about the disk I/O usage in the system over the selected duration. The Maximum Usage card displays information about the maximum disk I/O usage over the selected duration.

In the Visualization tab, do the following:

  1. In the Duration list, do one of the following:

    • Choose the time duration in minutes, hours, or days to visualize the disk I/O usage.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the disk I/O usage. Ensure that the date range does not exceed 30 days.
  2. In the View list, choose one of the values: Average, Minimum, or Maximum. Choose Average to view the average disk I/O usage for every minute over the selected duration. Choose Minimum to view the minimum disk I/O usage for every minute over the selected duration of time. Choose Maximum to view the maximum disk I/O usage for every minute over the selected duration.

  3. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data. Every minute has one data point. For example, if you select 1 day, a total of 1440 data points are plotted on the graph with an axis interval of 1 minute.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days to view the disk I/O usage.

    OR

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the disk I/O usage. Ensure that the date range does not exceed 30 days.

In the List view tab, information about the minute-wise time, average disk I/O usage, maximum disk I/O usage, and minimum disk I/O usage is displayed over the selected duration of time. Choose the Download button to export this information in a spreadsheet.

Scaling activity

Identify and analyze the potential cost impact due to scaling. View the utilization of additional compute units over the last 24 hours.

The Current Scale Instance card displays information about the current compute units in the system over the selected duration of time. The Maximum Scale card displays information about the maximum compute units over the selected duration of time.

In the Visualization tab, do the following:

  1. In the Duration list, do one of the following:

    • Choose the time duration in minutes, hours, or days to visualize the scaling activity.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the scaling activity. You cannot select a start date that is more than one month prior to the end date.
In the List view tab, information about the date and time, scale event, compute (units), trigger, and user ID of the person who performed the scaling is displayed. Choose Download to export this information in a spreadsheet.

Query insights

Expand Query insights in the Dashboards page. The following dashboards are displayed:

Queries by volume

Identify the query volumes over a duration to understand database performance, identify bottlenecks, and optimize resource utilization. Identify slow or frequently used queries that require optimization. If query volumes increase quickly, optimize the database infrastructure to handle the increased workload.

The Successful queries card displays information about queries that are successfully executed over the last 24 hours. The Queued queries card displays information about SQL queries that are queued for execution over the last 24 hours. The Query status card displays the number of SQL queries that completed successfully and failed over the last 24 hours.

In the Visualization tab, do the following:

  1. In the Duration list, do one of the following:

    • Choose the time duration in minutes, hours, or days to visualize the SQL queries by volume.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries by volume. Ensure that the date range does not exceed 30 days.
  2. In the View list, choose one of the values: Volume, Success rate, or Failure rate. Choose Volume to view the number of SQL queries executed over the selected duration. Choose Success rate to view the percentage of SQL queries that completed over the selected duration. Choose Failure rate to view the percentage of SQL queries that failed over the selected duration.

  3. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data. Every minute has one data point. For example, if you select 1 day, a total of 1440 data points are plotted on the graph with an axis interval of 1 minute.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days to visualize the SQL queries by volume.

    OR

  • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries by volume. Ensure that the date range does not exceed 30 days.

Information about the time, user name, request status, query, query label, query duration, memory acquired, rows processed, user type, client type, session ID, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Filter criteria

You can filter the dashboards in the Query insights folder based on certain criteria. Choose the Filter icon in each dashboard and filter data using the following criteria:

  1. Session ID - Choose this option to filter data by the selected session ID. The session ID is an identifier that is required to close or interrupt a session, and it is unique at any point in time.
  2. User name - Choose this option to filter data for the selected user. The User name specifies the name of the user logged in to the database. For internal sessions, this value is NULL.
  3. Transaction ID - Choose this option to filter data by the transaction ID. The Transaction ID is an identifier for the transaction within the session. If a session is active but no transaction has begun, this value is NULL.
  4. Request status
    • Expand the Request status list.
    • Choose Completed to view the SQL queries that executed successfully.
    • Choose Failed to view the SQL queries that failed to execute.
  5. Duration
    • Choose the required duration from the options available to view the SQL queries for the selected duration.
    • Choose Custom and enter the minimum and maximum time in seconds or minutes to view the SQL queries for the selected duration.
  6. Client type - The client type is categorized by the client application. Different client types are displayed depending on the client application. A few examples of client types are given below:
    • JDBC driver - Select this option to view SQL queries executed through the client type JDBC driver.
    • ODBC driver - Select this option to view SQL queries executed through the client type ODBC driver.
    • ADO.NET driver - Select this option to view SQL queries for the client type ADO.NET driver.
    • vsql - Select this option to view SQL queries for the client type vsql.
  7. Rows processed
    • Choose the required options to view SQL queries by the number of rows selected.
  8. Memory acquired
    • Choose the required options to view the memory acquired for executing the SQL queries.
  9. Currently running
    • Choose Yes to view the SQL queries that are currently running.
  10. System v/s user
    • Choose System to view the SQL queries executed by the system.
    • Choose User to view the SQL queries executed by specific users.

Queries by duration

Slow-running queries hinder overall database efficiency. Analyze the count of queries in a given interval by execution time. This dashboard displays only the queries that took more than 10 minutes to execute over the last 24 hours.
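The selection rule described above, keeping only queries whose execution time exceeds 10 minutes, can be sketched as follows. The query records are illustrative, not real dashboard data.

```python
# Keep only queries whose duration exceeds the 10-minute threshold.
queries = [
    {"id": 1, "duration_s": 90},     # 1.5 minutes: excluded
    {"id": 2, "duration_s": 660},    # 11 minutes: included
    {"id": 3, "duration_s": 1800},   # 30 minutes: included
]
long_running = [q for q in queries if q["duration_s"] > 10 * 60]
print([q["id"] for q in long_running])  # [2, 3]
```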

The Average query duration card displays the mean time taken per query for a selected duration.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:

    • Choose the time duration in minutes, hours, or days.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries by duration. Ensure that the date range does not exceed 30 days.

  2. In the View list, choose one of the values: Query Id, Query phase, Query duration, or Query type. Choose Query Id to view the SQL queries based on the query ID over the selected duration. Choose Query phase to view the SQL queries based on the query phase over the selected duration. Choose Query duration to view the SQL queries over specific durations of time. Choose Query type to view the SQL queries by type over the selected duration.

    In the List view tab, do one of the following:

    • Choose the time duration in minutes, hours, or days.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to view the SQL queries by duration. Ensure that the date range does not exceed 30 days.

Information about the time, user name, request status, query, query label, query duration, memory acquired, rows processed, user type, client type, session ID, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Queries by memory usage

Analyze the count of queries over the last 24 hours by memory consumption. When a query uses a large share of memory, it can hinder the execution of other simultaneous queries by restricting access to shared resources.

The Average memory usage card displays information about the average memory that is used by all the queries over the selected duration. The Average memory acquired card displays information about memory acquired for executing the SQL queries over the selected duration.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:

    • Choose the time duration in minutes, hours, or days.

    OR

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries by memory usage. Ensure that the date range does not exceed 30 days.

  2. In the View list, choose either Memory usage or Acquired memory. Choose Memory usage to view the average memory that is used by all the queries over the selected duration. Choose Acquired memory to view the memory acquired for executing the SQL queries over the selected duration.

  3. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data at an interval of 5 minutes. Every minute has one data point. For example, if you select 1 day at an interval of 1 minute, a total of 1440 data points are plotted on the graph.

    In the List view tab, do one of the following:

    • Choose the time duration in minutes, hours, or days.

    • Choose Custom. From the date pickers, select the start date and end date for which you need to view the queries by memory usage. Ensure that the start date does not exceed the end date by more than 30 days.

    Information about the time, user name, request status, query, query label, query duration, memory acquired, rows processed, user type, client type, session ID, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.
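The axis-interval arithmetic described above (one data point per interval, for example 1 day at a 1-minute interval yields 1440 points) can be sketched as follows. This is a minimal illustration; `chart_points` is a hypothetical helper, not part of the product.

```python
# Minimal sketch: how many data points a line chart plots for a given
# duration and axis interval. chart_points is a hypothetical helper.

def chart_points(duration_minutes: int, interval_minutes: int) -> int:
    """One data point per axis interval across the selected duration."""
    return duration_minutes // interval_minutes

print(chart_points(24 * 60, 1))  # 1 day at a 1-minute interval: 1440 points
print(chart_points(60, 5))       # 1 hour at the default 5-minute interval: 12 points
```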

Queries by user

Identify users who generate a high volume of queries over the last 24 hours. See how users work with the database and use these insights to plan resources and manage capacity effectively.

The Average query count card displays information about the total number of queries executed by different users over the selected duration.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:
    • Choose the time duration in minutes, hours, or days.

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries by user. Ensure that the start date does not exceed the end date by more than 30 days.

  2. In the View list, choose one of the values: All users, Completed v/s failed, or System v/s user. Choose All users to view the total queries executed by each user over the selected duration. Choose Completed v/s failed to view a comparison of total queries completed v/s total queries failed over the selected duration. Choose System v/s user to view a comparison of total queries completed by all users v/s total queries executed by the system over the selected duration.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the queries by user. Ensure that the start date does not exceed the end date by more than 30 days.

Information about the time, user name, request status, query, query label, query duration, memory acquired, rows processed, user type, client type, session ID, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.
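The Custom range rule repeated throughout these steps can be expressed as a small check, reading the rule as: the start date must be on or before the end date, and the selected span may not exceed 30 days. The sketch below is one interpretation; `valid_custom_range` is a hypothetical helper.

```python
from datetime import date, timedelta

# Hypothetical check for the dashboards' Custom date-range rule:
# start on or before end, and a span of at most 30 days.
def valid_custom_range(start: date, end: date) -> bool:
    """True when the selected range satisfies the 30-day limit."""
    return start <= end and (end - start) <= timedelta(days=30)

print(valid_custom_range(date(2024, 11, 1), date(2024, 11, 22)))  # True
print(valid_custom_range(date(2024, 10, 1), date(2024, 11, 22)))  # False: span exceeds 30 days
```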

Queries in queue

Identify queries awaiting allocation of system resources in the last 1 hour. Analyze query queue patterns and trends to plan future capacity needs. This ensures the system scales appropriately to handle anticipated growth in query volume.

The Average query duration card displays the mean time taken per query during the selected time range.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:
    • Choose the time duration in minutes, hours, or days.

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries in the queue. Ensure that the start date does not exceed the end date by more than 30 days.

  2. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data at an interval of 5 minutes. Every minute has one data point. For example, if you select 1 day at an interval of 1 minute, a total of 1440 data points are plotted on the graph.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the SQL queries in the queue. Ensure that the start date does not exceed the end date by more than 30 days.

Information about the time, user name, request status, query, query label, query duration, memory acquired, rows processed, user type, client type, session ID, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Queries in queue by duration

Identify the queries that are queued for resource allocation. Use query queue patterns and trends to plan capacity. Scale the system as needed to manage expected growth in query volume.

The Average query duration card displays the mean time taken per query during the selected time range.

In the Visualization tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries in queue by duration. Ensure that the start date does not exceed the end date by more than 30 days.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the queries in queue by duration. Ensure that the start date does not exceed the end date by more than 30 days.

Information about the query entry time, user name, query, query label, time in queue, priority, memory requested, pool name, client type, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Requests by query type

Analyze request volumes for a specific duration to assess database performance, identify bottlenecks, and optimize resource use. Review request types to manage resources effectively and ensure the database handles varying loads without slowing down.

The Average query duration card displays the mean time taken per query during the selected time range over the last 24 hours.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:
    • Choose the time duration in minutes, hours, or days.

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the requests by query type. Ensure that the start date does not exceed the end date by more than 30 days.

  2. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data for 1 minute, 5 minutes, 15 minutes, 1 hour, or 1 day. By default, line charts are plotted for minute-wise data at an interval of 5 minutes. Every minute has one data point. For example, if you select 1 day at an interval of 1 minute, a total of 1440 data points are plotted on the graph.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the requests by query type. Ensure that the start date does not exceed the end date by more than 30 days.

Information about the time, user name, request status, query, query label, query duration, memory acquired, rows processed, user type, client type, session ID, statement ID, and transaction ID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.
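As an illustration of the grouping this dashboard reports, the sketch below buckets SQL statements by their leading keyword. The category names and the classification rules are illustrative assumptions, not the product's exact labels.

```python
# Illustrative sketch: bucket SQL requests by query type based on the
# statement's first keyword. Categories are assumptions, not product labels.
def query_type(sql: str) -> str:
    """Classify a SQL statement by its first keyword."""
    head = sql.lstrip().split(None, 1)[0].upper()
    if head == "SELECT":
        return "Query"
    if head in ("INSERT", "UPDATE", "DELETE", "COPY", "MERGE"):
        return "DML"
    if head in ("CREATE", "ALTER", "DROP"):
        return "DDL"
    return "Other"

print(query_type("select count(*) from sales"))  # Query
print(query_type("CREATE TABLE t (id INT)"))     # DDL
```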

System metrics

Expand System metrics in the Dashboards page. The following dashboards are displayed:

Connections

Identify the total database connections over the last 24 hours to understand the database performance, identify bottlenecks, and optimize resource utilization. Find sessions that are stuck or failing to troubleshoot quickly. Detect deadlocked sessions to fix them and enhance database performance.

The Average connection count card displays the average number of active database connections recorded over the selected duration. It provides a quick overview of connection trends, helping users monitor database usage and identify potential performance issues or unusual activity.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:

    • Choose the time duration in minutes, hours, or days.

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the total database connections. Ensure that the start date does not exceed the end date by more than 30 days.

  2. In the View list, choose either Open vs closed or Internal vs user. Choose Open vs closed to view connections that are open vs connections that are closed. Choose Internal vs user to view connections that are internal to the system vs connections specific to users.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the total database connections. Ensure that the start date does not exceed the end date by more than 30 days.

Information about the connection time, connection duration, user name, client host name, session status, connection type, request, is currently running, request duration, transaction ID, statement ID, client type, client label, client OS, and client PID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Filter criteria

You can filter the Connections and Connections by duration dashboards in the System metrics folder based on certain criteria. Choose the Filter icon and filter data using the following criteria:

  1. User name - Choose this option to filter data for the selected user. The User name specifies the name of the user logged in to the database. For internal sessions, this value is NULL.
  2. Transaction ID - Choose this option to filter data by the transaction ID. The Transaction ID is an identifier for the transaction within the session. If a session is active but no transaction has begun, this value is NULL.
  3. Connection duration
    • Choose the required duration from the options available to view the SQL queries for the selected duration.
    • Choose Custom and enter the minimum and maximum time in seconds or minutes to view the SQL queries for the selected duration.
  4. Connection status - Choose Open to view the connections that are open. Choose Closed to view the connections that are closed.
  5. Connection type
    • Choose Internal to view the SQL queries executed by the system.
    • Choose User to view the SQL queries executed by specific users.
  6. Client type - The client type is categorized by the client application. Different client types are displayed depending on the client application. A few examples of client types are given below:
    • JDBC driver - Select this option to view SQL queries executed through the client type JDBC driver.
    • ODBC driver - Select this option to view SQL queries executed through the client type ODBC driver.
    • ADO.NET Driver - Select this option to view SQL queries for the client type ADO.NET driver.
    • vsql - Select this option to view SQL queries for the client type vsql.
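The filter criteria above can be pictured as a conjunction: a row is kept only when it matches every criterion that is set. The sketch below applies such filters to exported connection rows; the field names and record shape are illustrative assumptions, not the product's actual export schema.

```python
# Illustrative sketch: apply the Connections filter criteria to exported
# rows. Field names are assumptions, not the product's export schema.
connections = [
    {"user_name": "dbadmin", "connection_type": "User",
     "client_type": "JDBC driver", "duration_sec": 42, "status": "Open"},
    {"user_name": None, "connection_type": "Internal",
     "client_type": "vsql", "duration_sec": 3, "status": "Closed"},
]

def apply_filters(rows, *, connection_type=None, client_type=None,
                  min_duration=None, status=None):
    """Keep rows that match every criterion that is set (None = no filter)."""
    out = []
    for r in rows:
        if connection_type and r["connection_type"] != connection_type:
            continue
        if client_type and r["client_type"] != client_type:
            continue
        if min_duration is not None and r["duration_sec"] < min_duration:
            continue
        if status and r["status"] != status:
            continue
        out.append(r)
    return out

print(apply_filters(connections, connection_type="User", status="Open"))
```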

Connections by duration

Find the total number of database connections for the selected duration to assess database performance, locate bottlenecks, and optimize resource use.

The Average connection count card displays the average number of active database connections recorded over the selected duration. It provides a quick overview of connection trends, helping users monitor database usage and identify potential performance issues or unusual activity.

In the Visualization tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the SQL queries in the connection by duration. Ensure that the start date does not exceed the end date by more than 30 days.

In the List view tab, the Duration list is displayed. Do one of the following:

  • Choose the time duration in minutes, hours, or days.

  • Choose Custom. From the date pickers, select the start date and end date for which you need to view the SQL queries in the connection by duration. Ensure that the start date does not exceed the end date by more than 30 days.

Information about the connection time, connection duration, user name, client host name, session status, connection type, request, is currently running, request duration, transaction ID, statement ID, client type, client label, client OS, and client PID is displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Data storage

Detect trends in the amount of data stored within the database. Prevent over-provisioning and lower costs from unused storage. Optimize storage allocation to minimize waste and cost.

The Data storage card displays the average data stored for the selected duration over the last month.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:
    • Choose the time duration. Choose one of the following:

      • Month to date (MTD) - the period from the beginning of the current calendar month up to but not including the current date. For example, if today is 10 November, MTD covers data from November 1 to November 9.
      • Last month
      • Quarter to date (QTD) - the period from the first day of the calendar quarter up to the current date. For example, if the current date is November 22, QTD covers data from October 1 to November 22.
      • Last quarter
    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the data stored in the database. Ensure that the start date does not exceed the end date by more than 30 days.

  2. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data by days, months, or quarters. By default, line charts are plotted for day-wise data.

In the List view tab, the date of data storage in the database and the data storage (in GB and TB) are displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.
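The MTD and QTD windows defined above can be computed directly. The sketch below mirrors those definitions (MTD excludes the current date; QTD includes it); `mtd_range` and `qtd_range` are hypothetical helpers.

```python
from datetime import date, timedelta

# Hypothetical helpers mirroring the MTD and QTD definitions above.
def mtd_range(today: date) -> tuple[date, date]:
    """Month to date: first of the month up to, but not including, today."""
    return date(today.year, today.month, 1), today - timedelta(days=1)

def qtd_range(today: date) -> tuple[date, date]:
    """Quarter to date: first day of the calendar quarter up to today."""
    quarter_start_month = 3 * ((today.month - 1) // 3) + 1
    return date(today.year, quarter_start_month, 1), today

# MTD on 10 November covers 1 November through 9 November
print(mtd_range(date(2024, 11, 10)))
# QTD on 22 November covers 1 October through 22 November
print(qtd_range(date(2024, 11, 22)))
```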

Data transfer

Monitor outbound data transfer trends from the database to detect unusual activity. Identify and block unauthorized transfers to prevent data leakage. Timely investigation and mitigation helps you identify and fix irregular transfer patterns that indicate security breaches or unauthorized access.

The Average data transfer card displays the average data that is transferred for the selected duration over the last month.

In the Visualization tab, the Duration list is displayed.

  1. Do one of the following:
    • Choose the time duration. Choose one of the following:

      • Month to date (MTD) - the period from the beginning of the current calendar month up to but not including the current date. For example, if today is 10 November, MTD covers data from November 1 to November 9.
      • Last month
      • Quarter to date (QTD) - the period from the first day of the calendar quarter up to the current date. For example, if the current date is November 22, QTD covers data from October 1 to November 22.
      • Last quarter

    • Choose Custom. From the date pickers, select the start date and end date for which you need to visualize the outbound data transfer trends from the database. Ensure that the start date does not exceed the end date by more than 30 days.

  2. In the Axis Interval list, choose the duration for which you need to view the data in the graph. Choose to view the data by days, months, or quarters. By default, line charts are plotted for day-wise data.

In the List view tab, the date of outbound data transfer from the database and the data transferred (in GB and TB) are displayed. Choose Download to export this information in a spreadsheet. Select the Copy icon to copy the query to the clipboard.

Active users

Identify active users logged into the database over the last 24 hours. Keep the system responsive and maintain a smooth user experience during peak loads. Identify abnormal patterns that may signal security threats. Confirm active sessions belong to authorized users to safeguard the database.

In the List view tab, the user name, super user, roles, and last login time are displayed. Choose Download to export this information in a spreadsheet.