SQL Adapters


This section provides detailed descriptions of new features for SQL adapters.


All SQL Adapters


This section provides detailed descriptions of new features for all SQL adapters.



Creating Indexes for an Existing Table

A synonym for a table in a relational data source can contain declarations in the Access File specifying which columns should have indexes.

When you issue a CREATE FILE command for a table, an index is created in the relational DBMS for the primary key and for each index specified in the Access File.

If you want to create a table without creating all of the indexes at that time, you can omit the index declarations from the Access File and issue the CREATE FILE command to create only the table and the index on the primary key. Then, at a later time, you can add the index declarations to the Access File and issue the CREATE FILE command with the INDEXESONLY phrase. The option of adding the indexes later makes loading data into a relational table much faster.
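As a sketch of this two-step approach (the application and synonym names retail/orders are hypothetical), you would first create the table without the extra indexes, load the data, and then build the remaining indexes:

```
-* Step 1: Access File contains no index declarations yet.
-* Creates the table and the index on the primary key only.
CREATE FILE retail/orders
-* ... load the data into the table ...
-* Step 2: After adding index declarations to the Access File,
-* recreate the primary key index and create all declared indexes.
CREATE FILE retail/orders INDEXESONLY
```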



Syntax: How to Create Indexes for an Existing Table
CREATE FILE app/synonym INDEXESONLY 

where:

app

Is the application folder in which the synonym resides.

synonym

Is the synonym for the existing table.



Reference: Usage Notes for CREATE FILE With INDEXESONLY
  • CREATE FILE always creates an index on the primary key.
  • CREATE FILE with INDEXESONLY recreates the index on the primary key and creates all indexes declared in the Access File.


Optimization of Full Outer Joins

The WebFOCUS join command and conditional join command have a FULL OUTER join option.

A full outer join returns all rows from the source data source and all rows from the target data source. Where values do not exist for the rows in either data source, null values are returned. WebFOCUS substitutes default values on the report output (blanks for alphanumeric columns, the NODATA symbol for numeric columns).

The full outer join is supported only with relational data sources whose engines support this type of join. For those data sources, the WebFOCUS join syntax is optimized (translated to the full outer join SQL syntax supported by the RDBMS). Using this syntax with a data source that does not support full outer joins, or a failure to optimize the request to the engine, produces an error message.



Syntax: How to Specify a Full Outer Join

The following syntax generates a full outer equijoin based on real fields:

JOIN FULL_OUTER hfld1 [AND hfld2 ...] IN table1 [TAG tag1]
     TO {UNIQUE|MULTIPLE} crfld1 [AND crfld2 ...] IN table2 [TAG tag2] [AS joinname]
END

where:

hfld1

Is the name of a field in the host table containing values shared with a field in the cross-referenced table. This field is called the host field.

AND hfld2...

Can be an additional field in the host table. The phrase beginning with AND is required when specifying multiple fields.

  • For relational adapters that support multi-field and concatenated joins, you can specify up to 16 fields. See your adapter documentation for specific information about supported join features.
IN table1

Is the name of the host table.

TAG tag1

Is a tag name of up to 66 characters (usually the name of the Master File), which is used as a unique qualifier for fields and aliases in the host table.

The tag name for the host table must be the same in all the JOIN commands of a joined structure.

TO [UNIQUE|MULTIPLE] crfld1

Is the name of a field in the cross-referenced table containing values that match those of hfld1 (or of concatenated host fields). This field is called the cross-referenced field.

Note: UNIQUE returns only one instance and, if there is no matching instance in the cross-referenced table, it returns null values.

Use the MULTIPLE parameter when crfld1 may have multiple instances in common with one value in hfld1. Note that ALL is a synonym for MULTIPLE, and omitting this parameter entirely is a synonym for UNIQUE.

AND crfld2...

Is the name of a field in the cross-referenced table with values in common with hfld2.

Note: crfld2 may be qualified. This field is only available for adapters that support multi-field joins.

IN table2

Is the name of the cross-referenced table.

TAG tag2

Is a tag name of up to 66 characters (usually the name of the Master File), which is used as a unique qualifier for fields and aliases in cross-referenced tables. In a recursive join structure, if no tag name is provided, all field names and aliases are prefixed with the first four characters of the join name.

The tag name for the host table must be the same in all the JOIN commands of a joined structure.

AS joinname

Is an optional name of up to eight characters that you may assign to the join structure. You must assign a unique name to a join structure if:

  • You want to ensure that a subsequent JOIN command does not overwrite it.
  • You want to clear it selectively later.
  • The structure is recursive.

Note: If you do not assign a name to the join structure with the AS phrase, the name is assumed to be blank. A join without a name overwrites an existing join without a name.

END

Required when the JOIN command is longer than one line. It terminates the command and must be on a line by itself.

The following syntax generates a DEFINE-based full outer join:

JOIN FULL_OUTER deffld WITH host_field ... 
     IN table1 [TAG tag1]
     TO [UNIQUE|MULTIPLE] 
     cr_field IN table2 [TAG tag2] [AS joinname]
END

where:

deffld

Is the name of a virtual field for the host file (the host field). The virtual field can be defined in the Master File or with a DEFINE command.

WITH host_field

Is the name of any real field in the host segment with which you want to associate the virtual field. This association is required to locate the virtual field.

The WITH phrase is required unless the KEEPDEFINES parameter is set to ON and deffld was defined prior to issuing the JOIN command.

To determine which segment contains the virtual field, use the ? DEFINE query after issuing the DEFINE command.

IN table1

Is the name of the host table.

TAG tag1

Is a tag name of up to 66 characters (usually the name of the Master File), which is used as a unique qualifier for fields and aliases in host tables.

The tag name for the host table must be the same in all JOIN commands of a joined structure.

TO [UNIQUE|MULTIPLE] crfld1

Is the name of a real field in the cross-referenced table whose values match those of the virtual field. This must be a real field declared in the Master File.

Note: UNIQUE returns only one instance and, if there is no matching instance in the cross-referenced table, it returns null values.

Use the MULTIPLE parameter when crfld1 may have multiple instances in common with one value of deffld. Note that ALL is a synonym for MULTIPLE, and omitting this parameter entirely is a synonym for UNIQUE.

IN table2

Is the name of the cross-referenced table.

TAG tag2

Is a tag name of up to 66 characters (usually the name of the Master File), which is used as a unique qualifier for fields and aliases in cross-referenced tables. In a recursive joined structure, if no tag name is provided, all field names and aliases are prefixed with the first four characters of the join name.

The tag name for the host file must be the same in all JOIN commands of a joined structure.

AS joinname

Is an optional name of up to eight characters that you may assign to the joined structure. You must assign a unique name to a join structure if:

  • You want to ensure that a subsequent JOIN command does not overwrite it.
  • You want to clear it selectively later.
  • The structure is recursive, and you do not specify tag names.

If you do not assign a name to the joined structure with the AS phrase, the name is assumed to be blank. A join without a name overwrites an existing join without a name.

END

Required when the JOIN command is longer than one line. It terminates the command and must be on a line by itself.

The following syntax generates a full outer conditional join:

JOIN FULL_OUTER FILE table1 AT hfld1 [WITH hfld2] [TAG tag1]
     TO {UNIQUE|MULTIPLE} 
     FILE table2 AT crfld [TAG tag2] [AS joinname]
     [WHERE expression1;]
     [WHERE expression2;
     ...]
END

where:

table1

Is the host Master File.

AT

Links the correct parent (host) segment to the correct child (cross-referenced) segment. The field values named in the AT parameter are not used to match rows; they serve only as segment references.

hfld1

Is the field name in the host Master File whose segment will be joined to the cross-referenced table. The field must reside in the lowest-level segment of its data source that is referenced in the request.

tag1

Is the optional tag name that is used as a unique qualifier for fields and aliases in the host table.

hfld2

Is a table column with which to associate a DEFINE-based conditional JOIN. For a DEFINE-based conditional join, the KEEPDEFINES setting must be ON, and you must create the virtual fields before issuing the JOIN command.

MULTIPLE

Specifies a one-to-many relationship between table1 and table2. Note that ALL is a synonym for MULTIPLE.

UNIQUE

Specifies a one-to-one relationship between table1 and table2. Note that ONE is a synonym for UNIQUE.

Note: The join to UNIQUE will return only one instance of the cross-referenced table, and if this instance does not match based on the evaluation of the WHERE expression, null values are returned.

table2

Is the cross-referenced Master File.

crfld

Is the join field name in the cross-referenced Master File. It can be any field in the segment.

tag2

Is the optional tag name that is used as a unique qualifier for fields and aliases in the cross-referenced table.

joinname

Is the name associated with the joined structure.

expression1, expression2

Are any expressions that are acceptable in a DEFINE FILE command. All fields used in the expressions must lie on a single path.

END

The END command is required to terminate the command and must be on a line by itself.
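As an illustrative sketch of the conditional form (the table names, field names, and WHERE condition below are hypothetical, not taken from the sample data), a full outer conditional join might look like:

```
JOIN FULL_OUTER FILE ORDERS AT ORDER_DATE TAG O1
     TO MULTIPLE
     FILE RATES AT EFFECT_DATE TAG R1 AS J1
     WHERE O1.ORDER_DATE GE R1.EFFECT_DATE;
END
```

The WHERE expression, rather than an equality on shared fields, determines which instances of the cross-referenced table qualify.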



Example: Optimizing a Full Outer Join of Microsoft SQL Server Tables

The following requests generate two Microsoft SQL Server tables and then issue a request against the joined structure. The tables are generated using the wf_retail sample, which you can create using the WebFOCUS - Retail Demo tutorial in the server Web Console.

The following request generates the WF_SALES table. The field ID_PRODUCT will be used in the full outer join command. The generated table will contain ID_PRODUCT values from 2150 to 4000:

TABLE FILE WF_RETAIL_LITE
SUM GROSS_PROFIT_US PRODUCT_CATEGORY PRODUCT_SUBCATEG
BY ID_PRODUCT
WHERE ID_PRODUCT FROM 2150 TO 4000
ON TABLE HOLD AS WF_SALES FORMAT SQLMSS
END

The following request generates the WF_PRODUCT table. The field ID_PRODUCT will be used in the full outer join command. The generated table will contain ID_PRODUCT values from 3000 to 5000:

TABLE FILE WF_RETAIL_LITE
SUM PRICE_DOLLARS PRODUCT_CATEGORY PRODUCT_SUBCATEG PRODUCT_NAME
BY ID_PRODUCT
WHERE ID_PRODUCT FROM 3000 TO 5000
ON TABLE HOLD AS WF_PRODUCT FORMAT SQLMSS
END

The following request issues the JOIN command and displays values from the joined tables:

SET TRACEUSER=ON
SET TRACESTAMP=OFF
SET TRACEOFF=ALL
SET TRACEON = STMTRACE//CLIENT
JOIN FULL_OUTER ID_PRODUCT IN WF_PRODUCT TAG T1 TO ALL ID_PRODUCT IN WF_SALES TAG T2
 
TABLE FILE WF_PRODUCT
PRINT T1.ID_PRODUCT AS 'Product ID' 
PRICE_DOLLARS AS Price
T2.ID_PRODUCT AS 'Sales ID' 
GROSS_PROFIT_US
BY T1.ID_PRODUCT NOPRINT
ON TABLE SET PAGE NOPAGE
END

The trace shows that the full outer join was optimized (translated to SQL) so that SQL Server could process the join:

SELECT   
T1."ID_PRODUCT",  
T1."PRICE_DOLLARS",  
T2."ID_PRODUCT",  
T2."GROSS_PROFIT_US"  
FROM   
( WF_PRODUCT T1  
FULL OUTER JOIN WF_SALES T2  
ON T2."ID_PRODUCT" = T1."ID_PRODUCT" )  
ORDER BY   
T1."ID_PRODUCT";  

The output has a row for each ID_PRODUCT value that is in either table. Rows with ID_PRODUCT values from 2150 to 2167 are only in the WF_SALES table, so the columns from WF_PRODUCT display the NODATA symbol. Rows with ID_PRODUCT values above 4000 are only in the WF_PRODUCT table, so the columns from WF_SALES display the NODATA symbol. Rows with ID_PRODUCT values from 3000 to 4000 are in both tables, so all columns have values, as shown in the following image.



Adding a New Fact To Multi-Fact Synonyms: JOIN AS_ROOT

The JOIN AS_ROOT command adds a new fact table as an additional root to an existing fact-based cluster (star schema). The source Master File has a parent fact segment and at least one child dimension segment. The JOIN AS_ROOT command supports a unique join from a child dimension segment (at any level) to an additional fact parent.



Syntax: How to Add an Additional Parent Segment
JOIN AS_ROOT sfld1 [AND sfld2 ...] IN [app1/]sfile TO UNIQUE tfld1 [AND tfld2 ...] IN [app2/]tfile AS jname    
END

where:

sfld1 [AND sfld2 ...]

Are fields in the child (dimension) segment of the source file that match values of fields in the target file.

[app1/]sfile

Is the source file.

TO UNIQUE tfld1 [AND tfld2 ...]

Are fields in the target file that match values of fields in the child segment of the source file. The join must be unique.

[app2/]tfile

Is the target file.

jname

Is the join name.

END

Is required to end the JOIN command.



Example: Joining AS_ROOT From the WebFOCUS Retail Data Source to an Excel File

The following request joins the product category and product subcategory fields in the WebFOCUS Retail data source to an Excel file named PROJECTED.

To generate the WebFOCUS Retail data source in the Web Console, click Tutorials from the Applications page.

Select WebFOCUS - Retail Demo. Select your configured relational adapter (or select the flat file option if you do not have a relational adapter configured), check Limit Tutorial Data, and then click Create.

The Master File for the Excel File is:

FILENAME=PROJECTED, SUFFIX=DIREXCEL,
 DATASET=app2/projected.xlsx, $
  SEGMENT=PROJECTED, SEGTYPE=S0, $
    FIELDNAME=PRODUCT_CATEGORY, ALIAS='Product  Category', USAGE=A16V, ACTUAL=A16V,
      MISSING=ON,
      TITLE='Product  Category',
      WITHIN='*PRODUCT', $
    FIELDNAME=PRODUCT_SUBCATEGORY, ALIAS='Product     Subcategory', USAGE=A25V, ACTUAL=A25V,
      MISSING=ON,
      TITLE='Product     Subcategory',
      WITHIN=PRODUCT_CATEGORY, $
    FIELDNAME=PROJECTED_COG, ALIAS='              Projected COG', USAGE=P15.2C, ACTUAL=A15,
      MISSING=ON,
      TITLE='              Projected COG', MEASURE_GROUP=PROJECTED,
      PROPERTY=MEASURE,  $
    FIELDNAME=PROJECTED_SALE_UNITS, ALIAS='             Projected Sale Units', USAGE=I9, ACTUAL=A11,
      MISSING=ON,
      TITLE='             Projected Sale Units', MEASURE_GROUP=PROJECTED,
      PROPERTY=MEASURE,  $
 MEASUREGROUP=PROJECTED, CAPTION='PROJECTED', $
 DIMENSION=PRODUCT, CAPTION='Product', $
  HIERARCHY=PRODUCT, CAPTION='Product', HRY_DIMENSION=PRODUCT, HRY_STRUCTURE=STANDARD, $

The following image shows the data in the Excel file.

The following request joins from the wf_retail_product segment of the wf_retail data source to the Excel file as a new root, and reports from both parent segments:

JOIN AS_ROOT PRODUCT_CATEGORY AND PRODUCT_SUBCATEG IN WF_RETAIL
  TO UNIQUE PRODUCT_CATEGORY AND PRODUCT_SUBCATEGORY IN PROJECTED 
  AS J1
END
TABLE FILE WF_RETAIL
SUM PROJECTED_SALE_UNITS REVENUE_US 
BY PRODUCT_CATEGORY
ON TABLE SET PAGE NOPAGE
END

The output is:



Joining From a Multi-Fact Synonym

Multi-parent synonyms are now supported as the source for a join to a single segment in a target synonym.

A join from a multi-parent synonym is subject to the following conditions:



Example: Joining From a Multi-Fact Synonym

The following Master File describes a multi-parent structure based on the WebFOCUS Retail tutorial. The two fact tables wf_retail_sales and wf_retail_shipments are parents of the dimension table wf_retail_product.

FILENAME=WF_RETAIL_MULTI_PARENT, $
  SEGMENT=WF_RETAIL_SHIPMENTS, CRFILE=WFRETAIL/FACTS/WF_RETAIL_SHIPMENTS, CRINCLUDE=ALL,
    DESCRIPTION='Shipments Fact', $
  SEGMENT=WF_RETAIL_SALES, PARENT=., CRFILE=WFRETAIL/FACTS/WF_RETAIL_SALES, CRINCLUDE=ALL,
    DESCRIPTION='Sales Fact', $
  SEGMENT=WF_RETAIL_PRODUCT, CRFILE=WFRETAIL/DIMENSIONS/WF_RETAIL_PRODUCT, CRINCLUDE=ALL,
    DESCRIPTION='Product Dimension', $
   PARENT=WF_RETAIL_SHIPMENTS, SEGTYPE=KU,
    JOIN_WHERE=WF_RETAIL_SHIPMENTS.ID_PRODUCT EQ WF_RETAIL_PRODUCT.ID_PRODUCT;, $
   PARENT=WF_RETAIL_SALES, SEGTYPE=KU,
    JOIN_WHERE=WF_RETAIL_SALES.ID_PRODUCT EQ WF_RETAIL_PRODUCT.ID_PRODUCT;, $

The following image shows the joins between these tables in the Synonym Editor of the Data Management Console (DMC).

The following request joins the product segment to the dimension table wf_retail_vendor based on the vendor ID and issues a request against the joined structure:

JOIN ID_VENDOR IN WF_RETAIL_MULTI_PARENT TO ID_VENDOR IN WF_RETAIL_VENDOR AS J1
TABLE FILE WF_RETAIL_MULTI_PARENT
SUM COGS_US DAYSDELAYED
BY PRODUCT_CATEGORY
BY VENDOR_NAME
WHERE PRODUCT_CATEGORY LT 'S'
ON TABLE SET PAGE NOPAGE
END

The output is:



Improved Handling of a Star Schema With Fan Trap

When a star schema contains a segment with aggregated facts and a lower-level segment with the related detail-level facts, a request that performs aggregation on both levels and returns them sorted by the higher level can experience the multiplicative effect. This means that the fact values that are already aggregated may be re-aggregated and, therefore, return multiplied values.

When the adapter detects the multiplicative effect, it turns optimization off in order to handle the request processing and circumvent the multiplicative effect. However, performance is degraded when a request is not optimized.

A new context analysis process has been introduced in this release that detects the multiplicative effect and generates SQL script commands that retrieve the correct values for each segment context. These scripts are then passed to the RDBMS as subqueries in an optimized SQL statement.

To activate the context analysis feature, click Change Common Adapter Settings on the Adapters page of the Web Console. Then select Yes for the FCA parameter in the Miscellaneous Settings section and click Save, as shown in the following image.



Support for Discontiguous Key Definition in the Access File

In prior releases, fields participating in the key of a table had to be described first in the Master File. The number of key fields was identified by the KEYS=n attribute in the Access File, where n is the number of key fields.

While this method is still supported by the relational adapters, it has been deprecated. In Version 7 Release 7.06, new syntax is available that does not require any reordering of the fields in the Master File. If an Access File with the KEYS=n syntax is opened in the Synonym Editor, the Synonym Editor will convert it to the new syntax.



Syntax: How to Specify Discontiguous Keys in the Access File

Key fields in the Master File can be listed in the order established by the relational DBMS or in any order that is convenient. In the Access File, identify the key fields with the following syntax:

KEY=fld1/fld2/.../fldn

where:

fld1, fld2,...,fldn

Are the fields that participate in the key.



Example: Specifying Discontiguous Keys in the Access File

The following is a Master File for an Oracle table:

FILENAME=BMKEY, SUFFIX=SQLORA  , $
  SEGMENT=BMKEY, SEGTYPE=S0, $
    FIELDNAME=F1INT, ALIAS=F1INT, USAGE=I11, ACTUAL=I4, $
    FIELDNAME=F2CHAR4, ALIAS=F2CHAR4, USAGE=A4, ACTUAL=A4,
      MISSING=ON, $
    FIELDNAME=F3INT, ALIAS=F3INT, USAGE=I11, ACTUAL=I4,
      MISSING=ON, $
    FIELDNAME=F4CHAR6, ALIAS=F4CHAR6, USAGE=A6, ACTUAL=A6,
      MISSING=ON, $

The following Access File identifies F4CHAR6 and F1INT as the key fields, with F4CHAR6 as the high-order portion of the key, even though it is not first in the Master File:

SEGNAME=BMDKEY_KEY_FLDLST, 
   TABLENAME=R729999D.BMDKEY, 
   CONNECTION=<local>, 
   KEY=F4CHAR6/F1INT, $


Optimization of Simplified Character Functions

Simplified character functions have streamlined parameter lists, similar to those used by SQL functions. They are optimized against a wide variety of relational data sources. The simplified character functions introduced in this release are:

The SQL Optimization Report provides optimization information for each function by adapter.



Optimization of Simplified Date and Date-Time Functions

Simplified date and date-time functions have streamlined parameter lists, similar to those used by SQL functions. They are optimized against a wide variety of relational data sources. The simplified date and date-time functions introduced in this release are:

The SQL Optimization Report provides optimization information for each function by adapter.



Optimization of Legacy Character and Numeric Functions

The following functions have been optimized for most SQL adapters, as reflected in the SQL Optimization Report.



Optimization of Date and Date-Time Functions

The following functions are optimized as reflected in the SQL Optimization Report.



Automatically Generating FORMAT SQL_SCRIPT for HOLD Files

This feature implements a SET command that generates HOLD files as SQL_SCRIPT files when no format is specified on the ON TABLE HOLD command.

Note: If the adapter cannot generate SQL requests for a complex TABLE request (due to expression complexity or other reasons), the system automatically downgrades the HOLDFORMAT setting to BINARY for that request only. The procedure still runs, but more slowly (with an extract to disk). This feature can be used for application conversions to SQL_SCRIPT, such as BCEE-generated or MFACT-generated procedures.
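For example, assuming the setting involved is HOLDFORMAT (as the note above suggests), a session could default unformatted HOLD files to SQL_SCRIPT; the request and file names below are illustrative:

```
SET HOLDFORMAT = SQL_SCRIPT
TABLE FILE WF_RETAIL_LITE
SUM GROSS_PROFIT_US
BY PRODUCT_CATEGORY
ON TABLE HOLD AS PROFCAT
END
-* PROFCAT is generated as an SQL_SCRIPT file,
-* even though no FORMAT phrase appears on the HOLD command.
```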



Testing DBMS Case Sensitivity

The Web Console adapter configuration page for SQL-based adapters features a button that enables you to test the case-sensitivity setting of the targeted DBMS.



Amper Variables for SQLTAG and ABORTREPORT Settings

Variables that hold the values of SQLTAG and ABORTREPORT settings are available. You can use the following commands to find their values:

-TYPE &engine SQLTAG
-TYPE &engine ABORTREPORT

where:

engine

Is the adapter engine tag. For example, DB2 for DB2, ORA for Oracle, MSS for Microsoft SQL Server, DBC for Teradata, and so on.
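For example, substituting the Oracle engine tag into the commands above:

```
-TYPE &ORA SQLTAG
-TYPE &ORA ABORTREPORT
```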



Extended Bulk Load Support for the ON TABLE HOLD Clause

When using HOLD with an SQLengine FORMAT, a new option enables you to use a bulk load procedure to load the table.



Optimization of Simplified Character Functions

The following simplified functions are optimized to SQL:

The SQL Optimization Report provides optimization information for each function by adapter.

Note: Simplified functions are easier to use and are likely to be optimized in a wider range of engines than legacy functions.


Adapter for DB2

This section provides detailed descriptions of new features for the Adapter for DB2.



Case-Insensitive Filtering

Setting server collation to case-insensitive (either as SET COLLATION=SRV_CI or under the NLS configuration) causes the underlying IBM i (i5) DB2 database to perform case-insensitive comparisons for local IBM i DB2 connections.
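As a sketch (the table and field names below are hypothetical), a case-insensitive selection test could be written as:

```
SET COLLATION = SRV_CI
-* With SRV_CI collation, the test below is evaluated case-insensitively
-* by the underlying DB2, so 'Smith' and 'SMITH' also qualify.
TABLE FILE EMPLOYEES
PRINT LAST_NAME
WHERE LAST_NAME EQ 'smith'
END
```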


Adapter for C9 INC

The Adapter for C9 INC has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.

This adapter provides read access to Salesforce data stored in C9 and supports C9 temporal features by generating fields in the Master File with temporal properties, such as the following TREND virtual fields:

DEFINE DAILY_TREND/YYMD WITH ID TEMPORAL_PROPERTY 
TREND=DB_EXPR(INTERVAL '1' DAY); TITLE='Daily Trend', $ 
DEFINE WEEKLY_TREND/YYMD WITH ID TEMPORAL_PROPERTY 
TREND=DB_EXPR(INTERVAL '1' WEEK); TITLE='Weekly Trend', $ 
DEFINE MONTHLY_TREND/YYMD WITH ID TEMPORAL_PROPERTY 
TREND=DB_EXPR(INTERVAL '1' MONTH); TITLE='Monthly Trend', $ 
DEFINE QUARTERLY_TREND/YYMD WITH ID TEMPORAL_PROPERTY 
TREND=DB_EXPR(INTERVAL '1' QUARTER); TITLE='Quarterly Trend', $ 
DEFINE YEARLY_TREND/YYMD WITH ID TEMPORAL_PROPERTY 
TREND=DB_EXPR(INTERVAL '1' YEAR); TITLE='Yearly Trend', $ 

This metadata enables the adapter to optimize a TABLE FILE predicate condition as a C9 temporal trending clause at report time. For example:

TABLE FILE repro_nfs/OPPORTUNITY 
SUM AMOUNT EXPECTEDREVENUE 
BY DAILY_TREND 
WHERE 
DAILY_TREND FROM '2014-09-01' 
TO '2014-09-08'
END 

Is optimized as:

TREND FROM DATE '2014-09-01' 
TO DATE '2014-09-08' BY INTERVAL '1' DAY 
SELECT CAST(TrendDate() AS DATE), 
SUM(T1."Amount"), 
SUM(T1."ExpectedRevenue") 
FROM ads.Opportunity T1 
GROUP BY 
CAST(TrendDate() AS DATE) 
ORDER BY 
CAST(TrendDate() AS DATE); 
A request with an equality test on a TREND field, such as:

TABLE FILE OPPORTUNITY 
SUM AMOUNT 
WHERE DAILY_TREND EQ '2014-01-01' 
END

Results in the following SQL query:

AS OF DATE '2014-01-01' SELECT SUM(T1."Amount") FROM 
ads.Opportunity T1; 

Note: The condition WHERE xxx_TREND IS MISSING will be optimized as a C9 AS OF 'ALL' clause.

A report resulting from a TABLE query containing a BY phrase and a FROM-TO test on a TREND field shows how the selected totals progressed through the time period specified by the FROM-TO clause, at the temporal interval specified in the definition of the TREND field.


Adapter for Greenplum

This section provides detailed descriptions of new features for the Adapter for Greenplum.



Extended Bulk Load Support

DataMigrator supports Extended Bulk Load for EMC Greenplum using the Greenplum Load Utility, gpload. This provides a faster load option than insert/update.

Additionally, it lets you set a non-default column delimiter on a group of tables, and provides better control when loading data from a CSV data file by letting you specify custom characters as the new-line separator.



Unicode Support

The Adapter for Greenplum can read and write data in Unicode.


Adapters for Greenplum, Hyperstage (PG), and PostgreSQL: A256V Mapping for TEXT Fields

The Adapters for Greenplum, Hyperstage (PG), and PostgreSQL support A256V data type mapping using the following setting:

ENGINE SQLengine SET CONVERSION LONGCHAR ALPHA n

The setting affects the mapping of all native fields that have a data type of [VAR]CHAR(32767) or TEXT. The value of n ranges from 1 to 32767. The default value is 256.

TEXT (with no length n) can be specified in place of ALPHA, when necessary.
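For example, the following sketch maps native TEXT columns to A1024V. The engine tag shown (SQLPSTGR) is an assumption; substitute the tag of your configured adapter:

```
-* Engine tag is an assumption; use the tag of your configured adapter
ENGINE SQLPSTGR SET CONVERSION LONGCHAR ALPHA 1024
```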


Adapter for Hive

The Adapter for Hive has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.



Bulk Loading With DataMigrator

DataMigrator can create Hive metadata and load it into Hadoop using the Bulk Load via a Disk File load type.


Adapter for Hyperstage

This section provides detailed descriptions of new features for the Adapter for Hyperstage. It also describes various performance and memory management improvements for Hyperstage.



Performance Improvements

The increased use of parallelization in joins and sorting has improved the ability to run concurrent queries.



Memory Management Improvements

Hyperstage now uses memory more efficiently. In addition, various changes have been made which reduce the amount of memory needed.



Hyperstage ODBC Adapter

A new ODBC adapter for MySQL-based Hyperstage supplements the existing JDBC adapter on the Windows platform.



Using Denormalized Dimension Columns for Lookup in Quick Copy

When using Quick Copy for a cluster join describing a collection of tables that comprise a star schema with a denormalized Hyperstage target, character columns in dimension tables are created with the Hyperstage lookup comment attribute. This provides improved performance for sorts and filters on these columns.



DataMigrator and Quick Copy Support for Unicode

Unicode data can be loaded to a Hyperstage target using DataMigrator or Quick Copy.


Adapter for Hyperstage (PG)

The Adapter for Hyperstage using PostgreSQL (PG) has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.



Capture Column Comments as DESCRIPTION Attribute in a Synonym

Column Comments are supported as the synonym DESCRIPTION attribute for PostgreSQL-based Hyperstage, and for all other JDBC-based engines that return Column Comments information in a JDBC Metadata call.


Adapter for i Access

The Adapter for i Access has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.


Adapter for Informix: Support for SDK 4.10.x

The Adapter for Informix supports the Informix SDK 4.10.x.


JDBC Adapters: COMMANDTIMEOUT Setting

The SET COMMANDTIMEOUT command is introduced for JDBC-based adapters whose drivers support the setQueryTimeout() call:

SQL SQLengine SET COMMANDTIMEOUT [n] 

where:

n

Is an optional number of seconds. If n is omitted (the default), the Java call Statement.setQueryTimeout() is not issued at all. A value of zero (0) means no timeout limit.

Any value of 0 or greater causes Statement.setQueryTimeout() to be executed.
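For example, the following sketch limits queries to 60 seconds. The engine tag shown (SQLJDBC) is an assumption; substitute the tag of your configured JDBC-based adapter:

```
-* Engine tag is an assumption; use your configured adapter's tag
SQL SQLJDBC SET COMMANDTIMEOUT 60
```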


Adapter for Microsoft SQL Server

The following features are supported for Microsoft SQL Server in this release.



ODBC Adapter for Microsoft SQL Server

The ODBC-based adapter is introduced for Microsoft SQL Server and is available as a named adapter on the Web Console in the SQL folder.



Capture REMARKS/DESCRIPTION/TITLE of Microsoft SQL Server Views in Synonym

The Adapter for Microsoft SQL Server can retrieve and store view properties (if present) in a Master File as the following attributes:


Adapters for Microsoft SQL Server and Oracle Support HINT Clause

The Adapters for Microsoft SQL Server and Oracle can speed up the generated SELECT statement by passing a HINT clause, specified through the following SQL SET command:

SQL SQLengine SET HINT hint_syntax

where:

hint_syntax

Is the DBMS-proprietary HINT statement, supplied with comment marks. For example:

SQL SQLORA SET HINT /*+ USE_HASH */

sets the USE_HASH hint in Oracle, and:

SQL SQLORA SET HINT

(issued with no hint syntax) removes the hint that was previously set.

The TABLE command places the hint at the end of the generated query for Microsoft SQL Server, or after the SELECT keyword for Oracle. The hint (or hint combination) is applied only when the adapter constructs a single SELECT statement; it is not applied in the case of a FOCUS-managed join.

You are responsible for supplying valid hint (or hint combination) syntax.


Adapter for MongoDB

The Adapter for MongoDB has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.

This adapter is for the NoSQL MongoDB database, and requires the third-party Unity JDBC Driver.


Adapter for MySQL Extended Bulk Load Support

DataMigrator and Quick Copy, with a MySQL database target, now support the load type Extended Bulk Load, which provides faster load times than insert/update processing.


Adapter for Netezza

The following features were added for the Adapter for Netezza.



Adapter for Netezza: Capture Column Comments as Synonym DESCRIPTION Attribute

The Adapter for Netezza supports column and table comments. They can be used as titles during the Create Synonym process.



Adapter for Netezza Supports Password Passthru

The Adapter for Netezza now supports Password Passthru authentication.


Adapter for Oracle

This section provides detailed descriptions of new features for the Adapter for Oracle.



Create Synonym Support for Materialized View

The Adapter for Oracle recognizes the Oracle synonym for materialized view as a candidate and presents it among the available objects on the Web Console Create Synonym page.



Support for Version 12c (Cloud)

Support has been added for Oracle Version 12c (Cloud).



Support for 12c Extended String Size

The Adapter for Oracle supports Oracle 12c extended-string data types. [N]VARCHAR2(32767) is supported, as described in the SQL Data Types Report.


Adapter for Oracle TimesTen

The Adapter for Oracle TimesTen has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.


Adapter for SAP HANA

The following features have been added for the Adapter for SAP HANA.



Adapter for SAP HANA DB

The Adapter for SAP HANA Database has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.

This adapter provides greater flexibility for SQL optimization than the generic JDBC version used in previous releases.



SAP HANA DB Metadata For Different Types of Views

The Adapter for SAP HANA can now retrieve metadata and create Synonyms for different types of HANA DB views:

  • Views
  • Calculation Views
  • OLAP Views
  • Hierarchy Views
  • Join Views


Adapter for SAP HANA ODBC

The ODBC-based adapter is introduced for SAP HANA and is available as a named adapter on the Web Console in the SQL folder. This adapter (available on Windows 64-bit) supports SSO with Kerberos security.


Adapter for Sybase: Support for Connection Name

Connection Name has been added as a configuration parameter for the Adapter for Sybase.


Adapter for Teradata

In this section:

This section provides detailed descriptions of new features for the Adapter for Teradata.



Extended Bulk Load for Unicode Data

The Adapter for Teradata configured with CLI Teradata Client supports the loading of UTF8-encoded data through Extended Bulk Load.



Support for Version 14.10

The Teradata CLI and ODBC adapters support Read/Write access to Teradata Version 14.10.



Support for Version 15.0

The Teradata CLI and ODBC adapters support Read/Write access to Teradata Version 15.0.



Support for Reading PERIOD and INTERVAL Data Types

The Adapter for Teradata now supports read access to the PERIOD and INTERVAL data types.

The PERIOD data type is mapped to Alphanumeric (An), where n = 24 to 60, depending on the native PERIOD format.

The INTERVAL data type (all formats) is mapped as An, where n = 2 to 21, depending on the INTERVAL format.



Support for Long Table and View Names

Teradata introduced Extended Object Names (EON) in Version 14.10. The Adapter for Teradata now supports table and view names (which can contain multi-byte characters) up to 128 bytes in length.


Adapter for Vertica DB

The JDBC-based native Adapter for Vertica (SQLVRT) has been introduced in this release and is available as a named adapter on the Web Console in the SQL folder.

