Features Added in FOCUS 7.7.06

In this section:

The following features were added as of FOCUS 7.7.06.

Preventing Visual Overflow

How to:

Reference:

At times, the size defined for the USAGE format of a field may be too small to fit the actual data to be displayed. In previous releases, this scenario would cause asterisks (***) to display in the report output in place of the actual value. Situations that cause this scenario include:

Syntax: How to Prevent Visual Overflow

SET EXTENDNUM = {ON|OFF|AUTO}

where:

ON

Displays all numbers in full, regardless of the USAGE format defined.

OFF

Displays asterisks when the value does not fit in the space allotted by the USAGE format. This is the legacy behavior.

AUTO

Applies an ON or OFF setting based on output format and SQUEEZE settings, as shown in the following table.

Format                        SQUEEZE Setting  EXTENDNUM
------                        ---------------  ---------
PDF, PS, DHTML, PPT, PPTX     ON               ON
PDF, PS, DHTML, PPT, PPTX     OFF              OFF
HTML, EXL2K, XLSX             N/A              ON
BINARY, ALPHA                 N/A              OFF
WP, other delimited formats   N/A              OFF

AUTO is the default value.
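
For example, the following is a minimal sketch, assuming the GGSALES data source. The SMALLDOL virtual field and its deliberately narrow I4 format are hypothetical and are used only to force an overflow. With EXTENDNUM OFF, summed values wider than four digits would display as asterisks; with the setting below, they display in full:

SET EXTENDNUM = ON
DEFINE FILE GGSALES
SMALLDOL/I4 = DOLLARS;
END
TABLE FILE GGSALES
SUM SMALLDOL
BY REGION
END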

Reference: Usage Notes for SET EXTENDNUM

Invoking SET EXTENDNUM=ON may change the report layout in the following ways.

Changes in report layout:

  • With styled formats (PDF, PS, DHTML, HTML, EXL2K, XLSX, PPT, PPTX), the width of the report is expanded to accommodate the width of the data columns. This may change the placement of the overall report on the defined page and in some instances cause columns to overflow to a new page.
    • For PDF, DHTML, PPT, and PPTX, the setting SQUEEZE ON will ensure that the alignment within each column is retained.
    • For HTML, EXL2K, and XLSX, the alignment will automatically be retained regardless of SQUEEZE settings.
  • With unstyled formats (WP), the columns are not adjusted to fit the new values, which may cause misalignment of data columns.

Changed behavior in operating systems where the defined number format is not supported:

  • Integers larger than 2G (2,147,483,647, the largest 32-bit signed integer) cannot display on 32-bit machines. Therefore, such integers on these machines will display incorrect values.
  • Floating-point numbers (types F and D) that require more than 31 digits to print correctly will continue to display asterisks. For example, a field with usage D33.30 and a value of 10.0 will print as asterisks because it needs 30 zeros to the right of the decimal point and two digits to the left.

Using Applications

An application is a platform-independent repository for a group of related components, such as procedures, Master and Access Files, data files, HTML files, PDF files, and image files. It provides a way to confer a unique identity on the application components and facilitates the sharing of components across applications in an organized manner. This construct also simplifies the process of moving a user application from one platform to another.

These components are physically grouped together on an application-by-application basis for runtime execution. This physical grouping can be within an application under a common root or a mapping to an application anywhere in the file system. The physical application or mapped name is referred to as the application name in this document. A comprehensive set of application (APP) commands are provided to control or manipulate the application components, as well as to facilitate applications that can be written and deployed to any platform.
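
As an illustration only (SALESAPP is a hypothetical application name), a typical sequence creates an application, places it on the search path ahead of BASEAPP, and directs HOLD output to it:

APP CREATE SALESAPP
APP PATH SALESAPP BASEAPP
APP HOLD SALESAPP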

The physical location of an application and its components is determined by a configuration parameter called approot. This parameter is set at installation time and stored in the configuration file, edaserve.cfg. On z/OS, the EDASERVE configuration file must be a member in a PDS allocated to DDNAME ERRORS.

On z/OS, the following is a list of all application data sets automatically created for an application, where approot is USERID.APPS and the application name is BASEAPP:

APP CREATE BASEAPP
USERID.APPS.BASEAPP.ACCESS.DATA  
USERID.APPS.BASEAPP.ETG.DATA     
USERID.APPS.BASEAPP.FOCCOMP.DATA 
USERID.APPS.BASEAPP.FOCEXEC.DATA 
USERID.APPS.BASEAPP.FOCSTYLE.DATA
USERID.APPS.BASEAPP.GIF.DATA     
USERID.APPS.BASEAPP.HTML.DATA    
USERID.APPS.BASEAPP.MAINTAIN.DATA
USERID.APPS.BASEAPP.MASTER.DATA  
USERID.APPS.BASEAPP.SQL.DATA     
USERID.APPS.BASEAPP.WINFORMS.DATA

To reference an application file in a FOCUS command, use the syntax appname/filename if the file is not first in the application path. For example:

TABLE FILE baseapp/GGSALES
...

No allocation or definition is needed for referencing the standard file types. Other types of files that you may create within an application must be allocated or defined before being used. FOCUS files can be defined with a USE command.
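
For example, the following is a minimal sketch on z/OS, assuming a hypothetical FOCUS data set named USERID.APPS.SALES.FOCUS described by a SALES Master File. The data set is allocated to a ddname and then identified to FOCUS with a USE command:

DYNAM ALLOC FILE SALESDB DA USERID.APPS.SALES.FOCUS SHR REU
USE
SALESDB AS SALES
END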

For complete information about applications and APP commands, see the Developing Applications manual.

Simplified Character Functions

In this section:

Simplified character functions have streamlined parameter lists, similar to those used by SQL functions. In some cases, these simplified functions provide slightly different functionality than previous versions of similar functions.

The simplified functions do not have an output argument. Each function returns a value that has a specific data type.

When used in a request against a relational data source, these functions are optimized (passed to the RDBMS for processing).

Note: The simplified character functions are not supported in Maintain.

CHAR_LENGTH: Returning the Length in Characters of a String

How to:

The CHAR_LENGTH function returns the length, in characters, of a string. In Unicode environments, this function uses character semantics, so the length in characters may not be the same as the length in bytes. If the string includes trailing blanks, they are counted in the returned length. Therefore, if the format of the source string is An, the returned value will always be n.

Syntax: How to Return the Length of a String in Characters

CHAR_LENGTH(source_string)

where:

source_string

Alphanumeric

Is the string whose length is returned.

The data type of the returned length value is Integer.

Example: Returning the Length of a String

The following request against the EMPLOYEE data source creates a virtual field named LASTNAME of type A15V that contains the LAST_NAME with the trailing blanks removed. It then uses CHAR_LENGTH to return the number of characters.

DEFINE FILE EMPLOYEE
LASTNAME/A15V = RTRIM(LAST_NAME);
END
TABLE FILE EMPLOYEE
SUM LAST_NAME NOPRINT AND COMPUTE
NAME_LEN/I3 = CHAR_LENGTH(LASTNAME);
BY LAST_NAME
ON TABLE SET PAGE NOPAGE
END

The output is:

   LAST_NAME        NAME_LEN    
  ---------        --------      
  BANNING                 7
  BLACKWOOD               9
  CROSS                   5
  GREENSPAN               9
  IRVING                  6
  JONES                   5
  MCCOY                   5
  MCKNIGHT                8
  ROMANS                  6
  SMITH                   5
  STEVENS                 7

DIGITS: Converting a Number to a Character String

How to:

Reference:

Given a number, DIGITS converts it to a character string of the specified length. The format of the field that contains the number must be Integer.

Syntax: How to Convert a Number to a Character String

DIGITS(number,length)

where:

number

Integer

Is the number to be converted, stored in a field with data type Integer.

length

Integer between 1 and 10

Is the length of the returned character string. If length is longer than the number of digits in the number being converted, the returned value is padded on the left with zeros. If length is shorter than the number of digits in the number being converted, the returned value is truncated on the left.

Example: Converting a Number to a Character String

The following request converts -123.45 and BANK_ACCT to character strings:

DEFINE FILE EMPLOYEE
MEAS1/I8=-123.45;
DIG1/A8=DIGITS(MEAS1, 8) ;
DIG2/A9=DIGITS(BANK_ACCT, 9) ;
END
TABLE FILE EMPLOYEE
PRINT MEAS1 DIG1
BANK_ACCT DIG2
BY LAST_NAME
ON TABLE SET PAGE NOPAGE
END

The output is:

LAST_NAME           MEAS1  DIG1      BANK_ACCT  DIG2     
---------           -----  ----      ---------  ----     
BANNING              -123  00000123     160633  000160633
BLACKWOOD            -123  00000123  122850108  122850108
CROSS                -123  00000123  163800144  163800144
GREENSPAN            -123  00000123             000000000
IRVING               -123  00000123  819000702  819000702
JONES                -123  00000123   40950036  040950036
MCCOY                -123  00000123             000000000
MCKNIGHT             -123  00000123  136500120  136500120
ROMANS               -123  00000123             000000000
SMITH                -123  00000123             000000000
                     -123  00000123             000000000
STEVENS              -123  00000123             000000000

Reference: Usage Notes for DIGITS

  • Only I format numbers will be converted. D, P, and F formats generate error messages and must be converted to I format before using the DIGITS function, as shown in the sketch after this list. The largest number that can be converted is 2G (2,147,483,647).
  • Negative integers are turned into positive integers.
  • Integer formats with decimal places are truncated.
  • DIGITS is not supported in Dialogue Manager.
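
For example, the following is a minimal sketch, assuming the EMPLOYEE data source, whose CURR_SAL field has a D (floating-point) format. The value is first assigned to an integer field, truncating the decimal portion, and DIGITS is then applied to that field (the SAL_INT and SAL_DIG field names are hypothetical):

DEFINE FILE EMPLOYEE
SAL_INT/I9 = CURR_SAL;
SAL_DIG/A9 = DIGITS(SAL_INT, 9);
END
TABLE FILE EMPLOYEE
PRINT CURR_SAL SAL_DIG
BY LAST_NAME
ON TABLE SET PAGE NOPAGE
END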

LOWER: Returning a String With All Letters Lowercase

How to:

The LOWER function takes a source string and returns a string of the same data type with all letters translated to lowercase.

Syntax: How to Return a String With All Letters Lowercase

LOWER(source_string)

where:

source_string

Alphanumeric

Is the string to convert to lowercase.

The returned string is the same data type and length as the source string.

Example: Converting a String to Lowercase

In the following request against the EMPLOYEE data source, LOWER converts the LAST_NAME field to lowercase and stores the result in LOWER_NAME:

TABLE FILE EMPLOYEE
PRINT LAST_NAME AND COMPUTE
LOWER_NAME/A15 = LOWER(LAST_NAME);
ON TABLE SET PAGE NOPAGE
END

The output is:

  LAST_NAME        LOWER_NAME           
  ---------        ----------                   
  STEVENS          stevens        
  SMITH            smith          
  JONES            jones          
  SMITH            smith          
  BANNING          banning        
  IRVING           irving         
  ROMANS           romans         
  MCCOY            mccoy          
  BLACKWOOD        blackwood      
  MCKNIGHT         mcknight       
  GREENSPAN        greenspan      
  CROSS            cross    

LPAD: Left-Padding a Character String

How to:

Reference:

LPAD uses a specified character and output length to return a character string padded on the left with that character.

Syntax: How to Pad a Character String on the Left

LPAD(string, out_length, pad_character) 

where:

string

Fixed length alphanumeric

Is a string to pad on the left side.

out_length

Integer

Is the length of the output string after padding.

pad_character

Fixed length alphanumeric

Is a single character to use for padding.

Example: Left-Padding a String

In the following request against the GGSALES data source, LPAD left-pads the CATEGORY field with @ symbols:

DEFINE FILE GGSALES
LPAD1/A25 = LPAD(CATEGORY,25,'@');
END
TABLE FILE GGSALES
SUM UNITS LPAD1
BY CATEGORY
ON TABLE SET PAGE NOPAGE
END

The output is:

Category     Unit Sales  LPAD1
--------     ----------  -----
Coffee          1382715  @@@@@Coffee
Food            1394536  @@@@@Food
Gifts            934962  @@@@@Gifts

Reference: Usage Notes for LPAD

  • To use the single quotation mark as the padding character, you must double it and enclose the two single quotation marks within single quotation marks, for example, LPAD(COUNTRY, 20, ''''), as shown in the sketch after this list. You can use an amper variable in quotation marks for this parameter, but you cannot use a field, virtual or real.
  • Input can be fixed or variable length alphanumeric.
  • Output, when optimized to SQL, will always be data type VARCHAR.
  • If the output is specified as shorter than the original input, the original data will be truncated, leaving only the padding characters. The output length can be specified as a positive integer or an unquoted &variable (indicating a numeric).
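
The following is a minimal sketch of the quoting technique described in the first note, using the GGSALES CATEGORY field and a single quotation mark as the padding character (the QPAD field name is hypothetical):

DEFINE FILE GGSALES
QPAD/A20 = LPAD(CATEGORY, 20, '''');
END
TABLE FILE GGSALES
SUM UNITS QPAD
BY CATEGORY
ON TABLE SET PAGE NOPAGE
END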

LTRIM: Removing Blanks From the Left End of a String

How to:

The LTRIM function removes all blanks from the left end of a string.

Syntax: How to Remove Blanks From the Left End of a String

LTRIM(source_string)

where:

source_string

Alphanumeric

Is the string to trim on the left.

The data type of the returned string is AnV, with the same maximum length as the source string.

Example: Removing Blanks From the Left End of a String

In the following request against the MOVIES data source, the DIRECTOR field is right-justified and stored in the RDIRECTOR virtual field. Then LTRIM removes leading blanks from the RDIRECTOR field:

DEFINE FILE MOVIES
RDIRECTOR/A17 = RJUST(17, DIRECTOR, 'A17');
END
TABLE FILE MOVIES
PRINT RDIRECTOR AND
COMPUTE
TRIMDIR/A17 = LTRIM(RDIRECTOR);
WHERE DIRECTOR CONTAINS 'BR'
ON TABLE SET PAGE NOPAGE
END

The output is:

  RDIRECTOR          TRIMDIR     
  ---------          -------                   
        ABRAHAMS J.  ABRAHAMS J.      
          BROOKS R.  BROOKS R.        
        BROOKS J.L.  BROOKS J.L. 

POSITION: Returning the First Position of a Substring in a Source String

How to:

The POSITION function returns the first position (in characters) of a substring in a source string.

Syntax: How to Return the First Position of a Substring in a Source String

POSITION(pattern, source_string)

where:

pattern

Alphanumeric

Is the substring whose position you want to locate. The string can be as short as a single character, including a single blank.

source_string

Alphanumeric

Is the string in which to find the pattern.

The data type of the returned value is Integer.

Example: Returning the First Position of a Substring

In the following request against the EMPLOYEE data source, POSITION determines the position of the first capital letter I in LAST_NAME and stores the result in I_IN_NAME:

TABLE FILE EMPLOYEE
PRINT LAST_NAME AND COMPUTE
I_IN_NAME/I2 = POSITION('I', LAST_NAME);
ON TABLE SET PAGE NOPAGE
END

The output is:

  LAST_NAME        I_IN_NAME                     
  ---------        ---------                           
  STEVENS                  0
  SMITH                    3
  JONES                    0
  SMITH                    3
  BANNING                  5
  IRVING                   1
  ROMANS                   0
  MCCOY                    0
  BLACKWOOD                0
  MCKNIGHT                 5
  GREENSPAN                0
  CROSS                    0

RPAD: Right-Padding a Character String

How to:

Reference:

RPAD uses a specified character and output length to return a character string padded on the right with that character.

Syntax: How to Pad a Character String on the Right

RPAD(string, out_length, pad_character) 

where:

string

Alphanumeric

Is a string to pad on the right side.

out_length

Integer

Is the length of the output string after padding.

pad_character

Alphanumeric

Is a single character to use for padding.

Example: Right-Padding a String

In the following request against the GGSALES data source, RPAD right-pads the CATEGORY field with @ symbols:

DEFINE FILE GGSALES
RPAD1/A25 = RPAD(CATEGORY,25,'@');
END
TABLE FILE GGSALES
SUM UNITS RPAD1
BY CATEGORY
ON TABLE SET PAGE NOPAGE
END

The output is:

Category     Unit Sales  RPAD1           
--------     ----------  -----           
Coffee          1382715  Coffee     @@@@@
Food            1394536  Food       @@@@@
Gifts            934962  Gifts      @@@@@

Reference: Usage Notes for RPAD

  • The input string can be data type AnV, VARCHAR, TX, and An.
  • Output can only be AnV or An.
  • When working with relational VARCHAR columns, there is no need to trim trailing spaces from the field if they are not desired. However, with An fields, and with AnV fields derived from An fields, the trailing spaces are part of the data and will be included in the output, with the padding placed to the right of these positions. You can use RTRIM, TRIM, or TRIMV to remove these trailing spaces before applying the RPAD function, as shown in the sketch after this list.
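
The following is a minimal sketch of trimming before padding, based on the earlier RPAD example against the GGSALES data source (the RPAD2 field name and the nesting of RTRIM inside RPAD are assumptions):

DEFINE FILE GGSALES
RPAD2/A25 = RPAD(RTRIM(CATEGORY), 25, '@');
END
TABLE FILE GGSALES
SUM UNITS RPAD2
BY CATEGORY
ON TABLE SET PAGE NOPAGE
END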

RTRIM: Removing Blanks From the Right End of a String

How to:

The RTRIM function removes all blanks from the right end of a string.

Syntax: How to Remove Blanks From the Right End of a String

RTRIM(source_string)

where:

source_string

Alphanumeric

Is the string to trim on the right.

The data type of the returned string can be An or AnV, with the same maximum length as the source string. (TX is allowed and can be used if the report output is a FOCUS or XFOCUS file or any output that does not create a Master file).

Example: Removing Blanks From the Right End of a String

The following request against the MOVIES data source creates the field DIRSLASH, that contains a slash at the end of the DIRECTOR field. Then it creates the TRIMDIR field, which trims the trailing blanks from the DIRECTOR field and places a slash at the end of that field:

TABLE FILE MOVIES
PRINT DIRECTOR NOPRINT AND
COMPUTE
DIRSLASH/A18 = DIRECTOR|'/';
TRIMDIR/A17V = RTRIM(DIRECTOR)|'/';
WHERE DIRECTOR CONTAINS 'BR'
ON TABLE SET PAGE NOPAGE
END

On the output, the slashes show that the trailing blanks in the DIRECTOR field were removed in the TRIMDIR field:

  DIRSLASH            TRIMDIR
  --------            -------
  ABRAHAMS J.      /  ABRAHAMS J./
  BROOKS R.        /  BROOKS R./
  BROOKS J.L.      /  BROOKS J.L./

SUBSTRING: Extracting a Substring From a Source String

How to:

The SUBSTRING function extracts a substring from a source string. If the ending position you specify for the substring is past the end of the source string, the position of the last character of the source string becomes the ending position of the substring.

Syntax: How to Extract a Substring From a Source String

SUBSTRING(source_string, start_position, length_limit)

where:

source_string

Alphanumeric

Is the string from which to extract the substring. It can be a field, a literal in single quotation marks (‘), or a variable.

start_position

Positive Integer

Is the starting position of the substring in source_string.

length_limit

Integer

Is the limit for the length of the substring. The ending position of the substring is calculated as start_position + length_limit - 1. If the calculated position is beyond the end of the source string, the position of the last character of source_string becomes the ending position.

The data type of the returned substring can be specified as An or AnV.

Example: Extracting a Substring From a Source String

In the following request, POSITION determines the position of the first letter I in LAST_NAME and stores the result in I_IN_NAME. SUBSTRING then extracts three characters beginning with the letter I from LAST_NAME, and stores the results in I_SUBSTR. Where POSITION fails to find the letter, the value 0 is returned. The result is a two-character substring for a LAST_NAME that does not contain the letter I, and a three-character substring starting at the letter I for a LAST_NAME that does.

TABLE FILE EMPLOYEE
PRINT
COMPUTE
I_IN_NAME/I2 = POSITION('I', LAST_NAME); AND
COMPUTE
I_SUBSTR/A3 =
SUBSTRING(LAST_NAME, I_IN_NAME, I_IN_NAME+2);
BY LAST_NAME
ON TABLE SET PAGE NOPAGE
END

The output is:

  LAST_NAME        I_IN_NAME  I_SUBSTR
  ---------        ---------  --------
  BANNING                  5  ING
  BLACKWOOD                0  BL
  CROSS                    0  CR
  GREENSPAN                0  GR
  IRVING                   1  IRV
  JONES                    0  JO
  MCCOY                    0  MC
  MCKNIGHT                 5  IGH
  ROMANS                   0  RO
  SMITH                    3  ITH
                           3  ITH
  STEVENS                  0  ST

TOKEN: Extracting a Token From a String

How to:

The TOKEN function extracts a token (substring) based on a token number and a delimiter character.

Syntax: How to Extract a Token From a String

TOKEN(string, delimiter, number)

where:

string

Fixed length alphanumeric

Is the character string from which to extract the token.

delimiter

Fixed length alphanumeric

Is a single character delimiter.

number

Integer

Is the token number to extract.

Example: Extracting a Token From a String

TOKEN extracts the second token from the PRODUCT field, where the delimiter is the letter c:

DEFINE FILE GGSALES
TOK1/A20 =TOKEN(PRODUCT,'c',2);
END
TABLE FILE GGSALES
SUM TOK1 AS Token
BY PRODUCT
ON TABLE SET PAGE NOPAGE
END

On the output, note that only the lowercase letter c is the delimiter, as specified in the request. The Token column has blanks where the second token is empty:

Product           Token
-------           -----
Biscotti          otti
Capuccino
Coffee Grinder
Coffee Pot
Croissant
Espresso
Latte                  
Mug
Scone             one
Thermos

TRIM_: Removing Leading Characters, Trailing Characters, or Both From a String

How to:

The TRIM_ function removes all occurrences of a single character from either the beginning of a string, the end of a string, or both.

Syntax: How to Remove Leading Characters, Trailing Characters, or Both From a String

TRIM_(trim_where, trim_character, source_string) 

where:

trim_where

Keyword

Defines where to trim the source string. Valid values are:

  • LEADING, which removes leading occurrences.
  • TRAILING, which removes trailing occurrences.
  • BOTH, which removes leading and trailing occurrences.

trim_character

Alphanumeric

Is a single character, enclosed in single quotation marks ('), whose occurrences are to be removed from source_string. For example, the character can be a single blank (‘ ‘).

source_string

Alphanumeric

Is the string to be trimmed.

The data type of the returned string can be An or AnV.

Example: Trimming a Character From a String

In the following request, TRIM_ removes leading occurrences of the character ‘B’ from the DIRECTOR field:

TABLE FILE MOVIES
PRINT DIRECTOR AND
COMPUTE
TRIMDIR/A17 = TRIM_(LEADING, 'B', DIRECTOR);
WHERE DIRECTOR CONTAINS 'BR'
ON TABLE SET PAGE NOPAGE
END

The output is:

  DIRECTOR           TRIMDIR
  --------           -------
  ABRAHAMS J.        ABRAHAMS J.
  BROOKS R.          ROOKS R.
  BROOKS J.L.        ROOKS J.L.

UPPER: Returning a String With All Letters Uppercase

How to:

The UPPER function takes a source string and returns a string of the same data type with all letters translated to uppercase.

Syntax: How to Return a String With All Letters Uppercase

UPPER(source_string)

where:

source_string

Alphanumeric

Is the string to convert to uppercase.

The returned string is the same data type and length as the source string.

Example: Converting Letters to Uppercase

In the following request, LCWORD converts LAST_NAME to mixed case. Then UPPER converts the LAST_NAME_MIXED field to uppercase:

DEFINE FILE EMPLOYEE
LAST_NAME_MIXED/A15=LCWORD(15, LAST_NAME, 'A15');
LAST_NAME_UPPER/A15=UPPER(LAST_NAME_MIXED) ;
END
TABLE FILE EMPLOYEE
PRINT LAST_NAME_UPPER AND FIRST_NAME 
BY LAST_NAME_MIXED
WHERE CURR_JOBCODE EQ 'B02' OR 'A17' OR 'B04';
ON TABLE SET PAGE NOPAGE
END

The output is:

  LAST_NAME_MIXED  LAST_NAME_UPPER  FIRST_NAME
  ---------------  ---------------  ----------
  Banning          BANNING          JOHN
  Blackwood        BLACKWOOD        ROSEMARIE
  Cross            CROSS            BARBARA
  Mccoy            MCCOY            JOHN
  Mcknight         MCKNIGHT         ROGER
  Romans           ROMANS           ANTHONY

Simplified Date and Date-Time Functions

In this section:

Simplified date and date-time functions have streamlined parameter lists, similar to those used by SQL functions. In some cases, these simplified functions provide slightly different functionality than previous versions of similar functions.

The simplified functions do not have an output argument. Each function returns a value that has a specific data type.

When used in a request against a relational data source, these functions are optimized (passed to the RDBMS for processing).

Standard date and date-time formats refer to YYMD and HYYMD syntax (dates that are not stored in alphanumeric or numeric fields). Dates not in these formats must be converted before they can be used in the simplified functions. Literal date-time values can be used with the DT function.

All arguments can be literals (enclosed in single quotation marks), field names (not enclosed in quotation marks), or amper variables (if these are enclosed in single quotation marks, they require the .EVAL operator in order to be evaluated properly).

Note: The simplified date and date-time functions are not supported in Maintain.

DTADD: Incrementing a Date or Date-Time Component

How to:

Reference:

Given a date in standard date or date-time format, DTADD returns a new date after adding the specified number of a supported component. The returned date format is the same as the input date format.

Syntax: How to Increment a Date or Date-Time Component

DTADD(date, component, increment)

where:

date

Date or date-time

Is the date or date-time value to be incremented.

component

Keyword

Is the component to be incremented. Valid components (and acceptable values) are:

  • YEAR (1-9999)
  • QUARTER (1-4)
  • MONTH (1-12)
  • WEEK (1-53). This is affected by the WEEKFIRST setting.
  • DAY (of the Month, 1-31)
  • HOUR (0-23)
  • MINUTE (0-59)
  • SECOND (0-59)

increment

Integer

Is the value (positive or negative) to add to the component.

Example: Incrementing the DAY Component of a Date

DTADD adds three days to the DATE after converting DATE to YYMD format:

DEFINE FILE GGSALES
NDATE/YYMD = DATE;
NEWDATE/YYMD = DTADD(NDATE, DAY, 3);
END
TABLE FILE GGSALES
SUM DOLLARS NOPRINT DATE NEWDATE
BY PRODUCT
WHERE PRODUCT NE 'Coffee Grinder'
ON TABLE SET PAGE NOPAGE
END

The output is:

Product                 Date  NEWDATE   
-------                 ----  -------   
Biscotti          1997/11/01  1997/11/04
Capuccino         1997/09/01  1997/09/04
Coffee Pot        1997/01/01  1997/01/04
Croissant         1996/10/01  1996/10/04
Espresso          1996/08/01  1996/08/04
Latte             1997/07/01  1997/07/04
Mug               1996/03/01  1996/03/04
Scone             1996/11/01  1996/11/04
Thermos           1997/03/01  1997/03/04

Reference: Usage Notes for DTADD

  • Each component must be manipulated separately. Therefore, if you want to add 1 year and 1 day to a date, you need to call the function twice, once for YEAR (taking leap years into account) and once for DAY, as shown in the sketch after this list. The simplified functions can be nested in a single expression or created and applied in separate DEFINE or COMPUTE expressions.
  • With respect to parameter validation, DTADD will not allow anything but a standard date or a date-time value to be used in the first parameter.
  • The increment is not checked, and the user should be aware that decimal numbers are not supported and will be truncated. Any combination of values that increases the YEAR beyond 9999 returns the input date as the value, with no message. If the user receives the input date when expecting something else, it is possible there was an error.
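
For example, the following is a minimal sketch of adding 1 year and 1 day, assuming the GGSALES data source used earlier (the NDATE, YRLATER, YRDAYLATER, and NESTED field names are hypothetical). The two calls can be kept in separate DEFINE fields or nested in a single expression:

DEFINE FILE GGSALES
NDATE/YYMD      = DATE;
YRLATER/YYMD    = DTADD(NDATE, YEAR, 1);
YRDAYLATER/YYMD = DTADD(YRLATER, DAY, 1);
NESTED/YYMD     = DTADD(DTADD(NDATE, YEAR, 1), DAY, 1);
END
TABLE FILE GGSALES
SUM NDATE YRDAYLATER NESTED
BY PRODUCT
ON TABLE SET PAGE NOPAGE
END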

DTDIFF: Returning the Number of Component Boundaries Between Date or Date-Time Values

How to:

Given two dates in standard date or date-time formats, DTDIFF returns the number of boundaries of the specified component between the two dates. The returned value has integer format for calendar components or double-precision floating-point format for time components.

Syntax: How to Return the Number of Component Boundaries

DTDIFF(end_date, start_date, component)

where:

end_date

Date or date-time

Is the ending date in either standard date or date-time format. If this date is given in standard date format, all time components are assumed to be zero.

start_date

Date or date-time

Is the starting date in either standard date or date-time format. If this date is given in standard date format, all time components are assumed to be zero.

component

Keyword

Is the component on which the number of boundaries is to be calculated. For example, QUARTER finds the difference in quarters between two dates. Valid components (and acceptable values) are:

  • YEAR (1-9999)
  • QUARTER (1-4)
  • MONTH (1-12)
  • WEEK (1-53). This is affected by the WEEKFIRST setting.
  • DAY (of the Month, 1-31)
  • HOUR (0-23)
  • MINUTE (0-59)
  • SECOND (0-59)

Example: Returning the Number of Years Between Two Dates

DTDIFF calculates the years of service for those employees with termination dates:

DEFINE FILE CENTHR
YEARS/I9 = DTDIFF(TERM_DATE, START_DATE, YEAR);
END
TABLE FILE CENTHR
PRINT START_DATE TERM_DATE YEARS AS 'Years'
BY  ID_NUM
WHERE TERM_DATE NE MISSING AND TERM_DATE GT START_DATE
WHERE ID_NUM LE 76
ON TABLE SET PAGE NOPAGE
END

The output is:

Employee  Starting    Termination           
ID#       Date        Date             Years
--------  --------    -----------      -----
      24  1995/08/24  1999/02/16           4
      25  1995/10/05  1999/01/15           4
      26  1995/05/02  1999/06/30           4
      28  1995/04/26  1999/06/01           4
      30  1995/07/06  1999/07/02           4
      36  1998/01/04  1999/06/17           1
      40  1995/10/31  1999/06/15           4
      42  1995/05/19  1999/08/13           4
      44  1995/10/05  1999/04/23           4
      45  1995/07/15  1999/06/30           4
      46  1997/01/21  1998/08/05           1
      53  1995/07/05  1998/04/20           3
      60  1995/10/17  1999/04/20           4
      62  1995/09/05  1998/01/09           3
      63  1995/07/06  1999/03/31           4
      67  1995/06/06  1998/09/25           3
      72  1995/04/07  1998/07/03           3
      76  1995/10/05  1999/04/30           4

DTPART: Returning a Date or Date-Time Component in Integer Format

How to:

Given a date in standard date or date-time format and a component, DTPART returns the component value in integer format.

Syntax: How to Return a Date or Date-Time Component in Integer Format

DTPART(date, component)

where:

date

Date or date-time

Is the date in standard date or date-time format.

component

Keyword

Is the component to extract in integer format. Valid components (and values) are:

  • YEAR (1-9999).
  • QUARTER (1-4).
  • MONTH (1-12).
  • WEEK (of the year, 1-53). This is affected by the WEEKFIRST setting.
  • DAY (of the Month, 1-31).
  • DAY_OF_YEAR (1-366).
  • WEEKDAY (day of the week, 1-7). This is affected by the WEEKFIRST setting.
  • HOUR (0-23).
  • MINUTE (0-59).
  • SECOND (0-59.)
  • MILLISECOND (0-999).
  • MICROSECOND (0-999999).

Example: Extracting the Quarter Component as an Integer

DTPART extracts the QUARTER component from the employee start date:

DEFINE FILE CENTHR
QTR/I2 = DTPART(START_DATE, QUARTER);
END
TABLE FILE CENTHR
PRINT START_DATE QTR AS Quarter
BY ID_NUM
WHERE ID_NUM LE 15
ON TABLE SET PAGE NOPAGE
END

The output is:

Employee  Starting           
ID#       Date        Quarter
--------  --------    -------
       1  1995/09/19        3
       2  1995/06/20        2
       3  1998/09/05        3
       4  1995/12/06        4
       5  1995/09/19        3
       6  1997/04/01        2
       8  1997/07/13        3
       9  1996/10/04        4
      10  1995/09/05        3
      11  1997/10/01        4
      12  1996/12/19        4
      13  1995/06/08        2
      14  1996/11/05        4
      15  1995/08/08        3

DTRUNC: Returning the Start of a Date Period for a Given Date

How to:

Given a date or timestamp and a component, DTRUNC returns the first date within the period specified by that component.

Syntax: How to Return the First Date of a Date Period

DTRUNC(date_or_timestamp, date_period)

where:

date_or_timestamp

Date or date-time

Is the date or timestamp of interest.

date_period

Is the period whose starting date you want to find. Can be one of the following:

  • DAY, which returns the given date itself (the start of that day).
  • YEAR, which returns the first date of the year that contains the given date.
  • MONTH, which returns the first date of the month that contains the given date.
  • QUARTER, which returns the first date of the quarter that contains the given date.

Example: Returning the First Date in a Date Period

In the following request, DTRUNC returns the first date of the quarter given the start date of the employee:

DEFINE FILE CENTHR
QTRSTART/YYMD = DTRUNC(START_DATE, QUARTER);
END
TABLE FILE CENTHR
PRINT START_DATE QTRSTART AS 'Start,of Quarter'
BY ID_NUM
WHERE ID_NUM LE 15
ON TABLE SET PAGE NOPAGE
END

The output is:

Employee  Starting    Start     
ID#       Date        of Quarter
--------  --------    ----------
       1  1995/09/19  1995/07/01
       2  1995/06/20  1995/04/01
       3  1998/09/05  1998/07/01
       4  1995/12/06  1995/10/01
       5  1995/09/19  1995/07/01
       6  1997/04/01  1997/04/01
       8  1997/07/13  1997/07/01
       9  1996/10/04  1996/10/01
      10  1995/09/05  1995/07/01
      11  1997/10/01  1997/10/01
      12  1996/12/19  1996/10/01
      13  1995/06/08  1995/04/01
      14  1996/11/05  1996/10/01
      15  1995/08/08  1995/07/01

Enhancements to SET DROPBLNKLINE

How to:

Reference:

The DROPBLNKLINE parameter controls whether blank lines display in a FOCUS report. Its options let you remove blank lines that are automatically generated in different locations within a report: the blank lines around subtotals, subheadings, and subfootings, certain data lines that are blank and appear as blank lines on the report output, and, when borders are used, the blank lines inserted around the headings and footings.

Syntax: How to Control Automatic Blank Lines on Report Output

SET DROPBLNKLINE={OFF|ON|BODY|HEADING|ALL}

or

ON TABLE SET DROPBLNKLINE {OFF|ON|BODY|HEADING|ALL}

where:

OFF

Inserts system-generated blank lines as well as empty data lines. OFF is the default value.

ON|BODY

Removes system-generated blank lines within the body of the report, for example, before and after subheads. In addition, certain data lines that may be blank and appear as blank lines on the report output will be removed from the output. BODY is a synonym for ON.

HEADING

Removes the blank lines between headings and titles and between the report body and the footing. Works in positioned formats (PDF, PS, DHTML, PPT) when a request has a border or backcolor StyleSheet attribute anywhere in the report.

ALL

Provides both the ON and HEADING behaviors.

Reference: Usage Notes for SET DROPBLNKLINE=HEADING

  • In the positioned report formats (PDF, PS, DHTML, and PPT) with borders or backcolor, the system automatically generates a blank line below the heading and above the footing. This is done by design to make bordered lines work together. Generally, the rule is that each line is responsible for the border setting for its top and left border. Therefore, the bottom border of the heading is set by the top border of the row beneath it. To ensure that the bottom of the heading border is complete and does not interfere with the top of the column titles border, a blank filler line is automatically inserted. This filler line contains the defined bottom border of the heading as its top border. The same is true between the bottom of the data and the top of the footing.

    DROPBLNKLINE=HEADING removes the filler blank line by defining the height of the filler line to zero. This causes the bottom border of the heading to become the top border of the column titles. When backcolor is used without borders, this works well to close any blank gaps in color. However, FOCUS processing will not remediate between line styles, so using different border styles between different report elements may create some contention between the border styling definitions. To ensure that you have consistent border line styling between different report elements, use a single line style between the elements that present together in the report.

  • DROPBLNKLINE=HEADING is not supported with:
    • Different border styles between the heading and the column titles or the data and the footing.
    • Reports that use the ACROSS sort phrase.
    • Non-styled report output including FORMAT WP.
  • Usage Considerations:
    • In some reports, FOOTING BOTTOM requires the space added by the system-generated blank line between the data and the footing in order to present the correct distance between the sections. In these instances, the top of the FOOTING BOTTOM may slightly overlap the bottom of the data grid. You can resolve this by adding a blank line to the top of your footing.
    • Applying borders for the entire report (TYPE=REPORT) is recommended to avoid certain known issues that arise when bordering report elements individually. In some reports that define backcolor and borders on only select elements, the backcolor applied to the heading displays with a different width than the backcolor applied to the column titles. This difference causes a ragged right edge to present between the headings and the titles. Additionally, if you define the color of the border (BORDER-COLOR) for elements with backcolor to match the backcolor of those elements, the borders will blend into the backcolor and not be visible.

Example: Comparing DROPBLNKLINE Parameter Settings

The following request against the GGSALES data source has a heading, a footing, and a subtotal. Initially, DROPBLNKLINE is set to OFF.

TABLE FILE GGSALES
HEADING CENTER
"Gotham Grinds Sales By Region"
FOOTING CENTER
"Generated on: &DATETMDYY"
SUM DOLLARS UNITS
BY REGION SUBTOTAL
BY CATEGORY
BY PRODUCT
WHERE REGION EQ 'Northeast' OR 'West'
ON TABLE HOLD FORMAT PDF
ON TABLE SET DROPBLNKLINE OFF
ON TABLE SET PAGE NOPAGE
ON TABLE SET STYLE *
SQUEEZE = ON,
FONT = ARIAL,
TYPE=HEADING, BORDER=LIGHT,
$
ENDSTYLE
END

The output has a blank line below the heading, above the footing, above and below the subtotal lines, and above the grand total line.

Changing the DROPBLNKLINE setting to HEADING produces the following output. The blank line below the heading and the blank line above the footing have been removed. The blank lines above and below the subtotal and grand total lines are still inserted.

Changing the DROPBLNKLINE setting to ON (or BODY) produces the following output in which the blank lines above and below the subtotal and grand total lines have been removed, but the blank lines below the heading and above the footing are still inserted.

Changing the DROPBLNKLINE setting to ALL produces the following output in which the blank lines around the subtotal and grand total lines as well as the blank lines below the heading and above the footing have been removed.

Reading Selection Values From a File Using WHERE

How to:

Reference:

Instead of typing literal test values in a WHERE phrase, you can store them in a file and refer to the file in the report request. You can then select records based on equality or inequality tests on values stored in the file.

This method has the following advantages:

Reference: Usage Notes for Reading Values From a File

In order to read selection criteria from a file, the file must comply with the following rules:

  • Each value in the file must be on a separate line.
  • The selection value must start in column one.
  • The values are assumed to be in character format unless the file name is HOLD. Numeric digits are converted to internal computational numbers where needed (for example, binary integer).
  • The file can contain approximately 16,000 bytes. If the file is too large, an error message displays.
  • Alphanumeric values with embedded blanks or any mathematical operator (-, +, *, /) must be enclosed in single quotation marks.
  • When a compound WHERE phrase references more than one file, the specified files must have the same record formats.

    If your list of literals is too large, an error is displayed.

  • Sets of file names may be used, separated by the word OR or AND. Actual literals may also be mixed with the file names. For example:
    WHERE fieldname operator (filename) OR fieldname operator literal...

Syntax: How to Read Selection Values From a File (WHERE)

WHERE field1 operator1 (file1) [{OR|AND} field2 operator2 (file2) ... ]

where:

field1, field2

Are any valid field names or aliases.

operator1, operator2

Can be the EQ, IS, NE, or IS-NOT operator.

file1, file2

Are the names of the files.

These are the ddnames assigned by a DYNAM or TSO ALLOCATE command.

Example: Reading Selection Values From a File

The following request against the GGPRODS data source creates a HOLD file named EXPER1 that contains product IDs B141, B142, B143, and B144.

TABLE FILE GGPRODS
BY PRODUCT_ID BY PRODUCT_DESCRIPTION
WHERE PRODUCT_ID EQ 'B141' OR 'B142' OR 'B143' OR 'B144'
ON TABLE HOLD AS EXPER1 FORMAT ALPHA
END

The following request against the GGPRODS data source creates a HOLD file named EXPER2 that contains product IDs B144, F101, and F102.

TABLE FILE GGPRODS
BY PRODUCT_ID BY PRODUCT_DESCRIPTION
WHERE PRODUCT_ID EQ 'B144' OR 'F101' OR 'F102'
ON TABLE HOLD AS EXPER2 FORMAT ALPHA
END

The following request selects the values that exist in both EXPER1 and EXPER2.

TABLE FILE GGPRODS
SUM PRODUCT_DESCRIPTION
BY PRODUCT_ID
WHERE PRODUCT_ID EQ (EXPER1) AND PRODUCT_ID IS (EXPER2)
ON TABLE SET PAGE NOPAGE
END

The output is:

Product
Code     Product
-------  -------
B144     Kona
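
The usage notes state that file names and literals can be mixed in a compound WHERE phrase. The following is a minimal sketch, reusing the EXPER1 HOLD file created above (the literal 'F103' is an arbitrary product code used only for illustration):

TABLE FILE GGPRODS
SUM PRODUCT_DESCRIPTION
BY PRODUCT_ID
WHERE PRODUCT_ID EQ (EXPER1) OR PRODUCT_ID EQ 'F103'
ON TABLE SET PAGE NOPAGE
END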

Reformatting Sort Fields

How to:

Reference:

When displaying a vertical (BY) sort column or horizontal (ACROSS) sort row on report output, you can reformat the sort field values by specifying the new format in the sort phrase. The reformatting affects only the sort field value as displayed on the sort row or column. That is, if the field used as a sort field is referenced in a heading, subheading, footing, subfooting, or summary line, it displays with its original format.

Syntax: How to Reformat a Sort Field

{BY [TOTAL]|ACROSS [TOTAL]} sortfield/fmt ...

where:

sortfield

Is the sort field.

fmt

Is the new display format.

Reference: Usage Notes for Reformatting Sort Fields

  • Reformatting is not supported in an ON phrase.
  • Reformatting is only applied to the row or column generated by the sort phrase, not to subheading, subfooting, or summary rows that reference the sort field.
  • Field-based reformatting (the format is stored in a field) is not supported for sort fields.
  • BY field reformatting is propagated to HOLD files. ACROSS field reformatting is not propagated to HOLD files.

Example: Reformatting Sort Fields

The following request against the GGSALES data source includes the following reformatted sort fields:

  • BY CATEGORY, with CATEGORY reformatted as A3.
  • BY PRODUCT, with PRODUCT reformatted as A4.
  • ACROSS REGION, with REGION reformatted as A6.

TABLE FILE GGSALES
SUM UNIT
BY CATEGORY/A3
BY PRODUCT/A4
ACROSS REGION/A6
ON CATEGORY SUBTOTAL
ON CATEGORY SUBHEAD
"CATEGORY IS <CATEGORY "
" "
ON TABLE SET PAGE NOPAGE
END

On the output, the reformatting displays on the BY and ACROSS rows but is not propagated to the subheading and subtotal rows.

                                         REGION                                              
CATEGORY  PRODUCT      Midwes       Northe       Southe       West                                                             
-----------------------------------------------------------------------
CATEGORY IS Coffee                            
                                                                       
Cof       Capu              .        44785        73264        72831   
          Espr         101154        68127        68030        71675   
          Latt         231623       226243       211063       213920   
                                                                       
*TOTAL CATEGORY Coffee                                                 
                       332777       339155       352357       358426   
                                                                       
CATEGORY IS Food                               
                                                                       
Foo       Bisc          90413       149793       119594        70569   
          Croi         139881       137394       156456       197022   
          Scon         116127        70732        73779        72776   
*TOTAL CATEGORY Food                                                
                       346421       357919       349829       340367
                                                                    
CATEGORY IS Gifts                     
                                                                    
Gif       Coff          54002        40977        50556        48081
          Coff          47156        46185        49922        47432
          Mug           86718        91497        88474        93881
          Ther          46587        48870        48976        45648
*TOTAL CATEGORY Gifts                                               
                       234463       227529       237928       235042
                                                                    
                                                                    
TOTAL                  913661       924603       940114       933835

DB_EXPR Function: Inserting an SQL Expression Into a Request

How to:

Reference:

The DB_EXPR function inserts a native SQL expression exactly as entered into the native SQL generated for a FOCUS or SQL language request.

The DB_EXPR function can be used in a DEFINE command, a DEFINE in a Master File, a WHERE clause, a FILTER FILE command, a filter in a Master File, or in an SQL statement. It can be used in a COMPUTE command if the request is an aggregate request (uses the SUM, WRITE, or ADD command) and has a single display command. The expression must return a single value.

Syntax: How to Insert an SQL Expression Into a Request With DB_EXPR

DB_EXPR(native_SQL_expression)

where:

native_SQL_expression

Is a partial native SQL string that is valid for inserting into the SQL generated by the request. The SQL string must have double quotation marks (" ") around each field reference, unless the function is used in a DEFINE with a WITH phrase.

Reference: Usage Notes for the DB_EXPR Function

  • The expression must return a single value.
  • Any request that includes one or more DB_EXPR functions must be for a synonym that has a relational SUFFIX.
  • Field references in the native SQL expression must be within the current synonym context.
  • The native SQL expression must be coded inline. SQL read from a file is not supported.
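
As noted above, DB_EXPR can also be used in a DEFINE. The following is a minimal sketch, assuming the same WFLITE DB2 synonym used in the example that follows (the REV2 field name and the doubling expression are arbitrary illustrations):

DEFINE FILE WFLITE
REV2/D20.2 = DB_EXPR("REVENUE_US" * 2);
END
TABLE FILE WFLITE
SUM REV2
BY BUSINESS_REGION
ON TABLE SET PAGE NOPAGE
END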

Example: Inserting the DB2 BIGINT and CHAR Functions Into a TABLE Request

The following TABLE request uses the DB_EXPR function in the COMPUTE command to call two DB2 functions. It calls the BIGINT function to convert the squared revenue to a BIGINT data type and then uses the CHAR function to convert that value to alphanumeric:

TABLE FILE WFLITE 
SUM REVENUE_US NOPRINT AND
COMPUTE BIGREV/A31 = DB_EXPR(CHAR(BIGINT("REVENUE_US" * "REVENUE_US")));
 AS 'Alpha Square REVENUE_US'
BY BUSINESS_REGION
ON TABLE SET PAGE NOPAGE
END

The trace shows that the expression from the DB_EXPR function was inserted into the DB2 SELECT statement:

 SELECT
T3."BUSINESS_REGION",
 SUM(T1."REVENUE_US"),
 ((CHAR(BIGINT( SUM(T1."REVENUE_US") *
SUM(T1."REVENUE_US")))))
 FROM
( ( USER1.w_wf_retail_sales T1
 LEFT OUTER JOIN USER1.w_wf_retail_customer T2
 ON T2."ID_CUSTOMER" = T1."ID_CUSTOMER" )
 LEFT OUTER JOIN USER1.w_wf_retail_geography T3
 ON T3."ID_GEOGRAPHY" = T2."ID_GEOGRAPHY" )
 GROUP BY
T3."BUSINESS_REGION"
 ORDER BY
T3."BUSINESS_REGION"
 FOR FETCH ONLY;

The output is:

Customer
Business
Region           Alpha Square Revenue
--------         --------------------
EMEA             3047170226996
North America    4130421809160
Oceania          186522662
South America    106586898522

Creating a Subquery or Sequential File With HOLD FORMAT SQL_SCRIPT

How to:

When used in a request against a relational data source, the HOLD FORMAT SQL_SCRIPT command generates the SQL SELECT statement needed to execute the current query and stores it in an application as a file with a .sql extension on UNIX and Windows or in the SQL component of the application data sets on z/OS, along with the Master (.mas on UNIX and Windows, and MASTER.DATA on z/OS) and Access File (.acx on UNIX and Windows, and ACCESS.DATA on z/OS) pair that describes the SQL answer set. If you do not have an application defined, the script will be placed in a temporary file on z/OS.

When used in a request against any other type of data source, the HOLD FORMAT SQL_SCRIPT command executes the current query and stores the retrieved values in an application as a sequential file with an .ftm extension on UNIX or a .FTM low-level qualifier on z/OS, along with the Master File that describes the retrieved data. The data can also be stored in any sequential file on z/OS by allocating the AS name to a file.

You can use the output from HOLD FORMAT SQL_SCRIPT as the target file for the DB_INFILE function.

Note: Once you have the .sql file and its accompanying Master File, you can customize the .sql file using global Dialogue Manager variables. You must declare these global variables in the Master File. For information about parameterizing Master Files with global variables, see the Describing Data manual.

For information about configuring and working with applications, see Using Applications.

Syntax: How to Create an SQL Script or Sequential File Using HOLD FORMAT SQL_SCRIPT

ON TABLE HOLD AS script_name FORMAT SQL_SCRIPT

where:

script_name

Is the name of the .sql file or the .ftm file created as a result of the HOLD FORMAT SQL_SCRIPT command.

Example: Creating an SQL Script File Using HOLD FORMAT SQL_SCRIPT

The following request creates an SQL Script file in the baseapp application.

APP HOLD baseapp
TABLE FILE WFLITE
SUM BUSINESS_SUB_REGION                                     
STATE_PROV_CODE_ISO_3166_2                                  
BY BUSINESS_SUB_REGION NOPRINT                              
BY STATE_PROV_CODE_ISO_3166_2 NOPRINT                       
WHERE BUSINESS_SUB_REGION EQ 'Midwest' OR 'East'            
WHERE STATE_PROV_CODE_ISO_3166_2                            
  EQ 'AR' OR 'IA' OR 'KS' OR 'KY' OR 'WY' OR 'CT' OR 'MA' OR
  'NJ' OR 'NY' OR 'RI'                                      
WHERE COUNTRY_NAME EQ 'United States'                       
ON TABLE HOLD AS RETAILS FORMAT SQL_SCRIPT
END

The result of this request is a script file named retails.sql on distributed systems and member RETAILS in the BASEAPP.SQL.DATA library on z/OS, and a corresponding Master and Access File.

The script file contains the following SQL SELECT statement:

SELECT   MAX(T3."BUSINESS_SUB_REGION") AS "E01", 
 MAX(T3."STATE_PROV_CODE_ISO_3166_2") AS "E02"  
 FROM  USER1.w_wf_retail_geography T3 
 WHERE(T3."COUNTRY_NAME" = 'United States') AND 
 (T3."STATE_PROV_CODE_ISO_3166_2" IN('AR', 'IA', 'KS', 'KY', 'WY', 'CT',
 'MA', 'NJ', 'NY', 'RI')) AND
 (T3."BUSINESS_SUB_REGION" IN('Midwest', 'East')) 
 GROUP BY  T3."BUSINESS_SUB_REGION",
 T3."STATE_PROV_CODE_ISO_3166_2"

The Master File follows:

FILENAME=RETAILS, SUFFIX=DB2     , $
  SEGMENT=RETAILS, SEGTYPE=S0, $
    FIELDNAME=BUSINESS_SUB_REGION, ALIAS=E01, USAGE=A25V, ACTUAL=A25V,
      MISSING=ON,
      TITLE='Customer,Business,Sub Region', $
    FIELDNAME=STATE_PROV_CODE_ISO_3166_2, ALIAS=E02, USAGE=A5V, ACTUAL=A5V,
      MISSING=ON,
      TITLE='Customer,State,Province,ISO-3166-2,Code', $

The Access File follows:

 SEGNAME=RETAILS,
  CONNECTION=CON1,
  DATASET=RETAILS.SQL,
  SUBQUERY=Y, $

Example: Creating a Sequential File Using HOLD FORMAT SQL_SCRIPT

The following request against the EMPLOYEE data source creates a sequential file containing the values retrieved by the request along with a corresponding Master File:

APP HOLD baseapp 
TABLE FILE EMPLOYEE 
PRINT LAST_NAME FIRST_NAME 
WHERE DEPARTMENT EQ 'MIS' 
ON TABLE HOLD AS EMPVALUE FORMAT SQL_SCRIPT 
END

The sequential file empvalue.ftm contains the following data:

SMITH           MARY        JONES           DIANE       MCCOY           JOHN        BLACKWOOD       ROSEMARIE   GREENSPAN       MARY        CROSS           BARBARA     

The empvalue.mas Master File follows:

FILENAME=EMPVALUE, SUFFIX=FIX     , IOTYPE=BINARY, $ 
  SEGMENT=EMPVALUE, SEGTYPE=S0, $ 
    FIELDNAME=LAST_NAME, ALIAS=E01, USAGE=A15, ACTUAL=A16, $ 
    FIELDNAME=FIRST_NAME, ALIAS=E02, USAGE=A10, ACTUAL=A12, $

DB_INFILE Function: Testing Values Against a File or an SQL Subquery

In this section:

How to:

Reference:

The DB_INFILE function compares one or more field values in a source file to values in a target file. It returns the value 1 (TRUE) if the set of source fields matches a set of values from the target file. Otherwise, the function returns zero (FALSE). DB_INFILE can be used anywhere a function can be used in a FOCUS request, such as in a DEFINE or a WHERE phrase.

The target file can be any data source that FOCUS can read. Depending on the data sources accessed and the components in the request, either FOCUS or an RDBMS will process the comparison of values.

If FOCUS processes the comparison, it dynamically reads the target data source and creates a sequential file containing the target data values along with a synonym describing the data file. It then builds IF or WHERE structures in memory with all combinations of source and target values. If the target data contains characters that FOCUS considers wildcard characters, it will treat them as wildcard characters unless the command SET EQTEST = EXACT is in effect.

The following situations exist when a relational data source is the source file:

Syntax: How to Compare Source and Target Field Values With DB_INFILE

DB_INFILE([app/]target_file, s1, t1, ... sn, tn)

where:

app

Is an optional application reference.

target_file

Is the synonym for the target file.

s1, ..., sn

Are fields from the source file.

t1, ..., tn

Are fields from the target file.

The function returns the value 1 if a set of target values matches the set of source values. Otherwise, the function returns zero (0).

The mapping is done using pairs, where the s1 field is matched to t1 field, and so on.

Reference: Usage Notes for DB_INFILE

  • If both the source and target data sources have MISSING=ON for a comparison field, then a missing value in both files is considered an equality. If MISSING=OFF in one or both files, a missing value in one or both files results in an inequality.
  • Values are not padded or truncated when compared, except when comparing date and date-time values.
    • If the source field is a date field and the target field is a date-time field, the time component is removed before comparison.
    • If the source field is a date-time field and the target field is a date field, a zero time component is added to the target value before comparison.
  • If an alphanumeric field is compared to a numeric field, an attempt will be made to convert the alphanumeric value to a number before comparison.
  • If FOCUS processes the comparison and the target data contains characters that FOCUS considers wildcard characters, it will treat them as wildcard characters unless the command SET EQTEST = EXACT is in effect.

Example: Comparing Source and Target Values Using an SQL Subquery File

This example uses the WFLITE DB2 data source.

The SQL file named retails2.sql contains the following subquery that retrieves state codes in the Midwest and East subregions:

  SELECT
 MAX(T3."BUSINESS_SUB_REGION") AS "E01",
 MAX(T3."STATE_PROV_CODE_ISO_3166_2") AS "E02"
 FROM
USER1.w_wf_retail_geography T3
 WHERE
(T3."COUNTRY_NAME" = 'United States') AND
(T3."BUSINESS_SUB_REGION" IN('Midwest', 'East'))
 GROUP BY
T3."BUSINESS_SUB_REGION",
T3."STATE_PROV_CODE_ISO_3166_2";

The SQL script file was created by the following request:

TABLE FILE WFLITE
SUM BUSINESS_SUB_REGION
STATE_PROV_CODE_ISO_3166_2
BY BUSINESS_SUB_REGION NOPRINT
BY STATE_PROV_CODE_ISO_3166_2 NOPRINT
WHERE BUSINESS_SUB_REGION EQ 'Midwest' OR 'East'
WHERE COUNTRY_NAME EQ 'United States'
  ON TABLE HOLD AS RETAILS2 FORMAT SQL_SCRIPT
END

Note: You can create an SQL subquery file along with a corresponding synonym using the HOLD FORMAT SQL_SCRIPT command. For more information, see Creating a Subquery or Sequential File With HOLD FORMAT SQL_SCRIPT.

The following request uses the DB_INFILE function to compare region names and state codes against the names retrieved by the subquery:

SET TRACEUSER=ON
SET TRACESTAMP=OFF
SET TRACEON = STMTRACE//CLIENT
TABLE FILE WFLITE
SUM REVENUE_US
BY BUSINESS_SUB_REGION AS 'Region'
BY STATE_PROV_CODE_ISO_3166_2 AS 'State Code'
WHERE DB_INFILE(RETAILS2, BUSINESS_SUB_REGION, BUSINESS_SUB_REGION,
   STATE_PROV_CODE_ISO_3166_2, STATE_PROV_CODE_ISO_3166_2)
ON TABLE SET PAGE NOPAGE
END

The trace shows that the subquery was inserted into the WHERE predicate in the generated SQL:

 SELECT
T3."BUSINESS_SUB_REGION",
T3."STATE_PROV_CODE_ISO_3166_2",
 SUM(T1."REVENUE_US")
 FROM
USER1.w_wf_retail_sales T1,
USER1.w_wf_retail_customer T2,
USER1.w_wf_retail_geography T3
 WHERE
(T2."ID_CUSTOMER" = T1."ID_CUSTOMER") AND
(T3."ID_GEOGRAPHY" = T2."ID_GEOGRAPHY") AND
((T3."BUSINESS_SUB_REGION", T3."STATE_PROV_CODE_ISO_3166_2") IN
( SELECT
 MAX(T3."BUSINESS_SUB_REGION") AS "E01",
 MAX(T3."STATE_PROV_CODE_ISO_3166_2") AS "E02"
 FROM
USER1.w_wf_retail_geography T3
 WHERE
(T3."COUNTRY_NAME" = 'United States') AND
(T3."BUSINESS_SUB_REGION" IN('Midwest', 'East'))
 GROUP BY
T3."BUSINESS_SUB_REGION",
T3."STATE_PROV_CODE_ISO_3166_2")
GROUP BY
T3."BUSINESS_SUB_REGION",
T3."STATE_PROV_CODE_ISO_3166_2"  )
 ORDER BY
T3."BUSINESS_SUB_REGION",
T3."STATE_PROV_CODE_ISO_3166_2"
 FOR FETCH ONLY;

The output is:

Region                     State Code                     Revenue
------                     ----------                     -------
East                       CT                          $10,536.34
                           DE                           $6,254.51
                           MA                          $22,463.11
                           NJ                          $33,598.01
                           NY                          $69,800.34
                           PA                          $63,984.09
                           RI                             $818.97
                           VT                           $7,474.33
Midwest                    IA                          $12,316.32
                           IL                          $60,364.26
                           IN                          $42,367.87
                           KS                          $26,478.06
                           KY                          $13,290.75
                           MI                          $62,878.30
                           MN                          $50,591.70
                           MO                          $41,776.64
                           ND                          $13,552.55
                           NE                          $19,472.62
                           OH                          $49,394.62
                           SD                           $9,311.33
                           WI                          $39,605.50

Example: Comparing Source and Target Values Using a Sequential File

The empvalue.ftm sequential file contains the last and first names of employees in the MIS department:

SMITH           MARY        JONES           DIANE       MCCOY           JOHN        BLACKWOOD       ROSEMARIE   GREENSPAN       MARY        CROSS           BARBARA     

The empvalue Master File describes the data in the empvalue.ftm file:

FILENAME=EMPVALUE, SUFFIX=FIX     , IOTYPE=STREAM, $         
  SEGMENT=EMPVALUE, SEGTYPE=S0, $                            
    FIELDNAME=LAST_NAME, ALIAS=E01, USAGE=A15, ACTUAL=A15, $ 
    FIELDNAME=FIRST_NAME, ALIAS=E02, USAGE=A10, ACTUAL=A10, $

Note: You can create a sequential file along with a corresponding synonym using the HOLD FORMAT SQL_SCRIPT command. For information, see Creating a Subquery or Sequential File With HOLD FORMAT SQL_SCRIPT.
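
For example, a request along the following lines could have created the empvalue.ftm file and its synonym. This is a minimal sketch modeled on the SQL_SCRIPT request shown in the previous example; the selection criteria are illustrative only:

TABLE FILE EMPLOYEE
SUM LAST_NAME FIRST_NAME
BY LAST_NAME NOPRINT
BY FIRST_NAME NOPRINT
WHERE DEPARTMENT EQ 'MIS'
ON TABLE HOLD AS EMPVALUE FORMAT SQL_SCRIPT
END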

The following request against the FOCUS EMPLOYEE data source uses the DB_INFILE function to compare employee names against the names stored in the empvalue.ftm file. You must FILEDEF or allocate the empvalue.ftm file before issuing this request. For distributed systems, issue the following command.

FILEDEF EMPVALUE DISK baseapp/empvalue.ftm

For z/OS, issue the following command.

DYNAM ALLOC DD EMPVALUE DA baseapp/EMPVALUE.FTM SHR REU

Then issue the request:

TABLE FILE EMPLOYEE
SUM CURR_SAL
BY LAST_NAME BY FIRST_NAME
WHERE DB_INFILE(EMPVALUE, LAST_NAME, LAST_NAME, FIRST_NAME, FIRST_NAME)
ON TABLE SET PAGE NOPAGE
END

The output is:

LAST_NAME        FIRST_NAME         CURR_SAL
---------        ----------         --------
BLACKWOOD        ROSEMARIE        $21,780.00
CROSS            BARBARA          $27,062.00
GREENSPAN        MARY              $9,000.00
JONES            DIANE            $18,480.00
MCCOY            JOHN             $18,480.00
SMITH            MARY             $13,200.00

Controlling DB_INFILE Optimization Against a Relational DBMS

How to:

When used against a single RDBMS, DB_INFILE performs an analysis to determine whether the resulting expression can be optimized using a subquery. It then creates that subquery and passes it to the relational adapter for inclusion in the optimized SQL to be passed to the RDBMS for processing. Occasionally, because of specific syntax not supported by the RDBMS involved, the RDBMS is not able to process the subquery that was created by DB_INFILE. If this happens, you will see messages to that effect. In this case, you can prevent DB_INFILE from optimizing the expression as a subquery by setting the DB_INFILE parameter to EXPAND_ALWAYS. You should implement this setting only for those specific requests that demonstrate problems with the subquery generated by DB_INFILE.

You can also prevent DB_INFILE from expanding the query into IF and WHERE phrases by setting the DB_INFILE parameter to EXPAND_NEVER. One reason to use this value is if the target file is so large that reading it could freeze processing. In this case, DB_INFILE will attempt to create a subquery. If this is not possible, the following message will be issued and processing will halt:

(FOC32585) ERROR EXPANDING DB_INFILE

Syntax: How to Control DB_INFILE Optimization

To control optimization of the DB_INFILE expression, issue the following command:

SET DB_INFILE = {DEFAULT|EXPAND_ALWAYS|EXPAND_NEVER}

In a TABLE request, issue the following command:

ON TABLE SET DB_INFILE  {DEFAULT|EXPAND_ALWAYS|EXPAND_NEVER}

where:

DEFAULT

Enables DB_INFILE to create a subquery if its analysis determines that it is possible. This is the default value.

EXPAND_ALWAYS

Prevents DB_INFILE from creating a subquery and, instead, expands the expression into IF and WHERE clauses in memory.

EXPAND_NEVER

Prevents DB_INFILE from expanding the expression into IF and WHERE clauses in memory and, instead, attempts to create a subquery. If this is not possible, a FOC32585 message is generated and processing halts.
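
For example, the following minimal sketch reuses the WFLITE request and RETAILS2 subquery file from the earlier example, but forces DB_INFILE to expand the comparison into in-memory IF and WHERE phrases instead of generating a subquery:

SET DB_INFILE = EXPAND_ALWAYS
TABLE FILE WFLITE
SUM REVENUE_US
BY BUSINESS_SUB_REGION
BY STATE_PROV_CODE_ISO_3166_2
WHERE DB_INFILE(RETAILS2, BUSINESS_SUB_REGION, BUSINESS_SUB_REGION,
   STATE_PROV_CODE_ISO_3166_2, STATE_PROV_CODE_ISO_3166_2)
END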

Defining or Allocating a Holiday File

How to:

Some date functions that calculate business days use a holiday file to define days that are considered holidays. These days are not counted in calculations based on business days.

By default, the holiday file has a file name in the form HDAYxxxx.err and is on your path or, on z/OS, is a member named HDAYxxxx of a PDS allocated to DDNAME ERRORS. In your procedure or request, you must issue the SET HDAY=xxxx command to identify the file or member name. Alternatively, you can define the file to have any name and be stored anywhere or, on z/OS, allocate the holiday file as a sequential file of any name or as member HDAYxxxx of any PDS.
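
For example, under the default naming convention, a holiday file named HDAYBUSD.err that is already on your path (or, on z/OS, member HDAYBUSD of the PDS allocated to DDNAME ERRORS) can be activated with the following settings. This is a minimal sketch; BUSD is an arbitrary four-character suffix chosen for illustration:

SET HDAY = BUSD
SET BUSDAYS = _MTWTF_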

Syntax: How to FILEDEF or DYNAM the Holiday File

In all environments except z/OS, use the following syntax.

FILEDEF HDAYxxxx DISK {app|path}/filename.ext

On z/OS, use the following to allocate a sequential holiday file.

DYNAM ALLOC {DD|FILE} HDAYxxxx DA qualif.filename.suffix SHR REU

On z/OS, use the following to allocate a holiday file that is a member of a PDS.

DYNAM ALLOC {DD|FILE} HDAYxxxx DA qualif.filename.suffix(HDAYxxxx) SHR REU

where:

HDAYxxxx

Is the DDNAME for the holiday file. Your FOCEXEC or request must set the HDAY parameter to xxxx, where xxxx is any four characters you choose. If your holiday file is a member of a PDS, HDAYxxxx must also be the member name.

app

Is the name of the application directory that contains the holiday file.

path

Is the physical path to the directory that contains the holiday file.

filename.ext

Is the name and extension of the holiday file.

qualif.filename.suffix

Is the fully-qualified name of the sequential file that contains the list of holidays, or of the PDS with member HDAYxxxx that contains the list of holidays.

Example: Defining a Holiday File

The following holiday file, named holiday.data in the c:\ddrive\ibi directory on Windows, defines November 3, 2011 and December 24, 2011 as holidays.

20111103
20111224

The following request against the MOVIES data source uses the FILEDEF command to define this file as the holiday file. The logical name in the FILEDEF command is HDAYMMMM, and the procedure issues the SET HDAY=MMMM command. It then defines the date November 2, 2011 and calculates the next business day.

FILEDEF HDAYMMMM DISK c:\ddrive\ibi\holiday.data
SET HDAY = MMMM
SET BUSDAYS = _MTWTF_
DEFINE FILE MOVIES
NEWDATE/YYMD = '20111102';
NEXTDATE/YYMD = DATEADD(NEWDATE, 'BD', 1);
END
TABLE FILE MOVIES           
SUM COPIES NEWDATE NEXTDATE
ON TABLE SET PAGE NOPAGE
END

The output shows that the next business day after November 2 is November 4 because November 3 is a holiday.

Example: Allocating the Holiday File to a Sequential File on z/OS

The following sequential file, named USER1.HOLIDAY.DATA, defines November 3, 2011 and December 24, 2011 as holidays.

20111103
20111224

The following request against the MOVIES data source uses the DYNAM command to allocate this file as the holiday file. The DDNAME in the DYNAM command is HDAYMMMM, and the procedure issues the SET HDAY=MMMM command. It then defines the date November 2, 2011 and calculates the next business day.

DYNAM ALLOC DD HDAYMMMM DA USER1.HOLIDAY.DATA SHR REU
SET HDAY = MMMM
SET BUSDAYS = _MTWTF_
DEFINE FILE MOVIES
NEWDATE/YYMD = '20111102';
NEXTDATE/YYMD = DATEADD(NEWDATE, 'BD', 1);
END
TABLE FILE MOVIES
SUM COPIES NEWDATE NEXTDATE
ON TABLE SET PAGE NOPAGE
END

The output shows that the next business day after November 2 is November 4 because November 3 is a holiday.

COPIES  NEWDATE     NEXTDATE
------  -------     --------
   117  2011/11/02  2011/11/04

Example: Allocating the Holiday File to a PDS Member on z/OS

The following holiday file, member HDAYMMMM in a PDS named USER1.HOLIDAY.DATA, defines November 3, 2011 and December 24, 2011 as holidays.

20111103
20111224

The following request against the MOVIES data source uses the DYNAM command to allocate this file as the holiday file. The DDNAME in the DYNAM command is HDAYMMMM, the member name is also HDAYMMMM, and the procedure issues the SET HDAY=MMMM command. It then defines the date November 2, 2011 and calculates the next business day:

DYNAM ALLOC DD HDAYMMMM DA USER1.HOLIDAY.DATA(HDAYMMMM) SHR REU
SET HDAY = MMMM
SET BUSDAYS = _MTWTF_
DEFINE FILE MOVIES
NEWDATE/YYMD = '20111102';
NEXTDATE/YYMD = DATEADD(NEWDATE, 'BD', 1);
END
TABLE FILE MOVIES
SUM COPIES NEWDATE NEXTDATE
ON TABLE SET PAGE NOPAGE
END

The output shows that the next business day after November 2 is November 4 because November 3 is a holiday:

COPIES  NEWDATE     NEXTDATE
------  -------     --------
   117  2011/11/02  2011/11/04

Validating Parameter Values Without Data File Access: REGEX

You can now validate an amper variable value without accessing the data by using the REGEX mask in VALIDATE. The REGEX mask specifies a regular expression to be used as the validation string. A regular expression is a sequence of special characters and literal characters that you can combine to form a search pattern.

Many references for regular expressions exist on the web. For a basic summary, see the section Using Regular Expressions in Chapter 1, Introduction to FOCUS, of the Overview and Operating Environments manual.

The syntax for validating a variable using the REGEX mask is

&variable.(|VALIDATE=REGEX,REGEX='regexpression').

where:

&variable

Is the variable to validate.

regexpression

Is the regular expression that specifies the acceptable values.
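
As a simpler sketch before the fuller example that follows, the following lines validate that a variable contains exactly five digits. The &ZIPCODE and &ZIPOK variable names and the prompt text are hypothetical; if the value does not match the pattern, the FOC2910 message listed at the end of this section is expected:

-SET &ZIPCODE = '10121';
-SET &ZIPOK = &ZIPCODE.(|VALIDATE=REGEX,REGEX='^\d{5}$').Zip Code.;
-TYPE Validated value: &ZIPOK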

For example, the following request validates a Social Security number in either xxxxxxxxx or xxx-xx-xxxx format:

-REPEAT NEXTFMT FOR &FMTCNT FROM 1 TO 2
-SET &EMPID1=DECODE &FMTCNT(1 '071382660' 2 '818-69-2173'); 
-SET &EMPID=IF &EMPID1.(|VALIDATE=REGEX,REGEX='^\d{3}\-?\d{2}\-?\d{4}$').Employee ID. CONTAINS '-' 
-       THEN EDIT(&EMPID1,'999$99$9999') ELSE &EMPID1; 
TABLE FILE EMPLOYEE 
HEADING
" "
"Testing EMPID = &EMPID1</1"                                                                   
PRINT EID CSAL                                                          
WHERE EID EQ '&EMPID.EVAL' 
ON TABLE SET PAGE NOPAGE
ON TABLE SET STYLE *
GRID=OFF,$
END 
-RUN 
-NEXTFMT 

The output is:

Testing EMPID = 071382660

EMP_ID       CURR_SAL
071382660  $11,000.00

Testing EMPID = 818-69-2173

EMP_ID       CURR_SAL
818692173  $27,062.00

The following messages display in case of an error:

(FOC2909) INVALID REGULAR EXPRESSION:
(FOC2910) RESPONSE DOES NOT MATCH THE REGULAR EXPRESSION:

SET ASNAMES=FLIP

The SET ASNAMES=FLIP command propagates the field names in the original Master File to the alias names in the HOLD Master File and the alias names in the original Master File to the field names in the HOLD Master File.

When generating field names and aliases for a HOLD file with the default setting for the ASNAMES parameter, if the HOLD file is a relational data source, the field names and aliases from the original Master File are propagated to the HOLD Master File. The alias names become the column names in the generated relational table.

If the HOLD file is not relational, field names from the original Master File are propagated to the HOLD Master File, but alias names are not propagated, and default aliases of the form E01, E02, and so on, are generated in the HOLD Master File.

With SET ASNAMES=ON, AS names specified in the request become the field names in the HOLD Master File. For a relational HOLD file, the AS name also becomes the TITLE attribute in the HOLD Master File.

When you set the ASNAMES parameter to FLIP, for relational HOLD files, the field names from the original Master File or the AS names specified in the request become the alias names in the HOLD Master File as well as the column names in the generated relational table and the TITLE attributes in the HOLD Master File. The alias names in the original Master File become the field names in the HOLD Master File, except when there is an AS name, in which case the original field name becomes the HOLD field name.

For non-relational HOLD files, the field names from the original Master File or the AS names from the request become the alias names in the HOLD Master File, and default field names are generated in the form F01, F02, and so on.

The following table summarizes how field names and aliases are generated for each type of HOLD file with the OFF, ON, and FLIP values of SET ASNAMES:

ASNAMES value   Relational HOLD File                        Non-Relational HOLD File
-------------   --------------------                        ------------------------
OFF             FIELDNAME = original field name             FIELDNAME = original field name
                ALIAS = original alias                      ALIAS = E01, E02, ...

ON              FIELDNAME = AS name (original field         FIELDNAME = AS name (original field
                name if no AS name is specified)            name if no AS name is specified)
                ALIAS = original alias                      ALIAS = E01, E02, ...

FLIP            FIELDNAME = original alias; if there is     FIELDNAME = F01, F02, ...
                an AS name, the original field name         ALIAS = AS name (original field
                ALIAS = AS name (original field name        name if no AS name is specified)
                if no AS name is specified)

Example: Generating a HOLD File With SET ASNAMES=FLIP

The following request generates a HOLD file in ALPHA format using the OFF value for SET ASNAMES. The field CURR_SAL has the AS name SALARY in the request:

SET ASNAMES=OFF
TABLE FILE EMPLOYEE
SUM CURR_SAL AS SALARY PCT_INC
BY DEPARTMENT
ON TABLE HOLD FORMAT ALPHA
END

In the HOLD Master File, AS names have not been propagated, the field names are from the original Master File, and default alias names are generated:

FILENAME=HOLD    , SUFFIX=FIX     , IOTYPE=STREAM, $
  SEGMENT=HOLD, SEGTYPE=S1, $
    FIELDNAME=DEPARTMENT, ALIAS=E01, USAGE=A10, ACTUAL=A10, $
    FIELDNAME=CURR_SAL, ALIAS=E02, USAGE=D12.2M, ACTUAL=A12, $
    FIELDNAME=PCT_INC, ALIAS=E03, USAGE=F6.2, ACTUAL=A06, $

The following version of the request generates a relational table:

SET ASNAMES=OFF
TABLE FILE EMPLOYEE
SUM CURR_SAL AS SALARY PCT_INC
BY DEPARTMENT
ON TABLE HOLD FORMAT SQLMSS
END

The field names from the original Master File have been propagated to the field names in the HOLD Master File, and the alias names from the original Master File have been propagated to the alias names in the HOLD Master File. The AS name for CURR_SAL has become the TITLE in the HOLD Master File:

FILENAME=HOLD   , SUFFIX=SQLMSS  , $
  SEGMENT=SEG01, SEGTYPE=S0, $
    FIELDNAME=DEPARTMENT, ALIAS=DPT, USAGE=A10, ACTUAL=A10, $
    FIELDNAME=CURR_SAL, ALIAS=CSAL, USAGE=D12.2M, ACTUAL=D8,
      TITLE='SALARY', $
    FIELDNAME=PCT_INC, ALIAS=PI, USAGE=F6.2, ACTUAL=F4, $

Changing SET ASNAMES to ON propagates the AS name SALARY to the field name in the HOLD Master File. The following is the Master File for the HOLD file in ALPHA format:

FILENAME=HOLD    , SUFFIX=FIX     , IOTYPE=STREAM, $
  SEGMENT=HOLD, SEGTYPE=S1, $
    FIELDNAME=DEPARTMENT, ALIAS=E01, USAGE=A10, ACTUAL=A10, $
    FIELDNAME=SALARY, ALIAS=E02, USAGE=D12.2M, ACTUAL=A12, $
    FIELDNAME=PCT_INC, ALIAS=E03, USAGE=F6.2, ACTUAL=A06, $

The following is the Master File for the HOLD file in relational format:

FILENAME=HOLD   , SUFFIX=SQLMSS  , $
  SEGMENT=SEG01, SEGTYPE=S0, $
    FIELDNAME=DEPARTMENT, ALIAS=DPT, USAGE=A10, ACTUAL=A10, $
    FIELDNAME=SALARY, ALIAS=CURR_SAL, USAGE=D12.2M, ACTUAL=D8,
      TITLE='SALARY', $
    FIELDNAME=PCT_INC, ALIAS=PI, USAGE=F6.2, ACTUAL=F4, $

Changing SET ASNAMES to FLIP propagates the AS name SALARY to the alias name in the HOLD Master File. In the ALPHA HOLD file, the other field names have been propagated to the alias names in the HOLD Master File, and default field names have been generated:

FILENAME=HOLD    , SUFFIX=FIX     , IOTYPE=STREAM, $
  SEGMENT=HOLD, SEGTYPE=S1, $
    FIELDNAME=F01, ALIAS=DEPARTMENT, USAGE=A10, ACTUAL=A10, $
    FIELDNAME=F02, ALIAS=SALARY, USAGE=D12.2M, ACTUAL=A12, $
    FIELDNAME=F03, ALIAS=PCT_INC, USAGE=F6.2, ACTUAL=A06, $

In the relational HOLD file, changing SET ASNAMES to FLIP propagates the AS name SALARY to the alias name in the HOLD Master File. For that field, the field name from the original Master File becomes the field name in the HOLD Master File and the TITLE attribute. The other field names have been propagated to the alias names in the HOLD Master File, and the corresponding alias names from the original Master File have been propagated to the field names in the HOLD Master File:

FILENAME=HOLD   , SUFFIX=SQLMSS  , $
  SEGMENT=SEG01, SEGTYPE=S0, $
    FIELDNAME=DPT, ALIAS=DEPARTMENT, USAGE=A10, ACTUAL=A10, $
    FIELDNAME=CURR_SAL, ALIAS=SALARY, USAGE=D12.2M, ACTUAL=D8,
      TITLE='SALARY', $
    FIELDNAME=PI, ALIAS=PCT_INC, USAGE=F6.2, ACTUAL=F4, $

Unicode Support

Unicode is a universal character encoding standard that assigns a code to every character and symbol in every language in the world. Unicode is the only encoding standard that ensures that you can retrieve or combine data using any combination of languages.

FOCUS supports a Unicode Transformation Format (UTF) called UTF-8 in ASCII environments, and UTF-EBCDIC in EBCDIC environments:

