Starting in FOCUS 7.6.1, the Adapter for Adabas has several new settings, the Adapter for VSAM supports RRDS data sources, the Adapter for Oracle supports Oracle 10g and long column descriptions, and the Adapter for Teradata supports using Direct SQL Passthru to call a macro or stored procedure. In addition, the relational adapters enable you to control generation of calculations, and the PASSRECS setting is supported by the adapters for VSAM, IMS, and Adabas.
The SET CALLTYPE command enables you to set the data retrieval type per session, file, or segment by switching dynamically between physical and logical reads.
The following examples illustrate the use of variations of dynamic CALLTYPE syntax:
Uses FIND as the type of data retrieval call for all files.
Retrieves data for the EMPFILE using RL as the type of data retrieval call.
Note that you can issue several commands for different files. Each command must be issued separately, and changes are cumulative. For example, the following commands set RL retrieval for EMPFILE, FIND for EMP2, and MIXED for VEHICLES, and then override EMPFILE to use FIND for one particular Adabas segment (segment S02):
ENGINE ADBSINX SET CALLTYPE RL AFD EMPFILE
ENGINE ADBSINX SET CALLTYPE FIND AFD EMP2
ENGINE ADBSINX SET CALLTYPE MIXED AFD VEHICLES
ENGINE ADBSINX SET CALLTYPE FIND AFD EMPFILE SEGNAME S02
Displays your settings.
Returns to the default settings. This command resets all SET CALLTYPE commands for all Access Files.
You can use the FETCHJOIN command to take advantage of Multifetch efficiencies in Join operations for a session. This feature is particularly useful when you are joining to a base file in which sorting is based on the key field that is used for the join.
ENGINE ADBSINX SET FETCHJOIN {ON|OFF}
where:
ADBSINX
Indicates the Adapter for Adabas.
ON
Sets the FETCHJOIN feature on for the user session. ON is the default value.
OFF
Sets the FETCHJOIN feature off for the user session. This value is recommended when joining to a base file that is not sorted by the key field used for the join. It avoids buffering and reduces processing time.
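For example, the following sketch turns the feature off for the session before joining to an unsorted cross-referenced file (the EMP_ID join field is illustrative, not taken from this guide):

```
ENGINE ADBSINX SET FETCHJOIN OFF
JOIN EMP_ID IN EMPFILE TO EMP_ID IN EMP2 AS J1
TABLE FILE EMPFILE
PRINT EMP_ID
END
```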
Adabas includes two data definition options, Not Counted (NC) and Not Null or Null Value Not Allowed (NN), for providing SQL-compatible null representation for Software AG's mainframe Adabas SQL Server (ESQ) and other Structured Query Language (SQL) database query languages.
The NC and NN options cannot be applied to fields defined:
NC: SQL Null Value Option
Without the Not Counted (NC) option, a null value is either zero or blank depending on the field format.
With the NC option, zeros or blanks specified in the record buffer are interpreted according to the "null indicator" value: either as true zeros or blanks (that is, as "significant" nulls) or as undefined values (that is, as true SQL or "insignificant" nulls).
If the field defined with the NC option has no value specified in the record buffer, the field value is always treated as an SQL null.
Note: On the mainframe platform, subdescriptors and superdescriptors defined with NC=YES and MISSING=ON parameters cannot be used to search for the SQL NULL value, as that causes an Adabas RC 61.
NN: SQL Not Null Option
The Not Null or Null Value Not Allowed (NN) option may only be specified when the NC option is also specified for a data field. The NN option indicates that an NC field must always have a value (including zero or blank) defined; it cannot contain "no value".
The Create Synonym facility adds the parameter NC=YES in the Access File for fields defined with the NC option.
If a field with the NC option does not also have the NN option, the parameter MISSING=ON is added to the field definition in the Master File as well. Subdescriptors and superdescriptors derived from this field are defined with the options TYPE=NOP, NC=YES, and MISSING=ON. No component fields are associated with these subdescriptors and superdescriptors.
Sample Master File and Access File contents with the NC option:
Master File:
FILENAME=ADANC1, SUFFIX=ADBSINX , $
SEGMENT=S01, SEGTYPE=S0, $
FIELDNAME=AA_FIELD, ALIAS=AA, USAGE=I6, ACTUAL=I2, FIELDTYPE=I, $
FIELDNAME=AB_FIELD, ALIAS=AB, USAGE=I6, ACTUAL=I2, FIELDTYPE=I, MISSING=ON, $
FIELDNAME=AC_FIELD, ALIAS=AC, USAGE=I6, ACTUAL=I2, FIELDTYPE=I, $
FIELDNAME=AD_FIELD, ALIAS=AD, USAGE=I6, ACTUAL=I2, MISSING=ON, $
FIELDNAME=AE_FIELD, ALIAS=AE, USAGE=I6, ACTUAL=I2, $
FIELDNAME=BA_FIELD, ALIAS=BA, USAGE=A2, ACTUAL=A2, FIELDTYPE=I, $
FIELDNAME=BB_FIELD, ALIAS=BB, USAGE=A2, ACTUAL=A2, FIELDTYPE=I, MISSING=ON, $
FIELDNAME=BC_FIELD, ALIAS=BC, USAGE=A2, ACTUAL=A2, FIELDTYPE=I, $
FIELDNAME=BD_FIELD, ALIAS=BD, USAGE=A2, ACTUAL=A2, MISSING=ON, $
FIELDNAME=BE_FIELD, ALIAS=BE, USAGE=A2, ACTUAL=A2, $
GROUP=G1_GROUP, ALIAS=G1, USAGE=A6, ACTUAL=A6, $
FIELDNAME=CA_FIELD, ALIAS=CA, USAGE=A2, ACTUAL=A2, $
FIELDNAME=CB_FIELD, ALIAS=CB, USAGE=A2, ACTUAL=A2, MISSING=ON, $
FIELDNAME=CC_FIELD, ALIAS=CC, USAGE=A2, ACTUAL=A2, $
$
$$$$$$$$$$$$$$$$$$$$$$ Superdescriptor $$$$$$$$$$$$$$$$$$$$$$$$$$
$
FIELDNAME=S1_FIELD, ALIAS=S1, USAGE=A6, ACTUAL=A6, FIELDTYPE=I, MISSING=ON, $
$ FIELD=BA_FIELD_S01 ,ALIAS=BA ,A2 ,A2 ,INDEX=I,$
$ FIELD=BB_FIELD_S01 ,ALIAS=BB ,A2 ,A2 ,INDEX=I,$
$ FIELD=BE_FIELD_S01 ,ALIAS=BE ,A2 ,A2 ,       ,$
$
$$$$$$$$$$$$$$$$$$$$$$ Superdescriptor $$$$$$$$$$$$$$$$$$$$$$$$$$
$
FIELDNAME=S2_FIELD, ALIAS=S2, USAGE=A6, ACTUAL=A6, FIELDTYPE=I, MISSING=ON, $
$ FIELD=BA_FIELD_S02 ,ALIAS=BA ,A2 ,A2 ,INDEX=I,$
$ FIELD=BC_FIELD_S02 ,ALIAS=BC ,A2 ,A2 ,INDEX=I,$
$ FIELD=BD_FIELD_S02 ,ALIAS=BD ,A2 ,A2 ,       ,$
$
$$$$$$$$$$$$$$$$$$$$$$$ Subdescriptor $$$$$$$$$$$$$$$$$$$$$$$$$$$
$
FIELDNAME=S3_FIELD, ALIAS=S3, USAGE=A1, ACTUAL=A1, FIELDTYPE=I, MISSING=ON, $
$
$$$$$$$$$$$$$$$$$$$$$$$ Subdescriptor $$$$$$$$$$$$$$$$$$$$$$$$$$$
$
FIELDNAME=S4_FIELD, ALIAS=S4, USAGE=A1, ACTUAL=A1, FIELDTYPE=I, $
Access File:
RELEASE=6, OPEN=YES, $
SEGNAM=S01, ACCESS=ADBS, FILENO=120, DBNO=3, CALLTYPE=FIND,
  UNQKEYNAME=AA_FIELD, WRITE=YES, $
$ CALLTYPE=RL ,SEQFIELD=AA_FIELD ,$
$ FIELD=AA_FIELD ,TYPE=DSC ,NC= ,$
FIELD=AB_FIELD, TYPE=DSC, NC=YES, $
FIELD=AC_FIELD, TYPE=DSC, NC=YES, $
FIELD=AD_FIELD, TYPE=,    NC=YES, $
FIELD=AE_FIELD, TYPE=,    NC=YES, $
$ FIELD=BA_FIELD ,TYPE=DSC ,NC= ,$
FIELD=BB_FIELD, TYPE=DSC, NC=YES, $
FIELD=BC_FIELD, TYPE=DSC, NC=YES, $
FIELD=BD_FIELD, TYPE=,    NC=YES, $
FIELD=BE_FIELD, TYPE=,    NC=YES, $
FIELD=CB_FIELD, TYPE=,    NC=YES, $
FIELD=CC_FIELD, TYPE=,    NC=YES, $
FIELD=S1_FIELD, TYPE=NOP, NC=YES, $
FIELD=S2_FIELD, TYPE=NOP, NC=YES, $
FIELD=S3_FIELD, TYPE=NOP, NC=YES, $
FIELD=S4_FIELD, TYPE=NOP, NC=YES, $
Note: The following processing rules apply for fields defined with the NC=YES and MISSING=ON parameters:
SQL
INSERT INTO ADANC1
  (AA_FIELD, AB_FIELD, AC_FIELD, BA_FIELD, BC_FIELD, BE_FIELD,
   CA_FIELD, CC_FIELD)
VALUES
  (21, 22, 23, 'B1', 'B3', 'B5', 'A2', 'C2');
END
SQL
UPDATE ADANC1 SET AB_FIELD=NULL WHERE AA_FIELD = 21;
END
SQL
SELECT AA_FIELD, AB_FIELD, AC_FIELD, AD_FIELD, AE_FIELD FROM ADANC1
WHERE AB_FIELD IS NULL AND AA_FIELD GE 10;
END

SQL
SELECT AA_FIELD, AB_FIELD, AC_FIELD, AD_FIELD, AE_FIELD FROM ADANC1
WHERE AB_FIELD IS NOT NULL;
END

TABLE FILE ADANC1R
PRINT AA_FIELD AB_FIELD AC_FIELD AD_FIELD AE_FIELD G1_GROUP
WHERE AB_FIELD EQ MISSING;
END
SQL
SELECT AA_FIELD, AB_FIELD, AC_FIELD, AD_FIELD, AE_FIELD FROM ADANC1
WHERE S1_FIELD = 'A1A2A4' AND AA_FIELD > 10;
END

SQL
SELECT AA_FIELD, AB_FIELD, AC_FIELD, AD_FIELD, AE_FIELD FROM ADANC1
WHERE S1_FIELD IS NOT NULL AND AA_FIELD > 10;
END
PASSRECS returns the number of rows affected by a successfully executed SQL Passthru INSERT, UPDATE, or DELETE command.
ENGINE INT SET PASSRECS {ON|OFF}
where:
INT
Indicates that the PASSRECS setting in this command will be applied globally to all adapters that support SQL INSERT, UPDATE, and DELETE commands.
ON
Provides the number of rows affected in the application program SCB count member after the successful execution of an SQL Passthru INSERT, UPDATE, or DELETE command. ON is the default value.
OFF
Provides no information after the successful execution of an SQL Passthru INSERT, UPDATE, or DELETE command.
In addition, the adapter updates the &RECORDS system variable with the number of rows affected. You can access this variable using Dialogue Manager.
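For example, the following sketch reads &RECORDS in Dialogue Manager after a Passthru DELETE. The Teradata engine (SQLDBC) is used here for illustration; the EMPTABLE table and SALARY column are hypothetical:

```
ENGINE INT SET PASSRECS ON
SQL SQLDBC
DELETE FROM EMPTABLE WHERE SALARY < 10000;
END
-TYPE ROWS DELETED: &RECORDS
```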
The Adapter for VSAM supports both direct and sequential retrieval of fixed- and variable-length records from RRDS files, as well as modification of those records. RRDS files are VSAM files that are accessed through Relative Record Number keys (RRNs) and are processed in basically the same manner as key-sequenced (KSDS or KSEQ) VSAM files.
The RRN field that serves as the primary (unique) key for the segment follows all real fields in the physical root segment of the Master File and establishes the sequence for the records. The RRN field description must contain the following attributes:
ALIAS=RRN, USAGE=I10, ACTUAL=I4
Records in RRDS files are processed sequentially if the request does not provide a screening condition or join for the RRN-associated field.
FILE=RRDSVSAM, SUFFIX=VSAM, DATASET=PGMYAN.RRDSVSAM.CLUSTER, $
SEGNAME=ROOT ,SEGTYPE=S0 ,$
FIELD=CODE       ,          ,USAGE=A2  ,ACTUAL=A2  ,$
FIELD=NAME       ,          ,USAGE=A10 ,ACTUAL=A10 ,$
FIELD=VALUE      ,          ,USAGE=P2  ,ACTUAL=Z1  ,$
FIELD=ACCESS_KEY ,ALIAS=RRN ,USAGE=I10 ,ACTUAL=I4  ,$
The following examples show how to view and apply unique Relative Record Number (RRN) key values when working with records in RRDS files.
Printing the records with their key values
TABLE FILE RRDSVSAM
PRINT CODE NAME VALUE ACCESS_KEY
END
Screening records by unique key values
TABLE FILE RRDSVSAM
PRINT CODE NAME VALUE ACCESS_KEY
IF ACCESS_KEY GE 5
IF ACCESS_KEY LE 15
END
Retrieving records by unique key values
TABLE FILE RRDSVSAM
PRINT CODE NAME VALUE ACCESS_KEY
IF ACCESS_KEY EQ 5 OR 7 OR 9
END
Modifying records by unique key values
MODIFY FILE RRDSVSAM
ECHO
FREEFORM ACCESS_KEY
MATCH ACCESS_KEY
 ON NOMATCH COMPUTE CODE = 'A1';
 ON NOMATCH COMPUTE NAME = 'RECORD 20';
 ON NOMATCH INCLUDE
 ON MATCH REJECT
DATA
ACCESS_KEY=20
END
Updating matching records in RRDS files by unique key values
MODIFY FILE RRDSVSAM
ECHO
FREEFORM ACCESS_KEY
MATCH ACCESS_KEY
 ON NOMATCH REJECT
 ON MATCH TYPE "CODE: <D.CODE>, NAME: <D.NAME>, VALUE: <D.VALUE>";
 ON MATCH COMPUTE VALUE = 9;
 ON MATCH UPDATE VALUE
DATA
ACCESS_KEY=20
END
Deleting RRDS records based on unique key values
MODIFY FILE RRDSVSAM
ECHO
FREEFORM ACCESS_KEY
MATCH ACCESS_KEY
 ON NOMATCH REJECT
 ON MATCH TYPE "CODE: <D.CODE>, NAME: <D.NAME>, VALUE: <D.VALUE>";
 ON MATCH DELETE
DATA
ACCESS_KEY=7
END
You can create a report output file—that is, a HOLD file—as a native DBMS temporary table. This increases performance by keeping the entire reporting operation on the DBMS server, instead of downloading data to your computer and then back to the DBMS server.
For example, if you temporarily store report output for immediate use by another procedure, storing it as a temporary table instead of creating a standard HOLD file avoids the overhead of transmitting the interim data to your computer.
The temporary table columns are created from the report elements of the request, except for those for which NOPRINT is specified.
The temporary table that you create from your report will be the same data source type (that is, the same DBMS) as the data source from which you reported. If the data source from which you reported contains multiple tables, all must be of the same data source type and reside on the same instance of the DBMS server.
You can choose between several types of table persistence.
You can create extract files as native DBMS tables with the following adapters:
The syntax to save report output as a native DBMS temporary table is
ON TABLE HOLD [AS filename] FORMAT SAME_DB [PERSISTENCE persistValue]
where:
filename
Specifies the name of the HOLD file. If you omit AS filename, the name of the temporary table defaults to HOLD.
Because each subsequent HOLD command overwrites the previous HOLD file, it is recommended to specify a name in each request to direct the extracted data to a separate file, thereby preventing an earlier file from being overwritten by a later one.
persistValue
Specifies the type of table persistence and related table properties. This is optional for DBMSs that support volatile tables, and required otherwise. For information about support for volatile tables for a particular DBMS, see Temporary Table Properties for SAME_DB Persistence Values, and consult your DBMS vendor's documentation.
Is one of the following:
VOLATILE
Specifies that the table is local to the DBMS session. A temporary synonym (a Master File and Access File) is generated automatically; it expires when the server session ends.
This is the default persistence setting for all DBMSs that support volatile tables.
For information about support for the volatile setting, and about persistence and other table properties, for a particular DBMS, see Temporary Table Properties for SAME_DB Persistence Values, and consult your DBMS vendor's documentation.
GLOBAL_TEMPORARY
Specifies that while the table exists, its definition will be visible to other database sessions and users, though its data will not be. A permanent synonym (a Master File and Access File) is generated automatically.
For information about support for the global temporary setting, and about persistence and other table properties, for a particular DBMS, see Temporary Table Properties for SAME_DB Persistence Values, and consult your DBMS vendor's documentation.
Specifies that a regular permanent table will be created. A permanent synonym—a Master File and Access File—is generated automatically.
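For example, the following sketch holds a summary as a volatile table and reads it back in the same session (the EMPLOYEE file and its DEPARTMENT and SALARY fields are illustrative, not from this guide):

```
TABLE FILE EMPLOYEE
SUM SALARY
BY DEPARTMENT
ON TABLE HOLD AS DEPTSAL FORMAT SAME_DB PERSISTENCE VOLATILE
END

TABLE FILE DEPTSAL
PRINT DEPARTMENT SALARY
END
```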
The following table provides additional detail about persistence and other properties of temporary tables of different data source types that are supported for use with HOLD FORMAT SAME_DB.
DBMS | VOLATILE | GLOBAL_TEMPORARY
---|---|---
DB2 | A volatile table is created using the DECLARE GLOBAL TEMPORARY TABLE command with the ON COMMIT PRESERVE ROWS option. Declared global temporary tables persist and are visible only within the current session (connection). SESSION is the schema name for all declared global temporary tables. | DB2 Release 7.1 and up for z/OS only: a global temporary table is created using the CREATE GLOBAL TEMPORARY TABLE command. The definition of a created global temporary table is visible to other sessions, but the data is not. The data is deleted at the end of each transaction (COMMIT or ROLLBACK command). The table definition persists after the session ends.
Oracle | This type of table is not supported by Oracle. | The table's definition is visible to all sessions; its data is visible only to the session that inserts data into it. The table's definition persists for the same period as the definition of a regular table.
Teradata | A volatile table definition and data are visible only within the session that created the table and inserted the data. The volatile table is created with the ON COMMIT PRESERVE ROWS option. | A global temporary table persists for the same duration as a permanent table. The definition is visible to all sessions, but the data is visible only to the session that inserted the data. The global temporary table is created with the ON COMMIT PRESERVE ROWS option.
Each HOLD file column is assigned its name:
A primary key or an index is created for the HOLD table. The key or index definition is generated from the sort (BY) keys of the TABLE command, except for undisplayed sort keys (that is, sort keys for which NOPRINT is specified). To determine whether a primary key or an index will be created:
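For example, in the following sketch both displayed sort fields would contribute to the generated key or index (the EMPLOYEE file and its fields are illustrative):

```
TABLE FILE EMPLOYEE
SUM SALARY
BY DEPARTMENT
BY EMP_ID
ON TABLE HOLD AS PAYKEYS FORMAT SAME_DB
END
```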
The adapter supports column descriptions of up to 255 characters.
The Adapter for Oracle now supports Oracle Release 10g. To install this version of the adapter, you must edit and submit member G10FORA in the FOCSQL.DATA data set. This member links Oracle with FOCUS.
Oracle 10g is stored in a PDSE data set. To link Oracle 10g to FOCUS, you must link it to a version of the FOCUS FOCLIB.LOAD data set that is also stored in a PDSE data set.
To run an interactive FOCUS session that accesses Oracle 10g, you must have a region size that is at least 32 megabytes.
Following is the G10FORA member of the FOCSQL.DATA data set. Add a job card, make the necessary edits, and submit this JCL to install the adapter:
//job card goes here
//*********************************************************************
//* Name: G10FORA JCL
//*
//* Function: Linkedit the MVS FOCUS/ORACLE 10 interface
//*
//* Substitutions: -Change "qualif" into the high level qualifier
//*                 for your FOCUS datasets.
//*                -Change "qualpdse.LOAD" to the name of your
//*                 FOCUS PDSE load library.
//*                -Change "oraqual.SQLLIB" to the name of your
//*                 Oracle SQL library.
//*                -Change "libprfx" to the high level qualifier
//*                 of your linker library (like CEE)
//*                -Change "plang" to the name of your linker
//*                 message name (like EDCPMSGE)
//*********************************************************************
//*
//LKED1    EXEC PGM=IEWL,PARM='LIST,NOXREF,LET,RENT'
//OBJECT   DD DISP=SHR,DSN=qualif.FOCSQL.DATA
//SYSMODIN DD DISP=SHR,DSN=qualif.FOCSQL.LOAD
//SYSLMOD  DD DISP=SHR,DSN=qualpdse.LOAD
//SYSUT1   DD UNIT=SYSDA,SPACE=(100,(50,50))
//SYSPRINT DD SYSOUT=A
//LKED1.SYSLIN DD *
 INCLUDE SYSMODIN(OR8FOC)
 INCLUDE OBJECT(OR8FOC)
 NAME ORAFOC(R)
/*
//*
//LKED2    EXEC PGM=IEWL,
//         PARM='LIST,NOXREF,LET,RENT,AMODE=31,RMODE=ANY,UPCASE'
//SYSLIB   DD DISP=SHR,DSN=libprfx.SCEELKED
//SYSMSGS  DD DSN=libprfx.SCEEMSGP(plang),DISP=SHR
//OBJECT   DD DISP=SHR,DSN=qualif.FOCSQL.DATA
//ORAOBJ   DD DISP=SHR,DSN=oraqual.SQLLIB
//SYSLMOD  DD DISP=SHR,DSN=qualpdse.LOAD
//SYSUT1   DD UNIT=SYSDA,SPACE=(100,(50,50))
//SYSPRINT DD SYSOUT=A
//LKED2.SYSLIN DD *
 INCLUDE OBJECT(OR8PS2)
 INCLUDE OBJECT(ORAPAS10)
 INCLUDE ORAOBJ(ORASTBL)
 NAME ORAPAS2(R)
/*
where:
qualif
Is the high-level qualifier for your production FOCUS data sets.
qualpdse
Is the high-level qualifier for your FOCUS load library in PDSE format.
oraqual
Is the high-level qualifier for your Oracle SQL library.
libprfx
Is the high-level qualifier for your link edit load library.
plang
Is the linkage editor message member name.
Calculations can be processed differently in different RDBMSs and operating environments. If you want FOCUS to handle calculations instead of the RDBMS, issue the SQL SET OPTIMIZATION NOAGGR command. This command disables optimization of calculations (DEFINE fields) without disabling optimization of join and sort operations.
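For example, in the following sketch the TOTAL_PAY calculation is evaluated by FOCUS rather than being passed to the RDBMS, while the join and sort work can still be optimized (the EMPLOYEE file and its SALARY, BONUS, and DEPARTMENT fields are illustrative):

```
SQL SET OPTIMIZATION NOAGGR
DEFINE FILE EMPLOYEE
TOTAL_PAY/D12.2 = SALARY + BONUS;
END
TABLE FILE EMPLOYEE
SUM TOTAL_PAY
BY DEPARTMENT
END
```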
SQL Passthru is supported for Teradata macros and stored procedures.
Macros need to be developed within Teradata using the CREATE or REPLACE MACRO command. Procedures need to be developed within Teradata using the CREATE PROCEDURE command.
You must call a macro in the same transaction mode in which it was compiled. To find out which transaction mode is currently in effect, issue the HELP SESSION command:
SQL SQLDBC
HELP SESSION;
TABLE FILE SQLOUT
PRINT TRANSACTION_SEMANTICS
END
Before you can call a stored procedure or a macro, you must set the connection accordingly by issuing the SET MACRO command:
SQL SQLDBC SET MACRO {ON|OFF}
where:
ON
Enables you to call a macro. ON is the default value.
OFF
Enables you to call a stored procedure.
This is an example of the syntax for calling a macro:
ENGINE SQLDBC
EX SAMPLE PARM1,PARM2,PARM3...;
TABLE FILE SQLOUT
END
The supported syntax to call a stored procedure is shown below.
ENGINE SQLDBC
EX SAMPLE PARM1,PARM2,PARM3...;
TABLE FILE SQLOUT
END
When using the adapter with:
CREATE OR REPLACE PACKAGE pack1 AS
  TYPE nfrectype IS RECORD (
    employee NF29005.EMPLOYEE_ID5%TYPE,
    ssn5     NF29005.SSN5%TYPE,
    l_name   NF29005.LAST_NAME5%TYPE,
    f_name   NF29005.FIRST_NAME5%TYPE,
    birthday NF29005.BIRTHDATE5%TYPE,
    salary   NF29005.SALARY5%TYPE,
    joblevel NF29005.JOB_LEVEL5%TYPE);
  TYPE nfcurtype IS REF CURSOR RETURN nfrectype;
  PROCEDURE proc1(c_saltable IN OUT nfcurtype);
END pack1;
/
CREATE OR REPLACE PACKAGE BODY pack1 AS
  PROCEDURE proc1 (c_saltable IN OUT nfcurtype) IS
  BEGIN
    OPEN c_saltable FOR
      SELECT EMPLOYEE_ID5, SSN5, LAST_NAME5, FIRST_NAME5,
             BIRTHDATE5, SALARY5, JOB_LEVEL5
      FROM NF29005;
  END proc1;  -- end of procedure
END pack1;    -- end of package body
/