Hints and Tips for Coding Db2 DDF Applications

This section suggests ways you may be able to improve the efficiency of your Db2 DDF applications.

Use the Distributed Relational Database Architecture Protocol

You can improve performance by switching from the Db2 Private Protocol (PP) to the Distributed Relational Database Architecture Protocol (DRDAP).

The DRDAP supports the following functions that are unavailable in PP:

  • Connections using TCP/IP.

  • Connections to Db2 data on non-z/OS platforms.

  • Connections using the SQL CONNECT statement.

  • DDL (for example, CREATE, DROP) and DCL (for example, GRANT, REVOKE) SQL statements.

  • Stored procedures, which help minimize network traffic between the application requester and application server.

  • Remote bind. Because the Db2 PP does not support remote bind, it treats all SQL as dynamic SQL, which can degrade performance.

  • Static SQL.

  • User-defined and LOB data types.

  • Thread pooling.

  • A simpler and more condensed message format, which improves network performance.
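As a simple illustration of the SQL CONNECT support, an application running under DRDAP can connect explicitly to a remote location by name (the location name REMOTEDB below is illustrative; it must be defined in the communications database):

CONNECT TO REMOTEDB;
  ... SQL statements executed at the remote server ...
CONNECT RESET;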

Use the Type 4 Driver in Db2 Universal Database

Db2 Universal Database (UDB) currently supports three types of driver: type 2, type 3 and type 4. The type 4 driver has the following benefits over the others:

  • It is independent from the Db2 CLI driver, so a Db2 client library is not necessary.

  • It can be used as an applet driver but, unlike the type 3 driver, it does not require a connection to an intermediate applet server, so its performance is better.

  • It is written in Java, and its file size is much smaller than that of the type 2 and type 3 drivers.

  • It is implemented as a DRDA AR (application requester) and provides direct connectivity to Db2 UDB, without needing to go through Db2 Connect.

  • Its common JDBC implementation across platforms provides portability and very good performance.

Important: IBM will be phasing out the type 2 and type 3 drivers. All future Java database connectivity enhancements will use the new Universal Driver architecture.
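With the Universal Driver, a type 4 connection is selected by the form of the JDBC URL, which names the server and port directly rather than a locally cataloged database. The host name, port, and database name below are illustrative:

jdbc:db2://db2host.example.com:446/SAMPLEDB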

Multi-row operations in a distributed environment

When executing INSERT statements on the distributed platform, Db2 ODBC clients use array input:

  • INSERT statements are bundled by the driver into a single network message and sent to the Db2 for z/OS system.

  • At the server, the message is taken apart and multiple INSERT statements are executed to insert all the rows that make up the input array. This reduces network traffic, as fewer messages need to be sent.

In Db2 for z/OS V8, array input can take advantage of Db2's multi-row INSERT capability. In this case, a single message is still sent (as before), but it contains a single multi-row INSERT statement. Because only one INSERT statement is executed, the number of API crossings between DDF and DBM1 is reduced, and therefore CPU and elapsed time are reduced as well.
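As a sketch, a multi-row INSERT combines host-variable arrays with the FOR n ROWS clause (the table, column, and host-variable names are illustrative):

INSERT INTO ORDER_ITEM (ORDERNO, ITEMNO, QTY)
  VALUES (:HVA_ORDERNO, :HVA_ITEMNO, :HVA_QTY)
  FOR :NUM_ROWS ROWS;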

You may need to modify your non-DDF applications so that they can exploit multi-row fetch operations. For example, you must code your applications to use host variable arrays and the new multi-row clauses in your SQL. However, when you use a remote client to connect to Db2 (for example, a Java application using a type 4 connection with the Universal Driver), Db2 can automatically use multi-row fetch when fetching rows from the tables while building a block that will be sent back to the client.
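In a local application, exploiting multi-row fetch explicitly might look like the following sketch (the cursor, table, and host-variable array names are illustrative):

DECLARE C1 CURSOR WITH ROWSET POSITIONING FOR
  SELECT ORDERNO, QTY FROM ORDER_ITEM;
OPEN C1;
FETCH NEXT ROWSET FROM C1 FOR 10 ROWS
  INTO :HVA_ORDERNO, :HVA_QTY;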

For Db2 to automatically exploit multi-row fetch for distributed applications:

  • The cursor must be read-only, or

  • You must be using CURRENTDATA(NO) with an ambiguous cursor.

When either of these conditions is satisfied, Db2 can enable block fetching. To build a block of data to send to the client, the DDF address space issues FETCH statements. When you are using block fetching against a V8 Db2 system, the DDF address space uses multi-row fetch to build the blocks of data to be sent to the client. This is transparent to the requester.
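For example, declaring a cursor explicitly as read-only makes it eligible for block fetching regardless of the CURRENTDATA setting (the cursor, table, and column names are illustrative):

DECLARE C1 CURSOR FOR
  SELECT LASTNAME, SALARY FROM EMP
  FOR READ ONLY;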

IBM states that this feature can yield significant CPU savings, up to 50% in cases where many rows are fetched.

Note that this enhancement does not require the client application to use FOR n ROWS or host variable arrays, and has no effect on the blocking done by DDF. It only affects the number of API crossings between DDF and DBM1.

Use the ON COMMIT DROP TABLE option of the DECLARE GLOBAL TEMPORARY TABLE statement (new with V8)

In Db2 Version 7, you must explicitly drop a declared temporary table before COMMIT, or the remote connection cannot become inactive. Db2 Version 8 provides the new ON COMMIT DROP TABLE option on the DECLARE GLOBAL TEMPORARY TABLE statement. With this option, a declared temporary table is implicitly dropped at commit time if there is no open held cursor against it.

Example (one of the following clauses can be specified):

DECLARE GLOBAL TEMPORARY TABLE ...
  ON COMMIT DELETE ROWS
  ON COMMIT PRESERVE ROWS
  ON COMMIT DROP TABLE
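A complete statement using the new option might look like the following sketch (the table and column names are illustrative):

DECLARE GLOBAL TEMPORARY TABLE SESSION.TEMP_ORDERS
  (ORDERNO INTEGER NOT NULL,
   QTY     INTEGER)
  ON COMMIT DROP TABLE;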

Before Version 8, Db2 treated a declared global temporary table like a base table with an open held cursor: a remote connection was not eligible to become inactive (and its thread could not be returned to the pool) at commit time.

Because scrollable cursors use declared temporary tables, connections with open scrollable cursors are not eligible to become inactive at COMMIT time. (When you CLOSE a scrollable cursor, its declared temporary table is dropped.)

Declared temporary tables are created as “pseudo” release-at-deallocate structures. Any data inserted into the table is removed at commit, unless the table was defined with ON COMMIT PRESERVE ROWS. The table remains active across a commit unless explicitly dropped. If thread reuse does not go through a "New User" process (for example, a DDF inactive connection), the declared temporary table is not dropped until the thread is terminated and deallocated. The lock on the TEMP table space remains as long as the table exists. An application that uses connection pooling can therefore retain all of its temporary tables for as long as the Web application server is running, and locks on those temporary tables are preserved, even past commits.

In Db2 V8, ON COMMIT DROP TABLE enables you to define a declared global temporary table so that the application does not have to explicitly drop it before committing:

  • Distributed connections can now be switched from active to inactive.

  • Declared temporary tables that do not have any open held cursors against them are dropped automatically at COMMIT.

When the declared temporary table is defined using ON COMMIT DROP TABLE, it is implicitly dropped at COMMIT if no open cursors on the table are defined WITH HOLD, which allows the connection to be switched to inactive.

You may need to modify your applications to take advantage of this feature, as a declared temporary table must be defined within the program to specify the ON COMMIT DROP TABLE clause.

Writing Accounting Records with KEEPDYNAMIC(YES)

When the Db2 DSNZPARM CMTSTAT is set to INACTIVE (new default for Db2 V8), Db2 writes an accounting record when a transaction commits and the connection qualifies to become inactive.

When using the KEEPDYNAMIC(YES) bind option, Db2 cannot disconnect the connection from the thread (which is what happens when the connection becomes inactive), because the thread contains information about locally cached statements.

Therefore, DDF threads must remain active when using KEEPDYNAMIC(YES). As a consequence, accounting records are not cut at transaction boundaries. Likewise, DDF does not re-establish WLM enclaves at transaction boundaries.

Although KEEPDYNAMIC(YES) still prevents DDF threads from becoming inactive in Db2 V8, Db2 can now cut accounting records at transaction boundaries. That is, at commit time, when Db2 evaluates whether a DDF connection is eligible for inactivation, if the only reason it cannot become inactive is the presence of cached dynamic SQL statements due to KEEPDYNAMIC(YES), DDF still cuts an accounting record and also completes the WLM enclave (as if KEEPDYNAMIC(YES) were not specified).

When a new request arrives from the client system over the connection (which is still active), Db2 creates a new enclave and starts a new accounting interval. With this new behavior, you may now consider period-based WLM goals for threads that commit frequently but cannot become inactive only because they use KEEPDYNAMIC(YES). Threads that cannot become inactive for other reasons do not reset the WLM enclave, and period-based goals are probably not right for them (as was the case in the past).

This new behavior is supported for DRDA clients only, not Db2 PP clients. As in Db2 V7, the presence of held cursors or declared temporary tables keeps the threads active and does not allow accounting intervals or WLM enclaves to complete.

Read-only queries using update locks (new with Db2 V8)

Depending on how WebSphere is deployed, the persistence layer (which interacts with the database on behalf of an entity bean) currently uses ISOLATION(RS) to retrieve one or more rows with the FOR UPDATE KEEP UPDATE LOCKS clause when loading the WebSphere entity beans. The Java application is then allowed to perform updates on those beans, and the updates are subsequently sent to Db2 as searched UPDATE statements. WebSphere cannot do a positioned update because after the row is read in, the cursor is closed. WebSphere uses this approach to minimize the number of open cursors at runtime.

In Db2 Version 7, you can only specify KEEP UPDATE (or EXCLUSIVE) LOCKS in combination with FOR UPDATE. The FOR UPDATE clause causes DRDA to use a separate network flow for each operation (OPEN, FETCH,... CLOSE), because the cursor may appear in an UPDATE or DELETE WHERE CURRENT OF statement (a positioned update or delete).

Migrating your application to Db2 V8 can provide significant performance improvements. Db2 V8 can obtain exclusive locks with a FOR READ ONLY query. This allows the JDBC driver and DDF to use block fetch for the SELECT, eliminating the extra network messages required with a FOR UPDATE query, while still obtaining and holding the locks WebSphere needs for the searched UPDATE statement.
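Under Db2 V8, the persistence layer can issue a query like the following sketch, which remains block-fetchable while still acquiring update locks (the table, column, and host-variable names are illustrative):

SELECT ORDERNO, STATUS
  FROM ORDERS
  WHERE ORDERNO = :HV_ORDERNO
  FOR READ ONLY
  WITH RS USE AND KEEP UPDATE LOCKS;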

Use Dynamic Scrollable Cursors (new with Db2 V8)

Db2 V8 introduces dynamic scrollable cursors. A dynamic scrollable cursor does not materialize the result table. Instead, it scrolls directly on the base table and is therefore sensitive to all committed inserts, updates, and deletes. Dynamic scrollable cursors are supported for index scan and table space scan access paths, and can be used with row-level or rowset-level operations.
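A dynamic scrollable cursor declaration and fetch might look like the following sketch (the cursor, table, column, and host-variable names are illustrative):

DECLARE C1 SENSITIVE DYNAMIC SCROLL CURSOR FOR
  SELECT ORDERNO, QTY FROM ORDER_ITEM;
OPEN C1;
FETCH SENSITIVE ABSOLUTE +5 FROM C1
  INTO :HV_ORDERNO, :HV_QTY;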