Issue with a SELECT ... FOR UPDATE on DB2 11.1

Hi!

We have an issue in our DB2 11.1 database with a SELECT ... FOR UPDATE that is executed many times per second; it is generating many locks and the application hangs.

SQL_TEXT:
SELECT flag1,flag2,flag3,status1,status2,status3,status4,status5,status_ext,device_fitness,cassette_fitness,configid,last_cmdtype,last_txntype,oar_screen,lastmsg_time,trace,tpdu,emv_identifier,language,pcode,respcode,acct_num,acct_num2,tvn,luno,cap_date,msg_coord_num,oar_line,model_specific,node, cassette_dispense, statetablename, statetablever, fault, severity, availdate, availtime,cmdmbid, balancerid, site_id, shclog_id, last_modify_dttime FROM atmdevicestate WHERE institutionid = ? and group_name = ? and unit = ? FOR UPDATE

The ATMDEVICESTATE table only has 1063 rows and two indexes:

INDEX ATMDEVICESTATE_1 ("LASTMSG_TIME" ASC, "GROUP_NAME" ASC, "UNIT" ASC) COMPRESS YES INCLUDE NULL KEYS ALLOW REVERSE SCANS
UNIQUE INDEX ATMDEVICESTATE_IX ("INSTITUTIONID" ASC, "GROUP_NAME" ASC, "UNIT" ASC) COMPRESS YES INCLUDE NULL KEYS ALLOW REVERSE SCANS

Is there a way to prevent the locks generated by this SELECT ... FOR UPDATE with some instance/database parameter, or how else can we avoid them?

In the top 5 SQL statements by number of executions appear the SELECT ... FOR UPDATE, the UPDATE ... WHERE CURRENT OF cursor, and the plain SELECT, all three against the same ATMDEVICESTATE table:

EXECUTIONS TIME_SECONDS TEXT


4617270 0 SELECT flag1,flag2,flag3,status1,status2,status3,status4,status5,status_ext,device_fitness,cassette_fitness,configid,last_cmdtype,last_txntype,oar_screen,lastmsg_time,trace,tpdu,emv_identifier,language,pcode,respcode,acct_num,acct_num2,tvn,luno,cap_date,msg_coord_num,oar_line,model_specific,node, cassette_dispense, statetablename, statetablever, fault, severity, availdate, availtime,cmdmbid, balancerid, site_id, shclog_id, last_modify_dttime FROM atmdevicestate WHERE institutionid = ? and group_name = ? and unit = ? FOR UPDATE
4616826 0 UPDATE atmdevicestate SET flag1 = ?,flag2 = ?,flag3 = ?,status1 = ?,status2 = ?,status3 = ?,status4 = ?,status5 = ?,status_ext = ?,device_fitness = ?,cassette_fitness = ?,configid = ?,last_cmdtype = ?,last_txntype = ?,oar_screen = ?,lastmsg_time = ?,trace = ?,tpdu = ?,emv_identifier = ?,language = ?,pcode = ?,respcode = ?,acct_num = ?,acct_num2 = ?,tvn = ?,luno = ?,cap_date = ?,msg_coord_num = ?,oar_line = ?,model_specific = ?,node = ?, cassette_dispense = ?, statetablename = ?, statetablever = ?, fault = ?, severity = ?, availdate = ?, availtime = ?,cmdmbid = ?, balancerid = ?, site_id = ?, shclog_id = ?, last_modify_dttime = ? WHERE CURRENT OF SQL_CURSH200C4
3174610 0 SELECT flag1,flag2,flag3,status1,status2,status3,status4,status5,status_ext,device_fitness,cassette_fitness,configid,last_cmdtype,last_txntype,oar_screen,lastmsg_time,trace,tpdu,emv_identifier,language,pcode,respcode,acct_num,acct_num2,tvn,luno,cap_date,msg_coord_num,oar_line,model_specific,node, cassette_dispense, statetablename, statetablever, fault, severity, availdate, availtime,cmdmbid, balancerid, site_id, shclog_id, last_modify_dttime FROM atmdevicestate WHERE institutionid = ? and group_name = ? and unit = ?
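For reference, this is the flow we believe the application follows, reconstructed from the three captured statements above (the column list is shortened and the host variables and COMMIT placement are our assumptions):

-- Presumed embedded-SQL flow; cursor name SQL_CURSH200C4 taken from the captured UPDATE
DECLARE SQL_CURSH200C4 CURSOR FOR
  SELECT flag1, flag2, status1, lastmsg_time          -- column list shortened here
    FROM atmdevicestate
   WHERE institutionid = ? AND group_name = ? AND unit = ?
     FOR UPDATE;
OPEN SQL_CURSH200C4;
FETCH SQL_CURSH200C4 INTO :flag1, :flag2, :status1, :lastmsg_time;
UPDATE atmdevicestate
   SET flag1 = ?, flag2 = ?, status1 = ?, lastmsg_time = ?
 WHERE CURRENT OF SQL_CURSH200C4;
COMMIT;

If that is accurate, the FETCH takes a U (update) lock on the row and the UPDATE converts it to an X lock that is held until COMMIT; with millions of executions against only about a thousand rows, concurrent transactions inevitably queue on the same rows.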

We think it is a programming issue, but the application team says it is a problem with db2sync, because in their analysis they observed high times in commits/checkpoints in the database.

The IBM team said there is no way to avoid these locks. We are reviewing the isolation level, but we do not know which isolation level is recommended to reduce/avoid blocking on the application side.

-> The history file is less than 5 MB
-> Lock waits are not so high in the snapshots
-> Query execution times are good

Thanks!

Hello villalva11,

These locks are there "by design": this level of locking is expected and required for the FOR UPDATE clause, and there is no way to change it. There is no DB/instance parameter for it (trust IBM on this :-)), but you can try to rewrite the logic. It depends on the level of isolation you require.

Check the combination of FOR READ ONLY with USE AND KEEP UPDATE LOCKS, the isolation-clause, and maybe WAIT FOR OUTCOME, but all of these options will require changing the application logic.
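Just as a rough sketch of the direction (the column list is shortened, CS is only one possible isolation choice, and with a read-only cursor the subsequent update has to become a searched UPDATE instead of WHERE CURRENT OF):

SELECT flag1, flag2, status1, lastmsg_time
  FROM atmdevicestate
 WHERE institutionid = ? AND group_name = ? AND unit = ?
   FOR READ ONLY
   WITH CS USE AND KEEP UPDATE LOCKS
-- optionally also: WAIT FOR OUTCOME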

I hope it helps.

Regards
Ondrej

Thanks, Ondrej.Zizka!

I agree that there is no parameter that changes the behavior of the locks. IBM requires the following to be executed at the moment the issue occurs:

-> db2mon.pl and db2fodc -hang full

I understand this is to detect something that could be tuned in the DB2 engine.

We know that the application has worked well with Oracle, where they did not have this locking issue.

The ATMDEVICESTATE table is the table that suffers the most OVERFLOWs and the most UPDATEs. Here are its statistics since the database was last activated:

Start Date Start Time
2018/06/05 02:25:15

TABLE_NAME PAGES OVERFLOWS


ATMDEVICESTATE 12 1,192,483

The database page size is 4096, but the table was moved to a 32K tablespace; nevertheless, OVERFLOWs still appear at the top.

The ATMDEVICESTATE table has only 1063 rows with two indexes and has VARCHAR columns (lengths 10, 32, 50, 64 and 256). We have considered rebuilding the table and changing VARCHAR to CHAR; of course VARCHAR(256) would only be changed to CHAR(255). We have monitored the VARCHAR(256) column and the maximum length reached so far is 116.
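To double-check the VARCHAR column definitions before any rebuild, we can list them from the catalog (a simple sketch; the SW_APP schema of the table is assumed):

SELECT colname, typename, length
  FROM syscat.columns
 WHERE tabschema = 'SW_APP'
   AND tabname   = 'ATMDEVICESTATE'
   AND typename  = 'VARCHAR'
 ORDER BY colno;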

Do you think this could help? Any other ideas?

I'm going to review FOR READ ONLY with USE AND KEEP UPDATE LOCKS.

Hello villalva11,


We know that the application has worked well with Oracle, where they did not have this locking issue.

Oracle uses a different approach to concurrency.
There is a database parameter, CUR_COMMIT, which changes the behaviour to match the Oracle default behaviour. You can read more here:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.admin.perf.doc/doc/c0053760.html
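A minimal sketch of how to verify it and, if needed, turn it on (the database name SAMPLE is just a placeholder; as far as I know the change only takes effect after the database is deactivated and reactivated):

db2 get db cfg for SAMPLE | grep -i cur_commit
db2 update db cfg for SAMPLE using CUR_COMMIT ON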

Regards
Ondrej

Thanks Ondrej.Zizka

Yes: (CUR_COMMIT) = ON, on DB2 v11.1.0.0.

Do you consider this correct, or would you recommend changing it to OFF for the issue I am describing?

Value ON is correct.

Do you use DB2 compatibility vector settings? Have you set it to ORA and restarted the database?
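For reference, a minimal sketch of how it is usually enabled (it is an instance-level registry variable, so db2stop/db2start is needed; as far as I know, some of the ORA features only apply to databases created after the variable is set):

db2set DB2_COMPATIBILITY_VECTOR=ORA
db2stop
db2start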

Regards
Ondrej

Hi Ondrej.Zizka,

No; we have only set DB2_COMPATIBILITY_VECTOR=ORA in our development environment.

Do you think this can help us? If so, how?

We have two possible options in mind:

1.- Alter the table to set PCTFREE
2.- Rebuild the table and change VARCHAR to CHAR

(Rebuilding the table very frequently is not an option because the application is 24x7 OLTP.)

The table suffers 450,000 updates per day.
PCTFREE = -1

LOCKSIZE = Row

Rows: 1072

The table length is: 700 bytes

The AVGROWSIZE is: 332

The tablespace of the table is 32K

==> OVERFLOW

SCHEMA TABLE_NAME PAGES OVERFLOWS

SW_APP ATMDEVICESTATE 12 1,827,658

What value of PCTFREE do you recommend?

Hello,

The ORA vector variable sets DB2 to behave like Oracle. You mentioned that you have this parameter configured in the DEV environment. Do you have the problem you are describing in the DEV environment?

Please read the following article to see how to set the PCTFREE value.
https://www.dbisoftware.com/blog/db2_performance.php?id=139

As far as I know, PCTFREE = -1 is interpreted as the default value, and the default is 0 (which is good for DW, but not for OLTP with frequently updated data). I would start with PCTFREE = 5, reorg the table, and check the behaviour. Be aware that a reorg is needed for the new PCTFREE to take effect and that it will cause some table growth.
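A sketch of the sequence, using the SW_APP schema from your overflow report (PCTFREE 5 is just the starting value I would try; an offline REORG locks the table, so it needs a quiet window, or you can look at an INPLACE reorg instead):

db2 "ALTER TABLE SW_APP.ATMDEVICESTATE PCTFREE 5"
db2 "REORG TABLE SW_APP.ATMDEVICESTATE"
db2 "RUNSTATS ON TABLE SW_APP.ATMDEVICESTATE WITH DISTRIBUTION AND DETAILED INDEXES ALL"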

Regards
Ondrej

Hi Ondrej.Zizka!!!

The ORA vector variable sets DB2 to behave like Oracle. You mentioned that you have this parameter configured in the DEV environment. Do you have the problem you are describing in the DEV environment?

No, because in the DEV environment we do not have the concurrency that we have in PROD.

Page overflows occur when a VARCHAR column in a row is updated in a way that makes the row use more space than it did before. If DB2 cannot fit the row back on the page where it was originally written, it leaves a pointer there to the location where the new version of the row is written. If page overflows occur frequently, performance degrades because DB2 needs extra page reads to reach those rows.

With CHAR columns or PCTFREE on the table we expect to avoid OVERFLOWs and PAGE REORGs and to maintain good performance in the database.
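To keep tracking this after any change, we plan to use something like the following (a sketch; MON_GET_TABLE reports counters since database activation, and the SW_APP schema is assumed):

SELECT tabschema, tabname, rows_updated,
       overflow_creates, overflow_accesses, page_reorgs
  FROM TABLE(MON_GET_TABLE('SW_APP', 'ATMDEVICESTATE', -2)) AS t;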

Any comments on these variables to reduce the locking issue?

DB2_SKIPINSERTED=ON
DB2_EVALUNCOMMITTED=ON
DB2_SKIPDELETED=ON
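We would apply them roughly like this (our understanding is that they are instance-level registry variables and need an instance restart to take effect, and that their benefit depends on the isolation level the application uses; please correct us if that is wrong):

db2set DB2_EVALUNCOMMITTED=ON
db2set DB2_SKIPINSERTED=ON
db2set DB2_SKIPDELETED=ON
db2stop
db2start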

Thanks