A question on BLKSIZE and LRECL.

durga
Registered Member
Posts: 53
Joined: Mon Jul 01, 2013 3:28 pm

A question on BLKSIZE and LRECL.

Post by durga »

There was an abend in production caused by an LRECL mismatch. As a workaround, the production support person changed the record length on all of the steps that use that program, as well as on the backup step in the job. They also changed the block size from 27600 to 29900, saying that 27600 is not a multiple of the expected record length of 1300 - the previous JCL coded BLKSIZE=27600 with LRECL=1200.

Is 29900 the correct BLKSIZE for LRECL 1300?

I did some reading and found that it should be 27300 for LRECL=1300. The file is a sequential file with RECFM=FB.

First, I allocated a file with LRECL 1300 using ISPF 3.2, giving the BLKSIZE as 0 - the allocated file had BLKSIZE=29900.

Second, per an online resource, for a 3390 device the BLKSIZE can be calculated like this:

blocksize = INTEGER(half-track-capacity / LRECL) * LRECL

The half-track capacity for a 3390 is 27,998 bytes, so:

INTEGER(27998 / 1300) = 21

21 * 1300 = 27300
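
To make that arithmetic concrete, here is a small Python sketch of the calculation (purely illustrative -- the helper name is made up, and the only input besides LRECL is the 27,998-byte half-track figure):

Code:

def fb_system_blksize(lrecl, half_track=27998):
    """Largest multiple of LRECL that fits in half of a 3390 track,
    which is what the system-determined block size gives you for a
    RECFM=FB data set (half track = 27,998 bytes)."""
    if lrecl <= 0 or lrecl > half_track:
        raise ValueError("LRECL must be between 1 and the half-track size")
    return (half_track // lrecl) * lrecl

print(fb_system_blksize(1300))  # 27300
print(fb_system_blksize(1200))  # 27600 -- the BLKSIZE originally coded in the JCL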

Now, a 3390 track has a capacity of 56,664 bytes, and I read that 55,996 bytes of that are usable by applications. So with 29900 as the BLKSIZE, aren't we wasting 26,096 bytes (55,996 - 29,900) per track?
Robert Sample
Global Moderator
Posts: 1895
Joined: Fri Jun 28, 2013 1:22 am
Location: Dubuque Iowa
United States of America

Re: A question on BLKSIZE and LRECL.

Post by Robert Sample »

Is 29900 the correct BLKSIZE for LRECL 1300?
Short answer -- it is ONE of the correct block sizes. It may not be the most efficient block size, but it will definitely work.

Some unstated assumptions: you are using fixed, blocked records, and the COBOL program (assuming it is a COBOL program) has the clause BLOCK CONTAINS 0 as part of the file definition. Under these assumptions, as long as the block size is a multiple of the record length, the program will function correctly. If the COBOL program has BLOCK CONTAINS 23 RECORDS as part of the file definition, then 29900 is not optional -- it is required -- and the job should be left at 29900.
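
To make the "multiple of the record length" rule concrete, here is a small Python sketch (illustrative only -- the helper name is made up, and this is just the divisibility check, not anything the system itself runs):

Code:

def fb_blksize_ok(blksize, lrecl):
    """With RECFM=FB and BLOCK CONTAINS 0, any block size that is a
    positive multiple of the record length is usable."""
    return blksize > 0 and lrecl > 0 and blksize % lrecl == 0

print(fb_blksize_ok(27600, 1200))  # True  -- the original combination
print(fb_blksize_ok(27600, 1300))  # False -- 27600 is not a multiple of 1300
print(fb_blksize_ok(29900, 1300))  # True  -- 23 records per block
print(fb_blksize_ok(27300, 1300))  # True  -- 21 records per block (half track)
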
Now, a 3390 track has a capacity of 56,664 bytes, and I read that 55,996 bytes of that are usable by applications. So with 29900 as the BLKSIZE, aren't we wasting 26,096 bytes (55,996 - 29,900) per track?
Track block calculations are not that simple -- if you use 4096 as the block size, you can only get 12 blocks per track (or 49,152 bytes). Broadly speaking, the smaller the block size the less of the track you can use.

Usually, half-track blocking (or third-track blocking in some cases) is most efficient for the 3390 drive. However, if the COBOL program is hard-coded to reference 23 records per block then you ignore the half-track blocking value and use 23 times the record length. The answer to your question is … maybe, but it depends upon the code.
durga

Re: A question on BLKSIZE and LRECL.

Post by durga »

The COBOL program has BLOCK CONTAINS 0 RECORDS.
durga

Re: A question on BLKSIZE and LRECL.

Post by durga »

Robert Sample wrote: Fri Jul 05, 2019 10:28 pm
Under these assumptions, as long as the block size is a multiple of the record length, the program will function correctly.
Yes, that I understand, but is it the optimum BLKSIZE? And if so, why?
Robert Sample

Re: A question on BLKSIZE and LRECL.

Post by Robert Sample »

Half-track blocking will be more efficient -- your data set will put 42 records per track with a block size of 27300 as opposed to 23 records per track with a block size of 29900. You asked about "optimum" but that largely depends upon other factors (such as what you mean by "optimum"). If you really want to learn more, do an Internet search for GX26-4577 which is the IBM reference card on the 3390 disk sizes. It originally came out in June 1989 but is still relevant; it gives you the number of blocks that fit into a track for various block sizes (worst case: 22-byte blocks will allow 86 blocks per track, using 1892 bytes of each track).

As far as your ISPF 3.2 session goes, I don't know exactly what you did but I suspect you put in the data set with a blank command to get back the current data set characteristics. If you then attempted to allocate a new data set with half-track blocking, ISPF may well use the 29900 block size since that is what the data set you looked at has. If you start a new TSO session, go to ISPF 3.2, issue the Allocate command and then provide a zero block size you should get 27300 for your data set. As an alternative, submit a batch job that creates a data set that has RECFM=FB,LRECL=1300,BLKSIZE=0 and see what that data set block size is set to. If it is anything over 27998, you need to review the situation with your system programmers and find out what is going on at your site.
durga

Re: A question on BLKSIZE and LRECL.

Post by durga »

your data set will put 42 records per track with a block size of 27300 as opposed to 23 records per track with a block size of 29900.
How did you get these numbers?
Robert Sample wrote: Sat Jul 06, 2019 2:25 am
As far as your ISPF 3.2 session goes, I don't know exactly what you did but I suspect you put in the data set with a blank command to get back the current data set characteristics. If you then attempted to allocate a new data set with half-track blocking, ISPF may well use the 29900 block size since that is what the data set you looked at has. If you start a new TSO session, go to ISPF 3.2, issue the Allocate command and then provide a zero block size you should get 27300 for your data set. As an alternative, submit a batch job that creates a data set that has RECFM=FB,LRECL=1300,BLKSIZE=0 and see what that data set block size is set to. If it is anything over 27998, you need to review the situation with your system programmers and find out what is going on at your site.
I started a new ISPF 3.2 session and allocated a data set giving only LRECL=1300, and it picked up BLKSIZE=27300.

I think I was not clear in my question: if we use a hard-coded 29900 instead of 27300, the program still works, but is 29900 a good choice? As you said, with 29900 a track holds 23 records while it could hold 42; doesn't that mean more I/Os, and that the space for those "42 minus 23" records is wasted on each track?
Robert Sample

Re: A question on BLKSIZE and LRECL.

Post by Robert Sample »

your data set will put 42 records per track with a block size of 27300 as opposed to 23 records per track with a block size of 29900.
How did you get these numbers?
Any block size over 27998 bytes means you get only 1 block per track. 29900 divided by 1300 gives 23, and since 29900 is larger than 27998 there is only one block of 23 records per track. 27300 is less than 27998 bytes, so you get half-track blocking, meaning 2 blocks per track. 21 records per block times 2 blocks means 42 records per track.
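
Here is a small Python sketch of that arithmetic (a simplification for illustration only: it knows just the 27,998-byte half-track boundary discussed here; the full blocks-per-track table for other block sizes is in GX26-4577):

Code:

def records_per_track(blksize, lrecl, half_track=27998):
    """Rough records-per-track figure for an FB data set on a 3390.

    Simplified rule from this thread: a block larger than the half-track
    size fits only once per track, while a block in the half-track range
    fits exactly twice."""
    records_per_block = blksize // lrecl
    blocks_per_track = 1 if blksize > half_track else 2
    return records_per_block * blocks_per_track

print(records_per_track(29900, 1300))  # 23 -- one 23-record block per track
print(records_per_track(27300, 1300))  # 42 -- two 21-record blocks per track
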
I think I was not clear in my question: if we use a hard-coded 29900 instead of 27300, the program still works, but is 29900 a good choice?
If you want to minimize additional work, then 29900 is a good choice. If you want to minimize the disk space used by the data set, 29900 will not be a good choice. I think you're making poor word choices in your questions -- "good" or "correct" are relative terms and their meaning varies according to what you are looking to do. If the data set is 100 tracks, you've spent a lot more time worrying about the "correct" or "good" block size than you will EVER recover from program executions. If the data set is 10,000 cylinders, then minimizing the disk space used by putting 42 records per track instead of 23 will be a worthwhile exercise.
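
To put rough numbers on that trade-off, here is a quick Python sketch (the 10-million-record count is a made-up example; 15 tracks per cylinder is standard 3390 geometry):

Code:

import math

def tracks_needed(records, records_per_track):
    """How many 3390 tracks a given number of FB records occupies."""
    return math.ceil(records / records_per_track)

records = 10_000_000                  # hypothetical data set size
print(tracks_needed(records, 23))     # 434,783 tracks with BLKSIZE=29900
print(tracks_needed(records, 42))     # 238,096 tracks with BLKSIZE=27300
# Roughly 29,000 versus 15,900 cylinders at 15 tracks per cylinder.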

The difference in I/O counts will not really be that relevant -- IBM Z uses a variety of methods to minimize I/O, so changes justified purely by reduced I/O counts may even cause the operating system to work in a less than optimal fashion. And the choice of 27300 versus 29900 for the block size is a value judgment -- if the data set has 100 tracks, who cares if 29900 for the block size is wasting over 26,000 bytes per track? If you're looking for a categorical "yes" or "no" answer, you'll need to consult an Eight Ball -- on this forum, we work in the real world and answers are not always black or white.
durga

Re: A question on BLKSIZE and LRECL.

Post by durga »

Thanks Robert.

I did not mean to offend anyone but I wanted to learn if my way of thinking is correct. I understand that DASD is not that expensive now but it used to be. I wanted to know about the calculation and the implications. I understand it now.

Thank you.
nicc
Global Moderator
Posts: 691
Joined: Wed Apr 23, 2014 8:45 pm

Re: A question on BLKSIZE and LRECL.

Post by nicc »

consult an Eight Ball -- on this forum, we work in the real world and answers are not always black or white.
But the 8-ball is black! Surely a see-through crystal ball is better?
Regards
Nic
Robert Sample

Re: A question on BLKSIZE and LRECL.

Post by Robert Sample »

I did not mean to offend anyone but I wanted to learn if my way of thinking is correct.
You did not offend me -- but you kept asking the same question in different ways. At one extreme, if the data set is 10 tracks, no one will care whether the data set is blocked efficiently, so 27300 or 29900 or even 32500 would all work, and which one is used won't make much difference in space usage. At the other extreme, if the data set is 10,000 cylinders with half-track blocking, using a larger block size would waste thousands of cylinders, and that could be significant. In between those extremes there is a data set size where the block size starts to become significant. HOWEVER, what that size is will typically vary by site -- if a site uses chargeback on DASD, then reducing charges may play a role. Some sites don't use chargeback, so for those sites any block size at all will be fine, as long as it does not require adding more DASD to the system and the data set does not have a lot of tracks / cylinders.

Nic, you're right -- I should have used a crystal ball analogy instead of the 8 Ball, since the 8 Ball has white answers on a black background. :D