How many work data sets for a step?

All sorts of Mainframe Interview Questions.
Vivek Nayak
Registered Member
Posts: 10
Joined: Sun Jan 18, 2015 5:16 pm

How many work data sets for a step?

Post by Vivek Nayak »

I searched for the answer to this question but could not locate a good one -

How many work data sets can we specify at most in a step?

I don't know if there is a limit of this sort. If there is, are we really supposed to remember it? Is there a rule of thumb that can help answer this question in an interview? Please help, I am somewhat frustrated with this type of question.
enrico-sorichetti
Global Moderator
Posts: 826
Joined: Wed Sep 11, 2013 3:57 pm

Re: How many work data sets for a step?

Post by enrico-sorichetti »

How many work data sets can we specify at most in a step?
as usual the IQ of the interviewer is pretty low ...

the term work dataset is usually used to describe NON-APPLICATION related datasets
e.g. sort work datasets, datasets used by compilers, ...

so how many of these You CAN, and sometimes MUST, specify depends on the utility/program used


there is a global limit on the number of DDNAMES that can be used

and that number depends also on two more variables,
the number of extents and the number of volumes allocated

the SYSTEM INITIALIZATION AND TUNING manual has the following details

Quote:
TIOT
Specifies the installation defaults for the task I/O table (TIOT).
SIZE(nn)
Specifies the size of the TIOT. The TIOT contains an entry for each DD statement.
The size of the TIOT controls how many DDs are allowed per jobstep. By specifying an integer from 16 to 64 as the value of this parameter, the installation controls the default DD allowance.

The following table shows the relationship between the size of the TIOT and the maximum number of DDs allowed:

Code: Select all

                                           Maximum number of
                          Maximum number   DDs allowed when
SIZE Value                of single-unit   every DD requests the
Dec (Hex)  Size of TIOT   DDs allowed      maximum number of units (59)
16 10 16384 (16K)   816  64 
17 11 17408 (17K)   867  68 
24 18 24576 (24K)  1225  97 
25 19 25600 (25K)  1277 101 
32 20 32768 (32K)  1635 129 
40 28 40960 (40K)  2045 162 
48 30 49152 (48K)  2454 194 
56 38 57344 (56K)  2864 227 
64 40 65536 (64K)  3273 259 
Notes:
Your calculations need to take into account that the size of a TIOT entry, for a DD statement or a Dynamic Allocation, increases by four (4) bytes for every SMS Candidate volume assigned (e.g., by your DATACLAS), regardless of whether they're guaranteed space.
For a VSAM KSDS the number of 4-byte entries in the TIOT for the data set depends on whether or not the data set is defined as reusable. The count of entries in the TIOT is the count of candidate volumes for the data and index components plus:
For a reusable data set - the number of volumes used by the data component plus the number of volumes used by the index component.
For a nonreusable data set - the number of volumes in the set of volumes used by the data and index component.
Use the following to calculate the maximum number of DDs allowed per Job Step:
The TIOT Prefix, Header, and Trailer consume sixty (60) ('3C'x) bytes of the total TIOT space available to a Job Step.
A DD statement requesting a single unit requires twenty (20) bytes ('14'x) of TIOT space.
Example 1:

//TAPEJOB JOB
//STEP1 EXEC PGM=IEFBR14
//DD1 DD UNIT=3490 ** DD requires 20 bytes *


TIOT space requirement for entire step = 80 bytes.
A DD statement requesting two (2) units requires twenty four (24) bytes ('18'x) of TIOT space. Twenty bytes for the basic information for the first unit and an additional four bytes for the second unit.
Example 2:

//TAPEJOB JOB
//STEP1 EXEC PGM=IEFBR14
//DD1 DD UNIT=(3490,2) ** DD requires 24 bytes *


TIOT space requirement for entire step = 84 bytes.
A DD requesting the maximum number of units allowed, fifty nine (59), utilizes two hundred fifty two (252) bytes ('FC'x) of TIOT space.
Example 3:

//TAPEJOB JOB
//STEP1 EXEC PGM=IEFBR14
//DD1 DD UNIT=(3490,59) ** DD requires 252 bytes *


TIOT space requirement for entire step = 312 bytes.
A Job Step with three (3) DD statements, each DD requesting one more unit than the previous DD, would utilize the following TIOT space:
//TAPEJOB JOB
//STEP1 EXEC PGM=IEFBR14
//DD1 DD UNIT=3490 ** DD requires 20 bytes *
//DD2 DD UNIT=(3490,2) ** DD requires 24 bytes *
//DD3 DD UNIT=(3490,3) ** DD requires 28 bytes *


TIOT space requirement for entire step = 132 bytes.
Value Range: 16 - 64 kilobytes

Default: 32 kilobytes
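the arithmetic quoted above is easy to sanity-check with a small sketch (plain Python, just reproducing the byte counts from the manual text - the constants 60, 20 and 4 come straight from the quote, nothing here is an official tool):

```python
OVERHEAD = 60          # TIOT prefix + header + trailer, per job step
BASE_ENTRY = 20        # DD entry requesting a single unit
PER_EXTRA_UNIT = 4     # each additional unit adds 4 bytes

def dd_entry_size(units):
    """TIOT bytes consumed by one DD statement requesting `units` units."""
    return BASE_ENTRY + (units - 1) * PER_EXTRA_UNIT

def max_dds(tiot_kb, units_per_dd=1):
    """Maximum DDs per job step for a TIOT of `tiot_kb` kilobytes."""
    return (tiot_kb * 1024 - OVERHEAD) // dd_entry_size(units_per_dd)

def step_tiot_space(dd_unit_counts):
    """Total TIOT bytes for a step, given the unit count of each DD."""
    return OVERHEAD + sum(dd_entry_size(u) for u in dd_unit_counts)

# Spot-check against the table and the examples above:
print(max_dds(32))                 # 1635 single-unit DDs for a 32K TIOT
print(max_dds(32, 59))             # 129 DDs when every DD requests 59 units
print(dd_entry_size(59))           # 252 bytes, as in Example 3
print(step_tiot_space([1, 2, 3]))  # 132 bytes, as in the last example
```

which reproduces the table values exactly, so the limit is not a number to memorize - it falls out of the TIOT size set by the installation.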
cheers
enrico
When I tell somebody to RTFM or STFW I usually have the page open in another tab/window of my browser,
so that I am sure that the information requested can be reached with a very small effort 8-)
William Collins
Global Moderator
Posts: 490
Joined: Sun Aug 25, 2013 7:24 pm

Re: How many work data sets for a step?

Post by William Collins »

Are you asking about a SORT, or some other specific utility, or generally?
Vivek Nayak
Registered Member
Posts: 10
Joined: Sun Jan 18, 2015 5:16 pm

Re: How many work data sets for a step?

Post by Vivek Nayak »

William Collins wrote:Are you asking about a SORT, or some other specific utility, or generally?
I'm asking generally. But if I could cite a specific example like SORT too, that would help. So if you can help with a similar answer, it will be a great help.
Vivek Nayak
Registered Member
Posts: 10
Joined: Sun Jan 18, 2015 5:16 pm

Re: How many work data sets for a step?

Post by Vivek Nayak »

enrico-sorichetti wrote:there is a global limit on the number of DDNAMES that can be used

and that number depends also on two more variables,
the number of extents and the number of volumes allocated

the SYSTEM INITIALIZATION AND TUNING manual has the following details
But for the work datasets, should we be concerned about the TIOT?
enrico-sorichetti
Global Moderator
Posts: 826
Joined: Wed Sep 11, 2013 3:57 pm

Re: How many work data sets for a step?

Post by enrico-sorichetti »

But for the work dataset,
the system does not care / does not know about the use of a ddname/dataset

for the system all the ddnames/datasets are handled the same way, according to the jcl keywords
cheers
enrico
William Collins
Global Moderator
Posts: 490
Joined: Sun Aug 25, 2013 7:24 pm

Re: How many work data sets for a step?

Post by William Collins »

Generally, work datasets aren't used outside utilities and products. Where they are used, they are used for specific purposes, and the technical staff responsible for those utilities and products will know how to define/optimise their use.

For SORT it is mostly best to use dynamic allocation rather than specifying actual work datasets, as your SORT product can make estimates at run-time depending on some characteristics of the actual data for that run, usually avoiding excessive overallocation or fatal underallocation.

Your SORT documentation will contain some information about the allocation of work datasets.

Language-processing can require work datasets, and if you have very large sources, you may have to increase the amount of space on one or more work datasets, but how those are used is so complex that I doubt there is any useful estimate. If a compiler, for instance, fails for lack of space on a work dataset, get advice from your technical staff.

If you have a specific case, outline it. Generally, there is no possible answer to how to estimate "work" space.
Vivek Nayak
Registered Member
Posts: 10
Joined: Sun Jan 18, 2015 5:16 pm

Re: How many work data sets for a step?

Post by Vivek Nayak »

William Collins wrote:Generally, work datasets aren't used outside utilities and products. Where they are used, they are used for specific purposes, and the technical staff responsible for those utilities and products will know how to define/optimise their use.

For SORT it is mostly best to use dynamic allocation rather than specifying actual work datasets, as your SORT product can make estimates at run-time depending on some characteristics of the actual data for that run, usually avoiding excessive overallocation or fatal underallocation.

Your SORT documentation will contain some information about the allocation of work datasets.

Language-processing can require work datasets, and if you have very large sources, you may have to increase the amount of space on one or more work datasets, but how those are used is so complex that I doubt there is any useful estimate. If a compiler, for instance, fails for lack of space on a work dataset, get advice from your technical staff.

If you have a specific case, outline it. Generally, there is no possible answer to how to estimate "work" space.
Thanks for the details William.

So for SORT we just don't need to put in the work files, as they will always be taken care of by the system?
nicc
Global Moderator
Posts: 691
Joined: Wed Apr 23, 2014 8:45 pm

Re: How many work data sets for a step?

Post by nicc »

Please read William's post carefully - word by word. Especially the sentence that starts "For SORT it is mostly best..." and the paragraph starting "Your SORT documentation...". In the latter case you were meant to actually go and read it.
Regards
Nic
Vivek Nayak
Registered Member
Posts: 10
Joined: Sun Jan 18, 2015 5:16 pm

Re: How many work data sets for a step?

Post by Vivek Nayak »

enrico-sorichetti wrote:
But for the work dataset,
the system does not care / does not know about the use of a ddname/dataset

for the system all the ddnames/datasets are handled the same way, according to the jcl keywords


Thanks. This is a new way of thinking for me. Thanks!
Vivek Nayak
Registered Member
Posts: 10
Joined: Sun Jan 18, 2015 5:16 pm

Re: How many work data sets for a step?

Post by Vivek Nayak »

nicc wrote:Please read William's post carefully - word by word. Especially the sentence that starts "For SORT it is mostly best..." and the paragraph starting "Your SORT documentation...". In the latter case you were meant to actually go and read it.

Yes I read that:
For SORT it is mostly best to use dynamic allocation rather than specifying actual work datasets, as your SORT product can make estimates at run-time depending on some characteristics of the actual data for that run, usually avoiding excessive overallocation or fatal underallocation.
But in my company I usually see the work data sets being coded in sort jobs. And we also do the same, as we are told to copy from an existing step or job. :?:
William Collins
Global Moderator
Posts: 490
Joined: Sun Aug 25, 2013 7:24 pm

Re: How many work data sets for a step?

Post by William Collins »

There's a whole Appendix in the DFSORT Application Programming Guide on workspace. If you are using SyncSORT, there will also be documentation on workspace usage.

Your site is making the job of getting the SORT workspace efficient more difficult by using JCL allocation.
nicc
Global Moderator
Posts: 691
Joined: Wed Apr 23, 2014 8:45 pm

Re: How many work data sets for a step?

Post by nicc »

In the good ol' days work datasets HAD to be specified. Nowadays this is not absolutely necessary and the program can more accurately determine at run time what it requires.
Regards
Nic