For crop breeding purposes (as opposed to, for instance, generating mutant stocks for functional genomics), the aim should be to obtain a reasonable number of desired mutations for a trait of interest while inflicting the least unintended disruption to the genotypic integrity of the crop. This ensures that the fitness of the desirable induced mutant is not otherwise compromised by the presence of unintended induced deleterious alleles, which would require additional investments of time and resources, e.g., backcrossing to the elite starting genotype in order to break linkage drag. The universally adopted norm is to select a dosage that reduces growth by 30 to 50 percent, or survival by 40 to 60 percent, in the first generation mutant (M1) seedlings compared with seedlings from untreated seeds. Reductions in germination rate, seedling height, survival rate, number of tillers, seed set, and fertility in the M2 generation, as well as chlorophyll mutations, are the main parameters measured in the sensitivity tests used to determine the optimal doses. The reductions are plotted against the mutagen doses, and the dose corresponding to the desired level of reduction is read off the graph; it can also be determined through regression analysis. A sample graph, Figure 1, shows that a 50% reduction in seedling height is caused by gamma irradiation at 300 Gy; doses corresponding to other percentage reductions can be read off the fitted line of the graph. Mba et al. [39,40] described the procedures for using radiosensitivity to the physical mutagen gamma rays, and responses to the chemical mutagen EMS, to ascertain the optimal doses for inducing mutation events in both seed-propagated and vegetatively propagated plants.
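As an illustration of the regression approach mentioned above, the following sketch fits a simple linear dose-response model to hypothetical radiosensitivity data and interpolates the dose expected to cause a target percentage reduction (e.g., a 50% reduction in seedling height, analogous to the 300 Gy example in Figure 1). The dose and reduction values, and the helper function name, are illustrative assumptions only, not measurements or methods from the cited studies.

```python
# Minimal sketch: estimating an optimal mutagen dose from a sensitivity test
# via linear regression of percent reduction against dose (hypothetical data).
import numpy as np

# Hypothetical gamma-ray doses (Gy) and observed % reduction in M1 seedling
# height relative to the untreated control.
doses = np.array([0, 100, 200, 300, 400, 500], dtype=float)
reduction = np.array([0.0, 18.0, 34.0, 50.0, 63.0, 78.0])  # percent

# Fit a simple linear dose-response model: reduction = slope * dose + intercept
slope, intercept = np.polyfit(doses, reduction, deg=1)

def dose_for_reduction(target_pct: float) -> float:
    """Interpolate the dose expected to cause the target percent reduction."""
    return (target_pct - intercept) / slope

# Dose expected to cause a 50% reduction in seedling height
print(f"Estimated dose for 50% reduction: {dose_for_reduction(50.0):.0f} Gy")
```

The same interpolation can be repeated for any other target reduction (e.g., 30% growth reduction or 60% survival reduction); in practice a nonlinear or probit model may fit the observed dose-response curve better than the straight line assumed here.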