The reliability of sensory characterizations obtained with DA can be analyzed by monitoring the performance of the panel as a whole, as well as the performance of each individual assessor. Because assessors are trained in attribute identification and scaling, the dispersion of the scores given to each attribute for each sample can be used to estimate panel agreement. In addition, samples are evaluated in duplicate or triplicate, which enables the analysis of global and individual reproducibility. However, rapid methodologies for sensory characterization do not require training and are usually performed in a single session, which makes it difficult to evaluate their reliability. Although no standard procedure is available for evaluating the reliability of sensory characterizations obtained with these methodologies, several approaches have been used. Blancher et al. (2012) proposed estimating the reliability of sample configurations using simulated repeated experiments through a bootstrap resampling approach, arguing that a sorting map can be considered stable if repeated sampling from the population of interest yields equivalent sorting maps. Using this approach, CATA questions and PSP proved to be highly reliable, providing sample configurations that reached average RV coefficients higher than 0.95. Projective mapping was less stable than the other two methodologies and did not reach an average RV value of 0.95. The minimum number of consumers needed to reach stable sample configurations using CATA questions and projective mapping is similar to that reported by Ares
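The bootstrap stability check described above can be sketched in Python. This is a minimal illustration, not the published procedure: the function names are ours, and for simplicity the sample configuration is taken to be the raw sample-by-attribute CATA count matrix, whereas in practice configurations are usually derived from a multivariate analysis (e.g., correspondence analysis or MDS) of the data. The RV coefficient formula itself is standard.

```python
import numpy as np

def rv_coefficient(x, y):
    """RV coefficient between two sample configurations (rows = samples).
    Columns are centered first; the result lies in [0, 1], with 1 meaning
    identical configurations up to rotation/scaling."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    xx = x @ x.T  # samples-by-samples cross-product matrices
    yy = y @ y.T
    return np.trace(xx @ yy) / np.sqrt(np.trace(xx @ xx) * np.trace(yy @ yy))

def bootstrap_stability(data, n_boot=500, rng=None):
    """Average RV between the full-panel configuration and configurations
    obtained from simulated repeated experiments, created by resampling
    consumers with replacement.

    data: binary array of shape (n_consumers, n_samples, n_attributes)
          holding the CATA responses (1 = attribute checked).
    """
    rng = np.random.default_rng(rng)
    full = data.sum(axis=0).astype(float)  # full-panel count matrix
    rvs = []
    for _ in range(n_boot):
        # simulated repeated experiment: resample consumers with replacement
        idx = rng.integers(0, data.shape[0], size=data.shape[0])
        boot = data[idx].sum(axis=0).astype(float)
        rvs.append(rv_coefficient(full, boot))
    return float(np.mean(rvs))
```

Under this scheme, a methodology would be judged stable for a given panel size when the average RV against the full-panel configuration exceeds a threshold such as the 0.95 mentioned above; repeating the computation for increasing numbers of consumers indicates the minimum panel size needed to reach that threshold.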