Designers of computerized Decision Support Systems (DSS)
face a difficult problem. As socio-technical systems, DSS
use artificial intelligence (AI), statistical models, and related
technologies to help people make decisions, such as diagnosing
a medical condition or choosing stocks. Frequently, DSS
do this by providing specific recommendations about a decision.
However, because the human users of a DSS are themselves prone
to decision-making errors and biases, efforts to perfect the
quality of a DSS's recommendations may, beyond a certain point,
yield diminishing returns in the quality of the decisions its
users ultimately make.