-Availability of data with rich descriptions: unless the relations captured in the database are of high degree (i.e., carry many attributes), extracting hidden patterns and relationships among the various attributes will not make practical sense.
-Availability of a large volume of data: this is mandated mostly so that the statistical significance of the rules holds. In the absence of, say, at least a hundred thousand transactions, the usefulness of the rules generated from the transactional database will most likely be reduced.
-Reliability of the available data: although a given terabyte database may have hundreds of attributes per relation, the DM algorithms run on this dataset may be rendered useless if the data itself was generated by manual, error-prone means or populated with wrong default values. Also, the less the dataset depends on integration with legacy applications, the better its accuracy.
-Ease of quantifying the return on investment (ROI) in DM: even if the three earlier factors are favourable, investments in the next level of DM efforts may not be possible unless a strong business case can be made easily. In other words, the utility of the DM exercise needs to be quantified vis-à-vis the domain of application.
-Ease of interfacing with legacy systems: it is commonplace to find large organizations running on several legacy systems that generate huge volumes of data. A DM exercise, which is usually preceded by other exercises such as extract, transform and load (ETL) and data filtering, should not add further overheads to system integration.
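To make the point about data volume and rule significance concrete, here is a minimal sketch (the toy transactions and helper names are illustrative assumptions, not from the source) of how the support and confidence of an association rule are estimated from a transactional database. With only a handful of transactions, as below, these estimates are statistically unreliable, which is why large volumes are mandated.

```python
# Hypothetical toy transaction database; real DM exercises would run
# over hundreds of thousands of transactions for rules to be significant.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter", "bread"},
    {"milk", "eggs"},
]

def support(itemset, db):
    """Fraction of transactions that contain every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """Estimated P(consequent | antecedent) over the database."""
    return support(antecedent | consequent, db) / support(antecedent, db)

# Rule {bread} -> {milk}: how often does milk accompany bread?
print(support({"bread", "milk"}, transactions))        # 0.5
print(confidence({"bread"}, {"milk"}, transactions))   # 2 of 3 bread baskets
```

Note how a single extra or missing transaction would swing these ratios substantially here; at realistic volumes the estimates stabilize.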