What are the computational tradeoffs involved in model construction? How can they be measured?
How can one incrementally develop and update a satisficing model as the environment changes and as the collection or behavior of other agents changes?
What is the role of inductive learning in resource-bounded reasoning? Should learning be used to control deliberation? How should one control the exploration-exploitation tradeoff?
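The exploration-exploitation tradeoff mentioned above can be made concrete with a standard multi-armed bandit illustration. The sketch below is not drawn from the source; it shows one common heuristic, epsilon-greedy selection, in which the agent explores a random action with probability `epsilon` and otherwise exploits its current best estimate. The function name, arm payoffs, and parameters are illustrative assumptions.

```python
import random

def epsilon_greedy_bandit(arm_means, epsilon=0.1, steps=10000, seed=0):
    """Run an epsilon-greedy agent on a stochastic multi-armed bandit.

    With probability epsilon the agent explores a uniformly random arm;
    otherwise it exploits the arm with the highest estimated mean reward.
    Rewards are Gaussian with the given means (an assumed payoff model).
    """
    rng = random.Random(seed)
    n_arms = len(arm_means)
    counts = [0] * n_arms          # pulls per arm
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                           # explore
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # exploit
        reward = rng.gauss(arm_means[arm], 1.0)                   # noisy payoff
        counts[arm] += 1
        # Incremental update of the running mean for the pulled arm.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return estimates, total_reward

estimates, total = epsilon_greedy_bandit([0.2, 0.5, 0.9])
print(estimates)  # estimated means converge toward the true arm means
```

Larger `epsilon` gathers more accurate estimates of every arm at the cost of pulling inferior arms more often; controlling that balance online is exactly the open question posed above.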