I agree with Clark Quinn up to a point: “we’re talking about layering learning on performance in a context. However, it’s more than just performance support… To make it learning, what you really need is to support developing an ability to understand the rationale behind the steps, to support adapting the steps in different situations.” Fair enough. But this goes a step too far: “here we’re talking about models. What (causal) models give us is a way to explain what has happened, and predict what will happen.” I don’t agree that a learner requires an explicit semantic representation in order to have learned.