Agent-based models have a track record of generating stock market bubbles when they include agents that are not optimizing and use backward-looking decision rules. But they do not seem to have convinced the profession of their relevance because of the perceived arbitrariness of model components and the fact that they basically predict that a broken clock is right twice a day. Hence, it should be quite interesting to try to embed an agent-based model into a more widely accepted model and see how far this can bring us.
Matthias Lengnick and Hans-Werner Wohltmann do this by including two types of asset traders in a New Keynesian model: fundamentalists, who are forward-looking and expect prices to revert toward the fundamental equilibrium, and chartists, who are backward-looking and follow predefined rules based on past prices. This introduces some degree of history dependence and assumes that both types of agents are fooled every time. They never learn. Asset prices are thus essentially exogenously determined. The non-financial part of the model follows some old-fashioned setup where inflation linearly impacts the output gap, and inflation is determined by the output gap and the evolution of stock prices. In other words, we are back to the wind-generating hand-waving of 1980s macro, and not exactly something I would call DSGE.
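To see what this kind of trader mix does to prices, here is a toy sketch of the chartist/fundamentalist interplay. Everything in it (functional forms, the 50/50 weighting, parameter values) is my own illustrative assumption, not the paper's actual specification:

```python
import random

def simulate_price(T=160, p_fund=1.0, phi=0.2, chi=0.9, sigma=0.01, seed=0):
    """Toy chartist-fundamentalist price process (illustrative only;
    rules and parameters are assumptions, not Lengnick-Wohltmann's)."""
    rng = random.Random(seed)
    prices = [p_fund, p_fund + 0.05]  # two lags needed for the chartist rule
    for _ in range(T):
        p_prev, p_last = prices[-2], prices[-1]
        # Fundamentalists expect reversion toward the fundamental value.
        fundamentalist = phi * (p_fund - p_last)
        # Chartists mechanically extrapolate the last price change.
        chartist = chi * (p_last - p_prev)
        shock = rng.gauss(0.0, sigma)
        prices.append(p_last + 0.5 * fundamentalist + 0.5 * chartist + shock)
    return prices

path = simulate_price()
print(len(path), round(path[-1], 3))
```

Even in this stripped-down form, the backward-looking chartist term is what generates momentum and overshooting: the larger the weight on extrapolation relative to mean reversion, the longer prices can drift away from fundamentals before snapping back.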
Anyways, let's see what comes out of this. Of course, by the very nature of the model, there can be multiple equilibria, and an unstable equilibrium is possible. So one has to be very careful with simulations, since many different scenarios are possible. Yet, Lengnick and Wohltmann base their entire analysis on a single 40-quarter run of their model. They call it "representative." In which sense? Do all runs have the same statistical properties? Or did the authors mine for the most convenient one? None of the results can be believed until this is clarified.
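The complaint about a single "representative" run can be made concrete: simulate the model many times under different random seeds and compare summary statistics across runs. A toy check, again with made-up dynamics and parameters standing in for the actual model:

```python
import random
import statistics

def toy_run(T=40, seed=0, chi=0.9, phi=0.2, sigma=0.01):
    """One 40-period run of a toy chartist-fundamentalist price process.
    Functional form and parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    p = [1.0, 1.05]
    for _ in range(T):
        trend = chi * (p[-1] - p[-2])        # chartist extrapolation
        revert = phi * (1.0 - p[-1])         # fundamentalist mean reversion
        p.append(p[-1] + 0.5 * trend + 0.5 * revert + rng.gauss(0.0, sigma))
    return p

# Per-run volatility of price changes; if this varies a lot across seeds,
# no single run can honestly be called "representative".
vols = []
for seed in range(200):
    p = toy_run(seed=seed)
    rets = [b - a for a, b in zip(p, p[1:])]
    vols.append(statistics.pstdev(rets))

print(round(min(vols), 4), round(statistics.median(vols), 4), round(max(vols), 4))
```

If the spread between the minimum and maximum is wide, picking any one run as "representative" requires justification; reporting the distribution of such statistics over many runs would be the straightforward fix.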