On the superiority of secularism

The US presidential election season is upon us, and of course religion will again be a major factor. While it is rather unusual in the Western world for the religion of a candidate to matter, this seems very natural to Americans. The basic logic is that a god-fearing politician is less likely to abuse his power, especially in the position of president, where he cannot vie for a better position through exemplary behavior. That seems to be a slam dunk for religious candidates, yet experience from the rest of the Western world seems to contradict this. In fact, in many other countries, overtly religious candidates are suspected of owing their primary allegiance to their religion, not their country.

Pavel Ciaian, Jan Pokrivcak and d'Artis Kancs go one step further and compare developed economies, which generally rely on secular institutions to enforce laws and rules, to developing economies, which more frequently draw on informal institutions, in particular religious ones. They find that religion-based institutions are weaker because they hinge on credibility, which is difficult to build and easily lost. Secular ones have an explicit and formal legal enforcement mechanism that can also adapt to changing circumstances. The latter also means that religious enforcement systems are best suited to static societies, while dynamic and growing economies should adopt secular systems. I am not quite sure causality goes this way, but the correlations certainly support it.

Recessions are costly

Robert Lucas has pushed the idea that business cycles are not costly enough to warrant intervention, and the real business cycle literature, at least the early one, has in any case advocated that the government should stay out of this kind of business. It is true that long-term growth and understanding why some countries are so poor are more important questions, yet one cannot shake the feeling that recessions are costly. The recent one has been more severe than usual and highlights how high these costs can be, and they may persist for some time if the much longer than usual unemployment durations translate into significant losses in human capital and ultimately wages.

Steve Davis and Till von Wachter provide some new evidence of a somewhat different kind. Studying US workers displaced in mass layoffs, they calculate the present value of a job loss in terms of pre-loss wage years. When the unemployment rate is below 6 percent, the loss is 1.4 years. Above 6 percent, as is typical in a recession, the loss is 2.8 years, i.e., far more than the increase in unemployment duration would suggest. One can only imagine that these numbers are going to be much worse for the last recession. And keep in mind that these higher numbers apply to more people in a recession. It matters in aggregate: if the unemployment rate goes from 5 to 10%, it means 5% of the population loses 1.4 additional years of wages. That is 0.7% of national labor income, or 0.5% of GDP. Not peanuts, and I have not even factored in anything about curvature in utility.
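To make the arithmetic explicit, here is a minimal sketch of that back-of-the-envelope calculation. The ten-year amortization and the two-thirds labor share are my own assumptions, used to turn the present-value loss into an annual flow consistent with the figures above:

```python
# Aggregate cost of recession-time job losses (a sketch; the 10-year
# amortization and the 2/3 labor share are my assumptions, not the paper's)
extra_wage_years_lost = 2.8 - 1.4   # additional pre-loss wage years per displaced worker
displaced_share = 0.05              # unemployment rate rising from 5% to 10%
labor_share_of_gdp = 2 / 3          # rough labor share of national income

# Present value: 5% of workers each lose 1.4 years of wages
pv_in_labor_income_years = displaced_share * extra_wage_years_lost  # 0.07

# Spread over roughly a decade, this is an annual flow of about 0.7%
# of labor income, or about 0.5% of GDP
annual_labor_income_share = pv_in_labor_income_years / 10
annual_gdp_share = annual_labor_income_share * labor_share_of_gdp
print(round(annual_labor_income_share, 4), round(annual_gdp_share, 4))
```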

Why the young demand more social insurance than older generations

Take-up rates for various social insurance schemes generally increase from generation to generation, even when there is no change to the rules. That must be either because new generations are more frail and, say, tend to become handicapped more frequently or earlier, or because they have a higher demand for social insurance benefits, say, because they feel more entitled (one can have different interpretations).

Martin Ljunge builds a model where younger generations are influenced by what older generations did in the following way: deciding whether to apply for benefits involves a "psychic cost" that depends on the take-up rate of the previous cohort. The model is then estimated using individual data from the sick leave program in Sweden (I think; this is never explicitly mentioned). It is found that, indeed, having parents who took advantage of social benefits lowers the cost of doing so oneself. This effect makes up half of the long-term increase in the take-up rate.

Precaution versus prudence

Why do people accumulate precautionary savings? Conventional wisdom tells us this happens because people face shocks to income and are both risk averse and prudent. Now, we need to be careful here. Risk aversion means that one does not like fluctuations in utility (say, from consumption). Prudence means that one dislikes bad outcomes. Hence they do not mean exactly the same thing, and one could conceive of precautionary savings without prudence.

Agustín Roitman shows an example where this works. To do this, he uses a new class of preferences that allows one to clearly distinguish risk aversion from prudence: u(c) = ac - bc³, i.e., linear consumption less cubed consumption, with some coefficients. It has a relative coefficient of prudence of -1 (minus consumption times the ratio of the third to the second derivative of utility), which is constant and independent of risk aversion. The negative value also means this economic agent is imprudent. As Roitman shows, this agent will still accumulate precautionary savings, hence prudence is not necessary. And apart from these assumptions on the utility function, the result holds quite generally.
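To check the claim, here is a quick sketch with the derivatives computed by hand; the coefficient values a = 1 and b = 0.1 are my own illustrative choices, picked so that marginal utility stays positive over the range evaluated:

```python
# For u(c) = a*c - b*c**3: u'(c) = a - 3bc^2, u''(c) = -6bc, u'''(c) = -6b
# (a = 1, b = 0.1 are illustrative values I picked)

def relative_prudence(c, b=0.1):
    u2 = -6 * b * c               # u''(c)
    u3 = -6 * b                   # u'''(c)
    return -c * u3 / u2           # -c u'''/u''

def relative_risk_aversion(c, a=1.0, b=0.1):
    u1 = a - 3 * b * c**2         # u'(c)
    u2 = -6 * b * c               # u''(c)
    return -c * u2 / u1           # -c u''/u'

# Relative prudence is -1 at any consumption level...
print(relative_prudence(0.5), relative_prudence(1.5))
# ...while relative risk aversion varies with consumption
print(relative_risk_aversion(0.5), relative_risk_aversion(1.5))
```

The constant -1 confirms the claim: prudence is negative (the agent is imprudent) and does not move with risk aversion.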

Malthus visits Rwanda

Rwanda has always struck me as the perfect example of a Malthusian economy. A dense population where land is systematically divided up among descendants, leading to tiny lots that are barely sufficient for survival. Lots are so small that new capital for their exploitation is not relevant, and no technological improvement has any significant bite. In the end the land can only support a population at the edge of famine.

Marijke Verpoorten draws an intriguing connection between Malthusian theory as applied to Rwanda and the genocide of 1994. Using regional data, she finds that the areas with the most urgent population pressure (through density or growth) were also the ones with the most killings. In a way, society was taking care of a business that nature and famine otherwise would have.

On the mobility of academics in Europe

Europe has suffered a brain drain of top academic scientists that it has tried to reverse by offering better work conditions. The main competitor is the United States, where top scientists are able to attract easy funding and universities are accommodating. While this pertains to relatively few people, they are considered to be key, as their reputation can attract better colleagues and graduate students, ultimately improving the rankings administrators vie for. Given the large amounts of money spent by the European Commission and its country counterparts, it is important to understand what motivates scientists to move.

Edward Bergman does this using a survey of 1800 European academics working in the top institutions. Those who exhibit higher levels of loyalty or "voice" (being opinionated on local affairs) tend to stay and try to improve things internally, if necessary. The others prefer to leave when local conditions worsen, and then they have no particular loyalty keeping them in Europe when they are just looking for better working conditions. All this is not too surprising. What I find more interesting is that scientists' top priority is research opportunities, followed by salary, while language preference is a very minor factor. European universities cannot count on scientists coming home any more.

The surprisingly low border effect of the BigMac

The Economist's BigMac Index is widely used as an indicator of purchasing power parity. I have never been convinced that it is an appropriate indicator, though. But studies keep using it, and teachers keep mentioning it.

The latest is Anthony Landry, who uses it to study the border effect, i.e., how a border adds to the cost of transportation. That assumes that BigMacs are all produced in one location and then shipped to all stores worldwide. This is clearly not the case, both for the raw material and for the assembly. Even when the raw material is produced centrally within each country, it only rarely crosses a border, due to regulation and preferences for local products. I thus do not see the point of estimating border effects with BigMacs.

The recent collapse in the trade of durable goods

One important feature of the recent crisis has been a significant drop in international trade. There is nothing surprising in this, as it is a regular feature of recessions that imports decrease more than GDP, as they are mostly composed of intermediate and investment goods. And it is well known that investment is very volatile over the business cycle. These are all well-known facts that are easy to replicate with standard international business cycle models.

Dimitra Petropoulou and Kwok Tong Soo look at this trade drop from the perspective of trade theorists. They use a small open economy model (hence prices are exogenous) with two-period overlapping generations (hence we are talking about long-term movements, not business cycles), with tradable durable goods and a non-tradable non-durable good. There is a fixed endowment of labor and capital that can be freely allocated between sectors (hence there is no investment, or savings). Agents can freely borrow and lend within a generation, but not between generations or with foreigners.

Why do I mention this? Because Petropoulou and Soo try to reinvent the wheel and make it square. There is a large international business cycle literature that has gone through all this with much more realistic assumptions and delivered quantitative results. And this is not the first time I have seen that trade theorists could learn a lot by reading a little outside their bubble.

The history of negative nominal interest rates

There is much talk about the zero bound on nominal interest rates and how it constrains the policy options of many central banks. One can of course ask why there would be such a restriction on nominal interest rates. Would it be possible to tax (nominal) money holdings? It is certainly conceivable to have negative interest rates on some bank accounts, and it has happened before. Switzerland and Germany imposed negative rates on non-resident account holders in the 1970s, and the Swiss National Bank is currently contemplating doing this again (New York Times). Sweden recently imposed them on mandatory reserve holdings of commercial banks. There is, however, very little theory on this.

Cordelius Ilgmann and Martin Menner try to make sense of the existing literature on the topic. There are essentially two strands, according to them: the first started with Silvio Gesell in the 19th century and proposes taxing money, the second lies within the very recent money-search literature.

Gesell was the proponent of an anarchist free-market utopia, the free-economy movement. He proposed that bank notes would need to have a weekly stamp affixed to remain valid, amounting to a 5% tax every year. The stated reason is that while other goods depreciate naturally, money does not and may be withheld from circulation. The tax alleviates that, and should in particular be used in times of crisis, because it increases the velocity of money and prevents its hoarding. That argument can be made for today, but it neglects the influence of inflation on all this, and that is crucial.

The recent money-search literature uses taxes on money holdings as a proxy for inflation. My understanding is that this is really for analytic convenience and in no way a policy proposal. Indeed, what this literature cares about is the real return on money, not the nominal one. But Ilgmann and Menner run with it and read it as an endorsement of a Gesell tax.

PS: a third way is discussed in the paper, recently proposed by Willem Buiter. It is based on the silly idea that the various functions of money are taken over by separate currencies, backed up by equally silly arguments with money in the utility function.

Four years, 1000 posts

I am now entering the fifth year of blogging, and this is the 1000th post, which means I have discussed about one thousand papers. Personally, I find this an incredibly high number. In fact there are so many that I once discussed a paper a second time, forgetting I had already covered it before.

This is also my yearly opportunity to reassess whether what I am doing here makes sense at all. I obviously get no outside reward for the blog, but that is OK. Only the top bloggers get recognition, and I am not one of them. So it is all about personal gratification, and I think I am still happy about doing it. It is sometimes a struggle to find the time to put things on "paper", but as I still enjoy it, I'll continue for another year.

This year of blogging was dominated by the Bruno Frey saga, which I believe has attracted quite a few new regular readers. That is noticeable because readers from Germany, Switzerland and Australia are much more frequent now. The blog seems also to have an unusually strong and loyal readership in Slovenia.

If you wonder what the most popular posts during this year were, here they are:

  1. On the ethics of research cloning
  2. Why boards keep bad CEOs
  3. Keep CEOs off outside boards
  4. Do we need awards in Economics?
  5. The Bruno Frey bubble
  6. Economists did see the bubble coming
  7. What is an MBA worth?

I am also encouraged to see that discussion of the posts has picked up compared to previous years. Again, Bruno Frey has helped here, but it is also apparent in the regular posts about Economics research. Unfortunately, spam comments have started flocking to this blog, and I apologize that some make it live before I notice them.

Measuring optimists and pessimists

According to Wikipedia, the Oxford English Dictionary defines optimism as "hopefulness and confidence about the future or successful outcome of something; a tendency to take a favourable or hopeful view." This is a poor definition, I think, because it mixes two concepts: (a) that one holds subjective probabilities on good outcomes higher than what objective ones would warrant, and (b) that one is less sensitive to uncertainty. Put that way, it becomes quite obvious that economists have the tools to separate the two and can thus document what I would call true optimism (or pessimism), a tendency to favor good outcomes in expected utility.

David Dillenberger, Andrew Postlewaite, and Kareen Rozen do exactly this by appealing to Savage's subjective expected utility. If we look at choices of an individual across lotteries, then we should be able to back out both the subjective probability distribution and the aversion to risk. This can actually be a big deal, for example when one wants to give advice in a risky environment. The advisor will try to understand the preferences of the subject, but ignoring optimism/pessimism may lead to a very erroneous assessment of risk aversion. This can be crucial in assessing, say, investment strategies, health care options, or career choices.
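A toy version of the identification idea, under strong assumptions of my own (square-root utility and a binary lottery; the paper works with general Savage preferences): once the utility function is pinned down, the certainty equivalent of a lottery reveals the subjective probability.

```python
import math

# Back out a subjective probability from a certainty equivalent,
# assuming known utility u(c) = sqrt(c) (my illustrative choice)
def implied_probability(certainty_equivalent, high, low):
    u = math.sqrt
    # Solve p*u(high) + (1-p)*u(low) = u(certainty_equivalent) for p
    return (u(certainty_equivalent) - u(low)) / (u(high) - u(low))

# An agent indifferent between 30 for sure and a lottery paying 100 or 0:
p = implied_probability(30.0, 100.0, 0.0)
print(round(p, 3))
# If the objective probability of the good outcome is 0.5, this agent
# (whose implied p is about 0.548) is an optimist
```

In practice one needs choices across many lotteries to separate the subjective probabilities from the curvature of utility, but this is the basic mechanics.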

Anonymous applications on the Economics PhD market. Really?

There is unfortunately still some discrimination left in labor markets. The literature has shown that in particular race and gender may matter in some cases, as well as beauty. While this is usually demonstrated by looking at the significance of some characteristic dummies that should not matter in hiring or wage decisions, another way to test for discrimination, at least in hiring, is to compare anonymous job applications to open ones. This is costly to do, and it may raise some ethical issues, hence it is a rare exercise.

Annabelle Krause, Ulf Rinne and Klaus Zimmermann did this for applications of Economics PhDs to a position in a European institution. I am really puzzled as to what they expected to learn from such a peculiar market. Part of the recruitment pool was anonymized before being submitted to the recruitment team. This is very difficult to do properly, as CVs, papers and reference letters have plenty of mentions of names and gender. And how does one hide these when a candidate has published or is previously known to recruiters? And of course, this can only test up to the selection for the live interview, which is very early in the recruitment process.

In addition, it is very unlikely to find discrimination in such a specialized market. One, recruiting in an academic environment is usually closely scrutinized for discrimination. In fact I have been highly annoyed by the burden of proving one has not discriminated. Two, I would even argue there is reverse discrimination, as recruitment committees are often under strong pressure to hire from "under-represented" groups. Three, an institution that agrees to this exercise is either not discriminating consciously, or it is very foolish.

The results are not surprising. No discrimination is found, except a little reverse discrimination for women. It is impossible to generalize the results, as the sample is so small and so specific to this recruiting institution. I really do not see the point of this paper.

Banking crises and income inequality

With the Occupy X movement, discussion about the unequal distribution of income has flared up. At the same time, we are still not over the banking crisis. Several people have linked the two, saying that the large banking sector has led to more income inequality and that the rich have benefited from the crisis at the expense of the poor. We probably do not yet have the data to corroborate any of this, but we have data that allow us to look at income inequality through other banking crises.

This is what Luca Agnello and Ricardo Sousa set out to do with a panel dataset of OECD and non-OECD countries. They find some regularities: there is a run-up in income inequality before a crisis hits, especially in non-OECD countries; it declines fast thereafter, especially in OECD countries; better access to credit reduces income inequality; and the size of government has no impact on income inequality. The estimates of the paper are rather crude, with just a lag on the Gini coefficient. I am sure one could tease out more interesting dynamics with a structural vector auto-regression. But the results are still interesting as they are.

One more argument for taxing unhealthy activities

Whenever an activity exerts a negative externality on others, you want to tax it. For example, sin taxes are in place or have been proposed for smoking, eating junk food or drinking soda. The reasoning is that these activities are bad for health, and thus end up costing society, even when there is no socialized health care. The fact that these unhealthy people tend to live shorter lives only partly offsets the direct effect of the externality.

Catarina Goulão and Agustín Pérez-Barahona explain that there is another good reason to tax unhealthy food: unhealthy eating habits are transmitted from one generation to the next even when there is no genetic reason for this to happen. This can also occur within a group of socially interacting people, which gives obesity or smoking the characteristics of an epidemic. To break this vicious circle of learning, the price mechanism has to come to the rescue. So let's impose more or new sin taxes. We could use the revenue, apparently.

Social security and the increase in US health care costs

Health care expenditures increase faster than inflation, and I have already offered several explanations for that. But there are more, and they work in rather subtle ways. Today's theme is social security.

Kai Zhao builds a large general equilibrium overlapping generations model to quantify the impact of the introduction of social security in the US on health expenditures. And it is substantial, at 43% of the increase since the fifties. How can this happen? One mechanism is that social security transfers income from the young to the old, and the old have a much higher propensity to spend on health. A second one is that with social security, utility during retirement years is higher, and thus people want to make sure they have more of those years. Interestingly, the depressing impact of social security on capital accumulation is far too small to counteract these two effects.


The American Economic Association launched the four American Economic Journals, juniors to the American Economic Review, three years ago. For a journal, three years is a short time, but I still want to assess where they stand for now. From the get-go, let me express again my disappointment that the AEA did not make them open access; the finances would certainly have allowed it, and there was no real need for print copies. In fact, issues were lying around the department like junk mail that no one bothers to throw away. That was a rather bad omen for the new journals. How has each fared since?

American Economic Journal: Macroeconomics had a horrible start. Editors struggled to fill the first issues as submissions were severely lacking. I suspect this had to do with some very poor choices for the editorial team, to a large extent the usual AEA insiders. A change of editor after a year and some serious recruiting efforts changed all that, and AEJ-Macro is now a success story and the flagship of the AEJs, with impressive impact factors for such a young journal.

American Economic Journal: Microeconomics is rather unremarkable. The market for a new micro journal was thin to begin with. Arguably, the AEA wants to recapture the market share lost to commercial publishers, but with Theoretical Economics launched shortly before (in open access), it was difficult to rally economic theoreticians for this new cause. Unless people finally get fed up with the commercial publishers, AEJ-Micro will remain unnoticed.

American Economic Journal: Economic Policy fares better, but only a little. I think it suffers from the fact that most economic research has policy implications, so it is difficult to determine which papers should go there rather than elsewhere. And there are plenty of field journals that attract the top papers in their field. If, say, a health economics paper does not make it into the AER, the author will always prefer sending it to the Journal of Health Economics.

American Economic Journal: Applied Economics is the most worrisome. It has essentially been hijacked for the research agenda of the editor, and for the rest it looks like a junior partner not to the AER, but to the Quarterly Journal of Economics, even with the same mafia mentality (and hence the title of this post). I am surprised that the Association has not yet started rectifying the situation, but then again it is run by people close to the editorial team. I am afraid that this journal, despite being so young, is already the epitome of club mentality in publishing.

In summary, an unexpected success, two non-remarkable journals and a basket case. Not a promising start.

Risk taking and the menstrual cycle

Women are grumpy during their period, and they have good reasons to be so. That this can impact some of their decisions should come as no surprise, yet it can be useful to determine how and how much.

Matthew Pearson and Burkhard Schipper do this by running an experiment that tries to tease out risky behavior and find that women bid higher in an auction when in the most fecund phase of their menstrual cycle or when they are on hormonal contraceptives. OK.

But wait, much like in a TV infomercial, there is a bonus. In a second paper, the same authors find that the ratio of the lengths of the index and ring fingers of the right hand has no impact on risk taking. While that seems a rather odd measure to look at, there is a good reason to do so. But what annoys me is that this is the exact same experiment as in the previous paper; they just use a different characteristic of the participants.

This is a bad case of turning a research project into many thin salami slices. The authors did not even bother rewriting much of the paper, with many parts being cut-and-pasted from one to the other. Sadly, this second paper is already scheduled to appear in Experimental Economics. What are we to expect next? A paper about hair color? Astrological sign?

Are economists not humble enough?

The Economics profession has been targeted on various fronts lately: one is the lack of a code of ethics, as exposed by the documentary Inside Job, and another has been the failure to forecast or warn about the current crisis. With respect to the first, the American Economic Association has convened a committee to create a code of ethics, although unfortunately with a rather narrow mandate. Regarding the second, I believe the accusations are overblown, in part because economists did warn about excessive house prices, because bubbles are by definition unobservable, and because the principal accused, modern macroeconomics, had addressed before the crisis many of the aspects it is accused of missing. This latter point has mainly been put forward by some economists who have a rather antiquated knowledge of the field, as occasionally addressed here.

One of them is David Colander, who has an admirable art of getting onto all the right committees at the AEA. This time, it is the Ethics Committee. In his latest paper, he argues that he is not too worried about the funding of economic research and the lack of disclosures. He is rather bothered by the fact that economists do not have the humility to declare how fragile their results may be. They should be more forthcoming about the risk of error, much as engineers are, since they care a lot about failure.

I can see where Colander is coming from, but I do not think this is the fault of the economists, but rather of the public consuming economic research. From personal experience, nobody cares about alternative scenarios. Well, many editors do, but people in the industry do not. All they want is a precise number to run with. And even if you include standard errors and such, all that is reported is the median. I am guilty of this on this blog as well: it would take too much time and space to report all this for every paper, and it would distract from the main message. Only when I think the authors have abused the simplification or neglected possible scenarios do I discuss this, and that does not happen too often. And I think it is very symptomatic how Thomas Sargent and Christopher Sims were recently ridiculed in the press for refusing to provide instant answers to difficult questions. In short, I think the problem has less to do with the economists than with the readership.

Which childhood sport is most promising for labor market outcomes?

Which sport should you encourage your child to adopt? My guess would be cross-country running, swimming or rowing, which all have the important characteristic of encouraging perseverance and long-term planning. They also make your child hang out with the "right people," as these athletes feature prominently among the best students. But these are just my impressions; let us see what can be done with more than anecdotal data.

Charlotte Cabane and Andrew Clark look at US schools, although not quite at the level of athletic detail I would have wished. Healthy students are more likely to participate in sports and later be successful in life. But those participating in sports are also more likely to be healthier, so the direction of causation is not clear. Of interest here is whether participation in sports is an important determinant of later outcomes. Using the National Longitudinal Study of Adolescent Health, which follows students who were in grades 7-12 in 1994-95, they can track how the students were doing as late as 2008. In the end, participating in team sports once a week as a student increases the hourly wage by 1.5%. Not a lot, but still significant, especially as this is for adults in their thirties, and gaps tend to widen later on. Individual sports seem to have an impact only on the adult outcomes of girls.

The best solution: carbon taxes

When there is an externality, the best way to deal with it is a tax (for a negative externality like pollution) or a subsidy (for a positive externality like education). Yet I am continuously amazed at how little reception this policy, which uses the market mechanism, has found in the United States. Economists are also very fond of it: it is the most efficient way to reach an objective, and in the case of a negative externality it even allows reducing other taxes that distort the wrong way, like income taxes.

Joseph Aldy and Robert Stavins write a survey article about how best to deal with carbon pollution, comparing cap-and-trade pollution permits, clean energy standards and taxation of carbon content. The latter is the easy winner. And as argued multiple times on this blog, alternative energy should not be subsidized.

How much tax evasion is there in the US?

As mentioned before, tax authorities are now especially eager to catch tax evaders. But how many of those are actually out there? One would suspect that there are relatively few of them in a low-tax country like the United States. Then again, the tax authority there has been given relatively few means to pursue investigations, and audit rates are surprisingly low. And the tax code is so complicated that the line between tax evasion and confusion is rather blurred.

The Internal Revenue Service, the US federal tax authority, has estimates of how much it is missing in revenue, but as far as I know these numbers are kept well hidden, except for a study in the eighties. Academics have tried to replicate this exercise, obviously with poorer data than the IRS, but with less political pressure. The latest attempt is by Edgar Feige and Richard Cebula. They use a technique similar to one used to calculate the size of an informal economy, based on the quantity of currency in circulation and of check deposits, adjusting for currency suspected to be held abroad and for financial innovation. Indeed, tax evaders try to hide income from reporting by using cash transactions. This ignores, though, those who use tax havens, and I welcome informed guesses on how large a factor that may be.

In any case, Feige and Cebula come to the conclusion that about 20% of reportable income is not properly reported, leading to lost revenue on the order of $400-500 billion every year. They even compute a time series that allows them to figure out what makes the non-compliance rate change. It will not surprise anyone that it increases when national income, tax rates or nominal interest rates are higher. It is interesting to see that higher unemployment rates lead to higher non-compliance. That may have to do with more people earning informal income while on unemployment insurance. Still, I cannot shake the feeling that these results are shaky themselves, as the data is essentially made up.
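For the curious, the flavor of such a currency-ratio calculation can be sketched as follows. All the figures below are hypothetical, and Feige and Cebula's actual adjustments (currency held abroad, financial innovation) are far more involved:

```python
# Gutmann-style currency-ratio estimate of unreported income
# (all figures hypothetical, for illustration only)
currency = 800.0          # currency in circulation, $bn
deposits = 900.0          # checkable deposits, $bn
benchmark_ratio = 0.4     # currency/deposit ratio in a presumed no-evasion era
reported_income = 8000.0  # officially reported income, $bn

# Cash beyond what the benchmark ratio predicts supports hidden transactions
excess_currency = currency - benchmark_ratio * deposits
legal_money = (1 + benchmark_ratio) * deposits
velocity = reported_income / legal_money   # assume hidden money circulates at the same speed
unreported_income = velocity * excess_currency

noncompliance_rate = unreported_income / (reported_income + unreported_income)
print(round(unreported_income), round(noncompliance_rate, 3))
```

The whole construction stands or falls with the benchmark ratio and the equal-velocity assumption, which is exactly why such estimates feel made up.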

Hurricane damage and climate change

Global climate change is not only supposed to bring higher average temperatures on earth but also more extreme weather. The latter is possibly the more important consequence, as it has an impact on agriculture and more generally can destroy property. One example is the incidence of hurricanes in the United States, with more and stronger hurricanes likely. What would be the economic impact of this?

Robert Mendelsohn, Kerry Emanuel and Shun Chonabayashi study this using historical data on hurricanes and estimating a damage function. They then use this function to estimate damages under two scenarios, with and without climate change, taking into account that various US states will have higher populations and incomes in the coming decades. In the end, some of the increase in damages is due to this growth, but climate change would have twice that impact. Yet, at $40 billion a year, it still proves relatively affordable compared to US GDP. But it is concentrated around the Gulf of Mexico, which will become even less hospitable with higher temperatures anyway. The outlook for Florida is not too good...

Why more bad mortgages? Too much reliance on credit scores

The current financial crisis is at least partially blamed on lax lending practices in the US mortgage industry. More mortgages were provided to less credit-worthy individuals with smaller down-payments than ever before, until this house of cards fell apart. Of course, this is not the whole story, but at least there is some partial truth to it, right? Now I am not so sure.

Indeed, Geetesh Bhardwaj and Rajdeep Sengupta look at a large fraction of the sub-prime mortgages originated from 2000 to 2006. And they find that the credit-worthiness of their holders, as measured by the FICO score, actually increased (and more so than in the general population). How could this be possible? One hypothesis is that mortgage issuers gradually relied more and more on simple metrics they could enter into some software instead of analyzing the other details in an application file. And if you end up relying on a single criterion, the selected applicants will look much better according to this criterion. But if this criterion is not well correlated with actual credit-worthiness and relevant information is neglected, your loan pool becomes more risky.

How much does race contribute to poverty in South Africa?

It is no surprise to anyone that blacks in South Africa are poorer, less educated and less healthy than whites, given the still rather recent struggles through apartheid. But the country has also gone through a remarkable reversal of fortunes, with blacks now running the country and leading some formidable efforts to lift blacks out of chronic poverty. In some ways, these efforts are much more substantial than those made in the US, where blacks are still a minority and yet have remained in poverty with little progress for decades. It is therefore of interest to understand how things have improved in South Africa, and whether the damage from apartheid has been overcome.

Carlos Gradín focuses on poverty and deprivation, both of which still show considerable gaps between blacks and whites, and comes to the conclusion that they are mostly explained by education and "family background" (parents' occupation and education). This all points to the fact that education is the big policy option, but it will take considerable time to reduce the gap.

These results are obtained by estimating a reduced-form equation for blacks and then letting them assume the characteristics of whites. Unfortunately, this procedure does not allow one to determine whether there is still some discrimination against blacks, which would have been of particular interest in this context.
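
The mechanics of that counterfactual can be sketched on simulated data. Everything here is hypothetical: education stands in for all characteristics, and the coefficients and distributions are invented for illustration.

```python
# Simulated sketch of the decomposition: estimate a reduced-form equation
# on the black subsample, then let it "assume the characteristics of whites".
# Education stands in for all characteristics; all numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
educ_black = rng.normal(8, 2, n)    # hypothetical years of education
educ_white = rng.normal(12, 2, n)
# Deprivation falls with education (made-up linear relationship plus noise)
deprivation = 1.2 - 0.07 * educ_black + rng.normal(0, 0.2, n)

# Reduced form fitted on the black subsample only
X = np.column_stack([np.ones(n), educ_black])
beta = np.linalg.lstsq(X, deprivation, rcond=None)[0]

actual = deprivation.mean()
counterfactual = (beta[0] + beta[1] * educ_white).mean()   # at white characteristics
print(actual, counterfactual)   # the difference is the gap explained by characteristics
```

Whatever gap remains after the characteristics swap is what one would like to attribute to discrimination, and this is precisely the part the procedure cannot identify.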

Are payroll tax cuts efficient?

While there is ample debate in various countries on whether payroll taxes should be reduced or increased, there is little agreement. One side focuses on government revenue generation while the other is worried about the impact of such changes on economic activity. It looks like agreement is unlikely, as the two sides pursue different objectives. But maybe there could be agreement after all.

Ossi Korkeamäki looks at a natural experiment in Finland, where the payroll tax was reduced by 3-6 percentage points in some provinces. As it is levied at the firm level, he studies the impact on firm activity, and finds nothing of statistical significance. If you are willing to look beyond statistical hairsplitting, most of the tax reduction went into profits, a little into wages and nothing into employment. But this is only looking at the firm level; there may be some aggregate effects, like those profits being consumed, right? Well, given that those receiving the profits are either pension funds or rich people, both of which have lower marginal propensities to consume than the typical wage earner, that is not going to help economic activity either. So, in the end, it just looks like the government lost some revenue in this experiment, and the likely decrease in government purchases cancelled out any small output effect there may have been.

Corruption and the exchange rate regime

A fixed exchange rate regime is a sign of an authoritarian and weaselly government nowadays. Letting your currency float lets markets freely show when policies are weak. Governments that are afraid of showing such weakness either adopt a fixed exchange rate regime or announce a floating one and in practice manage it to look as stable as a fixed one. This is the so-called fear of floating, common among corrupt governments. This would indicate that the choice of the exchange rate regime could be determined by the political regime. Could the causality run the other way?

Katherina Popkova studies this question and finds the rather striking result that it makes sense to have a fixed exchange rate when you are corrupt and corruption has a strongly positive impact on output. The logic is that in such circumstances taxation is not that distorting, thus seigniorage can be efficiently replaced by taxes. That said, some may argue that it is a silly idea to suggest that corruption may have a positive impact on output. Yet, it is far from empirically settled whether corruption has a positive or negative impact.

When random taxation is better

As many governments are scrambling for new revenue, tax evaders are falling under more intense scrutiny than usual. Southern European countries are particularly well known to be a paradise for tax evasion, as authorities are rather weak and corruptible. As enforcement is rather difficult to improve, could an institutional change work better?

Stéphane Gauthier and Guy Laroque think they have found a solution, which is to randomize taxes. This builds on the fact that risk tolerance varies across taxpayers and that the latter can face stiff penalties (or inconvenience) when caught evading. Suppose the skilled worker is more risk averse than the unskilled one, and you want to redistribute from skilled to unskilled, but cannot observe skills (or risk aversion). The skilled will try to pretend to be unskilled to avoid taxation. But if taxes are random, then he has fewer incentives to do so. If risk aversion goes the other way, then of course tax evasion becomes worse. The authors offer no evidence on the correlation of risk aversion with skills or income, thus it is hard to tell how useful this scheme would be, and how much welfare would be improved.
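
The mechanism is easy to illustrate with stylized numbers (all of them invented, not from the paper): replace a deterministic tax on "unskilled" reports with a mean-preserving random one, and the certainty-equivalent income of a risk-averse skilled mimicker falls by more than that of the less risk-averse unskilled worker, which relaxes the incentive constraint.

```python
# Stylized numbers for the mechanism: a mean-preserving random tax lowers
# the certainty-equivalent income of a risk-averse skilled mimicker by more
# than that of the less risk-averse unskilled worker. All parameters are
# hypothetical.
import math

def eu_crra(consumptions, probs, gamma):
    # Expected CRRA utility; log utility when gamma == 1
    if gamma == 1.0:
        return sum(p * math.log(c) for p, c in zip(probs, consumptions))
    return sum(p * c ** (1 - gamma) / (1 - gamma) for p, c in zip(probs, consumptions))

def cert_equiv(consumptions, probs, gamma):
    u = eu_crra(consumptions, probs, gamma)
    return math.exp(u) if gamma == 1.0 else ((1 - gamma) * u) ** (1 / (1 - gamma))

income = 100.0
det = [income - 10.0]               # deterministic tax of 10 on "unskilled" reports
rand = [income, income - 20.0]      # mean-preserving random tax: 0 or 20
gamma_skilled, gamma_unskilled = 3.0, 1.0   # skilled assumed more risk averse

loss_skilled = cert_equiv(det, [1.0], gamma_skilled) - cert_equiv(rand, [0.5, 0.5], gamma_skilled)
loss_unskilled = cert_equiv(det, [1.0], gamma_unskilled) - cert_equiv(rand, [0.5, 0.5], gamma_unskilled)
print(loss_skilled > loss_unskilled)   # randomization punishes the mimicker more
```

Of course, flip the risk-aversion ranking and the inequality flips too, which is the weakness noted above.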

NB: The authors could have just spent a couple of seconds looking at the paper before submitting it. They forgot to BibTeX it, and none of the references show up. This is unfortunately a carelessness I have encountered too often with French economists.

How to get fair elections in new democracies

There are several opportunities for new democracies in the Middle East, and if the transition is done well, democracy could actually establish itself for good. This can only happen if politicians respect electoral results and the voters can trust the results as well. How do you get this to work?

Takeshi Kawanaka and Yuki Asaba offer some suggestions by looking at electoral administration systems. But how do you get an independent and competent electoral commission? This is not just a matter of resources; the incentives for politicians need to be right. The critical player is the party in power, which can change statutes, allocate funds and appoint commission members. Having a fraudulent electoral commission makes it easy to get reelected, but it can become costly if election results are challenged in the street. Rather, the ruler may want to have a credibly independent and competent electoral commission, as a win would then remain unchallenged. The ruler may also be motivated by avoiding costly bribes.

One aspect the paper ignores is that politicians are career driven. A politician who turned out to be competent and honest in local government may have a shot at higher office. This is why he should not be corrupt, and the electorate will only promote the non-corrupt ones. Thus, to build a democracy you need to work from the bottom up, so that proven incorruptibles end up ruling the country. Sadly, American nation building does the exact opposite, and it fails, the latest example being Iraq. By choosing to first hold national elections, the corrupt politicians got a free pass to power, and they have not looked back since.

How societies can collapse

Some say that the western economies are doomed and that China is taking over as the main economic power. I do not think we are quite there yet; after all, China is still not the largest economy, by far, despite its huge population. And China may itself be at risk of a financial crisis due to its very inefficient banking system. At least it could defuse an impending real estate bubble, but this is not the topic of this post. The worst case scenario, however unlikely it may be, is that western society would collapse. It has happened before, so it would be interesting to learn how this could happen.

Rodrigo Pacheco, Newton Paulo Bueno, Ednando Vieira and Raissa Bragança study the collapse of the Mayan civilization, which was well organized, covered a lot of territory and had a long history. Yet it appears to have collapsed within a few years in the 9th century. How could it unravel so quickly?

Their point of departure is that societies are inherently resilient. They can be subject to shocks, even large shocks, and they bounce back. Yet, sometimes they do not. What makes this happen? The main point is that dynamics are important. It is believed that a severe drought was the trigger. But this civilization had survived such droughts before. The last one was different because it brought about systemic changes. Their precise nature is difficult to determine; after all, we do not know that much about Mayan history. One hypothesis is that the drought brought some unrest which made it worse. For example, agriculture used terraces, which are costly to maintain and rely on the good shape of the ones uphill. Under a severe drought, maintenance may have been lacking, and after some time the terracing system fell apart, and with it probably the structure of society. In short, there needs to be the dynamics of a death spiral for a collapse to happen, but it will still take some time.
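
The death-spiral idea can be captured in a toy dynamic system. Every functional form and number below is invented purely to illustrate the tipping-point logic: terraces produce food, surplus food funds terrace upkeep, and a drought that is mild enough is absorbed while a long, severe one pushes maintenance below the level needed for recovery.

```python
# A purely illustrative death-spiral dynamic: terraces produce food, surplus
# food funds terrace upkeep, and a long enough drought pushes the system past
# a tipping point. Every functional form and number here is invented.

def simulate(drought_size, periods=60):
    T = 1.0                                   # terrace quality, normalized
    for t in range(periods):
        rain = 1.0 - (drought_size if 5 <= t < 15 else 0.0)
        output = rain * T
        surplus = max(output - 0.5, 0.0)      # food beyond subsistence
        maintenance = min(surplus, 0.1)       # upkeep is capped by the surplus
        T = min(1.0, T * 0.92 + maintenance)  # terraces decay at 8% per period
    return T

mild, severe = simulate(0.2), simulate(0.6)
print(mild, severe)   # the mild drought is absorbed; the severe one is not
```

The collapse is not instantaneous: terrace quality erodes period by period long after the rain returns, which matches the point that a death spiral still takes some time.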

European credit ratings: a case of self-fulfilling expectations

Europe is a mess, and one has to wonder why. First, there is no reason that the credit difficulties of Greece should have any consequences for the Euro. I doubt the US Federal Reserve would feel compelled to do anything if a state were to default on its debt, and nobody would claim it should. Why should it be different in Europe? Because politics wants it so.

To make things worse, the credit rating agencies generate self-fulfilling expectations. These are of a different kind from those that will force Greece to default. Witness yesterday's announcement by Moody's threatening a downgrade of French debt: "Elevated borrowing costs persisting for an extended period would amplify the fiscal challenges the French government faces amid a deteriorating growth outlook, with negative credit implications." In other words, high credit costs would lead to a downgrade, and this would lead to even higher credit costs, etc. The rating is not about the intrinsic risk of default (what rating agencies are supposed to measure) but about the expectation of where the rating should be, as signaled by the cost of credit. And this after Standard and Poor's downgraded the same debt "by error." The rating agencies are clearly not helping at this point.

Meta-analyzing the price puzzle

Meta-analysis is the analysis of the literature on, say, the estimate of a particular elasticity. This is common in medical studies, where each individual study usually has little statistical significance, but aggregating them may give the results more power. In Economics, there are few meta-analyses (although there is a society on the topic), as study samples are statistically much more sound and each study takes so much time that there is little incentive to do a lot on the same topic. Yet, this happens for some themes.

One such theme is the price puzzle: many vector-autoregression models show that a monetary contraction is immediately followed by a rise in the price level, which is rather counter-intuitive. Marek Rusnák, Tomáš Havránek and Roman Horváth found 70 articles from 31 countries providing 210 estimates of the response at five horizons. Can one then simply do a statistical analysis on these estimates? That is not so easy, as there may be a publication bias. As the puzzle is now well documented, journal editors are not particularly interested in studies that demonstrate it again. The authors claim that there is such a bias, but I have to confess I do not understand how they come to that conclusion.
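
For what it is worth, the standard tool for such claims is the funnel asymmetry test: regress reported estimates on their standard errors; without selective reporting, imprecise studies should scatter symmetrically around the true effect and the slope should be zero. Whether this is what the authors do, I cannot tell; the sketch below only demonstrates the test on invented data where positive, puzzle-confirming estimates are more likely to get published.

```python
# Funnel asymmetry test on simulated data: regress reported estimates on
# their standard errors. Without selective reporting the slope is zero;
# here only "puzzle-confirming" positive estimates are reliably published.
# The selection rule and all numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
se = rng.uniform(0.05, 0.5, n)            # studies differ in precision
estimates = rng.normal(0.0, se)           # true effect is zero

# Positive estimates always published, negative ones only 30% of the time
published = (estimates > 0) | (rng.random(n) < 0.3)
y, x = estimates[published], se[published]

X = np.column_stack([np.ones(x.size), x])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
print(intercept, slope)   # a slope well above zero signals publication bias
```

The intuition: only imprecise studies can produce large positive flukes, so selection makes reported estimates grow with their standard errors.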

The price puzzle is on average present, and prices eventually decrease as suggested by all theories. But there is substantial variance in the results, which the authors show can be explained by what variables are included or what variant of a vector-autoregression is run. This unfortunately confirms my frustration with this field: there is a dizzying array of methods and specifications that yield different results, and it is impossible to tell which one is right. Maybe we need a bit more theory to discipline this.

When should a child start school?

How early should one send a child to school? Too early, and she may miss critical time with parents and not be ready for school. Too late, and she may have to deal with classmates that are not as fast or as developed (see previous post on redshirting). But it is not just a question for an individual; one also needs to figure out when, as a society, school should start. France, for example, offers public kindergarten very early (2.5 to 3 years old), and it is much more pedagogical than daycare. Most French kids enter first grade with reading skills. In the US, kindergarten comes at age six or even seven, and there is little teaching. China is similar.

Dionissi Aliprantis uses the Early Childhood Longitudinal Study and exploits cross-state and cross-time differences in the cut-off date that makes children eligible for kindergarten as five-year-olds. Children were analyzed in kindergarten, 3rd, 5th and 8th grade. It turns out that an earlier cut-off date is better, and an earlier birthday as well, as long as the child remains eligible. But the analysis only pertains to rather small differences in start age, so it is difficult to extrapolate to the even earlier start dates seen elsewhere.

The excessive taxation of married couples in Italy

In these times of fiscal austerity, governments are scrambling to find tax revenues, in particular by closing loopholes and chasing tax evaders. While the goal is to raise more revenue (and be somewhat fairer, as tax evaders tend to have higher incomes), this can have the adverse effect of actually reducing tax revenue. While it is clear that the Laffer Curve is correct, there is little evidence that in aggregate any western economy is on the right side of its peak. But there are some individual circumstances where it is.

Fabrizio Colonna and Stefania Marcassa discuss the taxation of married couples in Italy. Taxation in Italy is based on the individual, with deductions for children and for a non-working spouse. As these tax credits decrease with the income of the first earner in a couple, typically the husband, there is a strong disincentive for the wives of lower income husbands to work, full-time or part-time. This reduces the labor supply of the poor, increases poverty and increases the strain on welfare programs.

The paper estimates a complex labor choice model and runs two scenarios: the current one, and one where households can choose to file taxes individually or jointly. The latter boosts women's labor force participation rate by 3 percentage points and reduces the proportion of women under some poverty threshold by half that. Other scenarios, such as gender-based taxation, are explored as well. In all scenarios, the assumption is that taxes are revenue-neutral. As poverty is reduced, the need for redistribution is reduced, something that makes the scenarios even more attractive.

Growth by saturation

When you compare rich and poor economies, you notice that many things differ between them. One of them is that rich countries enjoy a much larger diversity of goods. It is not just that they carry more broad categories of goods; there are also more variations of a given good. For example, rich countries may have more car types, and each model is sold with many more variations than in poor countries. Explaining this could be interesting.

This may be the motivation of Kozo Kunimune, who builds a growth model with three essential characteristics: there is constant growth in factor productivity, there is a ranking of goods with respect to the utility they provide, and consumers have a saturation point for each good. This implies that once they are saturated with a good, any "excess" production capacity can be dedicated to another good. As there is a strict ordering, goods are saturated in succession and the number of goods defines the level of development. Also, the time needed to saturate a new good decreases if the growth rate is constant, a feature that sounds empirically correct. But of course, there is no evidence whatsoever that we have such "lexicographic cum Leontief" preferences. The model also violates elementary principles of Economics, such as local non-satiation. Not a particularly useful model.
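
The shrinking-wait property is just arithmetic, and a sketch makes it concrete. The parameters (2% growth, one unit of capacity per good) are invented for illustration: with capacity growing at a constant rate and each ranked good requiring a fixed capacity to saturate, the gap between successive new goods falls over time.

```python
# Sketch of the saturation mechanism with invented parameters: capacity grows
# at a constant rate, each good requires a fixed capacity to be saturated,
# and spare capacity rolls over to the next-ranked good.
import math

growth = 1.02        # constant productivity growth per period
capacity0 = 1.0      # initial capacity, enough for the first good

def arrival_time(n):
    # First period in which the n-th ranked good is saturated
    return math.ceil(math.log(n / capacity0) / math.log(growth))

gaps = [arrival_time(n + 1) - arrival_time(n) for n in range(1, 10)]
print(gaps)   # the wait for each new good shrinks as the economy grows
```

This is because the n-th good arrives at roughly log(n)/log(g) periods, and log(n+1) - log(n) shrinks in n.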

Non-conformism in academia

In economics, there is the mainstream and various heterodox factions that do not identify with the mainstream or with each other. To many, the heterodox seem like annoying and useless chatter. But they fulfill an important role, which is to keep those in the mainstream on their toes. This is part of the scientific process: challenge long and widely held views to check whether they are still valid. And even if the challenge is wrong, it makes those in the mainstream think about the axioms they build their theories on.

Vela Velupillai makes the case that dissenters are typically silenced by the mainstream, but may prevail in the long-run, citing many examples of mathematical economics, in particular the work of Pietro Sraffa. And this is really what tenure is good for: a dissenter will have much trouble publishing but may ultimately still contribute a lot to scientific advances.

We are in times where dissenters seem to have a more receptive audience. Indeed, the current crisis is seen by some as a failure of Economics, thus it is easy to criticize it. But it also highlights that dissenting can be very distracting if it is poorly focused, uninformed and populist. In this regard the recent dissenting by Colander, Krugman and Stiglitz has, I think, been counterproductive, as I have occasionally discussed here. It is good to criticize the fundamental assumptions of the mainstream. It is better, but not necessary, to offer alternatives. But it is counterproductive to dissent on the basis of an old reading of the literature, a literature that has in the meanwhile evolved to address these supposedly new criticisms.

Heterodox Economics has thus gained a fresh wind, simply because it is different from the mainstream. But what does it have to offer? Peter Skott, a heterodox economist himself, argues that it is still far from being a viable alternative. While heterodox approaches usually reject microeconomic foundations in macroeconomics, they should not throw out the baby with the bathwater, i.e., ignore microeconomics altogether. And while the heterodox analysis of income distribution has focused on the labor income share, the large changes in income inequality happened within labor income. Finally, the heterodox literature is just as guilty of ignoring many of the suddenly relevant intricacies of the world of finance.

There is still a lot of work to do, and instead of pursuing an excess of mutual accusations and claiming Economics is useless, it seems much more appropriate to put more resources into making it better, heterodox or mainstream. After all, medical research was not defunded when the HIV/AIDS epidemic wrought havoc.

About beer

Beer has been an important part of human well-being, and this for thousands of years. While the economic literature has dealt rather little with it, many great papers have significantly evolved at the pub. Still, I have previously reported on a conference on the Economics of beer, and open source beer.

Now, let us talk about the economic history of beer, thanks to Eline Poelmans and Johan Swinnen. Brewing in the middle ages was the realm of monasteries, with rather small output and a lot of product diversity. With technological advances and reductions in transportation costs, commercial breweries took over, and especially over the last hundred years they led a remarkable trend towards consolidation. After all those mergers and acquisitions, product diversity was considerably reduced. This is all changing now: tastes have become more sophisticated and local micro-breweries are on the upswing. In some ways, beer has become more like wine.

Why are school counselors so bad?

Recent news articles following up on the Occupy X movement have focused on youth unemployment and student debt. One aspect of this that strikes me is the absurdly bad choices students make. And from discussions with undergraduate students I have had over the years, school counselors share part of the blame.

For one, they keep sending students into supposedly easy majors, even if job prospects are slim. If a student cannot handle the rigors of a serious university education, he should not be in university. He is less likely to get grants, more likely to take longer to graduate and then be in debt, less likely to get well-paying jobs thereafter, and thus will face student debts for many years. Also, students with ambitions in better majors are told to switch to easier majors when they face difficulties, instead of being helped to overcome these difficulties. This is in particular the case for ethnic minorities and women, and one then wonders why they are underrepresented in science and technology. The problem is that counselors perpetuate or even amplify prejudices. An example is Neil deGrasse Tyson, who was told that as a black he should go into basketball, not physics, and was not offered the help white students were getting when he struggled with classes, as blacks were not expected to excel in physics.

Also, counselors seem obsessed with finding the "right college," which is often an obscure little university where every major has only two or three faculty, and which turns out to be expensive. The usual explanation is that the student needs small classes. This seems like another case of someone who is going to be highly in debt for a long time. And to come to Economics, the major is filled with undergraduates who did not get into or got kicked out of the business school, most often for failing business mathematics and statistics. And what do counselors tell them? Get into a similar major, like Economics (or Psychology), ignoring that those quantitative skills are needed there even more.

Why do counselors give such bad advice? I do not have an answer beyond wild guesses. But my casual observation tells me that the best psychologists or education professionals do not pursue this career, and for a good reason: on average the entry salary for a graduate of a Masters program in school counseling is an astounding US$33,000. In some sense, this is the bottom of the barrel trying to give advice to people on how to avoid falling to the bottom of the barrel. That is hardly inspiring. Schools should hire at least a few people who have had successful careers to show how it is done, for example retirees.

Miss sharing with future generations? You are not missing much

Markets are not complete. Two major ways in which they are incomplete are that we face borrowing constraints and that we cannot trade with future generations. The latter can be a big deal when we think about the valuation of future amenities (like the environment) or long-term risks. In particular, future generations could make us behave in certain ways if they could influence some of today's markets. This is precisely why the overlapping generations literature emerged, and a principal conclusion of it is that the government needs to intervene, in particular by providing a security that lives beyond generations: the government bond. While there is obviously a welfare cost to the absence of future generations from current markets, how large is it? The literature tells us the welfare benefit of the government bond is large.

Roel Mehlkopf just defended a dissertation on this topic, focusing on risk. In a nutshell, the cost is not that large, and it all has to do with distortions on the labor market. For one, those who ex post need to transfer to another generation face a commitment problem, in the sense that they want to reduce their labor supply, for example by retiring early. Once you take this into account, there is little to redistribute, and it can even be welfare-decreasing to transfer. This rationalizes why pension funds need to be solvent at all times, even if they are solvent in the long run. One important implication is that when cuts in pensions are necessary, they should be larger for young workers, as this reduces the labor market distortions.

Also, the dissertation points out that comparing to a situation with fictitious markets between non-overlapping generations can be misleading. Indeed, this implies that they all have the same weight in a social welfare sense. But there can be good reasons for a social planner to deviate from this, and the analysis above, for example, implies that future generations benefit more from risk sharing than current ones (who are at least partially locked in by past decisions). This should entice the social planner to give more weight to current generations, even beyond normal discounting of the future. And as only current generations vote for the current government, we are not far from that optimum.

Physiology and Malthus

The Malthus model of economic growth (or the lack thereof) is now standard fare in undergraduate education. Basically, it shows that there is a limit to the size of an economy that hinges on decreasing returns to labor in production and on mortality increasing as standards of living, as measured by GDP per capita, decrease. Those two critical assumptions are based on observations that were certainly valid at Malthus' times, and are likely to still be true today.

Carl-Johan Dalgaard and Holger Strulik take this theory a little bit further. As more food becomes available, people grow taller. But being tall requires more food to sustain the body, which provides an additional reason for stagnation. This makes it more difficult to break loose from this "development trap" and may explain why economies stagnated for so long. But like Malthusian theory, this does not explain why the economy suddenly took off in the 19th century.
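
A toy simulation can show how the body-size channel reinforces stagnation. The functional forms and parameters below are invented for illustration, not taken from the paper: income per capita falls with population (decreasing returns), population growth responds to the surplus over subsistence, and surplus consumption slowly raises body size and hence subsistence needs.

```python
# Toy Malthusian simulation with a body-size channel. All functional forms
# and parameters are invented: income per capita falls with population
# (decreasing returns), population growth responds to surplus over
# subsistence, and surplus consumption slowly raises body size.

def simulate(periods=300, size_feedback=0.0):
    A, alpha = 10.0, 0.7       # technology and the labor exponent
    L, body = 100.0, 1.0       # population and normalized body size
    for _ in range(periods):
        y = A * L ** (alpha - 1.0)                 # income per capita
        subsistence = body                         # bigger bodies need more food
        growth = 0.02 * (y / subsistence - 1.0)    # Malthusian population response
        L *= max(1.0 + growth, 0.5)
        body += size_feedback * (y - subsistence)  # stature rises with surplus
    return L

L_plain = simulate(size_feedback=0.0)
L_body = simulate(size_feedback=0.05)
print(L_plain, L_body)   # the body-size channel leaves the population stuck lower
```

With the feedback switched on, subsistence needs chase income per capita upward, so the population stops growing well before it would in the standard Malthusian case.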

Acquiring a firm for its workforce

Why do firms get merged or acquired? In the end, the hope is that it increases firm value, although the stock market response is often negative. It may also be for (anti-)competitive reasons, as firms try to buy out the challengers. Or it may be to acquire a patent portfolio, technology, or access to a client list or new markets. I may be forgetting some other good reasons.

Paige Ouimet and Rebecca Zarutskie show that mergers and acquisitions can also be the result of a drive to get access to the other firm's pool of workers. This is particularly true when the labor market is tight and the workers carry high human capital. Interestingly, wages tend to increase and turnover tends to decrease after such mergers. Thus not all employees should be afraid of M&As.

Why is funeral insurance so popular in Africa?

Probably the oldest form of insurance in existence is funeral insurance, which takes care of burial (and now cremation) costs at death. In developed economies, its popularity has vanished, while it is still very common in Africa. One reason could be that when life insurance is available, people believe it is sufficient to cover funeral costs, and the beneficiaries are committed to taking care of this. When life insurance is not available or when no commitment can be elicited from descendants, then funeral insurance ensures your body is properly disposed of.

Erlend Berg writes a model along those lines and finds that only middle-income individuals should favor funeral insurance. The rich do not face a tight budget constraint and the poor cannot afford it. Then, using a marketing survey conducted in South Africa, he finds results that are consistent with the model. This lack of commitment in financial matters is pervasive in Africa. It is, for example, at the heart of the strange institution that ROSCAs are.

Why top MBA programs do not disclose grades

I have always been puzzled by the policy of many top MBA programs not to disclose the grades of their students. Even more puzzling is that they by and large manage to enforce this policy even from their top students, who should obviously want to signal that they are at the top of their class.

Daniel Gottlieb and Kent Smetters wondered about this as well. Such policies are voted in by the students (who in the US own the grades) on the argument that this allows them to take more difficult classes without adverse consequences. Yet the evidence is that they learn less when such a policy is in place, which explains the general opposition to it from faculty. So, one can conclude that students are lazy (nothing new here), but why is such a policy limited to top MBA programs? Why not in lesser programs, or other professional schools?

Gottlieb and Smetters point out that students have two signals for potential employers: their grades and the selectivity of the program. They are also risk averse, and at the start of their studies do not know how well they will do. In top schools, the selectivity signal is very strong and the students rely on it, while the "average" grade is superior in expected terms. In lesser schools, the selectivity signal is much weaker, and hence students try to distinguish themselves on the labor market in other ways, for example with grades.

To some extent, the same is happening on the Economics PhD market. When you look at the recommendation letters from the top schools, all candidates are the best of their generation in their field (I am exaggerating only a little). Thus the letter loses a lot of its value, and all that remains is the entrance selectivity of the PhD program. Lower ranked programs are much keener to differentiate their students and push the particularly good ones.

Adaptive versus rational expectations

There was a time when macroeconomics was ruled by adaptive (or backward-looking) expectations, like those of the much-ridiculed chartists. Then there was a revolution, and rational (typically forward-looking) expectations were widely adopted, on the realization that people are not stupid and will try to use the available information, including what other agents may do, to figure out what the future holds. Rationality, and in particular rational expectations, has recently come under attack because models failed to predict recent bubbles and crashes. I think this is mistaken, as detailed on several occasions on this blog.

Gregory Chow, however, longs for a return to adaptive expectations for three other reasons. The first is that it is empirically more plausible. Exhibit A is a regression of the stock prices of 50 blue chips in Taiwan on current dividends and past dividend growth. Despite a lowly R2 of 0.111, the fact that the coefficient on dividend growth is positive and significant is taken as evidence of adaptive expectations. I do not find this convincing, as a similar result could emerge with rational expectations if the dividend growth process is persistent.
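
That objection is easy to check in a toy simulation: feed a fully forward-looking pricing rule a persistent dividend-growth process, and a regression of prices on current dividends and past dividend growth still finds a positive growth coefficient. The Gordon-style pricing rule and all parameters below are stylized assumptions of mine, not anything from Chow's data.

```python
# Toy check: a fully forward-looking (Gordon-style) pricing rule with a
# persistent AR(1) dividend-growth process still yields a positive regression
# coefficient on *past* dividend growth. The pricing rule and all parameters
# are stylized assumptions.
import numpy as np

rng = np.random.default_rng(2)
T, rho, r = 500, 0.8, 0.08
g = np.zeros(T)
for t in range(1, T):                      # persistent deviation of dividend growth
    g[t] = rho * g[t - 1] + rng.normal(0, 0.005)

D = np.cumprod(1.02 + g)                   # dividend level, 2% trend growth
expected_g = 0.02 + rho * g                # rational one-step-ahead growth forecast
P = D * (1 + expected_g) / (r - expected_g)   # forward-looking price

# Regress price on current dividend and lagged dividend growth
X = np.column_stack([np.ones(T - 1), D[1:], g[:-1]])
coefs = np.linalg.lstsq(X, P[1:], rcond=None)[0]
print(coefs[2] > 0)   # positive despite fully forward-looking pricing
```

Past growth predicts current growth, current growth raises the forward-looking price-dividend ratio, and so the regression picks up a positive coefficient without any backward-looking behavior.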

The second reason is that "there is no reason to believe that the expected values [computed from an econometric model of the rational expectations] will have a sum, after discounting, which equals the actual current price." I think the underlying reasoning is that a statistician can only use a linear combination of past observations, thus economic agents will, too, and this all looks like adaptive expectations. But economic agents, and nowadays statisticians, are more sophisticated than that, and a gigantic literature in finance has shown that non-linearities and endogenous volatility, to name a few, are important. Even though statisticians have become much more sophisticated, they are still running behind economic agents and are far removed from being linearly backward-looking.

The third reason is that macroeconomists started using rational expectations simply because it was required to deal with the Lucas Critique, empirical evidence be damned. While I can be sympathetic to the argument that rational expectations were adopted without much direct empirical evidence, I also believe that economic agents do try to avoid systematic mistakes and that their expectations contain at least some rationality. And as a large literature has shown, a modicum of rationality can bring markets darn close to prices that look like perfectly rational ones.

Is index-based weather insurance useful?

Whenever you are facing a risk, you want to be able to hedge against it (at least if you are risk averse). For this, there are all sorts of insurance policies. There are also markets in all sorts of instruments that allow you to find the right contingent claim for your situation. This includes farmers (and others) who want to hedge against meteorological risks. If your crop yields depend on weather patterns, you are looking for securities that pay out depending on some weather statistic. And they are available and have been heavily pushed by aid agencies in developing countries.

Chiratan Banerjee and Ernst Berg say they may not be such a great idea. They take the example of rice farmers in the Philippines who bought wind-speed based indexes on the hypothesis that rice yields are lower when there are typhoons. But rice is remarkably resistant to typhoons and wind in general, which is the reason it is so popular in the region in the first place. This means that rice farmers are heavily over-insured. That is especially bad because farmers are now confused about the concept of insurance: it looks to them like they face more risk than before.

Unemployment insurance in developing economies?

There is no doubt that absent moral hazard, insuring against unemployment shocks is welfare improving. But moral hazard, either through the unemployed not searching hard enough or rejecting job offers, can have a vicious effect on welfare if it is sufficiently widespread and successful. In addition, as unemployment insurance contributions typically do not depend on unemployment risk, only bad risks want to participate, and the insurance collapses without mandatory participation. With all this in mind, does it make sense to implement unemployment insurance in developing economies, where there is a large informal sector that makes mandatory contributions difficult to enforce and where moral hazard is, of course, rampant?

David Bardley and Fernando Jaramillo show that introducing unemployment insurance actually makes the formal sector more attractive and that we should thus not worry that much about the current level of informality. Their model, however, does not allow someone to collect UI benefits while working in the informal sector, a very real possibility that could easily overturn the results.

Religion as an insurance mechanism against aggregate shocks

I have never been fond of claims that the world is better off with religion. The principal claim is that religion gives hope to people in dire circumstances, and thus in economic terms increases their utility despite their having hit the budget constraint. But one could also argue that these people are being misled, as religion provides them with subjective probabilities that are far off the objective ones, all the while making the budget constraint even tighter because of tithes and other material donations.

Olga Popova studies whether this effect of religion on happiness applies not only to individual circumstances, but also to aggregate shocks. Looking at the transition countries, each of which suffered a substantial fall in GDP after the collapse of Soviet rule, she finds that more religious people suffered less, in terms of happiness, from the large economic reforms than others. Of course, it is easy to understand that most people were happier than circumstances would indicate, because it was rather obvious that things would eventually improve, likely by a lot. The question is why the religious would believe this more. Because they are easily indoctrinated? It is certainly true that there was a lot of excessive pro-market rhetoric at the time, and non-religious people were probably over-represented among the skeptics. They were also more likely to be among those who benefited from the previous regime, which definitely oppressed religion. Unfortunately, this study does not take (previous) party affiliation into account, which is likely very (negatively) correlated with religiosity. Too bad.

Individual characteristics are more important for academic success in university

What makes a good college student? Looking at the admission criteria of universities can be insightful. Public universities in the United States basically just look at high school grades and standardized test results, with some adjustment for background characteristics (race, high school characteristics), if any. European universities in the end just care about grades, in most cases whether a student was above some threshold. And US private universities look at a large array of characteristics, with extracurricular activities and personal essays being of particular importance, grades in some cases even being ignored. While these different types of universities obviously have different motivations, ultimately they are all looking for potential in students. So what determines academic success?

Martin Dooley, Abigail Payne and Leslie Robb use administrative data on a dozen entering cohorts in four Ontario universities to explain what makes students stay longer in tertiary education and earn better college grades. It turns out that high school grades are pretty much sufficient. At least for Canada, it is reassuring to see that individual performance matters more than where you come from. Of course, one could wonder whether all the other characteristics that private US schools consider would matter here. But this kind of data was presumably not available, as Ontario universities, all public, do not ask for such information during the application process. Also, the study has no record of individual standardized test results. Including those would likely only reinforce the results, but could have important policy implications: imagine they do not matter. Then the door is open to grade inflation in high schools, and the grade signal gets diluted. Admissions officers would then need to find something else to rely on, such as some of the characteristics that matter less.

Contracts with empty promises

I always feel small talk has no specific purpose and is a waste of time. In fact, every time somebody asks me how I am doing, I launch into a well-reasoned explanation of my current state of affairs, while my interlocutor is just expecting a "well-thanks-and-you?" Yet some people have found value in such chitchat, see a previous report. But what about contracts that include clauses that have no chance of being met? Why would one allow empty promises in a legally binding contract?

David Miller and Kareen Rozen look at contracts that involve team work in a complex production environment, where opportunities for moral hazard abound. Performance clauses are hard to specify, and you want to use peer monitoring and pressure rather than checking the result of each individual task. Obviously, monitoring is costly, and along with statistical complementarities in the success rate, this implies that it could be optimal to delegate all the production to one person and the monitoring to another (the least productive one, according to the "Dilbert Principle"), and the latter may resort to wasteful punishment: naming and shaming, and even firing. It is wasteful because it provides no direct benefit to the supervisor, and it may not even be subgame perfect. Where are the empty promises? The one performing the tasks can promise to fulfill them all, but it is obvious to all that they cannot all be successful because of some outside probability of failure. The supervisor is thus willing to forgive failures, without knowing whether chance or moral hazard is at play.

Seemingly unrelated regressions and lamb carcasses

The great thing about the Internet is that one can discover unexpected uses of familiar techniques. Or one can search for new applications with one's tool set. So what about SUR and lamb carcasses?

Vasco Cadavez and Arne Henningsen are responsible for this paper. I have nothing to add to the abstract: "The aim of this study was to develop and evaluate models for predicting the carcass composition of lambs. Forty male lambs of two different breeds were included in our analysis. The lambs were slaughtered and their hot carcass weight was obtained. After cooling for 24 hours, the subcutaneous fat thickness was measured between the 12th and 13th rib and the total breast bone tissue thickness was taken in the middle of the second sternebrae. The left side of all carcasses was dissected into five components and the proportions of lean meat, subcutaneous fat, intermuscular fat, kidney and knob channel fat, and bone plus remainder were obtained. Our models for carcass composition were fitted using the SUR estimator which is novel in this area. The results were compared to OLS estimates and evaluated by several statistical measures. As the models are intended to predict carcass composition, we particularly focused on the PRESS statistic, because it assesses the precision of the model in predicting carcass composition. Our results showed that the SUR estimator performed better in predicting LMP and IFP than the OLS estimator. Although objective carcass classification systems could be improved by using the SUR estimator, it has never been used before for predicting carcass composition."
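For readers unfamiliar with the technique, here is a minimal sketch of the SUR estimator on simulated data (nothing below comes from the paper; the variable names are purely illustrative). Two equations with different regressors but cross-correlated errors are estimated by feasible GLS, which is more efficient than equation-by-equation OLS precisely because it exploits that correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.standard_normal((n, 1))  # e.g. fat thickness (illustrative)
x2 = rng.standard_normal((n, 1))  # e.g. carcass weight (illustrative)

# Errors of the two equations are correlated across equations.
Sigma = np.array([[1.0, 0.7], [0.7, 1.0]])
e = rng.multivariate_normal([0.0, 0.0], Sigma, size=n)
y1 = 1.0 + 2.0 * x1[:, 0] + e[:, 0]   # e.g. lean meat proportion
y2 = 0.5 - 1.0 * x2[:, 0] + e[:, 1]   # e.g. fat proportion

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])

# Step 1: equation-by-equation OLS to estimate the error covariance.
b1_ols = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2_ols = np.linalg.lstsq(X2, y2, rcond=None)[0]
R = np.column_stack([y1 - X1 @ b1_ols, y2 - X2 @ b2_ols])
S = R.T @ R / n

# Step 2: GLS on the stacked system with weight (S^-1 kron I_n).
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
W = np.kron(np.linalg.inv(S), np.eye(n))
b_sur = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(np.round(b_sur, 2))  # roughly [1, 2, 0.5, -1]
```

When all equations share exactly the same regressors, SUR collapses to OLS; the gains appear when regressors differ, as in the paper's five carcass-component equations.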

A market for IP addresses

IP addresses face exhaustion, at least those under the standard IPv4 format, and by some reports they should all have been used up already. What has helped delay the inevitable is probably the fact that there is now a market for IP addresses, yet it is not clear that this market works efficiently. The reason is that IP addresses are allocated in blocks, and fragmenting the big IP allocation table makes it more difficult to manage. For technical reasons, each allocation needs to be a square in the table. Thus, if a square is partially unused, it can only be split into multiple smaller squares, increasing their number. Routers need to keep each possible square in memory, and their multiplication slows routing. And as IP addresses are privately owned and managed, there is no way to control this negative externality.
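The splitting constraint is easy to see with Python's standard ipaddress module (the addresses below are from the reserved documentation range, purely illustrative):

```python
import ipaddress

# A /24 block: 256 addresses, announced as a single route.
block = ipaddress.ip_network("203.0.113.0/24")

# Blocks can only be split into smaller power-of-two blocks.
halves = list(block.subnets(prefixlen_diff=1))  # two /25s of 128 addresses

# Selling off a 32-address chunk leaves the remainder scattered over
# several blocks, each needing its own routing-table entry.
sold = ipaddress.ip_network("203.0.113.0/27")
remainder = list(block.address_exclude(sold))
print(len(remainder))  # 3: a /27, a /26 and a /25 cover the 224 leftover addresses
```

Every trade that does not happen to align with these power-of-two boundaries thus adds routes, which is exactly the externality discussed here.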

Benjamin Edelman and Michael Schwarz propose a market mechanism that should make the allocation of IP addresses more efficient. They suggest a "spartan rule": in each bilateral trade, one of the two traders is designated as "extinguished," i.e., prohibited from trading with other extinguished traders. As one can be extinguished only once, this implies that the number of cuts N in the IP table is limited by the number of initial holders of IP blocks. The analysis is static and under certainty, implying that the implicit rental price of an IP address is zero as long as a free one remains. But with the proposed rule, I do not see how one would necessarily reach exhaustion after the N cuts. It all depends on the initial allocation: one can end up with free IP addresses and no possible moves. In addition, once we add uncertainty and dynamics, there is going to be strategic behavior, as being extinguished is a potentially costly absorbing state. I am thus not convinced by the arguments in this paper.

Of course, the easiest solution would be for everyone to switch to IPv6, which would provide a sufficient number of IP addresses to last a long time. But IPv6 devices cannot communicate with IPv4 devices (large-scale IPv4-to-IPv6 translation is cumbersome), which gives little incentive to switch until there is a substantial critical mass. In other words, another Y2K-like situation is approaching, and nobody has an incentive to do something about it. A more efficient market allocation will delay this, but will also make it even more urgent when it happens, because more addresses will need to switch, with less time to do so.

Women prefer cooperative work environments

Girls (and women) tend to hang out in cliques, while boys (and men) tend to be more individualistic. This may have evolutionary origins, as women try to have very good friends to ensure that their offspring are taken care of should they die. Men do not directly have such a need, as they may have several wives who continue to take care of their offspring should the man die. Does this attitude translate into the workplace?

Peter Kuhn and Marie Claire Villeval conduct an experiment where people need to exert effort, but can choose to do so individually or in a team. Women are then more likely to choose the team. Once there is an extra efficiency reward for working in a team, both genders choose the latter in equal proportions. Still, the most able players tend to work alone. This can be interpreted as men being more responsive to incentives and women having more confidence in team work. Unfortunately, the study does not differentiate between same-gender and cross-gender pairings (gender was visible in the experiment).

On job loss estimates from regulation

The current talk in Republican circles is that one can achieve significant job growth by deregulating. One may want to question this idea on two fronts. First, regulation was initially imposed not for the fun of killing jobs, but because it improves people's well-being. There is a trade-off, and sometimes it is worth having slightly fewer jobs if it means improving the lives of many people. Second, the job loss numbers attributed to regulation are often more fantasy than reality.

This is not a new question. Take the case of Australia, as discussed by Bruce Chapman. He looks at estimates of job losses in Australian mining from the implementation of an emissions trading scheme. These 23,510 lost jobs are not as large as they appear. First, there would be job gains elsewhere, in particular in alternative energies. Second, compared to normal job flows in the mining sector, this number is quite negligible. Third, once you look at a somewhat longer horizon, say ten years, the job loss is virtually undetectable. Finally, I would add that the measurement of job losses is highly uncertain, and any result commissioned by one party in the debate needs to be taken as an extreme value.

So do not hold out too much hope that a sudden deregulation will create a job boom, especially in a country that has remarkably little regulation to start with.

Family firms are like public employers

Family-owned businesses have a good reputation with the public, for reasons that have never been clear to me. Indeed, it is even good marketing to mention that a firm is family-owned. Why? The products are not likely to be better. Such firms are possibly smaller and younger, so the likelihood that a product is discontinued is higher. I suppose such firms have stronger ties with the community, in case this matters.

Andrea Bassanini, Eve Caroli, Antoine Rebérioux and Thomas Breda find that there is an important distinction between family-owned businesses and other privately-owned ones. Looking at France, they observe that family firms pay their workers less, which does not seem like a big advantage in the public eye. However, families tend to offer more job security. This mirrors the public sector, which in the end offers the same value as private enterprises, trading off job security against pay. So, after all, family-owned firms are more involved in the community by providing more insurance to workers through job security, much like the French government so often does by pursuing rather Keynesian policies. I wonder whether this would apply in other countries where the public sector does not necessarily lead with such policies.

Using energy taxes to dampen energy price fluctuations

Oil price fluctuations seem to preoccupy people less these days, maybe because they got used to higher prices or because other issues are hotter now. But remember how popular it was to call on the government, whatever the country, to reduce fuel taxes to ease the burden. Which raises the question whether this would be a good idea if you think harder about it.

Helmuth Cremer, Firouz Gahvari, and Norbert Ladoux did so and come to the conclusion that fuel taxes should not move as much as the energy price. The reason is that the Pigovian motivation for imposing them, internalizing the externalities, has not changed, which would call for perfect smoothing. But this is to an important extent offset by redistribution considerations, as energy-using goods are consumed by people of different incomes. In the end, a doubling of pre-tax energy prices leads to a post-tax increase of 64%. But that is only assuming the tax was optimal to start with. In many countries it is currently much too low, thus the argument for reducing the tax in high-price times is largely invalid. In fact, one should take advantage of reductions in world energy prices to increase the taxes, which would raise much needed money.
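A back-of-the-envelope illustration of that 64% figure (the pre-tax price and unit tax below are invented for illustration; only the 64% comes from the paper):

```python
# Hypothetical starting point: pre-tax price of 1, unit tax of 0.5,
# so the consumer pays 1.5. The pre-tax price then doubles.
p0, tax = 1.0, 0.5
p1 = 2.0 * p0

# Pure Pigovian logic would keep the tax constant; the consumer price
# then rises by less than 100% simply because the tax does not move.
constant_tax = (p1 + tax) / (p0 + tax) - 1
print(f"{constant_tax:.0%}")  # 67% with a constant tax

# The paper's optimum is a 64% consumer-price rise, i.e. the optimal
# tax falls slightly below the constant-tax benchmark.
tax1 = 1.64 * (p0 + tax) - p1
print(f"tax falls from {tax:.2f} to {tax1:.2f}")
```

The point is that even "perfect smoothing" (a constant tax) already dampens the consumer-price swing; redistribution considerations push the optimal tax only a little further down.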

The imperfect market for re-insurance

The insurance market is thought to be rather competitive, at least for the most common risks. That is in part because insurance companies are willing to take risks thanks to re-insurance, through which they can insure large event risks and to some degree over-exposure. But there are rather few actors on the re-insurance market. Is this bad, and does it have an impact on the insurance market?

Sabine Lemoyne de Forges, Ruben Bibas and Stéphane Hallegatte play with a model of re-insurance and find that there is a trade-off. The lack of competition leads to sub-optimal re-insurance provision, obviously. But it also allows the few players to take on larger risks, some of which may not have been insured otherwise. And the larger the re-insurers, the more resilient the market can be. As a regulator, this means that you may want to let the re-insurance companies grow larger than what is optimal in terms of competition.

Spain: an eventful history of economic crises

There is talk that Spain could get dragged into a financial crisis. While it is debatable whether this will happen or not, and whether it is inevitable, it is instructive to study Spain's economic history in this regard.

Concha Betrán, Pablo Martín-Aceña and María Pons look at a century and a half of data and basically do in more detail for Spain what Carmen Reinhart and Kenneth Rogoff did for many countries in their best seller. And as in the latter book, the picture is depressing. Crises of all sorts were a rather regular occurrence; we have just been blessed with rather few of them over the last half century. For example, in the second half of the 20th century, Spain experienced half a dozen currency crises, a couple of banking crises, three periods with negative stock returns over several years, and three IMF interventions for debt crises. I take from this that while crises are becoming less frequent, they still occur, and the current one was long overdue.

Is this what Republicans are really about?

Europeans have struggled for some time to understand the philosophy of the US Republican party, and especially how it manages to get such popular support in the electorate. On the surface, indeed, it all appears to be a platform that favors the rich at the expense of the more numerous poor, the latter having been indoctrinated for many years into believing that government is bad and, at the extreme, that robber barons are better than a benevolent government. The consequence is a drive to increase inequalities in income and wealth.

John Roemer offers a glimpse into the American ideology of inequality. He says that "American philosophy" sees inequality as ethical, as it gives everyone what nature endows them with. That seems like a very fatalist argument (as in some religions) that ignores that redistribution is about ex-post insurance against the circumstances of one's birth: having the luck to be born into a good family and a good country ought to be taxed to some degree to benefit the unlucky. A second argument is the old trickle-down one: if the most talented can keep all the fruits of their labor, they will work more (never mind the decreasing marginal utility of consumption and how redistribution can improve overall well-being). The third argument is that the government is good at nothing, and should thus be largely absent.

All these arguments are widely shared in the United States, and especially among Republicans. In fact, the latter are now going much farther in reversing redistribution than ever before. Just see how vehemently they are opposed to any risk sharing through public health insurance, and how they limit school funding and public goods in general. In fact, I am starting to wonder whether the hidden goal is to create a new underclass that would be in some ways reminiscent of the old slavery days. That would be consistent with the opposition to minimum wages, with the large prison population, and with keeping the poor uneducated. It would also be coherent with the Republicans' willingness to increase the payroll tax (a flat tax applicable to everyone) while calling for a reduction in the income tax (a progressive tax). I hope I am wrong, though.

No convergence in the Caribbean

I have always found the Caribbean fascinating because it is a microcosm of the world, with tiny countries trying to get a workable government without the economies of scale the rest of the world enjoys. But as a reader of Economics research, the presence of this myriad of too-small countries leads to many frustrations, as they bias results in cross-country regressions. Sometimes, though, these micro-countries can be useful for research.

Roland Craigwell and Alain Maurin use them to study whether there is convergence in the Caribbean. It is well established that there is no convergence in world-wide country data, but it is very visible in subsets, such as US states. In the latter case, all US states share the same currency and roughly the same laws and government systems, there is some cross-state redistribution, and there are no trade barriers. As you drop these features, which one makes convergence disappear? In the case of the Caribbean, there is a partial monetary and trade union, laws and governments are more dissimilar than in the US, and there is no redistribution. And as Craigwell and Maurin show, that is sufficient to make convergence disappear. Once more, it looks like institutions and, to a lesser extent, globalization are the keys to development for the poorest economies.

Better GDP estimates

GDP equals C+I+G+X-M. It also equals national income, at least in theory. But the estimates differ widely, even for the United States, which is rather disturbing. How do you conduct proper economic policy when estimates of GDP, even after revisions, can differ by more than 2 percentage points and their growth rates have a correlation coefficient of only 0.63 (see the work of Jeremy Nalewaik)?

Boragan Aruoba, Francis Diebold, Jeremy Nalewaik, Frank Schorfheide and Dongho Song make the old risk diversification argument: why not combine both estimates? After all, this is often done with forecasts, and the two estimates can be treated like forecasts of the true GDP. Consistent with this literature, the weight on each should depend on the variance of its errors. Of course, these variances are not observed, but the authors have some guesstimates, based on correlations with variables that are not used to construct either GDP measure but are supposed to be correlated with true GDP. They then show that measurement matters, for example in dating business cycles. We will see whether the US adopts such averaging, as some other countries already do.
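The combination logic is the standard inverse-variance weighting from the forecast-combination literature; a sketch with made-up numbers (the error standard deviations below are invented, not the authors' estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

# "True" GDP growth and two noisy measurements of it, with independent
# errors of different sizes (all numbers illustrative).
true_g = rng.normal(2.0, 1.0, size=10_000)
gdp_e = true_g + rng.normal(0.0, 0.8, size=true_g.size)  # expenditure side
gdp_i = true_g + rng.normal(0.0, 0.5, size=true_g.size)  # income side

# Optimal weights under independent errors: w_i proportional to 1/sigma_i^2.
var_e, var_i = 0.8**2, 0.5**2
w_e = (1 / var_e) / (1 / var_e + 1 / var_i)
combined = w_e * gdp_e + (1 - w_e) * gdp_i

mse = lambda x: np.mean((x - true_g) ** 2)
print(mse(combined) < min(mse(gdp_e), mse(gdp_i)))  # combination wins
```

The noisier expenditure-side measure gets the smaller weight, and the combination has a lower mean squared error than either measure alone, which is the whole argument for averaging the two GDP estimates.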

Addendum: I wonder how the recent major revisions to US GDP would have fared with this scheme.

Has the Internet reduced job market frictions?

When we teach how the Internet has improved the efficiency of the economy, one typical example we give is job search: the Internet makes vacancy postings instantly available and searchable, and job applicants can send CVs at little cost and time, or even have them available in CV banks. The problem is that Peter Kuhn and Mikal Skuterud have shown that this is not true. But that was with data from 1998-2000. What about today?

Peter Kuhn and Hani Mansour replicate the exercise, but with data from 2008-2009. They concentrate on young job seekers and find that those who use the Internet for job search reduce their unemployment duration by 25%, which is considerable (and makes us teachers prescient). And this is not just because of a particular group or specification. Running the same regressions on the earlier sample shows no noticeable effect. This means that somehow people have learned to use the Internet effectively, which is consistent with the success of the Monster Board or Craigslist and with the growing proportion of people using the Internet for job search.

Economists' political bias and model choice

One can count on Gilles Saint-Paul for innovative research topics. During his career, he has addressed an impressive array of topics that range far beyond Economics stricto sensu. For this reason, I have reported several times on his latest research.

His latest opus is an introspection into our profession and how our political biases influence our modelling choices. He claims that an economist with conservative inclinations will favor a model with smaller fiscal multipliers. While the ethical thing would be to be driven by empirical evidence, this may just be a subconscious choice. But at least economists strive to be logically consistent, and if one chooses a large multiplier, then one must also claim that demand shocks are substantial, as models with large multipliers rely on this. Looking at evidence from the Survey of Professional Forecasters, Saint-Paul finds that forecasters who believe that expansions are more inflationary also adhere to the belief that public expenses are less expansionary.

Saint-Paul goes further, though. His claim is that we live in a self-confirming equilibrium. We devise theories to understand our surroundings and make decisions, and those decisions then shape the economic environment. Theories can thus survive even if they deviate from the true structure, as long as the decisions they induce make outcomes conform to them. This is a statement about the lack of uniqueness of the path to the rational expectations equilibrium. In a sense, this is not too disturbing, as long as decisions are still optimal and outcomes do not differ too much from the rational expectations first best. And if this is true, we will never know what the rational expectations first best is. Of broader implication would be the political agenda of an economist leading an economy onto a different path, a different self-confirming equilibrium. Is this why Europe and the United States are different? Were Keynes and von Hayek that influential?

The next Nobel Prize

Monday, the next "Nobel Prize" in Economics will be announced and everybody is playing a game of predictions, so why not me? I have a wish that happens to coincide with my prediction: William Nordhaus.

Why? Because environmental economics has long been rumored to get it, and it deserves to be recognized. Within that field, Nordhaus has made major contributions that brought it to the mainstream. And he is a genuinely good guy, always helpful and willing to listen to you or help you out. Also, the signals I have been receiving from members of the Prize committee are that they really like his work.

I am afraid, though, that he may have to share the prize with Martin Weitzman. Who has made the more seminal contributions to the field can be debated, but Weitzman is the complete opposite in terms of attitude. In addition, I do not like his way of trying to make a name for himself, as I showed previously. He has also been caught and punished for stealing horse manure, so his ethical standards are definitely not up to par.

Marx and Solow

For all the justified criticism one can have of the work of Karl Marx and the economic system that resulted from it, old Karl was onto something. The Industrial Revolution saw the rise of a new class, the capitalists, who generate a smaller share of their income from manual work and instead use their brains and capital. In terms of welfare, that is a positive evolution, were it not for the fact that workers hardly had it better compared to their previous agricultural life and thus did not get a share of the new riches. What especially irked Karl Marx was that the lot of the workers could not improve, either because they were not getting a larger share of income, or because there was no path for them to become capitalists themselves in large numbers, something later termed a lack of social capillarity.

Jørgen Heibø Modasli finds some of these features in a model inspired by the Solow growth model, augmented by incomplete markets under which one cannot borrow to become a capitalist entrepreneur and the entrepreneur can work only for himself. This introduces a non-convexity, and a two-class system quickly emerges, with workers having no reason to save much, as they have no chance of becoming capitalists. Moreover, the class division persists over time, even when credit and capital markets improve.

Yet this is not entirely convincing. Economies with less incomplete markets, say the United States, should then see less inequality, and inequality should have declined over time as markets developed. That is hardly what we observe in the United States, where access to credit is widespread, yet income inequality is high and growing, and social capillarity is largely absent.