Ariel Pakes is the Thomas Professor of Economics in the Department of Economics at Harvard University. Professor Pakes was educated at the Hebrew University of Jerusalem before obtaining his Ph.D. at Harvard University. Subsequently, he taught at the Hebrew University before being appointed professor in 1984. Over the years he has held visiting or faculty positions at the University of Wisconsin, Yale, the University of Chicago, and NYU.
Professor Pakes was named the Distinguished Fellow of the Industrial Organization Society in 2007. The American Academy of Arts and Sciences elected him a fellow in 2002. He was awarded the Frisch Medal by the Econometric Society in 1986 and was elected a fellow of that society in 1988. Professor Pakes' research has been in Econometric Theory, Industrial Organization (I.O.), and the Economics of Productivity and Technological Change. He has developed tools that allow the empirical analysis of I.O. models. Recently, he has investigated ways of simplifying estimation and inference in complex behavioral models through the use of weaker equilibrium concepts and inequality constraints. His recent empirical work includes an analysis of the impact of US health reforms on hospital choices, hospital prices, and financial incentives to physicians, as well as an analysis of the impact of the breakup of AT&T on productivity in the telecommunications equipment industry.
Most of your research is focused on developing tools to answer difficult empirical questions, mainly simulation and semi-parametric methods. Could you briefly explain how they are useful and in which contexts they are used?
I can give you examples. An individual's demand might be easy to estimate, but firm decision making depends on aggregate demand. To construct aggregate demand we have to sum up over individual demands, and simulation can do that. Suppose that I want to both allow each individual to have a different income and allow the price coefficient to depend on income. If I can simulate different people, I can predict what each would do conditional on their income and then add up demand over the simulated individuals. So simulation is really just a way of approximating an integral, an easy way to do a summation. Before you could do it, there was no way of getting reasonable demand estimates for firms. Daniel McFadden cited this problem when he did logit demand systems (it was in the psychology literature before that). He called it the Independence of Irrelevant Alternatives. Take the car market. If you bought a Rolls-Royce and I bought a minivan, and if there were no way to distinguish the characteristics I preferred from the characteristics you preferred, our second-choice car would be the same: the car with the biggest share. This does not make any sense. Allowing for differences among individuals that are associated with the price of the car they purchase allows the second choice of the person who chose the cheaper car to also be cheaper.
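To make the mechanics concrete, here is a minimal sketch of a simulation estimator of aggregate demand. Everything in it is illustrative: the products, prices, income distribution, and the logit form with an income-dependent price coefficient are hypothetical choices, not taken from any of the studies discussed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical market: 3 products with prices and a quality index.
prices = np.array([10.0, 20.0, 40.0])
quality = np.array([1.0, 2.0, 3.5])

# Simulate heterogeneous consumers: income varies across individuals,
# and the price coefficient depends on income (richer -> less sensitive).
n_sim = 10_000
income = rng.lognormal(mean=3.0, sigma=0.5, size=n_sim)
alpha = -5.0 / income  # individual-specific price coefficient

# Utility of each product for each simulated individual (logit form,
# with an outside good whose utility is normalized to zero).
utility = quality[None, :] + alpha[:, None] * prices[None, :]
expu = np.exp(utility)
choice_prob = expu / (1.0 + expu.sum(axis=1, keepdims=True))

# Aggregate demand is the integral over the income distribution,
# approximated here by averaging over the simulated individuals.
market_shares = choice_prob.mean(axis=0)
print(market_shares)
```

Because each simulated person has their own price coefficient, the implied second choices differ with income, which is exactly what breaks the Independence of Irrelevant Alternatives pattern described above.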
For the use of semi-parametrics, I will take an example from production function estimation.
We often estimate production functions in order to use them to analyze productivity. Usually, we do it after there has been a major change in the industry: when there is a merger between the two biggest firms or, in my case, when AT&T was broken up. To analyze what happens after such major events we follow firms over time. However, some of the firms drop out. If you just keep the firms that stay all the time, you can get a very biased view. Often the firms that exit are the firms that the change impacted negatively, so if you keep only the firms that stay, you get a positively biased view of the impact of the change. Of course, one has to be careful that the model used is appropriate for the markets studied. For example, in biotech there are a lot of little firms that attempt to develop new products. When they develop something really good, they sell it. Why do they sell it? Because they are not as good at producing and marketing the good as some of the bigger established firms. So these firms sometimes exit because they were successful, not because they were failures. You need a model of who drops out to guide the analysis. Those models are actually quite complicated, because you have to build in the pricing equilibrium and everything else that determines investment incentives. Once the overall model is specified, exit will only be a function of certain variables. It will be a complex function that depends on a lot of details of the model, but you can control for the drop-out probability by conditioning on the variables it depends upon in a non-parametric way. This, together with some other details that use semi-parametrics to take account of the fact that productivity may be correlated with input choices, enables you to get estimates of the production function, which in turn allows you to go back and analyze productivity. Olley & Pakes (1996, Econometrica 64: 1263–1297) provide the needed details.
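For readers who want the structure behind this description, here is a compressed rendering of the two-step logic in Olley & Pakes (1996). The notation is ours, and several details covered in the paper (the constant term, timing, and invertibility assumptions) are suppressed.

```latex
% Production function with unobserved productivity \omega_{it}:
y_{it} = \beta_l \ell_{it} + \beta_k k_{it} + \omega_{it} + \eta_{it}

% Step 1: if investment i_{it} = f(\omega_{it}, k_{it}) is invertible,
% write \omega_{it} = h(i_{it}, k_{it}) and estimate \beta_l treating
% \phi(i_{it}, k_{it}) = \beta_k k_{it} + h(i_{it}, k_{it}) non-parametrically:
y_{it} = \beta_l \ell_{it} + \phi(i_{it}, k_{it}) + \eta_{it}

% Step 2: control for exit by conditioning on the estimated survival
% probability P_{it} (itself a non-parametric function of (i_{it}, k_{it})):
y_{i,t+1} - \beta_l \ell_{i,t+1}
  = \beta_k k_{i,t+1} + g\left(\phi_{it} - \beta_k k_{it},\, P_{it}\right)
    + \xi_{i,t+1} + \eta_{i,t+1}
```

The survival probability is the "drop-out probability" of the interview: conditioning on it non-parametrically is what removes the selection bias from firms exiting.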
As econometric tools and rich datasets become ever more available, more complex models can be used. How frequently are static BLP (Berry, Levinsohn and Pakes) models and dynamic models used in antitrust and merger cases? What weight do the competition authorities put on such analysis?
It is changing slowly, and they are being used more. The problem is not that people do not want to do it; it is that you need to have an answer for the court when the court convenes. Merger and antitrust analysis goes in two stages. There is a first preliminary stage, where the authorities decide whether they are going to investigate the case thoroughly or just let it happen. That preliminary stage has to move more quickly than a BLP analysis typically can. There are some exceptions. They are doing things like BLP in some health care cases now, because they have been so interested in that industry that they have datasets up and running. It will probably become more and more like that, but the basic constraint there is just time. BLP is now starting to be used by both the authorities and the private sector. I do not think the dynamics have been used nearly as much, except in research. The dynamic models are more complicated and require more assumptions. Of course, all of these models are approximations. Somebody is going to make a decision, so the only question for the authorities is: among those who can answer the question in time, who has the best approximation? The time constraint typically kills the dynamic models, but there is a movement now to use simpler notions of equilibria. These notions encompass the standard notion, but also admit equilibria which are easier to compute, and might actually also approximate behavior better (the paper in the QJE by Chaim Fershtman and me is an example).
Given stringent data requirements and time restrictions, full merger simulations are difficult to undertake. One solution is using the upward pricing pressure (UPP) index. What is your opinion of using such a tool?
The UPP depends on a particular institutional structure. If the merger takes place between upstream firms in a vertical industry, UPP does not make any sense. The upstream firms are selling to a downstream firm who is then re-marketing to consumers. The way prices are set between the agents is through a bargaining process. The problem of this sort I have thought most about is bargaining between hospitals and insurers. What happens? The hospitals have costs; the insurers sell insurance policies and get premiums for them. The contracts between the hospitals and the insurers split the profits, the difference between the premiums and the costs. The question is: who gets more of the profits? It depends on what the outside alternative is for each agent. If I have the only children's hospital in Boston, no insurer who wants to insure families can dare not to have me, because otherwise they will never get families. That means I will get a lot of the profits, because the outside option of the insurer is very small. He will try to keep me in at any cost, and I will know that. That is a different equilibrium concept than the one that is behind the UPP. The equilibrium concept implicitly being used in the UPP calculation is Nash in prices. Depending on the setting, it can be the right thing to do.
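For reference, the standard UPP formula, due to Farrell and Shapiro (2010) and not spelled out in the interview, shows exactly where the Nash-in-prices assumption enters:

```latex
% Net upward pricing pressure on product 1 when products 1 and 2 merge:
%   D_{12} = diversion ratio from product 1 to product 2,
%   P_j, C_j = price and marginal cost of product j,
%   E_1 = proportional marginal-cost efficiency gain for product 1.
\mathrm{UPP}_1 = D_{12}\,(P_2 - C_2) - E_1 C_1
% A positive value signals pressure to raise P_1 after the merger.
% The derivation presumes static Nash pricing, which is precisely the
% assumption that fails in bargaining settings such as hospital-insurer
% contracting.
```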
Your papers on moment inequalities present a general framework that can be used to analyze single- and multiple-agent problems. Can you provide us with the intuition behind it through a policy-relevant application?
The intuition for moment inequalities is revealed preference. Take an example from Joy Ishii's work. She analyzes how a bank chooses the number of ATMs (automatic teller machines) it operates. There is a cost of installing and operating an ATM that we do not observe. We do know how much it costs to buy one, but we do not know how much it costs to keep it going, service it, and fix it when it goes wrong. We need to estimate that cost in order to analyze the impact of different legislation on welfare. Here is one way to get an estimate. Joy estimated a model which allows you to compute how much more revenue the firm will get if it invests in one more ATM. It must be the case that if the firm chose not to install one more ATM, the cost must have been greater than the expected profit gain. That gives me an inequality from one side of the cost: I have a lower bound. Also, I know that the firm chose to install the last ATM. So, the profit it got from the last one is greater than the cost. That gives me an upper bound for the cost. Now I have a lower and an upper bound.
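A minimal sketch of how those two revealed-preference inequalities bound the unobserved cost. The incremental-revenue function and the numbers below are hypothetical stand-ins; in Ishii's work that object comes from an estimated demand model, not from a toy formula.

```python
# Revealed-preference bounds on the per-ATM operating cost c.
# delta_revenue(n) is a hypothetical stand-in for the model-based
# expected incremental revenue of the n-th ATM.

def delta_revenue(n: int) -> float:
    # Hypothetical diminishing incremental revenue of the n-th ATM.
    return 100_000.0 / n

n_chosen = 7  # the bank was observed operating 7 ATMs (hypothetical)

# The bank installed ATM n_chosen, so its incremental revenue covers c:
upper_bound = delta_revenue(n_chosen)        # c <= delta_revenue(n_chosen)
# The bank declined ATM n_chosen + 1, so c exceeds its incremental revenue:
lower_bound = delta_revenue(n_chosen + 1)    # c >= delta_revenue(n_chosen + 1)

print(f"{lower_bound:,.0f} <= c <= {upper_bound:,.0f}")
```

Each observed bank contributes one such pair of inequalities; the moment-inequality estimator combines them across banks into set-valued estimates of the cost parameters.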
In the US there was a congressional committee that was deciding whether they should make the fee for using an ATM the same for all ATMs, whether or not the customer belonged to the bank that owned it. The logic was that you should not have to walk 10 miles to your bank's ATM; it is the same network, so you might as well go to the closer one owned by a competitor's bank.
The banks said, "Well, if you did that, somehow we have to pay for the ATMs". So, we have to figure out what banks would have to charge customers just to maintain the system. Joy Ishii did the counterfactual of what would happen if everybody paid the same fee no matter where they went. The result was that consumers did not benefit very much from implementing the proposed legislation. What happened was that there was a very big restructuring of the market: big banks got smaller, smaller banks got bigger, but actual consumer surplus stayed around the same. The cost of the ATMs to consumers went up. The big banks had been subsidizing ATMs and making you pay through lower interest rates on demand deposits.
Even though there are prominent researchers in empirical industrial organization in the EU, there seem to be fewer of them, in relative terms, than in the U.S. Why do you think this is the case?
It's true. Much of empirical I.O. started in the U.S., whereas Europe was always strong in theory and theoretical econometrics, especially France. There was not much of an empirical tradition, at least not that I am aware of. I think it is interesting that the part of the empirical work that Europeans are picking up on is the part that combines theory with empirical work. And I think the reason is that everybody around here knows some theory. That is making it easy for you to catch up.
There is a long history of empirical work in the U.S., starting with agricultural research stations, because they had good data. The United States had a lot of money to throw into that. After the Second World War, when all the empirical work started, Europe did not have as many resources as the U.S. had. I am surprised by how many empirical people are here now. It is really different from when I used to come here before.
How important is it for your research to meet with people working in industry?
I think it matters a lot. Different industries have different institutions, and we cannot work with models that are general enough to adequately approximate behavior in all industries. As a result, you have to have enough knowledge to tailor the work to the details of the industry studied. Since I am mostly a methodologist, when I work on an empirical problem I tend to work with somebody who knows a lot about the industry studied.
My last paper was with Kate Ho, who knows about hospitals, health care and hospital choice; Steve Olley knew about telecommunication systems; Levinsohn studied autos.
What is true about I.O. now is that there is too much for one person to know. So working in groups is good. And also it is fun! It is just much more fun working with another person.
What are the next big questions in I.O., and what are the most promising methodologies being used?
In empirical work, researchers are likely to start worrying about different equilibrium assumptions, not just Nash in prices or Nash in quantities. They are also likely to worry more about how firms form perceptions about what other firms are going to do, rather than just assuming an instantaneous Bayes-Nash equilibrium. Some of the decisions that firms make involve very complicated processes, and it is very hard to think that firms know how to compute what standard theory says they do. The complexity issues show up a lot in dynamics. We will start to figure out ways of simplifying the analysis. There are equilibria that do not require so much of the firm or the consumer. They do not require so much information or computation.
I think the other set of research questions concerns markets where Nash in prices does not apply: vertical markets, partially regulated markets, and platform markets. Estimating demand and costs will feed into subsequent analysis of these issues, but the focus will be on how to do empirical analysis in these and in dynamic market settings.