Teenage drivers are a risky bunch. They are inexperienced and don’t always drive carefully, sometimes with tragic consequences. Various studies indicate 15-30% of teens have an accident in their first year of driving. In many countries, road accidents are the leading cause of death among teenagers.

The policy question is what to do about it.

One can imagine a number of options, from the light touch (such as information campaigns and advertisements) to the dramatic (such as raising the legal driving age).

Many jurisdictions have introduced laws to restrict the driving privileges of younger drivers. But it’s not always easy to tell if such laws are effective.

One could compare accident statistics from places that have such laws with statistics from places that don’t. But this comparison might be misleading.

It is possible those laws were introduced in places with a bigger problem. Suppose the laws have reduced driving fatalities, but only to the same level as places with less severe problems in the first place. If there is then no difference in teen driving fatality rates between jurisdictions with and without restrictions, one might wrongly conclude the restrictions have no effect.


The identification problem

This is an example of what economists call the “identification problem” – figuring out how to identify the true causal effect of a policy intervention.

To identify the causal effect, one needs to know the right counterfactual – that is, what would have happened if the policy had not been introduced. To put it another way, the group affected by the policy needs to be compared with the right control group.

This is a big general issue on which economists have been working for decades. In that time many useful techniques have been developed to address the identification problem across the social sciences.

The development of this set of tools is what MIT economist Joshua Angrist (one of the leading scholars in this endeavour) has called “the credibility revolution”.

It’s a revolution because we now have ways to credibly identify the causal effect of different policy interventions. That allows us to provide sensible policy prescriptions based on empirical evidence.

It even permits scholars to understand the size or “magnitude” of the effects and to undertake careful cost-benefit analysis.

An Australian policy experiment

Back to those troublesome teenage drivers.

In 2007 New South Wales introduced a law that banned drivers in their first year of a provisional licence from carrying two or more passengers under the age of 21 between 11pm and 5am.

As economists Tim Moore and Todd Morris write in a working paper published by the US National Bureau of Economic Research in April, about 3% of all accidents by first-year drivers occurred while carrying multiple passengers between these hours. But these accidents accounted for about 18% of fatalities.
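A quick back-of-envelope calculation (implied by those two figures, rather than a number reported in the paper) shows why these trips were such a natural target: crashes in that window were roughly six times as likely to be fatal as the average first-year crash.

\[
\frac{18\% \text{ of fatalities}}{3\% \text{ of crashes}} \approx 6
\]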

Moore (an Australian, now at Purdue University in Indiana) and Morris (at the Max Planck Institute for Social Law and Social Policy in Germany) saw the NSW policy as an ideal opportunity to test the effectiveness of teen-driving restrictions.

So how did they make sure they had the right counterfactual?

They used one of the classic techniques from the credibility revolution, known as the “difference-in-differences” – or DID – method.

This technique was made famous (in academic and policy circles) by a path-breaking 1994 paper by David Card and Alan Krueger (both then economists at Princeton University) on how minimum wage laws affect employment.

To put it at its simplest, rather than comparing one group to another or one group before and after a policy change, the DID method involves comparing the changes over time in one group to the changes over time in another.
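In the simplest two-group, two-period case, the estimator boils down to the following (this is the textbook formulation, not the exact regression Moore and Morris run):

\[
\widehat{DID} = \left(\bar{Y}^{\,treated}_{after} - \bar{Y}^{\,treated}_{before}\right) - \left(\bar{Y}^{\,control}_{after} - \bar{Y}^{\,control}_{before}\right)
\]

where \(\bar{Y}\) denotes the average outcome (here, crashes) in each group and period.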

Moore and Morris calculated changes in crashes during the restricted hours (11pm–5am) and compared them with changes in crashes during daytime hours (8am–8pm), which the policy did not affect. This allowed them to control for other factors influencing crash risk.
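To see the mechanics, here is a minimal sketch in Python using entirely made-up crash counts (the numbers and variable names are illustrative only, not the authors’ data or code):

```python
# Illustrative difference-in-differences (DID) calculation with made-up numbers.
# "night" is the restricted 11pm-5am window (treated group);
# "day" is the 8am-8pm window (control group);
# "before"/"after" refer to the 2007 NSW restriction taking effect.

crashes = {
    ("night", "before"): 200,   # hypothetical counts, not Moore & Morris's data
    ("night", "after"): 90,
    ("day", "before"): 1000,
    ("day", "after"): 950,
}

# Change over time in the treated (nighttime) period
night_change = crashes[("night", "after")] - crashes[("night", "before")]

# Change over time in the control (daytime) period
day_change = crashes[("day", "after")] - crashes[("day", "before")]

# The DID estimate: how much more did night crashes fall than day crashes?
did_estimate = night_change - day_change

print(f"Night change: {night_change}, day change: {day_change}")
print(f"DID estimate of the policy effect: {did_estimate} crashes")
```

The daytime change soaks up anything else that shifted crash risk over the period (weather, enforcement, petrol prices, traffic volumes), so only the extra fall at night is attributed to the restriction.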

What they show is striking. The restriction reduced crashes by first-year drivers by 57%, and hospitalisations and fatalities by 58%.

With the restrictions in place, these nighttime multi-passenger crashes fell from about 18% to about 4% of fatalities involving first-year drivers. That’s an effective policy.


Long-run effects

If you were sitting in an academic seminar hearing these results, you might ask: “OK, but what happens after the first-year restrictions roll off?”

Remarkably, Moore and Morris also find reductions in nighttime multi-passenger crashes in the second and third years. There are no clear differences in the years that follow, but by then crash rates are down to one-fifth of the first-year level.


[Chart: Impacts on nighttime multi-passenger crashes. Source: Timothy Moore & Todd Morris, “Shaping the Habits of Teen Drivers”, NBER Working Paper, April 2021.]


In other words, these restrictions seem to have a persistent effect even after the policy intervention is no longer in place.

There is a broader lesson in this. Policies can have long-run effects, even after the folks targeted by the policy are no longer “being treated”. This is well known in some educational interventions. Experiments with small financial rewards for students and parents, for example, have shown that improvements in things like attendance and performance continue even after the incentives are discontinued. It is worth looking out for this with policies in other areas.

In any case, NSW – and Australia more generally – seems to have cracked the case on teen driver safety.

Thanks to Moore and Morris, and their NBER working paper, it’s an insight from which the rest of the world can learn.


Richard Holden, Professor of Economics, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.