The future role of political science in foreign policy analysis has been hotly debated. In this series, academics investigate the role of the academy in the coming years and ask how it can carry out rigorous analytical work while making its findings useful to international and government policymakers.
Political scientists analyse religion, violence, terrorism, and development both empirically and statistically, and the body of literature has never been greater. Yet translating this richness into policy has long been difficult, even as a number of individuals move between the two worlds. At the same time, the responsibility of political scientists to contribute to public understanding of political institutions has become an increasingly important part of university charters and of professional associations from the ISA to the APSA.
In the following, derived from the most recent edition of the International Journal, Jean-Christophe Boucher and Brian Bow launch this important conversation.
Jean-Christophe Boucher, Assistant Professor, MacEwan University
“Much of the empirical knowledge produced in Canadian foreign policy is methodologically suspect.”
Brian Bow, Associate Professor, Dalhousie University
“CFP has always been, consciously or unconsciously, an interdisciplinary field which reaches outside of the social sciences.”
The State of the Field: Canadian Foreign Policy
Historically, academics have lamented the state of Canadian foreign policy (CFP) research, primarily on account of its theoretical shortcomings.1 In their seminal work two decades ago, David Black and Heather Smith concluded that there was “limited cumulation: limited refinement of promising theoretical beginnings, limited pursuit of interesting debates and limited empirical research designed to test and refine theoretical and analytical propositions.”2 Brian Bow later echoed this criticism, arguing that “Canadian foreign policy as a field of study has never generated a substantial and coherent social science debate that might serve as an anchor to tie together the disparate policy areas and perspectives revolving around it.”3 To be sure, not all assessments have been so negative. In 2008, Paul Gecelovsky and Christopher Kukucha contended that scholars’ expressions of satisfaction with their research and teaching activities in Canadian foreign policy were an indicator of progress.4
In a sample of 531 peer-reviewed articles pertaining to Canadian foreign policy published between 2002 and 2012 in five leading periodicals (Canadian Foreign Policy Journal, International Journal, the Canadian Journal of Political Science, the American Review of Canadian Studies, and Études Internationales), I found that the negative criticisms outlined above are, in fact, legitimate.5 This paper separates the articles’ methodological approaches into five categories: description, quantitative analysis, comparative study, critical study, and qualitative analysis. It demonstrates that the vast majority of Canadian foreign policy scholarship has been framed by “casual empiricism.” Few scholars have adopted the more refined methodological approaches of quantitative or qualitative analysis, or the comparative or critical methods associated with a progressive research agenda. As a result, one is left with a degenerative field of study that revels in anecdotes and too often lacks empirical validity.
Put simply, a progressive research program consistently leads to new empirical discoveries and innovative solutions to empirical and conceptual problems. Conversely, research programs that simply reiterate prior knowledge (theoretical or empirical) or produce ad hoc discoveries by implementing semantic or marginal conceptual revisions should be understood as degenerative.
One might, at least minimally, expect a progressive research program of Canadian foreign policy to adopt the tools and methodologies that have been developed in studying the diverse nature of human behaviour and how societies are organized. Conversely, a CFP research program that does not generally implement such strategies would be either degenerative (incapable of producing knowledge that leads to the discovery of novel facts) or, even worse, not a research program at all.
My study led to several conclusions. First, most peer-reviewed journals that publish articles on Canadian foreign policy, excluding the Canadian Journal of Political Science, have overwhelmingly accepted papers that use the descriptive method. Hence, much of the academic knowledge created in this research program does not follow the positive heuristics imposed by the rigorous methodological tools available in political science. It is disturbing to find that the leading peer-reviewed journals have not demanded that the community employ a deeper level of methodological rigour. This is not to suggest that all articles pertaining to Canadian foreign policy should reflect a strong grasp of advanced statistical models or ethnographic methods. The epistemic community of Canadian foreign policy includes an important number of practitioners who can provide compelling arguments with reference to specific debates. Nevertheless, it is not helpful for most periodicals to devote more than 75 percent of their articles to the casual empiricists.
Second, the data show that there are no important differences between the various peer-reviewed publications, again with the exception of CJPS. One would imagine that in a market where interesting theoretical and methodologically driven papers are scarce, some periodicals would choose to present a broader range of methodological approaches to the study of CFP. However, Canadian Foreign Policy Journal, International Journal, the American Review of Canadian Studies, and Études Internationales offer similar content. Perhaps, then, there are too many journals publishing work on Canadian foreign policy, thereby diluting the quality of the scholarship as a whole.
Third, there is an overall methodological immaturity in the Canadian foreign policy literature when compared to other disciplines in political science. Essentially, methodological approaches unite political science more than do theoretical foundations. Each specific research program in political science has its own particular set of theoretical premises and, although they often intersect, there is significant diversity within the field. In Lakatos’ terms, methodological approaches are positive heuristics because they are necessary for the development of a research program and identify valid ways to examine facts and data. In this context, when one subfield of study of a larger research program—Canadian foreign policy within political science, for example—does not follow the positive heuristics, it runs the risk of becoming marginalized within its own epistemic community and considered illegitimate.
Fourth, without a proper methodology with which to examine empirical data and offer new interpretations, the capacity of Canadian foreign policy scholarship to introduce novel ideas appears limited. From this perspective, CFP as a field of study shows signs of being empirically degenerative. It is no surprise, then, that scholars like Brian Bow have suggested that there is a “groundhog day” quality to debates in Canadian foreign policy. Without the capacity to produce novel ideas, debates become little more than a collection of well-worn and inconclusive anecdotes. An ideal, progressive research program in Canadian foreign policy would provide a balanced plurality of theoretical and methodological perspectives.
Imre Lakatos, an influential philosopher of science, considers a research program progressive when two necessary conditions are met—theoretical progress and empirical progress—which jointly lead to problem shifts and new intellectual discoveries. In this context, while empirical exploration is critical, it can be neither random nor uncontrolled. Rather, it must follow a clear path determined by a research program’s positive heuristic. For the same reasons that innovation in the natural sciences rests on methodological requirements and state-of-the-art equipment, research in the social sciences is founded on its own set of methodological strategies through which scholars can observe and analyze empirical data with some validity.
To assess the empirical progressiveness of Canadian foreign policy scholarship over the last decade, my paper has analyzed and categorized 531 relevant peer-reviewed journal articles published between 2002 and 2012. The results support the view that Canadian foreign policy as a research program is in serious jeopardy. Essentially, more than 59 percent of academic research published over the last decade has adopted a casual empiricist methodology. By contrast, more sophisticated methodological tools, most notably quantitative analysis and comparative studies, have been largely ignored. In sum, much of the empirical knowledge produced in Canadian foreign policy is methodologically suspect, which ultimately hinders the community’s collective capacity to produce new ideas and assess the validity of contemporary theoretical assumptions. Further research should attempt to expand the database assembled here. Indeed, to better reflect the broad “intellectual” work in the field, one might also include monographs, edited volumes, and other such publications.6 Nonetheless, if the results of US studies are realistic indicators, one should not expect the inclusion of such sources to challenge my article’s preliminary findings.7
Hans Morgenthau once bemoaned the tendency among political scientists to rejoice in “[the] trivial, the formal, the methodological, the purely theoretical, the remotely historical—in short, the politically irrelevant.”8 His argument was, and remains, convincing. The politically relevant should never be sacrificed on the altar of the methodologically attractive. Nevertheless, methodological imperatives are necessary to navigate the sea of facts and discriminate the signal from the noise. And when it comes to the study of Canadian foreign policy, it appears that those imperatives are being largely ignored.
1. See, for example, Maureen Appel Molot, “Where do we, should we, or can we sit? A review of Canadian foreign policy literature,” Journal of Canadian Studies 1, no. 2 (1990): 78–90, and Michael K. Hawes, Principal Power, Middle Power, or Satellite? (Toronto: York Research Programme in Strategy Studies, 1984).
2. David Black and Heather Smith, “Notable exceptions? New and arrested directions in Canadian foreign policy literature,” Canadian Journal of Political Science 26, no. 4 (1993): 745–774.
3. Brian Bow, “Paradigms and paradoxes: Canadian foreign policy in theory, research, and practice,” International Journal 65, no. 2 (2010): 371–380.
4. Paul Gecelovsky and Christopher J. Kukucha, “Canadian foreign policy: A progressive or stagnating field of study?” Canadian Foreign Policy 14, no. 2 (2008): 109–119.
5. The sample was deliberately limited to peer-reviewed journals that consistently publish on Canadian foreign policy.
6. J.C. Sharman and Catherine Weaver, “Between the covers: International relations in books,” PS: Political Science & Politics 46, no. 1 (2013): 124–128; David Samuels, “Book citations count,” PS: Political Science & Politics 46, no. 4 (2013): 785–790.
7. Sharman and Weaver, “Between the covers,” 126.
8. Hans J. Morgenthau, “The purpose of political science” in James C. Charlesworth, ed., A Design for Political Science: Scope, Objectives, and Methods (Philadelphia: American Academy of Political and Social Science, 1966), 73.
Measuring Canadian Foreign Policy
A few years ago, I wrote a literature review for International Journal, which laid out some of my frustrations with the way Canadian foreign policy (CFP) had developed as a field of study.1 Jean-Christophe Boucher clearly has some similar frustrations, but his way of thinking about the nature of the problem is different, and so are his (mostly implied) remedies. This short response gives me an opportunity to clarify some things I said in that previous article, and to add some things I have since figured out.
I agree with Boucher that much of what has been published recently on CFP is “empirical-casual” and not very useful, except as political spin or publication-list padding. I disagree, however, with his apparent presumption that CFP as a field of study ought to be measured using the political science cookie cutters he has laid out. CFP is not, as he seems to assume, nor has it ever been, just a specialized niche within one subfield of political science. It doesn’t “belong to” the social sciences. CFP has always been, consciously or unconsciously, an interdisciplinary field which reaches outside of the social sciences. One might argue, in fact, that many of the field’s most important contributions over the last 100 years have come from historians, and it has been further strengthened by the work of anthropologists, sociologists, economists, lawyers, and other kinds of academic specialists, all of whom have their own set of disciplinary purposes, methods, and standards. Judging their work by political science standards is like judging hockey players with figure-skating scorecards (or vice versa).
Boucher is thinking about scholarly progress in terms of developing and systematically testing general-deductive theories. I think he is right to see this as the essence of the social science project, and I (still) think that it is important that CFP develop and maintain such a project within itself. But I think it is a mistake to make Lakatosian theory-testing the gold standard by which all CFP research is measured and evaluated.2
First, there is something awkward about setting three positivist approaches (quantitative, qualitative, and comparative) side by side with critical approaches. It’s kind of like that recurring bit on Sesame Street: “one of these things is not like the others.” Critical approaches to CFP (from within or outside of political science) have their own methods, but they are not conventional social science methods, and they do not fit neatly into the Lakatosian theory-testing frame that Boucher has in mind.
Second, even if one did see CFP as just a subfield of political science, and political science as defined by Lakatosian principles, it still would not necessarily follow that everything published in CFP journals would have to be designed to evaluate theories through empirical hypothesis-testing. We would also need pieces that were mostly about developing concepts, clarifying or challenging theories through logic, or reviewing trends in the field. Because articles designed to perform these functions often also have a supporting empirical component, I worry that many of them might fall into Boucher’s “descriptive” dumpster.
Third, and most important, one must recognize that a lot of what Boucher seems to dismiss here as mere “description” might actually be performing other important functions within the larger enterprise of CFP. One of these functions is the bringing to light of new facts about exactly what has happened, and when, and how, and fitting those facts into a broader context and longer-term trends—in other words, the stuff that historians do (and some kinds of anthropologists and sociologists as well).3 This work, obviously, is absolutely indispensable to the political science part of the field, not only as a generator of the raw material that is ultimately fed into the Lakatosian theory-testing machine, but also—stepping back a little bit—as a crucial reminder to political scientists of what the positivist, variable-driven machine does to history itself, when it mashes all of the context and contingency out of complex events and transforms them into user-friendly “observations.”
As I said back in 2010, social science approaches to CFP need to keep traditional approaches close by, if only to make sure social science doesn’t take itself too seriously. In some fields—such as some branches of economics—the process of unearthing and contextualizing facts is almost completely separated from the analysis of those facts, and both of the enterprises are poorer for it. One of the strengths of CFP as an interdisciplinary project is that there is constant engagement (or at least there are many recurring opportunities for engagement) between these two groups, and in fact the rosters of the two groups often overlap. But it is not good enough for political scientists and historians to just co-exist, or to have a friendly division of labour; each needs the other to correct its worst habits. Historians need to remind political scientists to think seriously about source and context, and historians need to be reminded to think seriously about transparency, consistency, and the bases for generalizations.
Finally, though I am a little reluctant to admit it, there is also a need to make room for what is sometimes called “policy research”: that is, research which supports arguments about whether or how policies work, or might work if pursued. A lot of the flimsiest sort of “casual-empirical” work that Boucher uncovers here could be characterized this way, and there is little doubt that the academic study of CFP as a whole has been partially displaced and undermined by the growth of policy research, mostly through think-tank and university research centre websites, but also by the subsequent shift in that direction by high-profile academic journals like International Journal. Policy research in CFP is often thin on empirical data, vague about how generalizations are drawn from data, or just plain careless about moving back and forth between empirical and normative arguments. But there has to be some room made within the field for policy research, because it is through policy research that we make connections between abstract theories and real-world politics, particularly when we are thinking about present or future choices. The price of admission should be that policy research would be held to a high standard as well, in terms of transparency, consistency, and clear logic. The one thing that would not necessarily be required for this kind of research is the kind of Lakatosian theory testing that Boucher is looking for.
It is a good idea to take stock of the field every once in a while, and to try to go beyond vague impressions, by actually categorizing and counting. But it would be better if we began by recognizing that there are different approaches to CFP, and that different approaches ought to be assessed differently. There is good work and bad work done by political scientists, according to (various) political science practices and standards, and there is good work and bad work done by historians (and others), according to their own practices and standards. CFP may have produced a lot of bad work, and the proportion of bad work to good might have increased over the last twenty years, but we can’t measure this properly unless/until we identify and apply criteria that do not automatically flunk virtually all contributions by historians and other non-political scientists.
On the other hand, if academics from these other disciplines are serious about participating in CFP as an interdisciplinary project, then they have some work to do in making their disciplinary standards more transparent and better known outside of their home fields. This is perhaps one area where they might learn something from political scientists. The rise of statistical and formal-modeling approaches in political science over the last twenty years forced proponents of so-called “qualitative” approaches to be a lot more self-conscious and explicit about concept formation, measurement/categorization, case selection, and the basic logic of inference. This ultimately led to the building up of a set of much more carefully delineated and justified qualitative “techniques,” which have made it easier for others to evaluate their research results according to an agreed-upon standard. Of course, historians and others do not have to choose those same techniques for themselves. And they would be right to object when Boucher tries to cram their work into this box, and then rejects it when that work doesn’t fit. But they also ought to be asking themselves what technical standards they would prefer to be judged by instead. If we are to take stock of the whole (interdisciplinary) field, we will need to have ways to (systematically and objectively) separate the scholarly wheat from the “casual” chaff, in every part of the field.
1. Brian Bow, “Paradigms and paradoxes: Canadian foreign policy in theory, research, and practice,” International Journal 65, no. 2 (spring 2010): 371–380.
2. As an aside, it is somewhat ironic that Boucher is confidently pushing the Lakatosian approach now, since that way of thinking about theory-development is increasingly being questioned by high-profile scholars in the most Lakatosian subfield of political science: international relations. See, for example, Colin Elman and Miriam Fendius Elman, eds., Progress in International Relations Theory: Appraising the Field (Cambridge, MA: MIT Press, 2003); Rudra Sil and Peter J. Katzenstein, Analytic Eclecticism in the Study of World Politics (New York and London: Palgrave Macmillan, 2010); and David Lake, “Why ‘isms’ are evil: Theory, epistemology, and academic sects as impediments to understanding and progress,” International Studies Quarterly 55, no. 2 (2011): 465–480.
3. As another aside, it is worth noting here that historians still usually publish their most important and rigorous research results in books, but often also present bits and pieces of their work in articles, in part as a kind of advertisement of their book-length work. It therefore shouldn’t be too surprising if historians seem to be short-changed by a survey of CFP research which looks only at journal articles.