Session 2: Publishing papers with computational methods
(Reflections from Stefanie Habersang)
Publishing with computational methods was the subject of the second session. The panel speakers Mark Kennedy, Richard Haans, Hovig Tachalia, and Muhammad Abdul-Mageed had a vibrant discussion about the possibilities and pitfalls of publishing interpretive data science. The panel began by discussing the “coolest thing” they had recently seen on topic modeling. The panelists shared examples from materials science, from discourse studies on Brexit, from the field of deep learning, and from management research, where topic modeling is increasingly used as a first step to create an abductive leap in grounded theory methodology. The panel then discussed how topic modeling may help us in doing research. Computational methods are helpful in enhancing human coding procedures, identifying general patterns (that we might not see in smaller datasets), and challenging existing frames. They can also help us reduce Type II errors, that is, decrease the likelihood that we miss interesting findings.
However, the panelists also acknowledged the challenges that come with using a new method and communicating it to the general reader. The panel highlighted several strategies for convincing editors and reviewers to publish a paper that builds on a new method: (1) using computational methods to validate previous findings (theory testing and validation); (2) showing that results persist even if models change (e.g., as additional robustness checks for new theory building); (3) using online appendices to explain complex methodological issues while keeping the actual method section simple; (4) publishing a methodological paper beforehand; (5) optimizing and actively managing the reviewer pool to get fair and proficient feedback; and finally (6) presenting the paper draft as often as possible to many different people before submitting (getting ideas out to potential editors and reviewers early on). An important lesson from this session was that we should not take institutions (e.g., journal standards) for granted. Although most journals change slowly and stick to tried-and-true methods, many editors are becoming increasingly open to new methodological ideas and representations. As such, the panel's overall recommendation was to build a community and dare to publish interpretive data science in general management journals as well.