JAY F. NUNAMAKER, JR. is Regents and Soldwedel Professor of MIS, Computer Science, and Communication, and director of the Center for the Management of Information and the National Center for Border Security at the University of Arizona. He received his Ph.D. in operations research and systems engineering from Case Institute of Technology, an M.S. and B.S. in engineering from the University of Pittsburgh, and a B.S. from Carnegie Mellon University. He received his professional engineer’s license in 1965. He was inducted into the Design Science Hall of Fame in May 2008. He received the LEO Award for Lifetime Achievement from the Association for Information Systems (AIS) and was elected a Fellow of the AIS. He was featured in Forbes magazine’s July 1997 issue on technology as one of eight key innovators in information technology. He is widely published, with an h-index of 60. He specializes in systems analysis and design, collaboration technology, and deception detection. The commercial product GroupSystems ThinkTank, based on his research, is often referred to as the gold standard for structured collaboration systems. He founded the MIS Department at the University of Arizona in 1974 and served as department head for eighteen years.
ROBERT O. BRIGGS (corresponding author: [email protected]) is a professor of information systems at San Diego State University. He earned his doctorate in management information systems at the University of Arizona. He researches the cognitive foundations of collaboration and uses his findings to design new collaborative work practices and technologies. He is a cofounder of the collaboration engineering field and co-inventor of the ThinkLets design pattern language for collaborative work processes. He has designed collaboration systems and collaborative workspaces for industry, academia, government, and the military. He co-chairs the Collaboration Systems and Technology track for the Hawaii International Conference on System Sciences. He has published more than two hundred scholarly works on collaboration systems and technology, addressing issues of team productivity, technology-supported learning, creativity, satisfaction, and technology transition.
JUSTIN S. GIBONEY is an assistant professor of information technology management at the University at Albany. His research focuses on behavioral information security, deception detection, expert systems, and meta-analytic processes. He emphasizes design science research and system building to solve real-world problems with technology. He has received several grants and has published journal and conference papers.
This Special Issue focuses on the unique contributions of applied science/engineering research (AS/E) to the information systems (IS) literature.1 Scientific method comprises four modes of inquiry: exploratory, theoretical, experimental, and AS/E research. Each mode has different research goals, so each creates different classes of scholarly knowledge and consequently has different research products. Each must therefore use a different logic for demarcating what is known from what is only conjectured, believed, hoped, or feared. Each consequently has different standards of rigor. There are clear boundaries between the four modes of inquiry, but they compose an elegant, integrated whole that moves humanity from cruder to more sophisticated understandings of reality. To clarify the role of AS/E research and its place in IS as a discipline, consider first a brief reprise of the other three modes. One who is familiar with the philosophy of science might be tempted to skip the next three paragraphs but, in so doing, might miss the delight of discovering outrageous, wrongheaded assertions that offend one’s scholarly sensibilities, or of having one’s convictions affirmed, or possibly both, so consider reading on.
The goal of exploratory research is to discover and describe phenomena, their correlates, and the conditions and contexts under which they manifest. A phenomenon is an observed variation in the value of the property of an entity in the universe [6]. The products of exploratory research are several, among them: detailed descriptions of phenomena and the conditions under which they appear; classification schemes and taxonomies; process models; and, when regular patterns of correlation have been observed across a wide range of conditions, models of observed correlations (called grounded theories by Stebbins [5], although other authors apply that label to other concepts). Two important standards of rigor for exploratory research are: first, that the contexts and conditions under which phenomena and their correlates are observed should be reported in rich detail; and second, that authors should be scrupulous to avoid any imputations of causality to their observations. The disciplines of exploratory research offer no logic by which to assert causality. Causal words such as “affects,” “determines,” “impacts,” “triggers,” “influences,” “interferes with,” or “impedes” insert fatal flaws into exploratory findings, rendering them indefensible. To contribute value, exploratory research must be conveyed in the language of relationships, for example, “We discovered a strong association between A and B; C is related to D; E correlates with F,” and, in cases where unequivocal empirical evidence shows a stable relationship across many conditions, “G predicts H.” There is more to exploratory research, for example, that it is validated by concatenation rather than by replication, so if your curiosity persists, we commend you to the thoughts of Stebbins [5]. Let us leave this topic, though, with a final thought: but for exploratory research, there would be nothing about which theorists could theorize, thus no theories for experimentalists to test, and thus no scientific knowledge for the AS/E researcher to bring to bear. Exploratory research is the foundation upon which all other science builds.
The goal of theoretical research is to predict and explain discovered phenomena and correlations. Theoretical research produces only one kind of model, called by Gregor [2] “theories that predict and explain,” and known elsewhere as general and covering laws, received theory, and formal theory, among other labels. We will call them simply “causal theories.”2 Science is replete with other kinds of useful models that also bear the label “theory.” These are not products of theoretical research, but of exploratory and AS/E research. A causal theory is a collection of two kinds of statements, which we will call axioms and propositions, even as we acknowledge that other authors label them differently, and still others use the terms “axiom” and “proposition” to label other concepts. In this context, axioms state assumptions about causal mechanisms in the universe that could account for observed phenomena. Propositions are functional statements of cause and effect that posit a causal relationship between a causal construct and a consequent construct that represents the phenomenon of interest. It is beyond the scope of this introduction to articulate the logic of causal theory development, but for an example of such logic, see [2]. Among the several standards of rigor for theoretical research are (a) that the constructs in the proposition must be defined with sufficient specificity that they can be distinguished from other closely related constructs [1]; (b) that theoretical propositions must be derived by internally consistent deductive logic from the axioms; and (c) that the resulting theoretical model must be falsifiable [3].
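To make this structure concrete, consider a hypothetical theory fragment of our own construction, offered purely as an illustration (it is not drawn from any paper in this issue), sketched here in LaTeX:

% Hypothetical causal-theory fragment; an illustrative sketch, not a
% result from this issue. Axiom: attention is a finite cognitive
% resource consumed by competing stimuli. Proposition derived
% deductively from the axiom: distraction (causal construct X)
% reduces task performance (consequent construct Y).
\[
  Y = f(X), \qquad \frac{\partial Y}{\partial X} < 0
\]

The functional form states only the direction of the posited causal relationship; the axiom supplies the mechanism that explains why the relationship should hold.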
The goal of experimental research is to test the degree to which a theoretical proposition is consistent with reality. The experimenter tries to imagine evidence that could break a theoretical proposition in as efficient and compelling a manner as possible, and expresses the approach as a hypothesis. A hypothesis is a comparative statement that contrasts the value of a dependent variable that measures the consequent construct across at least two treatments that manipulate the value of an independent variable representing the causal construct of the proposition to be tested. The first standard of rigor for experimental research is that hypotheses must be derived by rigorous deductive logic from the theoretical propositions they are meant to test. If, as often happens in IS research, hypotheses are advanced without supporting logic, or are derived by inductive reasoning from prior reports of observed correlations, then those studies are not instances of experimental research. They are instead instances of exploratory research using experimental techniques, and so must be reported with exploratory rigor, including rich detail about the conditions under which phenomena were observed, and excluding causal language. The research products of experimental work are many, but chief among them are reports of experimental studies. The standards of rigor for experimental research are numerous and unforgiving, because experimental research offers the only logic in scientific method by which causality may be asserted. The standards for establishing the rigor of experimental research pertaining to, for example, construct validity, internal validity, and external validity are well articulated elsewhere (e.g., [4]), so we will not belabor them here. It is accepted that experimental research does not prove the theories it tests; at best it only fails to break them. Theories are therefore not advanced as truth, but rather as useful models, approximate representations of reality, to be held only until they can be broken. Breaking a theory creates new knowledge and paves the way for developing new theories that model reality more closely than did their predecessors.
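Continuing the hypothetical fragment sketched above (again our own illustration, not a result from this issue), a hypothesis derived from that proposition might take this comparative form:

% Hypothetical hypothesis for the proposition that distraction reduces
% task performance. The two treatments manipulate the independent
% variable (distraction); the dependent variable measures task
% performance, and mu denotes its expected value in each treatment.
\[
  H_1\colon\; \mu_{\text{high distraction}} < \mu_{\text{low distraction}}
\]

Note that the hypothesis does not restate the proposition; it contrasts the expected values of the dependent variable across at least two treatments, so that a contrary observation would break the proposition it tests.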
If science were to end with experimental research, then it would bring no benefit to society beyond its entertainment value. However, there is more science to be done. This brings us to AS/E research. In the field of medicine, AS/E is called “translational research.” Various other disciplines have focused on certain aspects of AS/E and localized them to address their needs. In information systems, for example, design science research prescribes a high-level process for conducting AS/E research streams centered on the designed artifacts that compose information systems.
The goals of AS/E research are:
To discover and describe important classes of unsolved problems in the field
To derive generalizable requirements for solutions to classes of unsolved problems
To design generalizable solutions for classes of unsolved problems
To develop exemplar instances of generalizable solutions
To test the degree to which exemplar instances solve the classes of unsolved problems
To create design theories—bodies of knowledge that practitioners can use to develop their own instances of a generalizable solution
The contributions unique to AS/E flow from its goals: problem descriptions, generalizable requirements for solutions, generalizable solutions, exemplar instances of solutions, empirical validations of solution efficacy and generalizability, and, toward the end of a successful AS/E research stream, design theories. The standards of rigor for AS/E contributions also flow from the goals. Researchers must address not a single local problem but a class of problems. The researchers must argue the importance of the class of problems and must demonstrate that the problem remains unsolved. Requirements must be generalizable, which means that no matter what solution emerges, it must meet those requirements in order to solve the problem. Proposed solutions must be generalizable, which means they can be adapted to solve the class of problems across the wide range of contexts in which it manifests. Exemplar instances of the solution must be sufficiently complete and robust that their efficacy can be demonstrated. Tests of the solution must make it logically possible to determine the degree to which the solution improves outcomes of interest. Design theories must be sufficiently complete that practitioners can use them successfully to implement their own instances of the generalizable solution, and sufficiently parsimonious to minimize cognitive load.

Given the definition of AS/E as “the use of scientific knowledge and methods to solve practical problems,” one might object, “But then it’s not really science, is it? Because it doesn’t create new knowledge. It just scoops up knowledge created by the real scientists and finds some mundane but practical use for it.” Au contraire. To conduct sound AS/E, a researcher must be the master of all four modes of inquiry and must practice them vigorously and rigorously to excel. As a consequence, AS/E research tends to spin off numerous contributions to exploratory, theoretical, and experimental research even as it contributes its own unique classes of knowledge.
Discovering and describing important classes of unsolved problems in the field is an instance of discovering and describing phenomena, their correlates, and the conditions under which they manifest, because problems are defined in terms of unacceptable values for phenomena of interest. To make even the first AS/E contribution, therefore, researchers must use the disciplines of exploratory research. Sometimes the literature does not yet contain theoretical models to explain the variations in the outcomes of interest, but such models could be useful to explain unacceptable current conditions, to suggest counterintuitive solutions, and to predict the effects of design choices intended to improve outcomes. The AS/E researcher may therefore need to conduct theoretical research to derive a useful theory. If derived with scientific rigor, such a theory could be generally useful even to basic researchers with no interest in the problem space. Thus, an academic field driven by AS/E research can become a reference discipline for basic research in another field, so AS/E makes contributions to theoretical research. When an AS/E researcher tests a solution informed by a rigorously derived theory, if that test conforms to the disciplines of experimental research, then the study simultaneously evaluates a solution and tests a theoretical proposition. The study contributes to both experimental and AS/E research. Thus, AS/E is not a pseudoscience, sweeping up the leavings of the “real scientists.” It is the rich and rigorous practice of all science in service of practical goals. And that, in the end, is the point of science: to increase the likelihood that people will survive and thrive. In information systems, AS/E brings all of science to bear on the unique and defining purpose of our field: to understand and improve the ways people create value with information.
The first paper in this special issue, “The Last Research Mile: Achieving Both Rigor and Relevance in Information Systems Research,” by Jay F. Nunamaker, Jr., Robert O. Briggs, Douglas C. Derrick, and Gerhard Schwabe, builds the case that researchers who shepherd their scholarly insights about information systems through the last research mile, that is, from conception through successful transition to the workplace, have the potential to produce more scholarly knowledge in all four modes of inquiry than do those who do not traverse the last mile. They elaborate the last research mile as three stages: proof-of-concept research to demonstrate the functional feasibility of a solution; proof-of-value research to investigate whether a solution can create value across a variety of conditions; and proof-of-use research to address complex issues of operational feasibility. They explain why last-mile researchers need exploratory, theoretical, and experimental research to attain their AS/E goals. They also argue that going the last research mile negates the assumption that one must trade off rigor and relevance in IS research, showing it to be a false dilemma. They demonstrate their positions with examples from IS research spanning more than forty years.
The balance of the papers in the special issue are recent instances of AS/E research. Shi Ying Lim, Sirkka L. Jarvenpaa, and Holly J. Lanham, in their paper “Barriers to Interorganizational Knowledge Transfer in Post-Hospital Care Transitions: Review and Directions for Information Systems Research,” focus on systems to create physical value with information—health and well-being. The study winnowed an initial pool of 3,781 papers down to 70 that gave detailed qualitative reports of high-risk transitions from hospital care to care by community providers to better understand barriers to knowledge transfer in these interorganizational collaborations. The analysis showed that time pressure tends to inhibit multilateral knowledge transfers, accommodation of fluctuating absorptive capacities, and reconciliation of knowledge and goal conflicts. Having discovered and reported an important class of unsolved problems in the field, they propose an agenda for health information technology capabilities to address these barriers.
A related paper, “Anatomy of Successful Business Models for Complex Services: Insights from the Telemedicine Field,” by Christoph Peters, Ivo Blohm, and Jan Marco Leimeister, identifies a variety of factors that make it difficult to develop sustainable business models for widespread provision of telemedicine services. Using a design research approach, the authors work with practitioners in the field to develop a generalizable framework for classification and assessment of business models in all domains of complex services, the identification of white spots for future business opportunities, and the identification of patterns for successful business models. They contribute a specific business model framework for the telemedicine industry, and present an approach for systematically designing business models for complex services. They illustrate how such business model frameworks can be enriched by considering competition and the role of IS in the business models.
In their paper, “It Is Not Just About Competition with ‘Free’: Differences Between Content Formats in Consumer Preferences and Willingness to Pay,” Benedikt Berger, Christian Matt, Dennis M. Steininger, and Thomas Hess address another pervasive challenge: many providers struggle to monetize the provision of online content. The authors argue that the widespread availability of free content online cannot fully account for low willingness to pay. They report that customers for many kinds of content are willing to pay for it, but prefer the offline delivery formats to the online formats of the same content, while other potential customers prefer some online formats to others but do not own the devices that could deliver the format they prefer. Among their several discoveries, they found that the smaller the screen on a consumer’s device, the less the consumer was willing to pay for given content. They discuss the implications of their findings for the revenue models of content providers and propose a broader research agenda for monetizing content.
Finally, in “Evaluating Team Collaboration Quality: The Development and Field Application of a Collaboration Maturity Model,” Imed Boughzala and Gert-Jan de Vreede argue that collaboration quality directly affects an organization’s performance. The authors conduct an AS/E study to address an important unsolved problem in the field: measuring the quality of collaboration within and across organizational boundaries. They present a generalizable solution to the problem based on a new collaboration maturity model for assessing the maturity of an organization’s teamwork processes. They developed the model in the field with professional business unit managers, then assessed its utility in a proof-of-value study within a large automobile manufacturing enterprise. The paper makes theoretical and practical contributions to collaboration science and collaboration engineering.
Each of the AS/E studies in this issue identifies an important class of unsolved problems in the field, and each of them brings scientific knowledge and methods to bear in designing generalizable solutions to those problems, developing exemplar instances of those solutions, and testing their efficacy. Each contributes new knowledge to the IS literature. These papers were developed from initial versions presented at the Hawaii International Conference on System Sciences. We commend them to your reading.
Notes
1. Like most concepts in the philosophy of science, AS/E appears under many labels. In some parts of the world applied science is highly regarded, whereas engineering research is regarded as little more than vocational training. In other parts of the world, the reverse is true. We accommodate the linguistic richness of science by using both labels, then abbreviating both of them so that the labels cease to distract from the concept.
2. Some postmodern philosophers object, arguing that causality is an obsolete concept that may not even exist in reality. We bypass that concern by asserting causality as a property of the model rather than as a property of reality. The universe is infinitely complex and our minds are not. A useful theory is a simplification of reality that humans can understand, that is nonetheless sufficiently consistent with reality that its logic can be used to predict and explain phenomena. If assuming causality increases understanding of reality, even imperfectly, then it is useful.
REFERENCES
1. Bacharach, S. Organizational theories: Some criteria for evaluation. Academy of Management Review, 14, 4 (1989), 496–515.
2. Gregor, S. The nature of theory in information systems. MIS Quarterly, 30, 3 (2006), 611–642.
3. Popper, K.R. The Logic of Scientific Discovery. New York: Basic Books, 1959.
4. Shadish, W.R.; Cook, T.D.; and Campbell, D.T. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin, 2002.
5. Stebbins, R. Exploratory Research in the Social Sciences. Thousand Oaks, CA: Sage, 2001.
6. Weber, R. Evaluating and developing theories in the information systems discipline. Journal of the Association for Information Systems, 13, 1 (2012), 1–30.