ABSTRACT:
The Special Section opening this issue of JMIS calls our attention to the complexity of software that runs our enterprises and their products. This is a weighty topic that is implicitly addressed by numerous research studies in our field, yet receives little sustained explicit attention. There is extensive air space between theoretical complexity science and its variants, such as dynamic systems complexity, computational complexity, and programming complexity, on the one hand, and the complexity of human-machine systems, increasingly run by software either produced or assimilated by our enterprises, on the other. Our sociotechnical field can make a great contribution here. Let us consider the following.
The software assets of Google, one of the firms largely constituted by such assets, comprise over 2 billion lines of code [9]. The interconnectivity among software-run systems, if defined by the Internet, runs to many billions of dynamic connections. The number of human actors who have access to these systems, mostly for good and in some cases for ill, runs to several billions. The number of entities in these systems, if enveloped by the sensors and actuators of the Internet of Things, goes into tens or hundreds of billions. Our dependence on the software-run grids that deliver our energy, water, healthcare, and other services is vast and growing. Indeed, like the driverless vehicles that are in the offing, these grids will be increasingly autonomous (in a simple reading, run by software). The decision-making capabilities of software-data-hardware subsystems relying on deep learning are increasing apace, and the redistribution of work from humans toward these subsystems raises serious concerns about human employability, or even the supersession of human agency. The potential impact can be gauged by the possible use of software in cyber-hostilities of great variety. And the speed of action, if defined by the number of states per unit of time (to revert to theoretical complexity), is growing on the persisting Moore's curve (riding on nanosheets at present). In fact, that curve is outpaced by artificial neural network chips whose power doubles every few months.
Even if we believe that we are dealing here with organized complexity, to use Weaver's distinction [10], the extent and depth of this rapid growth are daunting and challenge the dichotomy he proposed over seventy years ago. There is an obvious need to bridge the chasm between the impressive intellectual achievements of theoretical complexity science and the attention that must be devoted to the complexity of the organization of human-computer systems. Significant progress has indeed been made over decades in computer science to manage programming complexity, with several methodologies that make very large programs possible. Similar progress is necessary in the study of complex sociotechnical systems.
Questions to be addressed by our research in this domain include, in the first order, the complexity-dependent robustness and resilience of large human-machine systems, the complexity-driven task apportionment and interfaces in such systems, strategic options rendered by complexity management, organizational capabilities furnished by various degrees of information systems (IS) complexity and the cost-benefit trade-offs, flexibility and complexity, realistic complexity measures, and a host of others. There is a certain body of work in our and cognate fields to build on toward results that can address these issues. As one example, further research is needed to build on the systems-architectural work of Baldwin and her colleagues [3, 6]; insights can be gained from the complexity economics innovated by Arthur [1]; and the inclusion of complexity analysis in design-science methodological guidance [7] would be important. As I write this, I note that scholarly attention is being paid to the related issue of organizational complexity in the digital era [4].
We need to endogenize humans and the various degrees of human organization in our analyses and syntheses of very large and emerging systems. Design science is only one of our discipline's methodologies aimed at complexity management. This work is of great moment. As Lessig [5] told us two decades ago, code is a normative force that regulates us, an insight amplified by the recent analysis of Pistor [8], who asserts that software code competes with the code of law. With the advent of autonomous vehicles, drones, and similar devices, their software may be making ethical decisions we consider as inhering in humans [2]. These decisions may not be programmed by us if they are made in real time by (non-transparent) deep-learning systems. We have observed flight failures in which pilot-plane systems broke down, arguably owing to the complexity of the overall system (and yes, training is — or should be — endogenous to such a system).
Guest Editors Robert O. Briggs and Jay F. Nunamaker Jr. have included three papers in their Special Section on The Growing Complexity of Enterprise Software. With blockchain technology heralded as a highly promising decentralized infrastructure for cryptocurrencies and other transactional systems, there are numerous fundamental questions about governance, as novel approaches will be needed to organize and manage the emerging complexity. The first paper of the Special Section investigates the corresponding decision issues. The second paper deals with cognitive load in the context of the crowdsourcing of innovations. Cognitive load is a crucial aspect of human agency in complex systems, and the authors elaborate this construct, with the empirics focusing on germane cognitive load that can durably support human performance. With information technology supporting ever higher levels of multitasking, the authors of the third paper establish empirically that its effect on deception detection in collaborative teamwork is quite salutary. The Guest Editors further introduce the papers to you.
In the first paper of the general section, Abhishek Kathuria, Prasanna P. Karhade, and Benn R. Konsynski develop a multi-level theory to explain supplier participation on two-sided digital platforms, using restaurants' participation on food-delivery platforms as the case in point. As is well known, the offerors of two-sided platforms generally need to attract, often by incentivizing, one of the sides to the transactions that will be supported by the platform. Here, the researchers focus on the study, from multiple theoretical perspectives, of the decisions made by supplier restaurants in their consideration of a digital food-delivery platform. Several predictors of this participation are surfaced by the analysis, along with a comprehensive novel approach to the study of platform use.
Mobile commerce is to many of us the face of e-commerce, and mobile couponing is a prominent form of location-based advertising. While the push format of this couponing is generally considered intrusive, the mobile pull format is far more acceptable. And yet the extant research has focused on push advertising. Thus, the study of pull advertising, presented here by Dominik Molitor, Martin Spann, Anindya Ghose, and Philipp Reichhart, is a significant contribution. The researchers study the effects of the choice architecture, that is, of the presentation of options to the potential customer via the interface design. This being location-based advertising, the options revolve around the display of distance-related information. In an extensive randomized field experiment, the authors surface effective ways to design the mobile couponing interface and attract users to the offering.
Hongyi Zhu, Sagar Samtani, Hsinchun Chen, and Jay F. Nunamaker Jr. contribute to our understanding and practice of healthtech with their design-science-based system for the monitoring of activities of daily living. The multi-sensor system allows for the identification of individuals within a multi-resident setting in an unobtrusive, privacy-preserving fashion. The work brings to this task a novel deep transfer learning approach using a convolutional neural network. Rigorous tests against competing systems show the superiority of the approach taken here. Notably, the superior performance is exhibited even with a small labeled data set. The work expands our prior knowledge in several directions and has the potential to be retargeted at a number of other settings where individuals need to be supported against the background of human aggregations.
Informational cascades in software adoption, that is, following and imitating the behavior of others, foreclose a proper analysis of options and may lead to the adoption of software that is suboptimal for the given adopter. This undesirable phenomenon results in users' not availing themselves of better apps and leads to misallocation of resources. In the context of apps, this herding behavior manifests itself in the discontinuance of use within a rather short interval after adoption. Here, Xia Zhao, Jing Tian, and Ling Xue approach the study of this decision-making deficiency with the aid of an extensive data set on post-adoption discontinuance of app use. The study arrives at a number of unobvious conclusions regarding the effects of user ratings, app rankings, and app complexity.
Two subsequent papers contribute to our understanding of co-creation. The first of these, by Liang Chen, Pei Xu, and De Liu, is set within the context of crowdsourcing contests, where creative ideas are sought. The winners may be selected by a panel of experts or by crowd voting. The researchers deploy several theoretical lenses to study the effects of the second approach, which has been gaining influence. As the major finding, the authors establish that crowd voting positively and significantly affects the extent and the breadth of contest participation. They also present several additional results that should encourage contest offerors to employ this mode either as the principal one or as a supplement to expert judgment.
Conferring recognition, such as badges, on members of user communities in order to encourage participation is a related, if less cognitively laden, issue. In their work, Samadrita Bhattacharyya, Shankhadeep Banerjee, Indranil Bose, and Atreyi Kankanhalli investigate the longer-term effects of such recognition or multiple recognitions. Relying on reinforcement theory and an extensive quasi-experiment on one of the large business-review sites, the authors present several results regarding the behavior both of recognized members (particularly those recognized several times) and of those worthy of recognition from whom it is withheld. Reading this together with the preceding paper, we gain a more granular understanding of the behavior of individuals co-creating value in online aggregations.
The concluding paper of the issue analyzes the online-to-offline (O2O) business model of e-commerce, under which customers acquired online are directed by intermediary platforms to the appropriate offline stores. The suppliers offer coupons on the platforms, and the customers need to make shopping decisions based on the deals offered and the physical locations of the stores. With very many coupons and many participating suppliers, recommenders are in order. Yuchen Pan and Desheng Wu present a novel approach to recommendations under these circumstances, where there are many alternative customer-supplier dyads and the rating data is sparse. The approach relies on establishing networks of customers who co-use the suppliers, with rating matrices available, and combining these with the physical locations of the businesses. The comparisons with pre-existing recommendation models are favorable.