ABSTRACT:
It is imperative for the development of the Information Systems (IS) discipline that we put forth a sustained and leading effort to contribute, as a scholarly field, to the fruitful and innovative use of frontier artificial intelligence (AI). This does not at all mean: “Drop everything and do AI.” It does mean, however, that we should carefully consider how generative AI and its successors will change the specific domains of our research. As a scholarly field, we need to develop generalized research capabilities to understand and deploy AI with consideration of higher values: societal well-being, human flourishing, inclusive economic benefits, and long-term containment of the threats that can be foreseen now. We will thus contribute to global well-being, to organizational development, to rising productivity, and to the expansion of human capabilities assisted by AI systems and approaches.
The Janusian view, recognizing both the bright and the dark sides of a major technology, is always appropriate when seeking to understand it. This applies especially to general-purpose technologies, of which AI has become one. In fact, the contrast is particularly stark in the case of AI, whose ultimate goal is matching and perhaps surpassing what distinguishes humans in the universe: intelligence. We can observe, study, and further the dramatic enhancement of organizational capabilities as AI agents and other systems are integrated into the information systems of businesses and other organizations. JMIS has published numerous papers investigating both augmentative AI and systems capable of replacing humans, as well as their effects.
As AI aims at the ultimate goal of artificial general intelligence, with the capacity to move to ever higher levels of intelligent performance, we can justifiably speak of (rejoice at, or fear) the singularity: AI matching our intelligence and bootstrapping itself beyond it. We need to contribute to the containment of the highly significant vulnerabilities and long-term threats presented by ever more powerful AI. Even without focusing on the singularity, we can recognize that autonomous AI systems, with exponentially rising power and mass accessibility, can come to represent existential threats to humanity. A concise statement by 25 leaders of AI research and practice on the directions of risk containment has just been published [1]. It is quite comprehensive, and its compass is the management of extreme long-term risks. It is well for all of us in the IS community to assess where we can help. Yes, it is possible that we are high on the rising slope of the hype curve. I remember this notion applied to the spreading Internet, and then to the emerging Web. The two together went on to change the world. In a longer view, we are approaching the threshold of applied quantum computing. Think futuristically about what an increase of several orders of magnitude in computing speed, combined with AI-based capabilities for autonomous decision-making that rests on real-time learning by new AI models from untold volumes of data, can do for (and to) the world.
Our field is indeed in an excellent position to field cutting-edge research on AI, both as a means to organizational ends and as a tool in the achievement of specific goals, in domains such as biogenetics, healthcare, or cybersecurity, to name only three examples. With our sociotechnical approach to research, we investigate not just the technology, but the larger systems in which it is embedded in its interaction with humans and human communities of various scales. JMIS has published, is publishing, and aims to publish the best papers that investigate the roles, the capabilities, and the design of AI thus embedded. As one example, our Guest Editors are working on a special issue devoted to generative AI in platformization; numerous papers are in the review pipeline.
The two papers opening the present issue of JMIS address the problematic of trust in AI. The first of them, authored by Hanzhuo (Vivian) Ma, Wei (Wayne) Huang, and Alan R. Dennis, investigates the role that AI can play in combating fake news on social media. As these free-for-all media have become the go-to source of news for large segments of the public, fake news propagated there has become a societal problem. Indeed, deceptive “news” can be manipulated for advertising gain or to disrupt the social order. Recommendations of news items to the readers deemed appropriate for them are very frequently made by AI. Should potential readers be informed about this recommendation source? Would such disclosure reduce the impact of fake news? That is the intent of such disclosure, which is being considered for legislation or regulation in several countries. The authors empirically compare the effects of AI recommendations labelled as such with those of unlabelled recommendations and of recommendations attributed to a human source. The empirics surface unintended consequences of the AI labelling of news items (both true and false); in fact, some of the effects run counter to the intentions. The authors offer theoretical explanations for the effects their empirics surface. Our mission to understand in order to enhance societal well-being is well served by this research.
In working with AI systems (or agents, if you will), the issue of trust arises with respect to both the system’s creator (such as the software producer) and the system itself. Does trust in the creator firm automatically transfer to trust in its creation? The question is pivotal for the acceptance of an AI system and for the producer’s success. Here, Kambiz Saffarizadeh, Mark Keil, and Likoebe Maruping investigate the matter empirically, basing themselves on the theory of trust transference. The authors show that such trust transfer is not to be taken for granted. Their research shows how it hinges on the alignment between the two sides of trust (in the producer versus in the system) and, in turn, on steerability by the third component of the producer-system-user triad, that is, the user. Steerability is, in essence, the user’s ability to exert control over the goals and values reflected in the AI system, unless the system is autonomous, a possibility included in the empirics and the theorization of the work.
The authors of the next paper pursue the value-creation aspect of AI systems, as faced by managers who are expected to invest in this expensive yet still budding and evolving technology, and to do so rather fast, to keep competitive parity or to create competitive advantage for their firms. Prioritization is of the essence. Magno Queiroz, Abhijit Anand, and Aaron Baird develop a typology of AI investments as related to managerial needs and preferences with respect to the degree of agency delegation and the timing of the potential investment. The theory-building paper is a notable contribution not only to our understanding of AI investments, but also to information technology (IT)-based value creation in general.
Cybersecurity is always on the minds of IT managers as well as general managers up to the C-suite level. It is also within the purview of corporate boards. Two papers deploy different methodologies to showcase what our field can offer to this critical issue. In the setting of cloud computing, which has become a preeminent mode of provisioning, Steven Ullman, Sagar Samtani, Hongyi Zhu, Ben Lazarine, Hsinchun Chen, and Jay F. Nunamaker, Jr. deploy representational deep learning to prioritize and manage cloud vulnerabilities to attacks. These days, we largely compute in public (i.e., shared) clouds. User firms access cloud resources through virtual machines, software environments that aim to isolate them from other clients of the cloud system while offering flexibility in resource use, which is a top advantage of the cloud approach to computing. Users often install untrusted software on their virtual machines and misconfigure their subsystems. The authors have developed a methodology and an artifact that attack the problem by using unsupervised deep learning to identify similarly vulnerable cloud resources and cluster them for remediation. A case study validates the approach. This exemplar of design research showcases our capability to use AI approaches to contribute to the solution of a salient class of problems.
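For readers who wish to picture the general pattern at work here, the following is a minimal illustrative sketch, not the authors’ artifact: it assumes synthetic per-resource vulnerability features, a small PyTorch autoencoder as the unsupervised representation learner, and scikit-learn’s KMeans for grouping similarly vulnerable resources. All names, dimensions, and parameters are hypothetical.

```python
# Illustrative sketch only: learn an unsupervised representation of "cloud
# resources" and cluster similar ones for joint remediation. The data below
# are synthetic stand-ins for per-resource vulnerability scan features.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Synthetic data: 500 resources, each described by 32 scan-derived features.
X = torch.rand(500, 32)

# A small autoencoder learns a compact representation of each resource.
encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 32))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    z = encoder(X)                 # latent representation of each resource
    loss = loss_fn(decoder(z), X)  # reconstruction objective (unsupervised)
    loss.backward()
    optimizer.step()

# Group resources by their learned representations; resources in the same
# cluster have similar vulnerability profiles and could be remediated together.
with torch.no_grad():
    embeddings = encoder(X).numpy()
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(embeddings)
print("resources per cluster:", [int((labels == k).sum()) for k in range(5)])
```

In the authors’ setting, the representation learner, features, and clustering would of course be tailored to real vulnerability scan data rather than the stand-ins used here.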
The next paper, by Michel Benaroch, offers a different kind of contribution to cybersecurity. While companies increasingly recognize the importance of cybersecurity and fund its pursuit, it is not a profit center. The potential deleterious effects of security gaps need to be made explicit. This work provides the link between cyber failures and the reputation a company enjoys (or not) for its general IT capability. Starting by theorizing the adverse effects of IT failures on the firm’s IT capability reputation and market value, Benaroch empirically compares these in the ex-ante (i.e., before a failure) and ex-post (after the failure) situations. This research shows how damaging cyber incidents are to a firm’s market value. The work also contributes to the capability-based view of the firm and can help place a monetary value on the control processes and resources required to prevent cyber failures.
It is often said that we (actually, those of us in the more developed countries) live in the experience economy, where consumers favor experiences over things. Some experiences may be delivered in a digitized form; others have a hybrid digital-physical form (think wellness activities or, perhaps better, wine tasting). In general, some parts of an experience can be delivered to consumers digitally, while others need to be sensed physically. In their paper, Johanna Lorenz, Leona Chandra Kruse, and Jan Recker develop a theory of value creation and capture through mixed physical-digital experiences. The theoretical model is induced from several cases. This path-breaking work will no doubt be built on in future research directed at the progressing digitization of experiential products.
As I mentioned above, cybersecurity is within the purview of corporate boards; indeed, boards and their individual members are now legally responsible for the security of the firm’s IS operations. Naturally, as a major corporate asset and enabler of corporate capabilities, corporate IS have long been within the boards’ oversight responsibilities. Moreover, continuing IT innovation has to progress apace to maintain or enhance a firm’s competitive position. And yet, many boards are at sea in their ability to provide such oversight. What to do? In the next paper, Xiaowei Liu, Alain Pinsonneault, Wen Guang Qu, and John Qi Dong supply one answer and research its ramifications. The researchers show that the overlap, or interlock, between the board of a user firm and those of its IT suppliers furthers IT knowledge transfer to the user firm at an appropriate level of consideration. It may be added that, by interlocking its directorate with that of a user firm, an IT supplier learns as well from major use cases and maintains a valuable relationship with a client.
We all use tethered durable goods on a daily basis, even if we do not call them that. These cars, phones, PCs, or tractors (well, not all of us use those, but they work with extreme precision) are tethered remotely to their supplier through the software that runs them. The tethering manifests itself in software updates by the vendor that keep modifying the functionality, performance, or compatibility of the product throughout its lifetime. Strategically, the tethering also enables the vendor to realize its policy with respect to the product line: the vendor may degrade the performance of the current line to encourage customers to switch to the new one or, as the ultimate in such encouragement, disable the current line with a software update that is not backward compatible. In the next paper of the issue, Ramesh Shankar offers a formal analysis of the vendor’s decision-making when releasing a new version of a line of tethered goods or when making major updates. Several options emerge that the vendor can pursue at such a decision point, and Shankar’s model (here, in a monopolist version) surfaces policy recommendations for the updating vendor. As ever more goods become tethered to deliver their functionality, the model is of both theoretical and practical value.
Crowdfunding has been gaining prominence as a potent fund-raising tool, adapted to many contexts and serving a variety of objectives. Engaging donors is, of course, the key to the success of crowdfunding campaigns. The next paper, authored by Theophanis C. Stratopoulos and Hua (Jonathan) Ye, investigates, based on an extensive dataset, the effects of leaderboards, considered a gamification feature, on donor engagement. The context here is medical campaigns, which help individuals offset medical expenses. The societal worth of such campaigns is beyond question, as they promote equity, offer real help to individuals in distress, and build social solidarity. The campaigns’ success hinges on attracting a large number of individuals, some of whom contribute smaller amounts, and particularly on attracting individuals willing to contribute substantially. The design of the crowdfunding site should encourage both segments to contribute. By investigating a large number of campaigns on a popular site, the authors surface the nuanced effects of leaderboards: they can actuate an undesirable crowding-out effect while being generally effective in raising funds.
Maturing e-commerce attracts both customers and sellers. The competition among vendors is keen and gaining intensity. To attract customers and introduce exit barriers, leading vendors use technology to provide a customized shopping experience. This may be based on the data collected about the customers (“Here are the recent offerings to your taste in your size”), virtual reality (“Here you are in this garment complementing your recent purchase”), and other technological means. The concluding paper of the issue presents the results of research into the effects of such customization. Hongseok Jang, Kyung Sung Jung, and Young Kwark study the effects of such IT-based tailoring on two types of consumers: the experienced ones and the relative novices. The authors’ game-theoretic model surfaces insights that will help both the brands and the e-retailers select the best posture to address their heterogeneous customers.