INTRODUCTION
The recent proliferation of artificial intelligence (AI) has brought about significant changes in business and society, enabling transformative performance gains across various sectors (Belcic & Stryker, 2025). This revolution has been accelerated by popular, commercially successful, and readily available generative AI products, such as ChatGPT, Gemini, and Claude AI, which have democratized access to these powerful tools (McIlveene et al., 2024). Small businesses are no exception to this trend. Recent research indicates that nearly 39% of small businesses in the United States utilize some form of AI for functions such as customer acquisition, marketing, and data analysis (AllBusiness, 2025; Crenshaw, 2024). For many small businesses, AI holds the promise of leveling the playing field against larger competitors (Demirci, 2024). However, a critical paradox is emerging: despite widespread adoption, many small businesses remain stuck in the experimentation phase, with many struggling to implement AI strategically and unlock its full potential (Hilgers & Roback, 2025).
This paradox raises the question of why so many small businesses struggle to implement strategic and effective AI strategies. This research proposes that there is a fundamental misalignment between the capabilities needed to deploy AI strategies and the limited resources available to most small businesses. The resource-based view (RBV) (Barney, 1991) provides a powerful lens for explaining this disparity. A key extension of the RBV concerns how firms respond to changes in their environment to create, or fail to create, competitive advantages (Teece, 2007). Some can marshal the necessary internal and external resources to develop the required capabilities, whereas others cannot. When it comes to AI, many small businesses lack the necessary resources and capabilities, including financial, human, and technical expertise, to fully leverage its promises. For example, small businesses may face financial constraints that restrict them to the free versions of AI tools, lack employees with the technical expertise to evaluate AI outputs, or operate without formal governance structures to guide the ethical and appropriate use of AI. These limitations reflect the broader condition of “resource poverty” that we elaborate on later in this work.
This lack of resources and inability to effectively leverage AI can lead to serious consequences for the organization. Beyond the wasted potential and weakened competitive position, this environment can lead to lapses in judgment, unethical behavior, and harm to the organization’s stakeholders. Stakeholder theory (Freeman, 1984) offers a robust framework for understanding how the firm’s actions and behaviors affect a range of internal and external actors, i.e., stakeholders, and how the firm must balance the interests of its stakeholders to achieve success. McIlveene et al. (2024) examined the potential negative consequences of firms’ failure to consider their ethical obligations to stakeholders regarding AI. These adverse outcomes include loss of employee trust, breaches of customer privacy, and harm to broader societal well-being.
This research explores these challenges through a dual theoretical framework that integrates the resource-based view (RBV) and stakeholder theory. From an RBV perspective, many small businesses struggle to move beyond AI experimentation because resource constraints limit their ability to develop strategic capabilities over time. Stakeholder theory complements this view by highlighting the ethical and social consequences that emerge when AI initiatives are implemented without adequate governance, accountability, or attention to stakeholder interests.
Although a growing body of research examines AI and IT adoption among small and medium-sized enterprises (SMEs), which we refer to as small businesses throughout this paper, much of this work focuses on identifying adoption determinants rather than explaining how firms develop strategic capabilities after adoption. For example, dominant frameworks such as the Technology–Organization–Environment (TOE) framework (Tornatzky & Fleischer, 1990) emphasize technological readiness, organizational characteristics, and environmental pressures that influence adoption decisions. While this literature provides valuable insight into whether and why SMEs adopt AI, it offers limited guidance on how resource-constrained firms build the internal capabilities necessary to leverage AI strategically and responsibly over time.
Addressing this gap, this study shifts attention from adoption decisions to the longer-term process of capability development, integrating RBV and stakeholder theory to explain how “resource poverty” shapes both strategic outcomes and ethical risks during AI implementation in small businesses. In contrast to adoption-oriented frameworks such as TOE and process-focused perspectives such as dynamic capabilities, this dual-lens approach connects internal capability development to exposure to ethical and stakeholder risks. This approach reveals how conditions of “resource poverty” not only constrain strategic capability-building but also heighten the likelihood of ethical lapses and stakeholder harm during AI implementation, an insight that existing frameworks do not fully capture. To this end, we further explore the implications of the RBV and stakeholder theory, present illustrative case studies, and propose a roadmap for the ethical and strategic implementation of AI.
LITERATURE REVIEW
The Resource-Based View and the Challenge of Capability Building
The resource-based view, first articulated by Barney (1991), posits that resources are not distributed equally among firms. Therefore, some firms can utilize their unique resources to create sustained competitive advantages and outperform their peers. Over time, these sustained competitive advantages can substantially shape firm performance. Empirical research (Crook et al., 2008) indicates that approximately 22% of the relationship between resources and performance is attributable to sustained competitive advantages.
Resources, in this context, are tangible and intangible assets that firms use to plan, implement, and execute their strategies (Barney & Arikan, 2005). Resources include physical assets, such as facilities and equipment, as well as human capital, including management expertise, and organizational resources, such as organizational structure (Barney, 1991). However, according to the RBV, for a resource to create and maintain a sustained competitive advantage, it must meet four distinct criteria: the resource must be valuable, rare, inimitable, and non-substitutable (Barney, 1991). Using this framework, it is clear that off-the-shelf AI tools, such as ChatGPT and Gemini, do not, by themselves, create a sustained competitive advantage. Simply possessing an IT tool, no matter how powerful, does not lead to outperformance (Wade & Hulland, 2004).
Therefore, for small businesses, the key point is not the possession of resources but the development of the capabilities to effectively utilize strategic resources, such as AI, in a dynamic environment. One effective approach involves integrating digital twins with generative AI into business processes, enabling firms to enhance operational efficiency, drive innovation, and strengthen data-driven decision-making (Huang et al., 2024). Although the concept of a digital twin is relatively novel within the small business community, it has been discussed in the academic literature for the past two decades. A digital twin is a virtual model or replica of a physical object, system, entity, or process, designed to simulate, optimize, or monitor the performance of its physical counterpart (Grieves & Vickers, 2017; Negri et al., 2017). Integrating digital twins with generative AI offers substantial benefits, including identifying product flaws, reducing costs, predicting sales, and monitoring failures (Rathore et al., 2021).
Successful small businesses can address changes in the external environment, such as those driven by the widespread adoption of AI, by adapting and developing new competitive advantages. These “dynamic capabilities” (Teece, 2007) enable firms to refresh their competencies in alignment with their current realities. In this context, small businesses must recognize that possessing a specific AI tool is less important than developing the dynamic capabilities necessary to identify strategic use cases and effectively train employees.
While this may seem straightforward, small businesses often suffer from “resource poverty” (Welsh & White, 1981). Small businesses frequently lack the financial resources to invest in advanced IT systems, lack qualified IT professionals such as AI specialists, have employees with less sophisticated IT knowledge, and lack an existing, scalable IT infrastructure upon which to build (Kroon et al., 2013; Thong, 1999). Additionally, research (Thong, 1999) has highlighted the significance of a CEO’s personal characteristics in the adoption of IT: more innovative CEOs tend to adopt more innovative IT solutions. However, innovative CEOs of small businesses may also push through IT solutions without proper vetting or the appropriate level of governance to ensure the protection of stakeholders.
Stakeholder Theory as the Ethical Compass for AI Strategy
Stakeholder theory (Freeman, 1984) is a theory of organizational management and ethics that posits that organizations have an obligation to constituencies within the firm’s sphere of influence, beyond shareholders. These other constituencies include various groups, such as employees, customers, and suppliers (Freeman et al., 2001). Stakeholder theory examines how a firm manages its relationships with various constituencies, which can significantly impact its performance (Phillips et al., 2003). Firms that can manage the various demands of their stakeholders benefit from stakeholders’ cooperation with and agreement to the firm’s plans and decisions (Waddock & Smith, 2000). The ethical obligations related to stakeholder theory often center on expectations of good corporate citizenship behavior (Deng et al., 2013).
McIlveene et al. (2024) synthesized stakeholder theory and AI ethics by establishing a framework for applying the theory to the unique ethical challenges inherent in deploying AI solutions. This research examined the ethical implications of AI for key stakeholders of the firm. Firms must consider the impact on employees when deploying AI, as task automation can lead to reductions in force, which not only affects individual employees but also society at large when this occurs on a large scale. Organizations must also consider how they use customer data to ensure it is protected and used responsibly, or risk losing the trust and support of this fundamental stakeholder group. Finally, firms must be cautious not to inadvertently create unintended adverse outcomes with AI, such as using tools that introduce bias into decision-making. By pairing AI with stakeholder theory, firms can utilize a type of “ethical checklist” to ensure the responsible use of these powerful tools.
Synthesizing the resource-based view and stakeholder theory is crucial for understanding how small businesses can effectively and responsibly implement AI. The RBV helps explain why so many small businesses fail or never move beyond initial steps with AI: namely, they lack the internal resources and dynamic capabilities necessary for success. Stakeholder theory examines the firm’s external environment by considering how AI initiatives, once strategically integrated into the organization’s fabric, affect its stakeholders. Joining these two theories helps address a gap in the current literature by explaining how “resource poverty” in many small businesses heightens risks to the firm’s stakeholders during AI adoption. The remainder of this paper aims to bridge this gap by proposing an integrated framework for the responsible adoption of AI by small businesses.
IMPLEMENTATION BARRIERS: CONNECTING INTERNAL RESOURCES TO EXTERNAL RISKS
As previously discussed, “resource poverty” (Kroon et al., 2013; Thong, 1999; Welsh & White, 1981) poses unique and identifiable challenges for small businesses that can hinder the responsible implementation of AI. These resource gaps not only prevent small businesses from successfully and ethically utilizing AI to create sustained competitive advantages, but they also pose ethical risks to the firm’s stakeholders. The following gaps, first identified in the literature review, will be explored below: financial, human, data, and organizational.
Perhaps the most fundamental resource constraint for the successful and ethical implementation of AI is insufficient financial resources, which necessitates the use of free “prosumer” AI tools, such as ChatGPT, Gemini, and DALL-E (Fraraccio, 2025). While this may seem like a logical solution, using the free versions of these programs can be problematic for several reasons. The terms and conditions for use are often long, complex, and vague. Additionally, free programs are susceptible to data breaches and privacy risks. Finally, free tools can expose small businesses to plagiarism and copyright infringement (Neubauer, 2023). This practice creates direct harm for key stakeholders, such as customers, if sensitive data is entered into insecure systems, and for the business’s owner, through the potential erosion of trust, reputational damage, and even legal jeopardy. For example, a small accounting firm might use a free AI tool to analyze its clients’ sensitive and proprietary data. By uploading data that may be absorbed into the AI program’s training set, the firm severely compromises its clients’ privacy and exposes itself to a myriad of adverse outcomes.
The second key resource constraint that small businesses face in their strategic use of AI is gaps in human capital. As noted earlier, Thong (1999) found a lack of specialized IT training and lower general technology knowledge among small businesses. This human capital gap creates several problematic scenarios for customers when employees lack a clear understanding of the limitations and potential shortcomings of AI use. For example, untrained employees using AI for customer solutions may not fully understand the possibility of “hallucinations,” in which AI output is false, misleading, or completely fabricated (Maleki et al., 2024), and may provide clients with potentially detrimental solutions. Consider a local hardware store where an employee uses an AI chatbot to answer a customer’s technical question. The AI “hallucinates” an incorrect and unsafe answer, leading to product misuse and potential harm to the customer. Providing incorrect and unsafe answers harms multiple stakeholder groups: first, the customer, who receives a flawed and potentially harmful solution; second, the employee, who is placed in a position requiring the use of tools for which they have not received adequate training; and third, the firm, which risks losing business and trust and incurring reputational damage.
Quality data is the third resource constraint when adopting AI solutions. Small businesses often have limited, incomplete, or biased datasets due to size and time constraints (Chen, 2024). When this data is fed into the training dataset of an AI large language model (LLM) for analysis, the model “learns” from this faulty data and, in turn, makes faulty conclusions and recommendations. In other words, “garbage in, garbage out.”

The last identified resource gap for small businesses adopting AI is organizational, stemming from reliance on an owner-driven, ad hoc approach to this powerful tool (Thong, 1999). While having an AI champion at the top of the management hierarchy is beneficial, it also creates the risk of implementation without proper governance. In this scenario, AI decisions are made by an individual, such as the owner or CEO, who is likely overburdened and may lack the technical expertise and resources to make informed decisions. For example, a small advertising firm might allow its employees to use free versions of AI programs without an acceptable use policy. This lack of governance fails to manage the technical, human, and financial risks associated with AI use and can lead to serious harm across multiple stakeholder categories: customer data is not adequately protected, employees use tools without proper guidance, and the firm risks catastrophic reputational damage and legal liability.
AN ETHICALLY GROUNDED AI CAPABILITY ROADMAP
Small businesses must carefully and ethically implement AI to remain competitive and protect their stakeholders. A haphazard or “laissez-faire” approach is too risky given the stakes. Therefore, we propose the following roadmap to assist small businesses with this critical task of building and implementing the capabilities needed to develop sustained competitive advantage through AI while also protecting stakeholders.
Our roadmap is based on Nolan’s (1979) seminal work, in which he proposed the Six Stages of IT-Driven Organizational Growth model. Nolan developed this model to help organizations understand and manage the evolution of their technological capabilities. This model was developed during a time of explosive IT spending and growth. His approach was to provide leaders with a framework to identify their current stage in the tech adoption cycle and guide them on how to move forward successfully. Nolan emphasized the need for a mindset shift, encouraging leaders to focus less on the technology itself and more on the strategic management and use of data and information. Ultimately, Nolan’s model acknowledges the necessity of striking a balance between control and experimentation with new technology, thereby ensuring that chaos is contained while fostering innovation and organizational learning.
We have updated Nolan’s Six-Stage Model for AI Organizational Growth. In addition to making the model AI-centric, we have condensed it into a four-stage model to reflect the less complex internal operating environments of small businesses, rather than the large organizations Nolan initially had in mind when he developed the model. These four stages, as illustrated in Figure 1, comprise experimentation, standardization, integration, and transformation.
Stage 1 - Experimentation: During this stage, small businesses can begin exploring artificial intelligence by using low-risk tools to assess potential advantages and gradually adopt these technologies. This minimal-risk and cost-effective approach builds human capital by developing general AI capabilities and knowledge. At this stage, a digital twin process integrated with generative AI can be designed to perform simple tasks that typically require human input. For example, small companies may employ free generative AI tools, such as ChatGPT, to automate essential customer interactions, respond to emails, schedule jobs, or summarize documents. The objective is to evaluate feasibility with minimal financial commitment. Using ChatGPT in this preliminary phase can help simulate and prototype digital interactions, thereby reducing obstacles faced by small businesses (Wang et al., 2024). This stage poses minimal stakeholder risk, provided that only non-sensitive data is used for broad, internal tasks.
Stage 2 - Standardization: After moving beyond the initiation and experimentation stage, organizations are urged to begin standardizing the use of AI. Initial standardization can be achieved by developing and implementing an acceptable use policy that clearly specifies which AI tools may be used, for what purposes, and who is responsible for oversight and enforcement. Small businesses should transition from free tool versions to low-cost “pro” versions and use them for higher-value tasks. Resource capabilities are developed for the organization through formal policies on AI use. Stakeholders are also better protected by implementing this governance framework. For example, small manufacturers can use sensors to collect data and store it in databases. Then, AI models help detect patterns and automate routine analytics, creating a foundation for repeatable and scalable solutions. Additionally, a policy manual for AI processes is developed and shared across the organization, and responsible managers should regularly review and monitor these processes to ensure compliance (Nolan, 1979; Wade & Hulland, 2004).
Stage 3 - Integration: After standardizing AI use and governance, small businesses should strategically integrate their AI models to enable real-time analytics and enhance data-informed decision-making. AI models can interface with dashboards, databases, and digital platforms to convert data from the standardization stage into actionable insights. For example, generative AI tools such as ChatGPT can be used to support digital twin applications for fault prediction, to optimize data streams, or to simulate processes. The integration of digital twins with generative AI significantly enhances forecasting accuracy, operational optimization, and workflow adaptability. This integration enhances AI capabilities for small businesses, enabling them to achieve sustained competitive advantages (Rathore et al., 2021). Additionally, stakeholders should be informed that they are interacting with AI-assisted technologies, and a clear path to speaking with a human employee should be readily available.
Stage 4 - Transformation: This aspirational stage involves strategically leveraging AI to create sustained competitive advantages by developing new products, services, and business models. A salient use case for this stage is the integration of a digital twin with AI, which becomes central to how the business operates, competes, and innovates. Small businesses not only enhance efficiency but also develop new value propositions through continuous learning and adaptive intelligence. This transformational stage represents the culmination of the development of dynamic capabilities across human, organizational, and technical resources to achieve sustained competitive advantage. When small businesses achieve this level of transformation, they begin to create shared value (Porter & Kramer, 2011), where enhancing business success and advancing social good become mutually reinforcing goals. Employees are more productive, customers are better served, and the firm strengthens its community, enabling a virtuous cycle of business and societal prosperity.
Maintenance: Once a competitive advantage has been established through the four-stage method, its long-term sustainability depends on ongoing monitoring, integration of user feedback, and timely adjustments. Urbancova et al. (2024) emphasize the critical role of employee feedback in enhancing both internal performance and external competitiveness. They argue that continuous, bidirectional communication between employees and management is essential for effectively evaluating and refining the use of tools such as artificial intelligence. Given the rapid pace of technological advancement, particularly in AI, this feedback mechanism becomes even more vital in dynamic environments. Therefore, establishing a continuous feedback loop that supports incremental updates informed by insights from end users and customers is essential for maintaining a strategic advantage. For a detailed discussion of feedback integration in organizational software implementation, see Tkalich et al. (2025).
CONCLUSION
The proliferation of artificial intelligence (AI) represents a fundamental shift for society and business. This technology is particularly important for small businesses, as it helps level the playing field and allows them to compete more effectively with larger competitors. While many small businesses have adopted AI, a paradox has emerged in which they struggle to utilize this technology to build sustained, strategic competitive advantages. We explored this phenomenon using a dual-theory framework. First, we used the resource-based view to explain the “resource poverty” that hinders small businesses’ ability to develop the necessary dynamic capabilities. Second, we employed stakeholder theory to highlight the ethical risks and stakeholder harm that can arise when small businesses implement AI without adequate governance.
Our work implies that the adoption of responsible AI among small businesses is a strategic necessity, not merely an ethical consideration. A typical “move fast and break things” attitude and approach to technology adoption are inappropriate for a technology as powerful as AI. We present a four-stage model for AI implementation that enables small businesses to develop the capabilities necessary to create sustained competitive advantages while also considering stakeholder impacts. For academics, this work bridges the gap between two major strategy theories (RBV and stakeholder theory) and applies them to the timely issue of AI and small businesses.
This research has several limitations, including its conceptual nature. The four-stage model we present is theoretical and not empirically tested. Future research should conduct surveys and experiments among small businesses to test the model’s stages. Additionally, in-depth case studies of small businesses implementing AI should be conducted to understand the more profound implications of actual deployments. Finally, additional tools should be developed to support small businesses in implementing effective AI governance and policies.
