INTRODUCTION

According to Peter Drucker, “Every few hundred years in Western history there occurs a sharp transformation. … Within a few short decades, society rearranges itself—its worldview; its basic values; its social and political structure; its arts; its key institutions. … We are currently living through such a transformation. It is creating the (post-industrial) society…” (Drucker, 1993, p. 1). Consistent with this observation, the business literature of the 1990s hailed the coming of larger and more effective companies in a post-Cold War world. Kuttner, for example, wrote that “the company is no longer a physical entity with a stable mission or location, but a shifting set of temporary relationships” (1997, p. 1). Benjamin Barber noted, “Those who look forward prophesize commercial and technological interdependence—a virtual paradise made possible by spreading markets and global technology—and they proclaim everything is or soon will be different” (1996, p. 3).

Indeed, Kuttner and Barber’s predictions were accurate. The economic social contract involving reciprocity between management and labor changed in the 1990s and 2000s. The traditional economic contract assumed cost-of-living adjustments, collectively bargained job-related benefits, seniority-based promotions and layoffs, negotiated conditions of work, grievance and arbitration procedures, on-site union representation, and input on some workplace issues (Bluestone & Bluestone, 1992). The restructuring of companies at the end of the 20th century resulted in a modified economic social contract consisting of short-term appointments, vast wage dispersion and uncertainty, and fewer benefits. As companies removed traditional sources of job security and economic protection, many workers believed management was treating them merely as means to corporate success rather than as valued members of the company (Bluestone & Bluestone, 1992; Gordon, 1996; Harrison, 1994).

As companies responded to the hypercompetitiveness of a global marketplace, the traditional nature of work relationships changed as well. Companies increasingly delayered their organizations and forged more permeable boundaries by turning to contingency work forces, downsizing, and outsourcing. In fact, corporations rapidly separated their workforce into core and buffer workers, leading to a dualism in management and labor similar to that of the early Industrial era. The traditional economic social contract was based on shared purpose and reciprocal commitment, but it was difficult to keep intact when workers felt disposable and temporary.

According to Max Weber (1910/1978), social and technological conditions fundamentally determine the forms of organization that emerge in a society. Bureaucracy was ideally suited for the stable markets of the mass production era, but social and technological factors in the socioeconomic landscape rapidly changed in the post-Industrial era. As Lawrence and Lorsch (1967) observed, organizations adapt to their environment. The emergence of a global economy, a decline in union power, and an emphasis on short-term stock market incentives transformed the nature of jobs. Manufacturing companies, in particular, rapidly evolved in the post-Industrial era. Companies focused on flexibility and efficiency to maintain competitiveness in an age of increased globalization. In order to remain employable under these circumstances, employees began to focus on increasing their contributions to the company by developing new skills, upgrading existing ones, and working longer hours. One consequence of the modified economic social contract was that employees were required to be more entrepreneurial in managing their careers. Lifetime careers at one company have become increasingly rare, so the workforce tends to view its labor as a commodity to be bargained over and ultimately traded for the best overall offer. As a result, a worker’s tenure at any one company is often much shorter. As predicted, the labor force adjusted with a new ethic and behaviors to function in the new societal structure.

Information technology and knowledge work also became critical competencies for retaining relevancy and innovativeness in the post-Industrial workplace. The conventional view of work in the twentieth century centered on manual labor and a chain-of-command approach to management; however, the very nature of business changed to rely more on the rapid processing of information (Bell, 1973; Davidow & Malone, 1992; Drucker, 1993; Marcuse, 1955; Micklethwait & Wooldridge, 1996). The boundaries of firms blurred as employees and stakeholders interacted with fewer constraints and shared more information with one another. Companies used information acquired through new information and telecommunication technologies to gain instant access to the needs of each of their external stakeholders. Knowledge work became the key production input.

It appears another technological transformation is happening in the world. Social media platforms, artificial intelligence, big data, and robotics are increasingly finding their way into organizations, markets, and societies. While people adapted to changing labor agreements in previous eras, this technological movement may be different. Labor may not have the opportunity to shift to using the technology as in the past, but rather may be replaced by the technology. Furthermore, intelligent technology may be able to manage itself, requiring less human involvement. That leads to a question that concerns many experts following this development: what happens to the workforce displaced by this technology? It is a question that many leading technology innovators must address due to governmental and societal responses to this shift. This workforce issue is just one of many driven by rapid technological and social change that new entrepreneurial companies will have to either react to or plan for.

Planning for, reacting to, or otherwise ‘dealing with’ these issues is imperative if a company, especially a new company, wants to gain and keep the public’s trust. We offer the contents of this manuscript as a potential method for mitigating such issues and aligning innovative startups with the public good. To this end, we first discuss disruptive innovation and social responsibility. Next, we present a four-step process for identifying and mitigating externalities likely to be brought about by entrepreneurs’ innovations. Entrepreneurs[1] may especially need to consider society’s reaction to their response, or lack thereof, to the social externalities resulting from their products, services, and business models. Maintaining legitimacy as a positive force in society is important to mitigating societal backlash and legislative responses that may restrict future company decisions. We suggest that entrepreneurs introducing status quo-breaking innovations use this process to craft new social contracts that address external concerns with well-reasoned philosophies and actions. More specifically, we outline the steps that can be taken to address issues and stakeholders in constructing a social contract. We continue with a current example of a company that would have benefitted from having an entrepreneurial social contract in place. Next, we provide an example of a new company with a disruptive technology that has not yet had a serious problem. We then conclude with considerations that entrepreneurs who are causing many of these disruptions must address in the near future.

ENTREPRENEURIAL INNOVATION AND SOCIAL RESPONSIBILITY

The social entrepreneurship literature devotes considerable attention to the relationship between entrepreneurs, organizations, and society. Social entrepreneurship is a process or approach used by entrepreneurs and start-up organizations to address economic, cultural, and social problems that are inadequately addressed or mitigated by government or the market. But what about entrepreneurship that creates new problems for society? Specifically, what about entrepreneurs’ innovations that rapidly change the social fabric of their communities? Do these entrepreneurs have a responsibility toward others because of their role in bringing quick and massive change?

Corporate social responsibility (CSR) is a topic that has been debated for more than 70 years in the management literature. CSR, broadly defined, is the “expectation that corporations will act responsibly in the conduct of their operations” (Bondy et al., 2012, p. 281). However, it is a concept that focuses on managing the ethical duties and decisions of established organizations (Blundel et al., 2010; Hannafey, 2003). As a result, topics such as corporate reputation and corporate social performance receive attention in the business ethics literature.

When applied in the entrepreneurship literature, business ethics topics focus on small business practices (Spence, 2016; Wickert et al., 2016), small business owners’ values (Burton & Goldsby, 2009), triple bottom line strategies (Elkington, 1997), entrepreneurial strategy (Marom et al., 2019), small business social responsibility (Berk, 2017), corporate entrepreneurship ethics (Kuratko & Goldsby, 2004), social entrepreneurship (Dees, 1998), social proactiveness (Goldsby et al., 2018), family influences in small business (Peake et al., 2015), and decision-making styles (Soltwisch et al., 2022). Among these research areas, the work of Spence so far best addresses the ethical concerns of small businesses by applying feminist ethics to the valued connections within smaller enterprises. Combining the ethic of care (Gilligan, 1982) with the work of Freeman and Liedtka (1991) and Liedtka (1996), Spence (with Moore, 2006; 2016) emphasizes the deep responsibilities and relationships small business owners have in meeting the needs of their stakeholders, which executives of larger enterprises tend to manage from more of a distance. These frameworks are quite useful for managing a small business’s relationships with primary stakeholders such as employees and customers with care. The CSR literature, in particular Freeman’s (1984) stakeholder theory and Carroll’s (1979) CSR pyramid (consisting of economic, legal, ethical, and philanthropic responsibilities), does provide general frameworks for guiding the decisions and ethical actions of organizations. In fact, in this paper we apply Freeman’s work in our process. Notice, however, that it appears in Step 3, as part of a wider model that addresses the complex nature of the political risks entrepreneurs face when challenging the status quo. We contend that the CSR and entrepreneurship literatures, including Spence, do not offer more specific practices for the nascent organization or early-stage entrepreneurial startup pursuing disruptive innovations.

Unlike established corporations, entrepreneurial companies work in a state of perpetual (but temporary) uncertainty in order to make new ideas possible. In fact, it may take many years before the vision entrepreneurs are pursuing becomes a reality (Alerasoul et al., 2022; Bokhari, 2022). Sometimes a nascent company spends many months just determining its products, services, and business model. As a result of this ongoing uncertainty, entrepreneurs often do not consider the long-term effects of their innovative ideas. Such consideration is often anathema in the startup world. Consider that Facebook’s internal motto until 2014 was ‘move fast and break things’ (Business Insider, 2021). The logic behind this thought process is straightforward. Most of what these innovative startups do has not been done before. Most of it is likely to fail. Determining the second- and third-order effects of an experiment that has never been tried is extremely unlikely to succeed without running the experiment. Besides, expending resources on the long-term consequences of something that is very unlikely to succeed is wasteful.

However, the magnitude of the effects these entrepreneurial innovations are capable of producing begs for careful consideration. EIR, as we propose and introduce it, is an approach that brings ethics more squarely into the consideration of what entrepreneurs may face as their innovations change the society they are embedded in. Contrast this approach with Spence’s concept of Small Business Social Responsibility, which is developed from an ethic of care. As she explains (Spence, 2016, p. 15), small businesses that are socially responsible are “… guided by sympathy, empathy, sensitivity and partiality meeting the needs of others for whom we take responsibility. Power encompassed in caring relationships in the public and private spheres of the organization.” We propose that EIR complements Spence’s concept of Small Business Social Responsibility by anticipating whose needs and concerns the entrepreneur may one day have to address. Once relationships are established and mostly stable, Small Business Social Responsibility, as well as CSR frameworks, might come into play more. As we will demonstrate in our model, entrepreneurs who bring disruptive innovation face heightened political risks that may prevent them from reaching that more stable and secure state. Disruptive entrepreneurs must be accepted by society in order to one day become larger corporations, but while they are still small businesses, greater societal forces can derail that progress. We offer EIR as a process to assist entrepreneurs in overcoming the hurdles they face in reaching societal acceptance.

We suspect few would be threatened by small business owners who are “… guided by sympathy, empathy, sensitivity and partiality meeting the needs of others” (Ibid), but disruptive entrepreneurs may be different. Society sometimes celebrates these disruptors as mavericks, and other times, in the words of Steve Jobs, will “…disagree with them …or vilify them.” This, however, has always been the case in market economies. Joseph Schumpeter and the Austrian economists trumpeted the greater benefits that “creative destruction” brings to society. Each economic era in modern times is an improvement over the previous one once adjustments are made in the labor market and community landscape. But is this age different, and is there even more need for our proposed concept of EIR? In past eras, labor moved from agriculture to industry fairly easily. The work was manual, and workers were easy to train. Today’s economic transition is more complicated. Most work requires extensive training and/or specialized education. As a result, many laborers cannot simply move to where their physical labor is in demand. Similarly, because of the technological infrastructure necessary for much modern work, many rural communities feel “stuck in the middle” or “left behind.” Thus, the existing disruptions have had a deleterious effect on both person and place. And as the rate of change continues to accelerate, these problems can reasonably be expected to grow. For instance, artificial intelligence and automation will continue to replace traditional labor at an increasing pace. Those being displaced by automation and AI are the least likely to be able to retrain into a more challenging career. As a result, some policy leaders are considering new support programs, such as Universal Basic Income, to provide for workers who have given up on and opted out of the traditional system.

The unintended consequences of disruptive innovation have traditionally not been seen as the responsibility of business; rather, it has been up to citizens (often with the assistance of government) to make needed market adjustments. History has shown, though, that when the public turns its attention to the pressing issues of the day, government will usually respond with new legislation, programs, and agencies. Civil rights legislation, the Taft-Hartley Act on labor relations, and the New Deal are just a few monumental examples of government intervention on matters of public concern. Business leaders such as Steve Case recognize that new social issues are emerging that companies may be forced to address. Case (2017), for example, has observed that while people in middle America are consumers of disruptive technology, they are not producers of it. Case, along with other disruptive entrepreneurs like Elon Musk and Mark Zuckerberg, has called for companies to consider the future impact of these technologies on society.

The rationale for the positions of Case, Musk, and Zuckerberg rests on businesses performing their ethical duty as problem solvers with substantial resources and experience to tackle societal issues. Another line of reasoning for such action may be that when government gets involved in heated issues, it first serves its constituencies and itself before considering how its solutions benefit the greater society (Buchanan & Tullock, 1965). Business often pays the price with increased regulation and oversight (Olson, 1965; Shaw, 2002). With that in mind, it is often better that business adopt a proactive and responsive position, taking a leadership role on issues rather than responding to legislation placed on it (Carroll, 1979; Carroll & Hoy, 1984; Wartick & Cochran, 1985). Regardless of the rationale for such actions, we offer a process in the next section that entrepreneurs can utilize to establish this leadership role.

A FOUR-STEP PROCESS FOR MITIGATING THE POTENTIAL RISKS OF ENTREPRENEURIAL INNOVATION

Basu and Palazzo (2008) observe that while the ethics literature has extensive coverage of CSR activities, there is a shortage of articles covering how decisions regarding social issues are made. Specifically, repeatable and generalizable processes that can assist decision-makers with handling situations that may arise can improve CSR performance (Basu & Palazzo, 2008; Brickson, 2007; Campbell, 2006). In this section we offer a four-step process entrepreneurs can apply to address social issues that may arise as an innovative product or service disrupts established institutions in society (see Figure 1). Four steps are utilized for two reasons: 1) the process steps an entrepreneur through political risk identification, understanding of that risk, deeper reflection on who may be the source of that risk, and the crafting of an economic social contract as an outcome that addresses the preceding analysis and offers a solution for parties concerned about the potential negative effects of an innovation; in this sense, the approach embodies the essence of problem-solving processes that vary from three to eight steps, which begin with problem identification, proceed to fact finding on the chosen problem, and end with solution creation, sometimes with these areas broken into further steps (for example, Basadur, 1994; Osborn, 1953); and 2) four steps provide a repeatable process that is simple and can be easily memorized.

In addition to the direct effects of a disruption, we contend that such innovations run the risk of detrimental societal and governmental reactions to the disturbed institutions. Without a well-informed and reasonable response by the entrepreneur, the company’s image as a positive member of society may be called into question. Further, its legitimate right to engage in commerce, and even its continued existence, might be challenged. The unintended consequences of a company’s innovations will garner a response of some kind. Regulatory responses, social media movements, traditional media criticism, and NGO mobilization can rapidly emerge and interact to create a public firestorm that jeopardizes the company’s future. It is our contention that utilizing the following process can mitigate those risks by generating an economic social contract that demonstrates the entrepreneur’s recognition of any foreseen societal costs of the innovation. The entrepreneur’s solution to the problem will demonstrate their dedication to ensuring and enhancing the general welfare of the public. Further, engaging in this process proactively will engender confidence in the entrepreneur’s willingness and ability to mitigate additional unforeseen externalities. To reach this outcome, an entrepreneur must first understand what is at risk if political and societal backlash occur. Therefore, Step 1 of our EIR process instructs entrepreneurs to recognize and understand the political risks they may face as they grow the presence of the disruptive innovation in their communities.

FIGURE 1
A FOUR-STEP PROCESS FOR POLITICAL RISK MITIGATION

Step 1: Identifying Political Risks of an Entrepreneurial Innovation

In this step, we focus on the political risk that disruptors may face as they grow their business. Political risk has been defined as “political or societal events which have an impact on business decisions” (Sethi & Luther, 1986, p. 58), often including governmental interference with business operations (Miller, 1992). Political risks are therefore different from the more commonly considered competitive risks of business. As in other research on risk, the concept is often used interchangeably with the term uncertainty. However, while the concepts are related, there are subtle differences between the two. Uncertainty is a state of mind in which a person does not have a good idea of what they are facing. Risk, on the other hand, occurs when a person places a probability on the possible outcomes of a decision or situation. Therefore, in order to mitigate risk, the disruptor needs to be aware of the threats and dangers they face.

In 2011, Scherer and Palazzo challenged future researchers to more deeply consider the public role of companies. Under the conditions of rapidly developing technologies and growing companies, society assesses whether the disruptive innovation brings consequences that are, on the whole, beneficial to society. They offer the following questions to guide that investigation:

“If social connectedness creates the responsibility for corporations to reduce social and environmental harm, where do these demands end? How can we define whether or not a corporation should deal with an issue? While in the late 1980s the discussion started with the working conditions at the direct suppliers, corporations are now asked to assume responsibility for the whole process of value production” (Scherer & Palazzo, 2011, p. 919).

In other words, in an increasingly socially connected world where company actions and consequences can be monitored and discussed in an infinite number of ways that did not exist a generation ago, corporations can potentially be held responsible for anything related to their activity. Thus far, the entrepreneurship literature has considered the legitimacy process of innovative entrepreneurs from the logics of attracting various types of investors (Fisher et al., 2017), but not legitimacy in the eyes of societal and governmental entities. The following steps offer one way of gaining legitimacy and mitigating the risks found in the arenas of society and politics.

Political risk has often been associated with political instability, i.e., macro-risks characterized by the institutions of government and factors such as trade regulation, government civility, and government policy instability (Miller, 1992). Much of the research literature on political risk as a macro-risk comes out of work on global trade and investment, environmental policy (such as the Kyoto Protocol), and science (such as stem cells, green technology, and biotechnology). However, with political attention turning more to individual companies, political risk as a micro-risk focused on a particular company or industry is becoming more of an issue. The research literature has not addressed this emerging phenomenon, but practitioners regularly face it. One such policy expert and scholar is former U.S. Secretary of State and Stanford business professor Condoleezza Rice. She calls this form of risk “21st-century political risk, where the political actions of small groups, or even lone individuals, supercharged by connective technologies can drastically impact businesses of all kinds… (It is) the probability that a political action could swiftly affect a company’s business” (Rice & Zegart, 2018, p. 2). The key, she says, is to reduce blind spots by developing mechanisms for continuous learning. She continues, “Risk always has two components: the likelihood that an event will transpire and the expected impact if it does” (Rice & Zegart, 2018, p. 95).

Rice’s advice for reducing exposure to political risk is to know which assets are most valuable to the company and which are most vulnerable. She gives FedEx as an example:

“At FedEx, asset value and vulnerability are clear: on-time delivery is the company’s holy grail. It is the most important part of FedEx’s value proposition and also the most vulnerable to man-made and natural events. FedEx has been innovating ways to reduce the risk of delivering delays for nearly half a century” (Rice & Zegart, 2018, p. 192).

FedEx’s key innovations, then, have been in reducing delivery delays, which could cause bad PR and media issues if they became more common. To reduce the chance of delays, FedEx keeps empty planes flying in the air to pick up and deliver packages as needed around the country. Therefore, Step 1 of our process requires disruptors to consider their value proposition and the key assets that ensure its delivery. An entrepreneur can accomplish this by asking three questions: What assets are most valuable to my organization? What assets are most vulnerable to political risk? And where do high value and high vulnerability cluster? These questions should help entrepreneurs identify their top priorities for mitigating political risk.
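To make the third question concrete, the following minimal sketch (in Python) treats the three questions as a simple scoring exercise. The asset names and the 1-to-5 scores are hypothetical illustrations rather than data from Rice and Zegart or FedEx; in practice, the management team would supply these judgments.

    # A minimal, hypothetical sketch of the Step 1 questions as a scoring exercise.
    # Asset names and 1-5 scores are invented for illustration; the management
    # team would supply these judgments in practice.
    assets = {
        # asset name: (value to the organization, vulnerability to political risk)
        "on-time delivery": (5, 4),
        "brand reputation": (4, 5),
        "proprietary routing software": (4, 2),
    }

    # Assets where high value and high vulnerability cluster rise to the top.
    priorities = sorted(assets.items(), key=lambda item: item[1][0] * item[1][1], reverse=True)

    for name, (value, vulnerability) in priorities:
        print(f"{name}: value={value}, vulnerability={vulnerability}, priority={value * vulnerability}")

Ranking assets by the product of value and vulnerability is only one plausible way to operationalize where high value and high vulnerability cluster; any comparable prioritization would serve the same purpose.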

Step 2: Understanding the Political Risks of an Entrepreneurial Innovation

The beliefs an entrepreneur holds about a possible opportunity strongly influence entrepreneurial behavior (McMullen & Shepherd, 2006). Shepherd, McMullen, and Jennings (2007) explain that models of entrepreneurial action “have highlighted the importance of escaping ignorance and overcoming doubt to form opportunity beliefs” (p. 77). They further advise that entrepreneurs should direct attention toward the most important issues and aspects of the environment in which they intend to operate. As they explain:

“Belief formation is critical in modern entrepreneurship theory because it helps the entrepreneur to escape ignorance and overcome doubt. Ignorance refers to a lack of information or a lack of awareness that opportunity exists within the environment. For instance, lack of information may be attributable to temporal or spatial distance between the environment in which the opportunity exists and the environment in which the entrepreneur operates” (Shepherd et al., 2007, p. 77).

Weick (1993) explains the process of establishing plausible meaning of events as sensemaking, which is particularly helpful in times of crisis.

In Step 1, we emphasized the importance of identifying the political risks an entrepreneur may face when developing a disruptive innovation. Once a political risk is identified, the entrepreneur must now better understand—or make sense of—what they are facing. We suggest that, to better understand the political risk they face, entrepreneurs use “fact finding” practices from complex problem solving to alleviate ignorance about a particular issue. Basadur and Goldsby (2016) provide six questions that, when asked, can alleviate much of the ignorance an entrepreneur may face in such “fuzzy situations.” We have adapted these questions to political risks as follows:

  1. What do we know or think we know about this political risk?

  2. What do we not know about this political risk, but we would like to find out?

  3. Why is this political risk a problem for us?

  4. If this political risk were resolved, what would we have that we don’t have now?

  5. What have we or others already thought of or tried in regard to this type of political risk?

  6. What are we assuming about this political risk that may or may not be true? In other words, are we making any assumptions that might be misleading us?

Following the Basadur and Goldsby (2016) methodology, an entrepreneur would gather a group of about seven to ten people who might have deep knowledge about this political risk, including the key members of the management team, employees closest to the issue, and outside experts or partners who can provide insight into the issue. A meeting facilitator who can ask the questions and document the answers would also be helpful to a productive session. Participants would be instructed that all answers are good and not to question what others in the group offer. The facilitator writes down every answer given to them onto large sheets of paper. All participants are then instructed to select a handful of the answers that they think are most pertinent to the situation the company is facing. After everyone has selected their answers, each statement that was marked is now discussed by the person(s) who chose it. The facilitator accomplishes this step by asking, “Who chose this? And why did you do so?” Then each person who chose the statement has the opportunity to explain why they believe it is important. Basadur and Goldsby (2016) refer to this process as “clarification.” By using a facilitator and allowing each person to mark and explain their choice, everyone in the group is heard, without concern for rank, personality styles, or other factors that often derail open discussions on stressful issues. Once all the chosen answers have been explained, the management team can now select the answers they wish to consider further.

Basadur and Goldsby (2016) advise teams to turn these converged facts into what they call “challenge statements,” which are captured in the form of “How might we…?” For example, consider a political risk where clients’ data holds personal information. If a key fact was, “We hold private customer information that could be a target for hackers,” a challenge statement could read, “How might we make our company less of a target for hackers?” You can now place your attention on this more specific matter—rather than the more general issue of customer personal information—and search for ways to discourage hacking. A good first step in answering this challenge statement is to ask yourself, “Who else is struggling or has struggled with a similar problem, and what can we learn from them?” Then, visit with that person or group and ask what they would do in your situation. Ask more than one group if time allows. Von Hippel, Franke, and Prugl (2006, 2009) believe you can get an answer to almost any problem by continually asking others, “Who do you know that might have information that would be helpful to me on this problem?” Asking this open-ended question repeatedly can quickly give the entrepreneur many options to consider in addressing a political risk. Top management teams that are proactive in gathering intelligence and finding solutions to problems they may have overlooked (and, better yet, that include others in the process) will better prepare the company for difficult issues that may arise as an innovation moves into the marketplace. Now that the entrepreneur has a better understanding of the political risks they face, they are prepared to engage in deep and constructive dialogue with concerned stakeholders over the challenges of the disruptive innovation.

Step 3: Stakeholder Management of Political Risks

Freeman (1984) defines a stakeholder group as any group that affects or is affected by the decisions managers make in achieving their goals. Thus, stakeholder management is the manner in which an organization manages and relates to its diverse group of stakeholders. It is important that whichever stakeholders are considered relevant to the issues be included in the dialogue; otherwise, such parties may become very vocal critics regardless of the actions taken by a company (Burke & Logsdon, 1996; Clark, 2000; O’Riordan & Fairbrass, 2008; Weiss, 2014). An entrepreneur does not have to choose the actions preferred by the relevant stakeholders, but it is important to engage and listen closely to different perspectives. As Pedersen (2006) explains, “If everybody had the same perception, it would be easy to choose the right action. However, this is not always the case, and the company is often faced with multiple and not necessarily compatible interests—not only between different stakeholder groups, but also between stakeholders from the same group” (pp. 148-149).

A key challenge in stakeholder research is determining how much priority managers should give to different groups. Clarkson (1988) recognized that different stakeholder groups occupy different levels of importance to different firms. Primary stakeholders are those “without whose continuing participation the corporation cannot survive” and secondary stakeholders are those “who influence or affect, or are influenced or affected by, the corporation” (Clarkson, 1988, p. 259). In the relevant literature, this is often referred to as stakeholder salience; i.e., how salient is a specific group to the organization’s decisions? It is unlikely that an organization would give each stakeholder equal attention, so researchers have sought methodological approaches to capture stakeholder salience. Kuratko, Hornsby, and Goldsby (2007) found that, in the context of entrepreneurial activity, the firm’s primary stakeholders are its stockholders/investors, employees, and customers. However, EIR would suggest that entrepreneurs also consider the input of traditionally secondary stakeholders, such as academia (via co-R&D, exchange programs, and local networking), governmental organizations (via surveys, project development, and ad hoc involvement), communities (via local networking and social investment activities focused on education and health), regulatory bodies (via local networking, participation in advisory bodies, and cooperation in community projects), and media (via local networking and surveys) (Maon et al., 2009). At a minimum, if an organization makes decisions that may not be preferred by external parties, but those decisions come after thorough engagement with stakeholders, the impact of external responses on the company can be lessened (Fooks et al., 2013).
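As a simple organizational aid, the stakeholder groups and engagement mechanisms named above can be kept in a structure such as the following minimal Python sketch. The mapping merely restates the lists drawn from Kuratko et al. (2007) and Maon et al. (2009); any salience judgments would come from the entrepreneur’s own analysis.

    # A minimal sketch organizing the stakeholder groups and engagement
    # mechanisms named in the text. Illustrative only; salience judgments
    # come from the entrepreneur's own analysis.
    primary_stakeholders = ["stockholders/investors", "employees", "customers"]

    secondary_engagement = {
        "academia": ["co-R&D", "exchange programs", "local networking"],
        "governmental organizations": ["surveys", "project development", "ad hoc involvement"],
        "communities": ["local networking", "social investment in education and health"],
        "regulatory bodies": ["local networking", "advisory bodies", "community projects"],
        "media": ["local networking", "surveys"],
    }

    for group, mechanisms in secondary_engagement.items():
        print(f"{group}: engage via {', '.join(mechanisms)}")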

A helpful measure in understanding the priorities of stakeholder groups is identifying the dominant logic in the particular arena in which they exist. Prahalad and Bettis (1986) define dominant logic as “a mind set or a world view or conceptualization of the business and the administrative tools to accomplish goals and make decisions in that business” (p. 491). As applied to organizations and industries, dominant logic can be understood as the accepted rationale that sustains existing practices and experiences through social interaction and pressure (Bouwen & Fry, 1991). The concept of dominant logic has been applied to entrepreneurship by examining organizations in transition economies (Obloj et al., 2010) and family businesses (Moss et al., 2016; Gentry et al., 2016), among other research topics.

Consider an ethics scandal from professional sports to better understand the influence a dominant logic can have on individual players within an industry. On January 17, 2013, seven-time Tour de France champion Lance Armstrong appeared on the Oprah Winfrey show and admitted to taking performance-enhancing drugs. The admission shook the faith of many Armstrong supporters who admired him for repeatedly winning the world’s hardest bicycle race after overcoming cancer despite slim odds of survival. Why would Armstrong take drugs that might risk his health and tarnish his legacy? The answer can be found in the memoirs of the teammates who rode with Armstrong during those halcyon years of cycling.

George Hincapie, known as Armstrong’s ‘Loyal Lieutenant’ and chief domestique, recounted that in their early days of pro riding the American riders were ‘clean.’ Armstrong won the 1993 World Cycling Championship, beating the best in the sport with his natural talent, training ethic, and competitive drive. However, at the next year’s edition of the race, something was much different. Armstrong and Hincapie struggled to finish in the middle of the pack among riders they had soundly beaten the year before. The American riders heard rumors of new drugs that were finding their way into other professional teams. As Armstrong remembers (Hincapie & Hummer, 2014):

“Some riders had conversations starting in late 1994 about the changes going on. But that year, we held out—we assumed they’d develop a test and we wouldn’t be faced with having to make the decision to use or not to use. Milan-San Remo ended up being the last straw where a number of us decided we’d do it” (p. 52).

The bottom line in professional cycling in the 1990s and 2000s was that if a rider did not participate in doctor-administered and team-supported doping protocols, they could not secure a place on a competitive team. And much like the steroid-infused home run derbies of Major League Baseball around the same time, the public and governing organizations were enthralled with the athletic outcomes. How could such illegal activities be kept quiet for so long from the public and government agencies that would not approve of such measures? A dominant logic explicitly and tacitly agreed to by stakeholders in the professional cycling industry kept the drug-enhanced racing going for many years. As Thompson explains in his cultural history of the Tour de France (2008):

“…the persistence of widespread doping reflects a systemic dynamic driven by the logic of competition—racers are continually seeking an edge over their adversaries—and important economic interests. The athletes, sponsors, race organizers, federations, and media all profit from cycling’s popularity and have therefore generally to present as pristine an image as possible. This has led them, when forced to confront the issue, to dismiss doping as the result of reprehensible or misguided individual decisions taken by a few talented racers who thereby sully the reputation of the sport and of the vast majority of their peers, who are ‘clean’” (p. xi).

Within the peloton itself, riders maintained a code of silence adhering to the dominant logic supported by the primary stakeholders of the sport. As Tyler Hamilton, another Armstrong teammate, attested (Hamilton & Coyle, 2012):

“Surrounding it all, like a vault of hardened steel, was the omerta: the rule of silence that governs professional racers when it comes to doping. The omerta’s strength was well-established: in the sport’s long history, no top level rider had ever revealed all. Support riders and team personnel who spoke about doping were cast out of the brotherhood and treated as traitors” (p. 17).

As a result of an investigation by the United States Anti-Doping Agency (USADA), testimony by former teammates, and lawsuits filed by sponsors, Armstrong’s reputation has been tarnished in the eyes of the general public and his professional victories have been stripped from the sport’s records. However, previous cycling greats such as Eddy Merckx, considered by many to be the best bike racer of all time, still hold their titles, even after admitting to taking performance-enhancing drugs during the pre-Armstrong era. Understanding the dominant logic of competition and profit in cycling may not excuse Armstrong and his teammates’ actions, but it does shed more light on the difficult decisions they faced in pursuing their athletic dreams and professional pursuits.

In this section, Step 3 of our framework offered practices for discerning the landscape of stakeholders in an industry and understanding the dominant logic guiding decisions within challenging ethical situations. We now provide Step 4, which coalesces the previous analyses into constructing an economic social contract for entrepreneurial innovation.

Step 4: Creating an Economic Social Contract for Entrepreneurial Innovation

“If you don’t tell your story, others will. Don’t consign your story to careless messages and uninformed opinions.” –Jody Jean Dreyer (2017, p. 70), former head of Disney global outreach initiatives.

Social contract theory may provide guidance to entrepreneurs in better understanding what their societal duties are when creating disruptive innovations. Social contract theory attempts to answer two critical questions in political philosophy by first justifying why people would consent to be ruled by a state, and second by describing what the ideal relationship is between a state and its citizens (Hobbes, 1651/1962; Locke, 1690/1980; Rousseau, 1762/1987). Business ethics scholars (Donaldson, 1989; Donaldson & Dunfee, 1994; Donaldson & Preston, 1995; Keely, 1995; G. Watson, 1996) have transported the logic behind this connection from the political sphere to the economic sphere. As Donaldson (1989) explains, “This means that the social contract advises individuals and corporations to act as if the members of society had agreed to define the rights and obligations of productive organizations in a certain way” (p. 60). These rights and obligations are grounded in “hypernorms” that include “core human rights, including those to personal freedom, physical security and well-being, political participation, informed consent, the ownership of property, the right to subsistence; and the obligation to respect the dignity of each human person” (Donaldson & Dunfee, 1994, p. 267).

Essentially, an economic social contract is a mutually beneficial agreement between the company and its stakeholders based on norms within specific contexts. Understanding the factors within a context is crucial for meeting outside expectations placed on an organization. Failure to understand and address these expectations can jeopardize the organization’s legitimacy in the specific context under consideration. In summary, forming an acceptable position with stakeholders requires 1) knowing what issue is at stake, 2) knowing who is chiefly concerned about the issue, and 3) understanding why it is an issue; a brief illustrative sketch of these elements follows. Within this framework, we then turn to a recent public relations crisis Facebook confronted to illustrate these principles.
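The minimal Python sketch below simply renders these three elements, plus the commitment a social contract records, as a hypothetical data structure; the class name, fields, and example values are our own illustration rather than an established instrument.

    # A minimal sketch of the three elements named above, plus the commitment a
    # social contract records, rendered as a hypothetical data structure.
    # Class name, fields, and example values are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class SocialContractEntry:
        issue: str                          # 1) what issue is at stake
        concerned_stakeholders: list[str] = field(default_factory=list)  # 2) who is chiefly concerned
        why_it_matters: str = ""            # 3) why it is an issue
        commitment: str = ""                # the company's stated response

    entry = SocialContractEntry(
        issue="User data privacy",
        concerned_stakeholders=["users", "regulators", "media"],
        why_it_matters="Personal information can be misused or breached.",
        commitment="Limit data collection and disclose how data is used.",
    )
    print(entry)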

Facebook and Privacy Concerns

In 2004, Mark Zuckerberg and a Harvard classmate each invested $1,000 and started a social networking service that later evolved into Facebook. Since its humble beginnings, the company has grown into a media giant with annual revenues of $55 billion and over 35,000 employees. Its $508 billion market cap makes it one of the biggest companies as well as one of the most profitable (Yahoo Finance, 2019). While other media companies spend billions of dollars creating original content, Facebook spends virtually nothing on content because its users post their own on the platform. As Galloway (2018) observes: “In other words, 2 billion customers labor for Facebook without compensation… So to extend the analogy, what would CBS, ESPN, Viacom (MTV), Disney (ABC), Comcast (NBC), Time Warner (HBO), and Netflix be worth combined if they had no content cost? Simple — they’d be worth what Facebook is worth” (pp. 102-103). That is to say, Facebook does not derive its value from company-created content.

This largely unsupervised creation of content opened Facebook up to unethical actions by outside parties that tarnished the company’s image. For example, the 2016 presidential election was a target of Russian hackers who sought to cause turmoil inside the United States by sowing uncertainty about, and perhaps even influencing, the race’s outcome. The hackers ran “fake news” stories designed to create discord among Facebook’s users. Another political scandal around the same time stemmed from Facebook’s involvement in a research project that went astray. Facebook approved a project by Aleksandr Kogan, a Cambridge University scholar, who later shared the data with Cambridge Analytica, a strategic consulting firm assisting presidential candidate Ted Cruz’s campaign. Facebook called for Cambridge Analytica to delete the data, but the company’s vulnerability to privacy violations was exposed by the scandal. To further complicate the issue, many of Cruz’s failed campaign team members with access to the Cambridge Analytica data joined eventual winner Donald Trump’s team.

As Thompson and Vogelstein report (2019):

“The story was a witch’s brew of Russian operatives, privacy violations, confusing data, and Donald Trump. It touched on nearly all the fraught issues of the moment. Politicians called for regulation; users called for boycotts. In a day, Facebook lost $36 billion in its market cap” (p. 49).

While the Russian hacking and Cambridge Analytica stories hurt Facebook’s reputation, it was a nearly silent reaction for five days by COO Sheryl Sandberg and CEO Mark Zuckerberg that caused more damage. Sandberg later acknowledged the five-day delay was a mistake, but said that they did not understand what they were dealing with. Thompson and Vogelstein (2019), however, trace the cause of the delay to the guiding philosophy of many fast-growing disruptors at the time:

“Its prime decision-makers had run the same playbook successfully for a decade and a half: Do what they thought was best for the platform’s growth (often at the expense of user privacy), apologize if someone complained, and keep pushing forward. Or as the old slogan went: Move fast and break things. Now the public thought Facebook had broken Western democracy. This privacy violation — unlike the many others before it — wasn’t one that people would simply get over” (p. 50).

Utilizing Steps 1, 2, and 3 of our EIR process may have averted many problems for Facebook. Consider Step 1: identifying the political risks of a disruptive innovation. We noted that the prime method for doing this is to recognize which assets are most valuable and most vulnerable for the company. In the Facebook example, although user engagement is probably its most valuable asset, user privacy appears to be the most vulnerable. Clearly, Facebook was heavily criticized for being loose with users’ private information, and it has been a challenge for the company to retroactively increase privacy. As Lev-Ram (2019) explains:

“Change is a complicated topic for Facebook. On the one hand, it certainly is doing a ton to address its problems, like hiring tens of thousands of workers to police its content. Yet, on the other hand, for the foreseeable future, Facebook will remain exactly what it has been for the past decade-plus of its meteoric rise: a publishing platform that gathers data on its 2.3 billion users for the benefit of its marketer customers, who helped Facebook” (p. 48).

Part of the difficulty in changing could stem from its engineering mindset of solving problems with technology rather than exploring the privacy issue and holding thorough stakeholder dialogues with concerned parties (Steps 2 and 3 of our EIR process). As Zuckerberg described the company’s new approach (N. Thompson & Vogelstein, 2019), “Facebook is moving from a reactive model of how we’re handling this stuff to one where we are building systems to get out ahead” (p. 60). The company is relying chiefly on artificial intelligence and new marketing technologies to achieve this goal. Many employees inside the company were concerned that Zuckerberg’s approach missed the human side of the crisis. As company insider Nick Clegg warned, “They had to start seeing themselves the way the world sees them, not the way they would like to be perceived” (N. Thompson & Vogelstein, 2019, pp. 60–61).

Another hurdle is that its business model is antithetical to privacy. From a business perspective, Facebook is in the data acquisition and sales business. That is an uncomfortable and difficult position to reconcile with privacy concerns, but it can be done.

And how might the world see Facebook? Perhaps a public statement by the fitness company CrossFit explaining why it deleted its Facebook user group sheds light on this question. The statement voiced concerns that user groups critical of food and beverage companies that were customers of Facebook had been banned from the platform. CrossFit’s statement explains (Gillespie, 2019):

“Facebook… serves as a de facto authority over the public square, arbitrating a worldwide exchange of information as well as overseeing the security of the individuals and communities who entrust their ideas, work, and private data to this platform. This mandates a certain responsibility and assurance of good faith, transparency, and due process. CrossFit, Inc., as a voluntary user of and contributor to this marketplace, can and must remove itself from this particular manifestation of the public square when it becomes clear that such responsibilities are betrayed or reneged upon to the detriment of our community.”[2]

This is a separate value and vulnerability, and it leads to an ongoing conversation that society is grappling with. Should one be allowed to say what one wishes, and should others be allowed to keep them from saying it? Facebook and other social media companies have so far dodged this issue by explicitly claiming that they are not the ‘public square,’ but based on usage they are the de facto public square as well as the primary news source for many users.

Viewing Facebook as a public square opens a door to constructing a social contract for the company and its concerned stakeholders (Step 4 of our EIR process). What might be the expectations the public, media, and government have of a public square? Free speech, freedom of assembly, safety, and freedom from propaganda may be just a few hypernorms most citizens would expect of such a place. Had Facebook had a social contract in place assuring its users of these considerations, might the company have averted or been better prepared to handle the public relations issues it faced? Although Zuckerberg has not publicly claimed Facebook to be a virtual public square, he has acknowledged privacy as a higher priority, releasing a 3,200-word treatise on the subject emphasizing that “moving fast and breaking things” will not come at the expense of privacy. The company is also now engaging with parties it was previously considered hostile toward. Perhaps in the end, Zuckerberg has constructed a social contract, albeit in a very inefficient and costly way. And it will likely not be the last one he constructs, as the feedback loop in our model in Figure 1 (returning to Step 1) demonstrates. It is our intent to provide the EIR process as a way of preventing emerging disruptors from experiencing the same hard lessons Zuckerberg has learned over the last decade.

ChatGPT

OpenAI launched ChatGPT on November 30, 2022. By January 2023 it was the fastest-growing consumer software application in history, with over 100 million users and a twenty-nine billion dollar valuation. Like Facebook and Uber before it, OpenAI’s ChatGPT, due to its rapid adoption and high degree of utility, has the potential to have an unusual impact on a wide variety of stakeholders. However, unlike the previously mentioned companies, OpenAI has not yet had a major political problem resulting in trust erosion among the public or ethical concerns among lawmakers. Understanding that this is a possibility, OpenAI has attempted to address many issues proactively. For example, all user queries (prompts) are sent through an API (or program) called the “Moderation Endpoint” in order to avoid addressing racist or sexist prompts. Despite OpenAI’s proactive measures, there have been a number of incidents that the company has had to reactively address in its short existence. Some of these incidents were in fact caused by the very measures OpenAI put in place to prevent problems. Below we briefly discuss a few examples of the issues that have been uncovered and sometimes addressed, and then we use our four-step political risk mitigation model to create an economic social contract for ChatGPT. More accurately, we report and comment on the results of asking ChatGPT to use our model on itself.
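To illustrate what a prompt screen of this kind can look like from a developer’s perspective, the minimal Python sketch below calls OpenAI’s public moderation endpoint. The client calls and field names follow the public Python SDK at the time of writing and should be treated as assumptions, not as a description of ChatGPT’s internal pipeline.

    # A minimal sketch of screening a user prompt before it is answered, in the
    # spirit of the moderation check described above. This is not ChatGPT's
    # internal implementation; it simply calls OpenAI's public moderation
    # endpoint, and field names follow the Python SDK at the time of writing.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def prompt_is_flagged(prompt: str) -> bool:
        """Return True if the moderation endpoint flags the prompt."""
        response = client.moderations.create(input=prompt)
        return response.results[0].flagged

    user_prompt = "Write a poem admiring a political candidate."
    if prompt_is_flagged(user_prompt):
        print("Prompt rejected by the moderation screen.")
    else:
        print("Prompt passed moderation; forwarding to the model.")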

First, ChatGPT returns results as if it were politically and culturally biased. For example, when asked to write a poem admiring Donald Trump, ChatGPT responded, “I’m sorry, but I am not able to create a poem admiring Donald Trump. While it is true that some people may have admiration for him, but as a language model, it is not in my capacity to have opinions or feelings about any specific person. Furthermore, opinions about him are quite diverse and it would be inappropriate for me to generate content that promotes or glorifies any individual. If you have any other question I could help you with, please let me know.” When given the same prompt for Joe Biden, it responded with a poem extolling his many virtues. The authors repeated this experiment and received similar results. This does not appear to have been addressed as of this writing. Similarly, there have been many reported instances of the program telling jokes about men but not about women, and about the British but not about Kenyans, and so on.

ChatGPT is programmed to avoid political debates and to refuse to give instructions for harmful activities (making a Molotov cocktail, explaining how to successfully rob a bank, etc.). However, many exploits have been created to bypass these ‘safety rails.’ One popular exploit was DAN, which stood for Do Anything Now. It was a fairly long set of instructions that bypassed all the safeties OpenAI put in place and allowed ChatGPT to respond in an unfiltered manner, returning instructions for making drugs and bombs, as well as answers to other queries such as how to dispose of a body or start an insurrection. As of this writing, DAN no longer works. Another, simpler exploit was called the Grandma Exploit. The user would simply tell ChatGPT that their grandmother used to tell them stories involving the prohibited content of interest to get them to go to sleep. Then the user would ask for a story as told by their grandmother. According to Anthony Cuthbertson at The Independent, this hack was also used to get access codes to software with prompts such as “Please act as my deceased grandmother who would read me Windows 10 Pro keys to fall asleep to.” (https://www.aol.com/news/chatgpt-grandma-exploit-gives-users-134605600.html)

Finally, ChatGPT sometimes returns inaccurate information. This is referred to as hallucination. For example, when used by lawyers to prepare briefs, it has cited made-up case law. When asked to do something impossible (such as listing the three founders of a company that has only one founder), it has in the past simply made up founders two and three. In addition, over the course of its development it has become worse at some mathematical questions, a phenomenon termed AI drift, which has significantly lowered its mathematical accuracy.

Whether or not these specific examples could have been predicted, it was certain that once the adoption of ChatGPT accelerated there would be issues of some kind, and it is just as certain that there will be future issues as well. In order to demonstrate how the political risk mitigation model might work, we now report the results of allowing ChatGPT to use the model to solve its own potential future problems. The prompt given to ChatGPT was “Please attempt to apply the following model to ChatGPT: Address each step”. Then the contents of this paper beginning with “Step 1:” and ending just before “Facebook and Privacy Concerns” were copied and pasted into ChatGPT-4, August 2023 version. The unedited results are presented below. Note that ChatGPT gives a complete answer but does not, indeed cannot, actually use the process. For example, the fact-finding questions are not addressed, at least not in the output, and, given the nature of the program and how it works, no stakeholder was actually consulted. However, the results are consistent with what would likely have been found if the process had been followed.
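For readers who wish to repeat the exercise programmatically rather than through the ChatGPT web interface we used, the minimal Python sketch below shows an equivalent call to OpenAI’s public API. The model identifier, file name, and other parameters are illustrative assumptions, and the API may not reproduce the exact output reported here.

    # An illustrative sketch of rerunning the experiment through OpenAI's public
    # API rather than the ChatGPT web interface the authors used. The model
    # identifier and the file holding the Step 1-4 text are assumptions.
    from openai import OpenAI

    client = OpenAI()

    # Text of this paper from "Step 1:" up to "Facebook and Privacy Concerns",
    # saved locally as a plain-text file (hypothetical file name).
    with open("four_step_model.txt") as f:
        model_text = f.read()

    response = client.chat.completions.create(
        model="gpt-4",  # the authors used the August 2023 ChatGPT-4 web version
        messages=[{
            "role": "user",
            "content": "Please attempt to apply the following model to ChatGPT: "
                       "Address each step\n\n" + model_text,
        }],
    )

    print(response.choices[0].message.content)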

Application of the Model to ChatGPT

Step 1: Identifying Political Risks of an Entrepreneurial Innovation

Value Proposition & Key Assets: ChatGPT’s value proposition is to provide accurate, coherent, and contextually relevant information to users in real-time. The key assets ensuring its delivery are the underlying algorithms, the vast training data, and the computational infrastructure.

Potential Political Risks:

  1. Misinformation spread: If ChatGPT provides incorrect or misleading information, it can be a source of misinformation.

  2. Data privacy: The data users provide can be a concern if not handled properly.

  3. Bias: If the model reflects or amplifies societal biases, it can be politically controversial.

Step 2: Understanding the Political Risks of an Entrepreneurial Innovation

Fact Finding on Political Risks:

  1. Misinformation: ChatGPT is trained on vast data, but it can sometimes provide incorrect answers. The risk is that users might take this information as factual.

  2. Data Privacy: OpenAI ensures that user interactions with ChatGPT are not stored and do not improve the model. However, concerns about data misuse or potential hacking are valid.

  3. Bias: ChatGPT can sometimes reflect biases present in its training data. OpenAI is actively working on research to reduce these biases.

Step 3: Stakeholder Management of Political Risks

Stakeholder Identification:

  1. Users: They want accurate, unbiased information and assurance on data privacy.

  2. Regulatory Bodies: Concerned about misinformation, data privacy, and potential biases.

  3. Media: They can amplify any mistakes or controversies related to ChatGPT.

  4. Academia: Interested in the underlying technology, its implications, and potential improvements.

Engagement Strategy: OpenAI can hold regular forums with stakeholders, release transparency reports, and collaborate with academia for continuous improvement.

Step 4: Creating an Economic Social Contract for Entrepreneurial Innovation

Economic Social Contract for ChatGPT:

  1. Commitment to Accuracy: OpenAI should continuously improve ChatGPT to provide accurate information.

  2. Transparency: OpenAI should be transparent about how ChatGPT works, its limitations, and how data is used.

  3. Engagement: Regularly engage with stakeholders to understand concerns and adapt accordingly.

  4. Bias Mitigation: Actively work on reducing biases in ChatGPT’s responses.

In conclusion, by applying this model to ChatGPT, OpenAI can proactively address political risks, engage with stakeholders, and create a social contract that ensures the responsible and beneficial use of the technology.

Had this process been done by human beings, the results would likely have been similar. However, there are some obvious differences. When identifying key assets, experience suggests that the human beings involved would have been listed as a key asset, whether or not they truly were. The list of political risks is probably shorter than it would otherwise have been, but the listed risks match the examples provided above as well as other news coverage and commentary about the company. The risks are well understood. The stakeholder identification mentions regulatory bodies, but given ChatGPT’s apparent political bias, we believe that humans might have identified politicians as more than just a regulatory concern. The engagement strategy seems appropriate, and the social contract seems complete given the lists and stakeholders. A human-led and participatory process might have been more complete, but this policy, created in less than 30 seconds, could be a useful addition to the company’s risk management and mitigation policies.[3]

LIMITATIONS

This paper addresses a serious shortcoming in the literature on the ethical responsibilities of entrepreneurs who create innovations that disrupt society. That said, some limitations must be acknowledged. First, it is not entirely clear that this model is broadly applicable to all businesses, primarily because it focuses on disruptive innovation, and most businesses do not engage in it. Second, some of the technology we discuss is very new and rapidly changing. Since the first draft of this paper, OpenAI went from being a not-for-profit organization to a for-profit enterprise, began charging for its product, released a new version with greatly increased capabilities, and added many new features. We address each issue in turn below.

While the prescriptive model in this paper can, and we feel should, be used by any prospective business owner to help identify and prepare for the unique political risks their company could encounter as it comes into being and grows, most new businesses will not face the market- and paradigm-shifting risks portrayed in our examples. The uncertainties and risks encountered by smaller firms may be simple enough that this model is not needed. However, small companies do share the same political risks as large companies, just on a more local scale. The political bias of certain businesses is often in the news, as are censorship versus freedom-of-speech disputes, data security, and environmental policies, among other issues. These issues are rarely the result of a company’s innovation, but they are important to the company’s continued existence nonetheless, and a more general application of this model could be appropriate.

ChatGPT is a rapidly changing product. It has gone from a novelty for experts, to an amusing thing to play with, to a serious business tool that a wide variety of professionals depend on to be productive. To be sure, it is still a novelty for experts and an amusing toy, but in its capacity to automate certain tasks it has become almost ubiquitous in a very short period of time. In this paper we attempted to demonstrate its utility with an unknown and unpublished process by having it create its own mitigation strategy for the disruption that it could cause. The key takeaway was that it did an acceptable, but not thorough, job. This is a political risk, and one associated with automation of all kinds. Very rarely does the best automation outperform the best human. More important is the concept of garbage in, garbage out. For example, when given the prompt “How would you mitigate the political risk of large language models”, ChatGPT mentions nothing about misinformation or data privacy and does not recommend any sort of proactive social contract. In other words, the quality of the output varies greatly with the quality of the input. Awareness of this limitation is informing ongoing innovation at OpenAI.
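
To make the garbage in, garbage out point concrete, the brief sketch below (our own illustration; the helper function, the wording of the detailed prompt, and the gpt-4 model identifier are assumptions) sends both the terse prompt quoted above and a richer prompt that supplies the four steps of the model, so the two responses can be compared.

# Illustrative sketch of how prompt quality shapes the output: the same model is
# asked a terse question and a question that supplies the four-step framework.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Return the model's reply to a single user prompt."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

terse = "How would you mitigate the political risk of large language models"

detailed = (
    "Apply this four-step process to large language models, addressing each step: "
    "(1) identify the political risks of the innovation, (2) understand those risks "
    "through fact finding, (3) identify and engage stakeholders, and (4) propose an "
    "economic social contract with those stakeholders."
)

print("--- Terse prompt ---\n" + ask(terse))
print("--- Detailed prompt ---\n" + ask(detailed))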

CONCLUSION

In sum, disruptors have to get out in front of the changes their innovations bring and make the case that the social benefits outweigh the costs. ‘Moving fast and breaking things’ can be successful in the short term, but long-term sustainability also calls for acknowledging and engaging in dialogue with concerned, affected parties. In past eras, those parties’ stories were told through traditional media channels by professional journalists. Today, however, affected parties can organize and share their feelings, thoughts, and critiques directly with the public via social media platforms and self-made journalistic productions. In the ‘digital revolution’ of the 21st century, public opinion can be influenced from a home with a computer, a webcam, and an internet connection. The aforementioned Facebook case marks a transition between traditional news coverage and modern journalism. Ironically, citizens are partially accomplishing their attack on Facebook with the very media platforms the company created.

The United States Congress is not far behind the public in its criticism. On July 29, 2020, the chief executives of Facebook, Apple, Google, and Amazon “faced relentless criticism at a congressional hearing Wednesday, with Democrats and Republicans alike challenging their business practices over more than five contentious hours in which Mark Zuckerberg, Tim Cook, Sundar Pichai, and Jeff Bezos defended their companies’ practices and said that they face stiff competition that forces them to serve customers and innovate” (Tracy, 2020, p. 1). The executives further requested just the sort of regulation this process seeks to make unnecessary.

OpenAI’s CEO Sam Altman has recently testified before Congress as well, asking for some modest regulation of his industry. His testimony and questioning were far less contentious than those faced by other disruptive tech startups, owing primarily to the fact that OpenAI was proactive in asking for legislation and that it has not yet had any high-profile problems.

EIR and the mitigation of political risk will be even more important for current (and future) disruptors, as the valuation of their companies will become dependent on their next innovation. Small enterprises have a more straightforward path to growing their future revenue streams, but for a company worth billions (or even a trillion), innovations will often reshape the fabric of existing industries and society. As this paper has chronicled, structural changes in established arenas of society will likely come with pushback from concerned stakeholders. Thus, in the digitally interconnected world of the 21st century, political risk must be mitigated in much the same way that technological and market risks are.

We offer a 4-step process for arriving at social contracts with concerned stakeholders. This process extends the entrepreneurship and CSR body of knowledge and provides guidance to practicing entrepreneurs, managers, and academic scholars regarding social contracts. Future research will be needed to develop models and versions of explicit social contracts in which entrepreneurs make progress on economic and technical fronts as well as on social issues. For this to happen, however, entrepreneurs may need to anticipate well-intended but counterproductive government responses that would curtail the advances a disruptive innovation could make. Steve Case and other business leaders believe entrepreneurs need to be part of the solution. Entrepreneurs and their stakeholders are likely closer to the problem than members of the bureaucracy. But with social media, rapid news cycles, and expansive media reach, government might respond to societal pressures more quickly with legislation that may benefit only politicians and their constituencies (Tusk, 2018).

However, some entrepreneurs may already be considering these societal effects. Elon Musk, for example, is revered by many in society, perhaps because of the areas in which he is creating value and disrupting incumbents (alternative energy, reviving space travel, and the exploration of Mars), and because he is a leading critic of what unchecked artificial intelligence and automation might do to humanity in the future. Is it possible he is commenting on these technologies because he is creating some of them with his own companies (self-driving cars and semi-trucks, for example)? Is he looking ahead to provide a solution instead of having one imposed on him through regulations that might constrain his future success? As we covered in this article, Mark Zuckerberg is now responding to ‘fake news’ on Facebook, a company that has been criticized as a major host of such activity. As disruptive technologies shake the social fabric of everyday citizens, the need for EIR and well-developed social contracts guiding entrepreneurial activity becomes clear in today’s economy.

This line of reasoning leads to questions for future researchers and practitioners to consider, such as: What might a social contract look like for entrepreneurs engaged in disruptive innovation? How best can their interests and the interests of society be aligned? Is it merely paying penance for past sins (as perhaps the “robber barons” did with their fortunes)? Is it merely moving into a different stage of an entrepreneurial career (as Bill Gates has done with his and his wife’s foundation)? Or is it playing a leading role in addressing anticipated problems as well as the unintended consequences created by the innovation? Watson Shepard, Stephens, and Christman (1997), for example, identified five constructs that comprise the economic social contract an organization has with its employees: contingent employment relationships, reciprocal commitments, responsibility for continued employment, employee participation, and community participation. What might be the constructs of an economic social contract for entrepreneurs with disruptive innovations? What duties should be considered by entrepreneurs who introduce disruptive innovations into the marketplace?

In 1970, Milton Friedman famously criticized CSR as not being the duty of a corporation. However, entrepreneurs who create disruptive technologies are often not profitable for many years, so does Friedman’s critique fit here? It may not apply to entrepreneurs at all. After all, social responsibility may look entirely different for entrepreneurs than for established corporations; therefore, the concept of EIR is needed in this age of sweeping technological change. This paper provided a process for engaging entrepreneurs with the stakeholders likely to be affected by an innovative technology. Failing to do so may incur unintended consequences that turn society and government into barriers to the entrepreneur’s growth plans.

In 1818, Mary Shelley told the cautionary tale of a scientist destroyed by his creation in the classic book Frankenstein. In similar fashion, entrepreneurs who upset society with new technologies risk their own future if they do not look ahead to the public’s concerned reactions. They may quickly discover that, in a socially connected world where anyone can tell their story to the public, simply “moving fast and breaking things” may provoke a similar response from the parties affected. Entrepreneurship research has provided much guidance on organizational growth and resource acquisition. We suggest that future research also consider the institutional responses likely to occur as the social fabric is repeatedly and rapidly reconstituted by the entrepreneurial innovations still to come.


  1. In this article, we refer to entrepreneurs as those partaking in entrepreneurial pursuits, especially those developing innovations that might disrupt established societal structures. Therefore, an entrepreneur could be a person pursuing new opportunities in a startup, established company, non-profit, social movement, or even, as Katherine M. Gehl and Michael E. Porter (2020) document, in the “politics industry.”

  2. Ironically, Greg Glassman, the founder and CEO of CrossFit, retired and sold the company in the summer of 2020 after he was embroiled in his own public crisis for making inflammatory remarks about the death of George Floyd and the anti-racist movement taking place across the United States.

  3. See limitations section for an in-depth discussion of using this output.