
Visions, Issues, and Measures to Discuss AI and Society

This paper outlines three aspects of the introduction of artificial intelligence (AI) technology from a social science perspective: (1) visions, (2) issues, and (3) measures to address those issues.

Visions

Anything one man can imagine, other men can make real.

 These are the words of the French novelist Jules Verne.

 Technology does not develop on its own. Technological development needs a purpose, and it interacts with the needs of society and the vision that people have for it. To see this, let us go back 100 years.
 Figures 1 and 2 show images of “firefighting.” Figure 1, titled “Aerial Firemen,” shows people flying through the sky to fight fires and conduct rescues. Figure 2, titled “Fire Fighting Vehicles and Fire Evacuation Equipment” in Japanese, depicts a water truck and a gondola-like machine that carries people.

Comparing these visions with the firefighting of the 20th and 21st centuries, Figure 2 is closer to reality. On the other hand, people (especially in the West) continue to chase the dream of human flight, for example in the development of drone taxis.

Figure 1 Aerial Firemen, France in the 21st Century (1900), from Wikipedia
Figure 2 Fire Fighting Vehicles and Fire Evacuation Equipment, Japan and the Japanese, special issue on Japan after 100 years (1920)

 In recent years, policy visions have also been formulated to depict “what kind of society we want to live in” through a back-casting approach. For example, Society 5.0, proposed by the Japanese government, aims to create a “human-centered society that achieves both economic development and solutions to social problems through a system that tightly integrates cyberspace and physical space.” The Sustainable Development Goals of the United Nations propose 17 goals and 169 targets for countries to work toward by 2030, including eliminating poverty, addressing climate change, and upholding human rights. By establishing a vision, it is possible to promote the technology and policies that provide the means necessary to realize it.
 AI technology is sometimes talked about as if its introduction were the goal, with the means becoming the end. However, technology is only a means to an end, and we must start by envisioning the kind of society we want to live in.
 The two figures show that even for the same activity of “firefighting,” different technologies are required and different social systems must be developed. There is no need, however, to aim for a single vision; it is possible to envision multiple futures. Unlike machines, which can only learn from past data, humans have the power to explore various possibilities and make them real.

What do we want to do: improve or innovate?

Assuming that the vision is shared, when the time comes to introduce AI technology, the people involved need to align their understanding of whether they want to “improve” or “innovate” the system. The differences between improvement and innovation are shown in Table 1.

Table 1. Differences between improvement and innovation

Current framework
  Improvement: affirmed and continued
  Innovation: denied and destroyed
Time scale of change
  Improvement: several weeks to a year
  Innovation: several years to several decades
Objective
  Improvement: optimization of the current situation and greater efficiency
  Innovation: creation of new value
Law
  Improvement: can be handled under current law
  Innovation: changes to mechanisms and institutions may be necessary
Benefits
  Improvement: visible and immediate results
  Innovation: existing structural problems can be solved; for example, Uber and other gig-economy companies have transformed the way we work, live, travel, and think about time
Disadvantages
  Improvement: short-term results are achieved, but once a process is optimized, dramatic change is unlikely, and a point is soon reached where no further time or cost savings are possible; as long as the status quo is affirmed, partial optimization does not add up to total optimization and can even become inefficient (for example, a company that introduced telework during COVID-19 may still require people to come to the office to apply a stamp to settle accounts)
  Innovation: because of its destructive nature, it tends to shake up existing social systems, and many people are affected economically and socially, so society as a whole must consider how to support them; change also takes time, and because many people are involved, a blurred vision can end in a mishmash of partial optimizations

 Improvement affirms and continues the current framework, aiming to make existing work more efficient and optimized. For example, when a machine takes over some of the work done by humans, the purpose of the work itself does not change; only the means and the agents performing it change. The time scale of change is also short: changes can be expected in as little as a few weeks to a few months.
 On the other hand, there are disadvantages. Cost and efficiency gains appear in the short term, but once a system has been streamlined, it is unlikely to change dramatically in the following years, and it soon reaches a point where no further time or cost savings can be made. Furthermore, as long as the current framework is maintained, bottlenecks may remain. For example, even if a company introduced telework during the COVID-19 pandemic, workers may still need to come to the office for a “stamp” to settle accounts, because documents are processed on paper and require a stamp for approval. In addition, partial optimization is not the same as total optimization, and it may result in inefficiencies. This point will be addressed in the next section.
 Innovation is based on the negation or destruction of the current framework. For example, Uber launched a matching service between drivers and customers that allows individuals to offer rides in their spare time, blowing a hole in the conventional wisdom of the ride-hailing industry. It also brought about a shift in the way we work, live, and think about travel and time. By uncovering obvious or latent needs, innovation can solve existing structural problems.
 On the other hand, innovation requires society as a whole to consider additional matters, such as supporting those affected by its destructive effects. The number of people involved increases, and over the long period of adjustment, the vision may become blurred. There is also a risk of settling, through coordination among the parties involved, for a mishmash of partial optimizations (a mishmash of improvements).
 When we introduce reforms and AI, which are we aiming for: improvement or innovation? The parties involved need to align their understanding.

Issues

More Work for Mother

In the discussion of improvement above, we touched on partial and total optimization. People and people, or people and machines, together form an operational system. If some machines in the system do not operate properly, the effort required of the people increases. Conversely, even when the efficiency of the machines improves rapidly, the workload of the people may increase. A case in point is introduced in More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave, by the American historian Ruth Schwartz Cowan.
 The book describes how household chores changed with the introduction of household technologies. The industrialization of the 19th century and the household appliances of the 20th century made individual tasks such as laundry less labor-intensive. As a result, tasks that would previously have been outsourced to servants or contractors became mothers’ work. The ability to do laundry at home inevitably increased the amount and frequency of laundry. At the same time, other tasks associated with laundry, such as drying and folding, were not automated. As a result, mothers did not become any less busy; rather, the extra work they did each day continued to grow.
 This is also true today. Robot vacuum cleaners clean the floor automatically, but they cannot pick things up. A human therefore has to move furniture out of the robot vacuum’s path so it can move easily, and put the furniture back in place after the vacuum has passed. Household chores are the sum total of tasks related to the house, and many of them are so-called “nameless household chores” and “invisible household chores.”
 When “improving” part of a system in which people and machines are intertwined, the key point is whether the tasks to be automated or streamlined can be aligned in time and space with the system as a whole. Infrastructure, in particular, is a complex system of people and machines. Machines can work around the clock, but humans cannot, and it is not in people’s interest to push themselves too hard to keep up with machines. On the other hand, adapting machines to human reaction and processing speeds is like the aforementioned coming to the office to “stamp”: it defeats the purpose of mechanization. When introducing technology, we must therefore also consider “innovation,” including a vision of how work should be done and how people’s roles should be reconfigured.

B2B2B2…C: The Industrial Structure of Japan

I would like to continue thinking about such systems. Japan’s industrial structure is itself a large system, not just a series of “tasks” such as cleaning and laundry. This scale poses challenges for the speed of response and the flow of information when developing and operating AI.
 Many of today’s IT giants, such as Google and Amazon, develop AI and provide services in-house. Because they are close to their customers, they can respond quickly to the problems and issues surrounding their AI systems.
 In contrast, Japanese companies have long supply chains, and many are business-to-business (B2B) rather than business-to-consumer (B2C) companies. It is not uncommon for Company A, which provides AI services, to supply its systems to another vendor, Company B, which then provides the actual services to users. In this case, Company A has no direct contact with the users or consumers of its AI services. Company B explains the AI service to consumers, but this does not free Company A from thinking about how the service should be explained, nor does it permit Company B to distort Company A’s explanation when conveying it to consumers. In addition, when an accident or incident occurs somewhere along the chain, the question arises of how far back responsibility should go and whether it can be traced at all (Figure 3). Infrastructure projects, such as civil engineering and disaster prevention projects, for example, involve many organizations, creating issues of information sharing and responsibility.
 Modern society can be described as a complex system of people and machines. Improving the efficiency of part of the system and optimizing it can be effective in the short term, but when looking at the entire system, the perspective of total optimization, rather than partial optimization, is needed. Another feature of AI is that not only system developers but also vendors and users can provide new data to a system that continues to learn. In such cases, how to guarantee the quality of the product and its data, and how to build a system robust against malicious intervention, need to be addressed.

Figure 3 Difference between B2C and B2B companies

Measures

Discussions are currently underway to create a mechanism to guarantee not only the privacy and security, but also the explainability and robustness, of AI systems. In Japan, the Ministry of Internal Affairs and Communications (MIC) released the AI R&D Guidelines in 2017 and the AI Utilization Guidelines in 2019, outlining issues that developers and AI service providers should be aware of. The Ministry of Economy, Trade and Industry (METI) also released its “Contract Guidelines for the Use of AI and Data” in 2018. In addition, METI’s “GOVERNANCE INNOVATION: Redesigning Law and Architecture for Society 5.0” report states that in today’s fast-changing and innovation-driven world, the basic policy is not to set rules in advance (rule-based regulations), but to adopt “goal-based regulations” that include incentives and enhanced accountability to ensure human rights, fairness, and safety. The Cabinet Office also released the “Social Principles of Human-Centric AI” in 2019.
 Internationally, the Institute of Electrical and Electronics Engineers released a report titled “Ethically Aligned Design” in 2019, and international organizations such as the OECD, the European Commission, and the United Nations released a series of guidelines from 2019 to 2020 that summarize key points to be considered when operating AI.
In this context, a broad international consensus is forming on what to watch out for in the development, operation, and use of AI. We are now moving to the stage of thinking practically about how to put these principles into practice, which actors should do what, and how to distribute roles within the system. As noted in the “Issues” section, many issues related to AI cannot be addressed by developers alone. It is important to build new governance mechanisms that combine industry guidelines, third-party organizations outside the company such as insurance companies, and social systems such as the judicial and legal systems, while also fostering user literacy and providing relief after incidents and accidents.

From social experiment to experimental society

Modern society is full of “wicked problems,” for which it is difficult to share knowledge, agree on values, and make trade-offs. In addition to the debate over AI, there is a group of issues that can be asked of science but cannot be answered by science alone, such as techniques for manipulating life, environmental issues, and pandemics such as COVID-19. These are referred to as “trans-science” problems.
 In the trans-science age, science and technology alone cannot solve society’s various problems. The relationship between society and science and technology is becoming increasingly complex, as the science and technology that are supposed to solve problems create new social problems of their own. There are also emotional and ethical concerns that make people uncomfortable even when there is no technical or legal problem. If we try to treat these purely as technical problems of AI, we may end up with a different kind of problem: ethics washing. It is precisely because we live in such an era that it is important to think about the relationship between science, technology, and society, and about the society of the future, from various angles and in cooperation with diverse people.
 There is a way of thinking about the penetration of technology into society known as the Collingridge dilemma: it is difficult to predict the influence of a technology before it is used in society, yet difficult to control the technology once it has spread. While “social experiments” in limited spaces such as special zones are promoted as a way to control this influence, learning-centered AI technologies such as deep learning are said to be in a perpetual state of beta testing, constantly changing as they receive feedback from users. From this perspective, we are already living in a society that is constantly conducting “social experiments,” in other words, an “experimental society.”
 There is now a need to promote dialogue among people from different industries and fields, including discussion of what roles and responsibilities each of us should take on in an experimental society, and of what kind of society we want to live in.