up:: 📥 Sources
type:: #📥/📰
status:: #📥/🚧
tags:: #on/articles
topics:: 🤖 Artificial Intelligence
Author:: Nick Bostrom
Title:: Strategic Implications of Openness in AI Development
URL:: https://readwise.io/reader/document_raw_content/23864531
Reviewed Date:: 2023-01-09
Finished Year:: 2023
# Strategic Implications of Openness in AI Development

## Highlights
A central concern is that openness could exacerbate a racing dynamic: competitors trying to be the first to develop advanced (superintelligent) AI may accept higher levels of existential risk in order to accelerate progress.
One reason firms engage in open non-proprietary R&D is to build "absorptive capacity": conducting original research as a means of building skill and keeping up with the state-of-the-art (Cohen and Levinthal, 1989; Griffith et al., 2004).
Another incentive for innovation in the open non-proprietary regime is that the originator of an idea may profit from owning a complementary asset whose value is increased by the new idea.
a software firm might choose to give away its software gratis in order to increase demand for consulting services and technical support (which the firm, having written the software, is in a strong position to supply).
It has been argued, for example, that military applications of AI, including lethal autonomous weapons, might incite new arms races, or lower the threshold for nations to go to war, or give terrorists and assassins new tools for violence (Future of Life Institute, 2015). AI techniques could also be used to launch cyber attacks. Facial recognition, sentiment analysis, and data mining algorithms could be used to discriminate against disfavoured groups, or invade people's privacy, or enable oppressive regimes to more effectively target political dissidents (Balkin, 2008). Increased reliance on complex autonomous systems for many essential economic and infrastructural functions may create novel kinds of systemic accident risk or present vulnerabilities that could be exploited by hackers or cyber-warriors.
Risks
If, as a first-order approximation, we model the impacts of near and medium-term AI advances as a continuation and extension of longstanding trends of automation and productivity-increasing technological change, therefore, we would estimate that any adverse labour market impacts would be greatly outweighed by economic gains. To think otherwise would seem to entail adopting the generally luddite position that perhaps a majority of current technological developments have a net negative impact.
A number of specific areas of concern can be identified, including military uses, applications for social control, and systemic risks from increased reliance on complex autonomous processes. However, for each of these areas of concern, one could also envisage prospects of favourable impacts, which seem perhaps at least equally plausible. For example, automated weaponry might reduce human collateral damage or change geopolitical factors in some positive way; improved surveillance might suppress crime, terrorism, and social free-riding; and more sophisticated ways of analysing and responding to data might help identify and reduce various kinds of systemic risk.
two paramount problems tied to the creation of extremely advanced (generally human-level or superintelligent) AI systems (see Bostrom, 2014a):
- The control problem: how to design AI systems such that they do what their designers intend.
- The political problem: how to achieve a situation in which individuals or institutions empowered by such AI use it in ways that promote the common good.
(1) openness may speed AI development; (2) openness may make the race to develop AI more closely competitive; (3) openness may promote wider engagement.
making the benefits start earlier is not clearly significant on an impersonal time-neutral view, where instead it looks like the focus should be on reducing existential risk
Accelerated AI would increase the chance that superintelligent AI will preempt existential risks stemming from non-AI sources, such as risks that may arise from synthetic biology, nuclear war, molecular nanotechnology, or other risks as-yet unforeseen
In summary, the fact that openness may speed up AI development seems positive for goals that strongly prioritize currently existing people over potential future generations, and uncertain for impersonal time-neutral goals
In a tight competitive situation, it could be impossible for a leading AI developer to slow down or pause without abandoning its lead to a competitor.
If the pool of potential competitors with near state-of-the-art capabilities is large enough, then one would expect it to contain at least one team that would be willing to proceed with the development of superintelligent AI even without adequate safeguards. The larger the pool of competitors, the harder it would be for them to all coordinate to avoid a risk race to the bottom.
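A back-of-the-envelope illustration of this point (my own numbers, not the paper's): if each of $n$ near-frontier teams would independently proceed without adequate safeguards with probability $p$, then

$$P(\text{at least one unsafe team}) = 1 - (1 - p)^n$$

Even a modest $p = 0.1$ gives roughly $0.65$ at $n = 10$ and $0.88$ at $n = 20$, which is why a larger pool makes a risk race to the bottom so hard to avoid.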
Even if there were inescapable tradeoffs between efficiency and safety (or ethical constraints preventing certain kinds of instrumentally useful computation), the situation would still be salvageable if the frontrunner has enough of a lead to be able to get by with less than maximally efficient AI for a period of time: since during that time, it might be possible for the frontrunner to achieve a sufficient degree of global coordination (for instance, by forming a "singleton", discussed more below) to permanently prevent the launch of more efficient but less desirable forms of AI (or prevent such AI, if launched, from outcompeting more desirable forms of AI) (Bostrom, 2006).
This could be great for a story!
a tighter competitive situation would make it less likely that one AI developer becomes sufficiently powerful to monopolize the benefits of advanced AI. This is one of the stated motivations for the OpenAI project, expressed, for example, by Elon Musk, one of its founders:
I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there's not any one person or a small set of individuals who can have AI superpower. (Levy, 2015)
openness may either increase or decrease the degree of influence status quo powers would have over the outcome, depending on whether hardware or software is the bottleneck. Since it is currently unclear what the bottleneck will be, the impact of openness on the expected degree of control of status quo powers is ambiguous.
Reduces probability of a singleton
A singleton is a world order in which there is at the highest level of organization one coordinated decision-making agency. In other words, a singleton is a regime in which major global coordination or bargaining problems are solved.
There are a number of serious problems that can arise in a multipolar outcome that would be avoided in a singleton outcome.
One such problem is that it could turn out that at some level of technological development (and perhaps at technological maturity) offence has an advantage over defence. For example, suppose that as biotechnology matures, it becomes inexpensive to engineer a microorganism that can wreak havoc on the natural environment while it remains prohibitively costly to protect against the release and proliferation of such an organism.
even if in biotechnology offence will not have such an advantage, perhaps it will in cyberwarfare? in molecular nanotechnology? in advanced drone weaponry? or in some other as-yet unanticipated technology that would be developed by superintelligent AIs? A world in which global coordination problems remain unsolved even as the power of technology increases towards its physical limits is a world that is hostage to the possibility that, at some level of technological development, nature too strongly favours destruction over creation.
The long-run equilibrium of such a process is difficult to predict, and might be primarily determined by choices made after the development of advanced AI; but creating a state of affairs in which the world is too fractured and multipolar to be able to influence where it leads should be a cause for concern, unless one is confident (and it is hard to see what could warrant such confidence) that the programs with the highest fitness in a mature algorithmic hyper-economy are essentially coextensive with the programs that have the highest level of subjective well-being or moral value.
The existence of multiple AIs does not guarantee that they will act in the interests of humans or remain under human control. (Analogy: the existence of many competing modern human individuals did little to promote the long-term prospering of the other hominid species with which Homo sapiens once shared the planet.) If the AIs are copies of the same template, or slight modifications thereof, they might all contain the same control flaw. Open development may in fact increase the probability of such homogeneity, by making it easier for different labs to use the same code base and algorithms instead of inventing their own.
Interesting argument here: the variety of AI species will be smaller with more openness, and possibly homogeneous, faults and all.
There is also the possibility of systemic failures resulting from unexpected interactions of different AIs. We know that such failures can occur even with very simple algorithms (witness, e.g., the Flash Crash; US Securities and Exchange Commission, 2010).
What's this?
AIs created by a single developer may be more similar to one another, and hence more prone to correlated control failures, than AIs created by different developers. Yet openness, we noted, though it may increase the likelihood that there will be multiple simultaneous developers, would also tend to make the AIs created by those developers be based on more similar designs. So the net effect of openness on the probability that there will be a diverse set of AIs is ambiguous.
Again, unpredictable af
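Not from the paper, just my own toy Monte Carlo to make the correlated-failure point concrete (all parameters made up): compare the chance that every deployed AI fails at once when each is an independent design versus when all inherit a flaw from one shared code base.

```python
import random

# Toy model (hypothetical numbers): a design carries a fatal flaw
# with probability P_FLAW. With independent designs, all N_AIS must
# be flawed for a simultaneous failure; with a shared code base,
# one flawed design propagates to every copy.
P_FLAW = 0.1
N_AIS = 5
TRIALS = 200_000

def all_fail(shared_codebase: bool) -> bool:
    if shared_codebase:
        # one design decision is inherited by every deployed AI
        return random.random() < P_FLAW
    # independent designs: each must be flawed on its own
    return all(random.random() < P_FLAW for _ in range(N_AIS))

for shared in (False, True):
    hits = sum(all_fail(shared) for _ in range(TRIALS))
    label = "shared code base" if shared else "independent designs"
    print(f"{label}: P(all {N_AIS} fail together) ~ {hits / TRIALS:.5f}")
```

With these made-up numbers the shared code base comes out around 0.1 while independent designs land near 0.00001, which is the whole openness-homogenizes-the-AIs worry in miniature.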
The vision here might be a world containing many AIs, each pursuing a different goal, none of them strong enough to seize control unilaterally or by forming a coalition with other AI powers. These AIs would compete for customers and investors by offering us favourable deals, much like corporations competing for human favours in a capitalist economy.
Seems the safest, but the most boring!
Without a state powerful enough to regulate the competing AIs and enforce law and order, it may be questionable how long the balance-of-power equilibrium would last and how humans would fare under it.
Cute analogy
It would make it harder to pause towards the end in order to implement or test a safety mechanism. It would also make it harder to use any safety mechanism that reduces efficiency. Both of these look like important negative effects on the control problem. Openness also has consequences for the political problem: decreasing the probability that a small group will monopolize the benefits of advanced AI and decreasing the probability of a singleton. It may either increase or reduce the influence of status quo powers over the post-AI future depending on whether the transition is mainly hardware or software constrained. Furthermore, there may be impacts on the control problem via the distribution of AIs that result from open development, though the magnitude and sign of those impacts are unclear: openness may make a multiplicity of AIs more likely, which could increase the probability of some kind of balance-of-power arrangement between AIs; yet openness could also make the AIs more similar to one another than they would have been if the multiplicity of AI scenario had come to pass without openness and thus more likely to exhibit correlated failures. (In any case, it is unclear whether a multiplicity of diverse AIs created by different developers would really help with the control problem.)
Summary of the above notes wrapping up strategic consequences
I like the idea of correlated failures. Many similar AIs from the same developer all have similar failures, problems, kinks. One day one of them has a problem and it's a nightmare; now it's only a matter of time before the other AIs fail. Or will they? Or are they all in on it? Duh duh duhhh
one might speculate that work on safety would gain more from outside participation than work aimed at increasing AI effectiveness, perhaps on grounds that safety engineering and risk analysis are more vulnerable to groupthink and other biases, and would therefore benefit disproportionately from having external perspectives brought to bear.
Interesting about safety being susceptible to groupthink. Similar to vaccine safety
Openness in AI development could then, by enabling disinterested outsiders to contribute, increase the overall fraction of AI-related effort that is focused on safety and thereby improve chances that the control problem finds a timely solution.
a cooperative approach would likely have a favourable impact on both the control problem and the political problem.
an open development scenario could reduce groupthink and other biases within an AI project by enabling outsiders to engage more, which may differentially benefit risk analysis and safety engineering, thereby helping with the control problem
of advanced robotic warfare – which could conceivably involve destabilizing developments such as challenges to nuclear deterrence (e.g. from autonomous submarine-tracking bots or deep infiltration of enemy territory by small robotic systems; Robb, 2016) – and the use of AI and robotics to suppress riots, protests, or opposition movements, with possibly undesirable ramifications for political dynamics
faster AI progress would increase the probability that some currently existing people will live long enough to reap the far greater benefits that could flow from machine superintelligence (such as superlongevity and extreme prosperity).
If openness leads to wider engagement, this could also have implications for the political problem, by enabling better foresight and by increasing the probability of government control of advanced AI
So it might be possible to reap the near-term benefits of openness while yet avoiding the long-term costs, assuming a project can start out open and then switch to a closed development policy at the appropriate time.
technocrats may worry that wide public engagement with the issue of advanced AI would generate more heat than light, citing analogous cases, such as the debates surrounding GMOs in Europe, where it might appear as if beneficial technological progress would have been able to proceed with fewer impediments had the conversation been dominated more by scientific and political elites with less involvement from the public. Direct democracy proponents, on the other hand, may insist that the issues at stake are too important to be decided by a bunch of AI programmers, tech CEOs, or government insiders (who may serve parochial interests) and that society and the world are better served by a wide open discussion that gives voice to many diverse views and values.