In the science fiction film 2001: A Space Odyssey, released more than 50 years ago, the spacecraft's crew included HAL 9000, a malevolent AI system. Motivated by its own survival, HAL set out to murder any human crew members who doubted it.
This cautionary tale illustrates the danger of unintended or extreme consequences when AI is used to augment human capabilities. How can the government keep AI from acting against human expectations and intentions? What can be done through the procurement process to reduce potentially undesirable behavior?
That risk is being mitigated by government procurement processes that apply new, specialized principles and methods encouraging the responsible use of AI. It is a mission-critical journey, and one with much still to learn.
The private sector and academia have pioneered the development of AI technologies. By adopting AI, government agencies can achieve better outcomes at scale.
AI can augment human performance, producing efficiency gains orders of magnitude beyond the status quo and at levels otherwise unattainable.
However, as Cary Coglianese and Erik Lampman noted in their work on contracting and AI governance, putting private-sector AI technology to good public use can pose challenges for procurement. These experts point out that AI's potentially game-changing benefits must be weighed against its risks.
In a separate article, Coglianese and Alicia Lai argue that there is no perfect or unbiased system against which AI can be compared. Human designers, decision-makers, and government agents bring decades of experience, and with it biases that often go unexamined, to their work. AI will not eliminate those biases unless it is deliberately designed to do so. Training data choices can also introduce algorithmic bias, embedding human prejudices into algorithms where they are harder to see.
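To make that mechanism concrete, here is a minimal, hypothetical sketch (plain Python with scikit-learn, not drawn from any agency system or the authors' work) showing how a model trained on biased historical decisions can reproduce that bias even when group membership is withheld, because a correlated proxy feature carries the signal. All names, features, and numbers are illustrative assumptions.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# reproduces the disparity even though "group" is never given to the model,
# because a correlated proxy feature (zip_code, purely illustrative) carries it.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_record(group):
    # Qualification is distributed identically in both groups.
    qualification = random.gauss(0, 1)
    # zip_code correlates with group membership -- a common proxy variable.
    zip_code = 1.0 if (group == "A") == (random.random() < 0.8) else 0.0
    # Historical human decision: tilted in favor of group A.
    tilt = 0.8 if group == "A" else -0.8
    approved = 1 if qualification + tilt + random.gauss(0, 0.5) > 0 else 0
    return {"group": group, "features": [qualification, zip_code], "label": approved}

records = [make_record("A") for _ in range(5000)] + [make_record("B") for _ in range(5000)]
X = [r["features"] for r in records]
y = [r["label"] for r in records]

# Trained only on qualification and zip_code; group is never an input.
model = LogisticRegression().fit(X, y)

for g in ("A", "B"):
    rows = [r for r in records if r["group"] == g]
    rate = sum(model.predict([r["features"]])[0] for r in rows) / len(rows)
    print(f"group {g}: predicted approval rate = {rate:.2f}")
```

Run as written, the predicted approval rates differ sharply by group: the prejudice in the historical labels survives the handoff to the algorithm, just less visibly.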
To reap AI's benefits without creating new problems, the government must improve the way it applies technology. The stakes span many issues, including cybersecurity; equity, diversity, and inclusion; and adaptation to climate change. If entrepreneurs and government contracting officers are overburdened, innovation in applying technology to these important use cases will be discouraged.
The federal acquisition system is a collection of rules and regulations interpreted by agencies and their professional contracting officials. The Federal Acquisition Regulation (FAR), along with its supplements, is the sheet music for a bureaucratic choir and orchestra in which the contracting officer is the conductor. Government procurement regulation is complex by design, created to ensure public confidence in the system's fairness. The FAR has mostly met that goal, though at hidden and hard-to-observe opportunity costs in performance forgone.
FAR Part 1 directs contracting officers to exercise sound business judgment on behalf of taxpayers. In practice, however, that discretion can be overwhelmed by cultural norms of compliance with complex regulations.
Many contracting officers recognize that regulations can be a barrier, discouraging companies from pursuing technological innovation and collaborating with the government. That recognition has spurred a wave of procurement innovation by contracting officers seeking to capture new opportunities emerging in a changing market.
Sensing a similar need in the United States, Congress expanded the 60-year-old Other Transaction Authority (OTA), which exempts agreements from FAR rules to allow experimentation with new technologies such as AI. In recent years, OTAs have been used extensively, most notably by the U.S. Department of Defense.
These authorities have been essential to advancing the art of procuring AI through the Defense Department's Joint Artificial Intelligence Center (JAIC). They also demand more of contracting officers, who must apply sound business acumen rather than the rules embedded in the FAR when crafting OTAs.
The JAIC has created Tradewind, an AI contracting "golf course" where players can tee off with the business acumen and freedom of OTAs. Tradewind can be used across the federal government to facilitate faster, more efficient AI acquisition.
Responsible AI (RAI) is a new collection of AI-specific principles that forms part of the JAIC's enterprisewide AI initiative. The Defense Department's commitment to RAI begins with the top leadership of each department.
The Department of Defense's new Chief Digital and AI Office is the central point of execution for its AI strategy. RAI principles guide the development and accelerate the adoption of AI through innovative acquisition approaches. The new acquisition pathway is built on OTA and related authorities and includes an infrastructure of contract vehicles, such as the test and evaluation support described by the Defense Innovation Unit. Contracts based on challenge statements can be awarded in as little as 30 to 60 days, allowing agencies to quickly develop and capitalize on new techniques.
Many of the most important civilian agency missions, however, revolve around allocating resources.
U.S. Department of Health and Human Services missions, for example, must guard against illegal socioeconomic bias. The National Institute of Standards and Technology (NIST) offers guidance that helps procurement teams avoid such bias, and its approach is notable for addressing data, testing, and evaluation as well as human factors. NIST analyzes prospective standards to identify and prevent socioeconomic bias in the deployment of AI solutions.
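As one illustration of the kind of test-and-evaluation check a procurement team might run before deploying an AI solution, the sketch below compares selection rates across groups and flags large gaps using an adaptation of the "four-fifths" rule of thumb from employment contexts. The threshold, field names, and data are hypothetical assumptions, and NIST's bias guidance addresses data, testing, evaluation, and human factors well beyond this single statistic.

```python
# Hypothetical pre-award check: compare a vendor model's selection rates by group
# and flag any group falling below a chosen fraction of the highest rate.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, with selected in {0, 1}."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        selected[group] += outcome
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the highest
    group's rate (an adaptation of the 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}, rates

# Illustrative model outputs keyed by group (not real data):
sample = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 35 + [("B", 0)] * 65
flags, rates = disparate_impact_flags(sample)
print(rates)  # {'A': 0.6, 'B': 0.35}
print(flags)  # {'A': False, 'B': True} -> group B falls below 80% of group A's rate
```

A failed check like this would not by itself prove illegal bias, but it gives contracting and program officials a concrete, repeatable question to put to vendors during evaluation.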