
Will Trump's New Artificial Intelligence Initiative Make The U.S. The World Leader In AI?


The tech world got a surprise on Monday when a senior Trump administration official announced during a telephone briefing that the President would sign an executive order creating an American AI Initiative, designed to dedicate resources and funnel investment into research on artificial intelligence (AI).

The order, titled Accelerating America’s Leadership in Artificial Intelligence, “will direct agencies to prioritize AI investments in research and development, increase access to federal data and models for that research and prepare workers to adapt to the era of AI.” While funding for these innovations is an obvious concern, no announcement has been made about the specific financial resources that will be available to the new program.

Aside from how it will be paid for, we also lack information on how the government intends to structure or restructure resources, whom exactly it intends to call on for this effort (other than “federal agencies”), and how soon we should expect to see things take shape. Of course, Congress will ultimately decide how much money the program gets.

The order has five “pillars,” according to the unnamed official:

1) Research and development (which will ask agencies to increase funding for and specifically report on AI research)

2) Infrastructure (which will encourage information sharing, though potentially run up against issues of privacy)

3) Governance (which will have to be drafted by government agencies and, we can only hope, other civic and academic groups, but at least aims to ensure the safe and ethical use of AI)

4) Workforce (which will support job training and continuing education in computer science)

5) International engagement (which will require collaborating on projects with other countries, without giving them the technological edge the U.S. seeks)

Beyond this general framework, we know very little about what will actually happen, though the government plans to release more information over the next six months.

While for many AI conjures images of Skynet or other sci-fi fears of sentient machines threatening to eliminate or enslave humanity, the term is used quite differently in tech circles. AI is simply the all-encompassing term for machines that can intelligently solve problems or complete tasks based on a set of stipulated rules (or algorithms). These algorithms are written by humans, and so carry their authors’ issues and biases, but the machines don’t need human intervention to go about their work. AI is used to recommend your next television show or new music; it has also been used more problematically in predictive policing and criminal sentencing, for example. But still, it’s not the Terminator.
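
To make the “stipulated rules” idea concrete, here is a minimal, purely illustrative sketch (the show titles and rules are invented, not drawn from any real recommender): a human writes every rule in advance, and the machine simply applies them without further intervention.

```python
# A toy rule-based "recommender": every rule below is hand-written by a human,
# and the program just applies them -- no learning and nothing resembling sentience.

def recommend_show(watch_history):
    """Suggest a next show using fixed, human-authored rules (hypothetical titles)."""
    if "space documentary" in watch_history:
        return "another space documentary"
    if "crime drama" in watch_history:
        return "a new crime drama"
    return "a popular show most viewers finish"  # fallback rule, also chosen by a human

print(recommend_show(["crime drama", "cooking show"]))  # -> "a new crime drama"
```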

Two even more sophisticated subsets of AI are machine learning and deep learning. Machine learning aims to enable machines to make accurate predictions based on data provided by programmers. Deep learning, inspired by the way the human brain learns and processes information and patterns, is currently the pinnacle of what we’ve achieved in AI; its goal is to enable machines to label and categorize people and items in order to make decisions about them. But the more we let machines make these decisions with minimal human intervention, the more humans are kept in the dark about the “decision-making” processes the machines employ. These are the fields that take AI to the next level and that an ambitious new federal program would likely concentrate on. Still, machines that use deep learning are very different from self-aware machines.
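
As a rough, hypothetical illustration of that difference (the viewing data and the threshold rule are invented for this sketch, not taken from any real system): in machine learning, the decision rule is derived from example data supplied by the programmer rather than written out by hand.

```python
# Toy "machine learning": the program derives a simple threshold from labeled
# examples instead of applying a rule a human wrote directly. Illustrative only.

# Labeled examples: (hours watched per week, did the viewer keep their subscription?)
examples = [(1, False), (2, False), (3, False), (8, True), (10, True), (12, True)]

# "Training": place a threshold midway between the two groups' average hours.
kept = [hours for hours, label in examples if label]
left = [hours for hours, label in examples if not label]
threshold = (sum(kept) / len(kept) + sum(left) / len(left)) / 2

def predict_keeps_subscription(hours_per_week):
    """Predict using the threshold learned from the data above."""
    return hours_per_week >= threshold

print(threshold)                      # the rule the machine derived: 6.0
print(predict_keeps_subscription(9))  # prediction for a new, unseen viewer: True
```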

The U.S. is still behind the curve when it comes to a federal AI strategy; in fact, it will be the 19th country to announce a formal strategy for the future of AI. Canada was the first, in March 2017, and seventeen countries have followed suit since, including France, Mexico, the UAE, and China.

There was a special sense of urgency on the part of the U.S. government once it became clear that China was set to overtake the U.S. in AI innovation. The American AI Initiative announcement comes almost exactly a year after The New York Times published a story on China’s plan to become a world leader in AI by the year 2030, which technologists took as “a direct challenge to America’s lead in arguably the most important tech research to come along in decades.” China’s 28-page (in translation) document laid out an aggressive plan to spark innovation and pump billions into new breakthroughs, though it was similar in many ways to a report the Obama administration had released about the future of AI back in 2016.

Of course, research on AI, machine learning, and deep learning is going on all around the country in both industry and academia. Kate Crawford, co-director of AI Now, told Science that while the executive order “correctly highlights AI as a major priority for U.S. policymaking,” she remains concerned about its apparent lack of input from academic researchers and civic leaders, as well as the administration’s “troubling track record” on privacy and civil liberties. But the truth is, it’s still unclear what kind of input went into the Trump administration’s plan.

Still, many are applauding the effort. Virginia Dignum, Professor of Social and Ethical Artificial Intelligence in Umeå University’s Department of Computer Science, told me she thought “the U.S. government has been too quiet about AI and its societal impact” since the efforts of the Obama administration, and that the research is too important to ignore, given that it “will affect all people and all industries” and that “world leaders need to take their role and responsibility seriously.”

She continued:

It is also good to see that the U.S.'s view seems to approach Europe's in terms of analysing the need and scope of regulation, the availability of open (government) data, and the call for wide participation. I hope that this means Europe and the U.S. can collaborate in their efforts to ensure responsible development and use of AI.

Of course, collaboration could be tricky when so many governments think of technological innovation as a race to the top, with a winner-take-all sense of success.

Dignum also cautioned against “this warlike narrative about an 'AI race,’” noting that while “massive investments are crucial,” “there is not ONE finish line and also there are many routes to progress in AI.” Instead of seeing world leaders create a narrative of “more and bigger data and more and bigger computational power” as “the only way to realise the potential of AI,” she hopes to see investments in “environmentally sustainable, and smarter, approaches to AI.”

Another important consideration – the most important, some might argue – is how we can be sure that ethical standards and policy guidelines keep pace with this planned growth. Two of the administration’s proposed pillars deal with ethical issues such as privacy and potential job loss, but the announcement is otherwise vague about how the program plans to ensure that responsible development and use of AI remain central throughout the process.

This news also comes on the heels of concerns from companies such as Google about the government’s use of privately developed AI technology, especially in warfare. Google ended its collaboration with the Department of Defense on Project Maven last year after thousands of its own employees signed a petition against the military use of their work. However, tech giants like Amazon and Microsoft have pledged to continue working with the government, and specifically the Department of Defense, as they see fit.

While the role of universities in the new plan has yet to be determined, those at the forefront of AI research are eager to see what’s ahead and what their roles may be. When contacted for comment, Fei-Fei Li, Co-Director of Stanford University's Human-Centered AI Institute (who also spent a sabbatical year as Google Cloud’s Chief Scientist of AI/ML and Vice President), said:

At Stanford, we support the responsible development and implementation of AI - especially when it ensures AI's impact on communities around the world is safe, fair and empowering. Building a future that benefits everyone will require cooperation on a truly historic scale, and that includes significant government investment. We look forward to hearing more details about the Trump Administration's plans.

It’s clear that no matter what shape the Trump administration’s plans take, they will have to answer to AI researchers and advocates around the world who are, more loudly than ever, calling for advances that not only reach the masses but ensure their fair and equitable treatment. While the order calls on agencies to “protect civil liberties, privacy, and American values” in applying the new technology, AI simply can’t be limited by geographic boundaries, and global cooperation will be crucial.