The Fifth Column Forum

Posts: 19900
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 20-11-2019 at 01:52 PM
VIEWPOINT: Trustworthy AI - Why Does It Matter?


By Nathan Michael


All technology demands trust, especially technology that is new or unprecedented. We’ve seen it across time with disruptive technologies: the combustion engine, the airplane and the automobile all required some element of trust in order for society to adopt and embrace the new system. Trust that the technology would be reliable. Trust that the technology would be safe. Trust that the technology would be used appropriately and contribute to the betterment of society.

Such is the case for artificial intelligence and robotics. From a science and engineering perspective, artificially intelligent robotic systems are simply engineered systems. No different than a car or a bridge, these systems are based on the theory and underlying principles of math and science. Therefore, like all other engineered systems, AI systems must adhere to certain performance expectations for us, as humans, to begin to trust them. Trust is about the system operating as expected, in a consistent manner, time and time again.

The more that the system is perceived to reliably work as expected, the more trust we build in it.

Conversely, if the system starts behaving erratically or failing unexpectedly, we lose trust. That response makes sense and feels obvious. What is more nuanced about trust in AI systems is that if the system works as designed, but in a manner that does not align with human expectations, we will tend to distrust it. This implies that trust in AI requires not only a system that performs as designed with high reliability, but also a system that human observers can understand.

The role of human expectations in the trust of artificial intelligence stems from the fact that the human understanding of correct performance is not always technically right.

This is because human expectations, intuition and understanding do not always translate to optimal performance. People tend to optimize their behavior to conserve effort, an innate biological drive to conserve energy, whereas artificially intelligent systems are engineered to optimize their behavior against explicit performance criteria. It follows that when an AI system is built to optimize its performance for something other than the conservation of energy, such as maximizing speed or accuracy, misalignments arise between the robot’s behavior and what a person would consider the correct action.
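The misalignment is easy to make concrete with made-up numbers: score the same candidate maneuvers under two different objectives and the "correct" action flips. The maneuvers and costs below are purely illustrative, not from any real system.

```python
# Illustrative only: three candidate maneuvers scored under two objectives.
# The "best" action depends on the optimization criterion, which is the
# human-robot misalignment the article describes.
maneuvers = {
    # name: (time in seconds, energy in joules) -- invented numbers
    "direct":   (10.0, 50.0),
    "cautious": (18.0, 20.0),
    "sweeping": (14.0, 35.0),
}

fastest = min(maneuvers, key=lambda m: maneuvers[m][0])     # minimize time
thriftiest = min(maneuvers, key=lambda m: maneuvers[m][1])  # minimize energy

print(fastest, thriftiest)  # different answers for the same situation
```

A robot tuned to minimize time picks the "direct" maneuver, while a person instinctively conserving effort would expect the "cautious" one, so each judges the other's choice wrong.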

The idea of developing AI systems that humans can understand, and therefore trust, is captured by the concept of “Explainable AI.” Explainable AI, sometimes called “Interpretable AI” or “Transparent AI,” refers simply to AI technology that can be easily understood, such that a human observer can interpret why the system arrived at a specific decision. Establishing human-operator expectations is particularly challenging when working with resilient intelligent robotic systems, because these technologies are built to introspect, adapt and evolve, yielding increasingly superior performance over time. Thus, to develop AI systems humans can understand, we must consider how to enable the operator to work with the system and to understand how it is improving through experience.

This concept is addressed through the development of interfaces that, within the context of artificial intelligence, refer to the development of capabilities that enable machines to engage effectively with human operators. Effective interfaces not only help humans understand the behavior of robots, but also allow for a robot to account for an operator’s needs.

Interfaces allow humans to build trust in robotic systems — and for human interaction with the robot to be personalized or guided, or for the robot to augment the user’s ability.

The significance of effective interfaces becomes evident when considering why it is important to build trust in AI systems and how increased trust will translate to increased reliance on robotic systems. With increased reliance on AI, humans will be able to offload lower-level tasks to these systems in order to focus on more important, higher-level processes. In doing so, artificial intelligence can and will be used to amplify, augment and enhance human ability.

Development of these interfaces is already underway. Today, we are developing robots that can create models that allow them to intuit some of a user’s intentions. These models make it possible for humans to engage with the robot and to achieve much higher levels of performance with less effort. When the operator recognizes this behavior, the operator starts to grow more confident that the robot “gets” them, that the robot understands what it is that they want to achieve and is working with them to achieve a common objective.

The concept of acting as a team evolves, rather than the operator simply utilizing the robot as a tool.

This relationship becomes particularly important as we consider multi-robot systems, swarming and teaming. A human operating a large group of robots will struggle to perceive and understand everything that is happening while several robots simultaneously perform complex actions. Given the complexity of the operation, an operator may make a mistake, such as asking the system to perform a task counter to what they are actually trying to achieve. A system that models the user’s intent can improve and augment the overall performance.

When an artificially intelligent system models the intent behind an operator’s desired task, it becomes possible for the system to anticipate, mitigate and adapt in order to overcome user errors, including problematic, unsafe and suboptimal requests. This modeling requires no great insight by the system into what the operator wants, only insight into how the operator has engaged in the past.
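The article does not describe Shield AI’s actual method. As a minimal sketch of the idea, a model can learn from how the operator has engaged in the past and flag requests that deviate sharply from that pattern; the command names and threshold below are invented for illustration.

```python
from collections import Counter, defaultdict

class IntentModel:
    """Toy first-order model of operator behavior: learns which command
    tends to follow which, and flags requests that deviate sharply from
    the operator's past patterns (a possible user error)."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, prev_cmd, next_cmd):
        self.transitions[prev_cmd][next_cmd] += 1

    def likelihood(self, prev_cmd, cmd):
        counts = self.transitions[prev_cmd]
        total = sum(counts.values())
        return counts[cmd] / total if total else 0.0

    def is_surprising(self, prev_cmd, cmd, threshold=0.1):
        # A request far outside learned patterns is worth confirming
        # with the operator before execution.
        return self.likelihood(prev_cmd, cmd) < threshold

model = IntentModel()
history = ["takeoff", "survey", "return", "takeoff", "survey", "return"]
for prev, nxt in zip(history, history[1:]):
    model.observe(prev, nxt)

print(model.is_surprising("takeoff", "survey"))  # False: matches habit
print(model.is_surprising("takeoff", "land"))    # True: never observed
```

Real intent models are far richer, but the shape is the same: the system needs only the operator’s past engagement, not privileged knowledge of their goals.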

It’s interesting to observe how these human-robot interactions affect trust: when humans interact with systems they understand, and with systems built to model their intent, those characteristics make a tremendous difference. It’s the difference between a person walking up and engaging with a system immediately, and a person requiring extensive training to learn how to interact with that system and its nuances.

When the system adapts to the experience of the individual, it enables anyone to engage with it, having never worked with it before, and to very quickly perform as an expert. That ability to amplify the expertise of the operator is another mechanism by which trust is earned.

One of the greatest challenges with artificial intelligence is that there is an overwhelming impression that magic underlies the system. But it is not magic, it’s mathematics.

What is being accomplished by AI systems is exciting, but it is also simply theory and fundamentals and engineering. As the development of AI progresses, we will see, more and more, the role of trust in this technology. Trust will play a role in everything from the establishment of reliability standards to the improvement of society’s understanding of the technology to the adoption of AI products in our day-to-day lives to discussions of the ethical considerations.

Every member of society has a responsibility to contribute to this discussion: industry, academia, researchers and the general public all have voices to be heard in deciding not only what the future of AI could look like, but what it should look like.

Nathan Michael is chief technology officer of Shield AI.


[*] posted on 21-11-2019 at 05:12 PM

The problem with the Army’s ‘Go’ metaphor — besides being 2,500 years old

By: Kelsey D. Atherton

Go is a way to think about territory and maneuver space. Like all simulations and abstracted rulesets, it has deep limitations. (Fábio Emilio Costa via Wikimedia Commons (CC BY-SA 2.0))

When it comes to plotting the future of artificial intelligence, the military has a metaphor problem. Not a metaphorical problem, but a literal one based on the language used to describe both the style and the structure of the AI threat from abroad.
The problem, narrowly put, is an over-reliance on the board game “Go” as a metaphor for China’s approach to AI.

The board game Go, first played in ancient China at least 2,500 years ago, is about positioning identical pieces on a vast board, with the aim of supporting allies and capturing rivals. Just as chess can be seen as a distillation of combined arms on the battlefield, Go’s strength is in how it simulates a longer campaign over territory.

Also, like chess, Go has become a popular way to demonstrate the strength of AI.

The Google-funded AlphaGo project beat a professional human player without handicap for the first time in 2015, and beat a world-class champion 4-1 in a five-game match in 2016. That AlphaGo took longer to create than the chess-playing Deep Blue speaks mostly to the complexity of possible board states in the respective games; that Go has 361 points while chess has 64 squares is no small factor in this.
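The scale gap is easy to make concrete. Each of Go’s 361 points can be empty, black or white, giving a loose upper bound of 3^361 board configurations, while published state-space estimates for chess sit roughly in the 10^43 to 10^50 range:

```python
from math import log10

# Each of Go's 361 points is empty, black, or white: a loose upper
# bound on board configurations (most are not legal positions).
go_upper_bound = 3 ** 361
print(f"Go upper bound: about 10^{int(log10(go_upper_bound))}")  # ~10^172

# Chess state-space estimates in the literature are around 10^43-10^50;
# even this loose Go bound dwarfs them by over a hundred orders of magnitude.
```

That disparity, far more than any strategic subtlety, is why a Go engine was a harder milestone than a chess engine.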

For AI researchers, building machines to master games with fixed pieces in a fixed space is a useful way to demonstrate learning in constrained problem sets. But there is little in the overall study of these games that informs strategic thinking at anything more than a rudimentary level, and that is where the problem with the metaphor could lead to bad policy.

At the 2019 Association of the United States Army symposium on AI and Autonomy in Detroit, multiple speakers on Nov. 20 referenced Go as a way to understand the actions of China, especially in light of strategic competition on the international stage. Acting Under Secretary of the Army James McPherson discussed Go as an insight into China’s strategic thinking in his keynote, and that sentiment was echoed later by Brandon Tseng, Shield AI’s chief operating officer.

“The Chinese are playing Go, which is about surrounding, taking more territory and surrounding your adversary,” said Tseng, speaking on a panel about AI and autonomous capabilities in support of competition.

Tseng went on to describe the role of AI as an answer to the problem of remotely piloted vehicles in denied environments. Finding a way for robots to move around electromagnetically denied environments is an undeniable part of the drive behind modern military AI and autonomy.

But we don’t need a Go board to explain that, or to cling to the misunderstood strategic thinking of the past. Thinking that Go itself will unlock China’s strategy is a line pushed by figures ranging from former House Speaker Newt Gingrich to former Secretary of State Henry Kissinger. The notion that the United States is playing chess (or, less charitably, checkers) while its rivals play Go has been expressed by think tanks, but it’s hardly a new idea. The notion that Go itself informed the strategy of rivals to U.S. power was the subject of a book published in 1969, as an attempt to understand how American forces were unable to secure victory in Vietnam.

In the intervening decades since Vietnam, humans and algorithms have gotten better at playing Go, but that narrow AI application has not translated into strategic insight. Nor should it. What is compelling about AI for maneuvering is not an unrelated AI in an unrelated field tackling a new game. What is compelling is the way in which AI can give opportunities to commanders on battlefields, and for that, there’s a whole host of games to study instead.

If the Army or industry wanted to, it could look instead at the limited insights from how AI is tackling StarCraft. But when it makes that leap, it should see a narrow artificial intelligence processing a game, not a game offering a window into a whole strategic outlook.


[*] posted on 28-11-2019 at 06:36 PM

The Pentagon’s AI lead needs a cloud integrator

By: Andrew Eversden

The Joint Artificial Intelligence Center is looking to industry to establish a hybrid, multi-cloud environment. (Zapp2Photo/Getty Images)

The Pentagon’s lead artificial intelligence office is seeking a cloud integrator to help launch its hybrid, multi-cloud environment.

The Defense Information Systems Agency released two sources sought solicitations Nov. 22 on behalf of the Defense Department’s Joint Artificial Intelligence Center, seeking small and large businesses that can provide the JAIC with system engineering and system integration services during the deployment and maintenance of the hybrid, multi-cloud environment.

The cloud environment is an important piece of JAIC’s Joint Common Foundation, an enterprisewide AI platform under development by JAIC. The foundation will provide tools, shared data, frameworks and computing capability to components across the Pentagon.

JAIC is responsible for accelerating, scaling and synchronizing AI efforts across the Pentagon.

“The concept is to provide AI project teams with a set of established processes, tools and delivery methodologies that can facilitate the delivery of mission capabilities and integration into operational mission capabilities,” the solicitation read.

Any company chosen should expect to work within Microsoft’s cloud environment, as the tech giant recently won the Pentagon’s enterprise cloud contract known as the Joint Enterprise Defense Infrastructure, or JEDI.

Lt. Gen. Jack Shanahan, head of the JAIC, has repeatedly asserted that the JAIC would be further along in its cloud capabilities if it had an enterprise cloud. The JEDI effort has been delayed by more than six months due to several protests.

According to the solicitation, the request for quote is expected to be released in the late second quarter of fiscal 2020, with an award in the late fourth quarter of the fiscal year.


[*] posted on 17-12-2019 at 11:16 AM

Artificial Intelligence to be Used for Charting, Intel Collection

(Source: US Department of Defense; issued Dec. 13, 2019)

Nautical, terrain and aeronautical charting is vital to the Defense Department mission. This job, along with collecting intelligence, falls to the National Geospatial-Intelligence Agency.

Mark D. Andress, NGA's chief information officer, and Nand Mulchandani, chief technology officer from DOD’s Joint Artificial Intelligence Center, spoke yesterday at the AFCEA International NOVA-sponsored 18th Annual Air Force Information Technology Day in Washington.

The reason charts are so vital is that they enable safe and precise navigation, Andress said. They are also used for such things as enemy surveillance and targeting, as well as precision navigation and timing.

This effort involves a lot of data collection and analysis, which is processed and shared through the unclassified, secret or top secret networks, he said, noting that AI could assist them in this effort.

The AI piece would involve writing smart algorithms that could assist data analysts and support leaders’ decision making, Andress said.

He added that the value of AI is that it will give analysts more time to think critically and advise policymakers while AI processes lower-order analysis that humans now do.

There are several challenges to bringing AI into NGA, he observed.

One challenge is that networks handle a large volume of data that includes text, photos and livestream. The video streaming piece is especially challenging for AI because it's so complex, he said.

Andress used the example of an airman using positioning, navigation and timing, flying over difficult terrain at great speed and targeting an enemy. "An algorithm used for AI decision making that is 74% efficient is not one that will be put into production to certify geolocation because that's not good enough," he said.

Another problem area is that NGA inherited a large network architecture from other agencies that merged into NGA. They include these Defense Mapping Agency organizations:
-- DMA Hydrographic Center
-- DMA Topographic Center
-- DMA Hydrographic/Topographic Center
-- DMA Aerospace Center

The networks of these organizations were created in the 1990s and are vertically designed, he said, meaning they are not easily interconnected. That poses a challenge, because AI would need to process information from all of these networks to be useful.

Next, all of these networks need to continuously run since DOD operates worldwide 24/7, he said. Pausing the network to test AI would be disruptive.

Therefore, Andress said AI prototype testing is done in pilots in isolated network environments.

However, the problem with doing the testing in isolation is that the environments don’t represent the real world in which the systems will be used, he said.

Nonetheless, the testing, in partnership with industry, has been useful in revealing holes and problems that might prevent AI scalability.

Lastly, the acceptance of AI will require a cultural shift in the agency. NGA personnel need to be able to trust the algorithms. He said pilots and experimentation will help them gain that trust and confidence.

To sum up, Andress said AI will eventually become a useful tool for NGA, but incorporating it will take time. He said the JAIC will play a central role in helping the agency get there.

Mulchandani said the JAIC was set up last year to be DOD's coordinating center to help scale AI.

Using AI for things like health records and personnel matters is a lot easier than writing algorithms for things that NGA does, he admitted, adding that eventually it will get done.

Mulchandani said last year, when he came to DOD from Silicon Valley, the biggest shock was having funding for work one day and then getting funding pulled the next due to continuing resolutions. He said legislators need to fix that so that AI projects that are vital to national security are not disrupted.



[*] posted on 19-12-2019 at 01:22 PM

Pentagon's Ambitious Vision and Strategy for AI Not Yet Backed by Sufficient Visibility or Resources

(Source: Rand Corp.; issued Dec. 17, 2019)

The U.S. Department of Defense has articulated an ambitious vision and strategy for artificial intelligence (AI) with the Joint Artificial Intelligence Center as the focal point, but the DoD has yet to provide the JAIC with the visibility, authorities and resource commitments needed to scale AI and its impact across the department, according to a new RAND Corporation report.

The DoD's AI strategy also lacks baselines and metrics to meaningfully assess progress, researchers concluded.

“The DoD recognizes that AI could be a game-changer and has set up organizational structures focusing on AI,” said Danielle C. Tarraf, lead author of the report and a senior information scientist at RAND, a nonprofit, nonpartisan research organization. “But currently the JAIC doesn't have the authorities or resources it needs to carry out its mission. The authorities and resources of the AI organizations within the Services are also unclear.”

If the Pentagon wants to get the maximum benefit from artificial intelligence-enhanced systems, it will need to improve its posture along multiple dimensions, according to the report. The study assesses how well the Defense Department is positioned to build or acquire, test and sustain, on a large scale, technologies falling under the broad umbrella of AI.

The study frames its assessment in terms of three categories of DoD AI applications: enterprise AI, such as AI-enabled financial or personnel management systems; operational AI, such as AI-enabled targeting capabilities that might be embedded within an air defense system such as PATRIOT; and mission-support AI applications, such as Project Maven, which aims to use machine learning to assist humans in analyzing large quantities of imagery from full-motion video data collected by drones.

The field is evolving quickly, with the algorithms driving the current push in AI optimized for commercial rather than Defense Department use. Moreover, the current state of AI verification, validation and testing is nowhere close to ensuring the performance and safety of AI applications, particularly where safety-critical systems are concerned, researchers found.

“Many different technologies underpin AI,” Tarraf said. “The current excitement, and hype, are due to leap-ahead advances in Deep Learning approaches. However, these approaches remain brittle and artisanal—they are not ready yet for prime time in safety-critical systems.”

The department lacks clear mechanisms for growing, tracking and cultivating personnel who have AI skills, even as it faces a tight job market. The department also faces multiple data challenges, including the lack of data. “The success of Deep Learning is currently predicated on the availability of large, labeled data sets. Pursuing AI on a department-wide scale will require DoD to fundamentally transform its culture into a data-enabled one,” Tarraf said.

Tarraf and her colleagues offer a set of 11 strategic and tactical recommendations. Among them: The department should adapt AI governance structures that align authorities and resources with the mission of scaling AI. Also, the JAIC should develop a five-year strategic roadmap—backed by baseline measurements—to execute the mission of scaling AI and its impact.

DoD also should advance the science and practice of verification and testing of AI systems, working in close partnership with industry and academia. The department also should recognize data as critical resources, continue to create practices for their collection and curation, and increase sharing while resolving issues in protecting the data after sharing and during analysis and use.

The report recommends that DoD pursue opportunities to leverage new advances in AI, with particular attention to verification, validation, testing and evaluation, and in line with ethical principles. However, it is important for the department to maintain realistic expectations for both performance and timelines in going from demonstrations of the art of the possible to deployments at scale, researchers said.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous.

The full report (187 PDF pages) is available on the RAND Corp. website.



[*] posted on 27-12-2019 at 12:52 PM

JAIC outlines strategic cohesiveness, tactical capabilities as near-term objectives

Carlo Munoz, Washington, DC - Jane's International Defence Review

26 December 2019

The Department of Defense’s (DoD’s) Joint Artificial Intelligence Centre (JAIC), tasked with harnessing future artificial intelligence (AI) applications to support US national security priorities, is aiming to bring cohesiveness to the department’s approach to AI integration while rapidly pushing the technologies down to the operational and tactical level, a senior defence industry official told Jane’s.

These two priorities were among several outlined by senior JAIC officials as the centre’s prime objectives for the coming fiscal year, said Graham Gilmer, a principal at Booz Allen Hamilton focusing on AI, machine learning, and high-performance computing. Those objectives were laid out by centre officials during a closed-door industry day hosted by JAIC in November, which included approximately 300 defence industry and information technology companies, as well as nearly 100 “government representatives” from various US federal agencies, Gilmer said in a 13 December interview.

(141 of 983 words)


[*] posted on 11-1-2020 at 05:55 PM

Algorithmic Warfare: Interview with NDIA’s Senior Fellow for AI


By Yasmin Tadjdeh


The National Defense Industrial Association recently tapped Shane Shaneman, the strategic director of national security and defense at Carnegie Mellon University, to be its new senior fellow for artificial intelligence. He spoke with National Defense to discuss his thoughts on AI and his goals as senior fellow.

Shaneman’s views are his and not necessarily the views of Carnegie Mellon. This interview has been edited for length and clarity.

How and when did you start working on artificial intelligence technologies?

It really started when I transitioned from the Air Force Research Lab into Carnegie Mellon back in the summer of 2016. ... The role that I was playing for the Air Force Research Lab was basically helping to connect some of their research within cross-domain solutions to the operational community and the combatant commands.

Later, I learned more about the opportunity with Carnegie Mellon and, given the pace of innovation that was occurring with machine learning and artificial intelligence, I saw the immediate linkage that is going to be needed to be able to turn around and leverage those technologies, to both enhance our national security as well as to maintain our technological superiority.

Since you joined Carnegie Mellon, how have you seen AI transform?

It’s been fairly tremendous. … With some of the current advances that have taken place in parallelization, machine learning is now 100 times faster than it was just two years ago. And you’ve seen continued evolutions of both the algorithms and the framework and also new styles of machine learning. Of course, going from both the traditional supervised learning into new areas of both unsupervised as well as reinforcement learning.

At Carnegie Mellon, what does your portfolio look like?

My current focus is basically to help link up researchers with requirements across national security and defense and to maximize the value and impact that they have for the United States.

As it relates to defense and national security, what is the promise of AI?

One of the first and key areas that we’re focused on is how it can augment the warfighters. ... If you look at tasks that require very tremendous and tedious focus and involvement from human operators, the ability to … use machine learning and AI as a means to turn around and automate some of those functions and provide additional insights to the warfighter to aid decision making, or to enable them to actually shift what they’re spending their time on to something that’s higher value or more strategically important.

What are the biggest issues that are slowing down innovation in AI development?

There are cultural changes that we have to look at … such as the concept of algorithmic agility — the algorithms are going to continue to evolve. So, this is going to be an ongoing process of how do we look at the newest algorithms and integrate them — not once or twice a year, but really getting to a point where almost we’re doing that multiple times a day.

Algorithmic agility … is not just getting an algorithm and implementing it and going, “Oh, we’re done.” This is going to be something that becomes part of our culture.

Do you think the Defense Department is doing enough with industry and academia to better leverage artificial intelligence?

I’ve been involved with the Department of Defense since I graduated out of ROTC back in the early ‘90s, and I’m seeing the Department of Defense do some things that are truly very, very innovative through the Defense Innovation Unit, the Defense Digital Service and things that have been stood up to look at how do we basically evolve and embrace innovation as part of our overall processes and procedures.

The challenge that I think that we’re going to see is how do we innovate for impact and how do we turn around and look at transitioning [AI technology]. … We’re definitely putting a large focus on operational prototyping, but we have to be able to convert those and sustain those as part of our programs of record. And that really becomes hard because if you think about it, even though we … began focusing on software engineering back in the ‘80s and ‘90s, we’re just now getting used to — from an acquisition and sustainment standpoint — being able to separate out systems as hardware and software and the different processes that we go through with that. But now the world’s changed again and it is no longer just hardware and software, it is hardware, software, data and algorithms.

What are some of your goals as NDIA’s AI senior fellow?

The senior fellow role is really looking at … from a strategic level … what are those major changes in areas that we need to drive and influence, especially from a policy standpoint?

One of the key areas that we’re looking at is how do we take some of the areas that NDIA has been very, very successful in — and I’ll highlight the Special Operations Forces Industry Conference and the impact and the role that it plays for the special operations community — and leverage a similar type of an approach around artificial intelligence for the Department of Defense and contribute to the mission — whether it’s the Joint AI Center or the DoD writ large.

Another area is looking at this concept of crafting the new “Arsenal of Democracy” as we look at artificial intelligence, and that’s a very nebulous concept of we’ve got tons of startups and entrepreneurs that are coming into the area — how do we tap into all of that capability and entice them as part of the defense industrial base? … We’ve got to understand that this is not the 1940s and ‘50s, that this is a global marketplace.


[*] posted on 14-1-2020 at 09:53 PM

14 January 2020

Raytheon starts work on machine learning technology development

Raytheon has started work on the development of machine learning technology in order to create trust between human operators and artificial intelligence (AI) systems.

The company is developing the technology under a $6m contract awarded by the Defense Advanced Research Projects Agency for the Competency Aware Machine Learning programme.

As part of the deal, Raytheon will develop new systems that can communicate information about their own competence.

Raytheon BBN Technologies principal investigator for CAML Ilana Heintz said: “The CAML system turns tools into partners. It will understand the conditions where it makes decisions and communicate the reasons for those decisions.”

The system makes use of a process similar to a video game: instead of rules, it is given a list of choices and a goal. It will repeatedly play the game and learn the most effective way to achieve that goal.

It will also record and explain the conditions and strategies used to come up with successful outcomes.
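Raytheon has not published CAML’s internals, but the loop described above (choices plus a goal, repeated play, learning what works) is the standard shape of reinforcement learning. A tabular Q-learning toy makes it concrete; the corridor world, rewards and visit-count record below are illustrative assumptions, not the real system.

```python
from collections import defaultdict

# Toy "game": a corridor of states 0..4 with the goal at 4.
# Actions move left (-1) or right (+1); reaching the goal pays 1.
GOAL = 4
ACTIONS = (-1, +1)

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Optimistic initial values push the agent to try every action at least
# once, so plain greedy selection still explores the whole corridor.
Q = defaultdict(lambda: 1.0)
visits = defaultdict(int)   # crude record of where the system has experience

for _ in range(200):                     # repeatedly play the game
    s = 0
    for _ in range(50):
        visits[s] += 1
        a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q toward reward plus discounted lookahead.
        target = r if done else r + 0.9 * max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += 0.5 * (target - Q[(s, a)])
        s = s2
        if done:
            break

# The learned strategy: the better action from each non-goal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
print(visits[0], visits[3])  # it can also report where its experience is thin
```

The agent converges on "move right everywhere," and the visit counts are a crude stand-in for the competence reporting CAML aims at: the system knows which conditions it has actually experienced.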

Heintz added: “People need to understand an autonomous system’s skills and limitations to trust it with critical decisions.”

After the system develops these skills, the team will apply it to a simulated search-and-rescue mission.

Users will create the conditions surrounding the mission, while the system makes recommendations and tells them how competent it is under those particular conditions.

Last December, Raytheon introduced a military training simulator as a proposed solution to meet the requirements of the US Army’s Synthetic Training Environment.

[*] posted on 20-1-2020 at 07:30 PM

20 January 2020 Analysis

Could China dominate the AI arms race?

By Harry Lye

Beijing is rapidly gaining an edge in the development of military artificial intelligence (AI) technology by leveraging its control over domestic research facilities. Harry Lye finds out what the country’s progress means for rivals such as the US, and why winning the AI arms race matters.

Image: Shutterstock.

As the world reckons with the fact that warfare is moving to a hybrid domain, where space and cyberspace become increasingly important, the race to apply artificial intelligence to military technology is in full swing. Whoever achieves AI proliferation first will be leagues ahead of the competition, adversary or ally.

At a session of the Politburo in 2018, Chinese President Xi Jinping said China must “ensure that our country marches in the front ranks where it comes to theoretical research in this important area of AI, and occupies the high ground in critical and core technologies.”

US Secretary of Defence Mark Esper referenced this statement in his speech at the National Security Commission on Artificial Intelligence Public Conference in November 2019, adding: “For instance, improvements in AI enable more capable and cost-effective autonomous vehicles. The Chinese People’s Liberation Army is moving aggressively to deploy them across many warfighting domains. While the US faces a mighty task in transitioning the world’s most advanced military to new AI-enabled systems, China believes it can leapfrog our current technology and go straight to the next generation.”

Esper added: “Advances in AI have the potential to change the character of warfare for generations to come. Whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years.”

Emphasising the US’s sense of urgency in AI development, he said: “We have to get there first.”

Beijing’s top priority

With AI set to be a critical component of future warfighting, China is throwing all its might into winning the race, having identified AI as a key area for modernisation. A valuable tool in this push is the country’s ability to draft Chinese industry and academia into supporting its government-led efforts, Esper explained.

AI development plays into Xi’s long-held ambitions for China, as the country has already stepped out of the shadows to become an economic superpower, and now hopes to replicate this in the cyber domain.

The International Institute for Strategic Studies (IISS) notes in its Asia Pacific Regional Security Assessment: “As China pursues a strategy for development that concentrates on advancing innovation, the contestation of leadership in next-generation information technologies – particularly artificial intelligence (AI) – is also a core priority.”

Beijing’s own national defence white paper makes plain this push for cyber superiority, saying: “Driven by the new round of technological and industrial revolution, the application of cutting-edge technologies such as artificial intelligence, quantum information, big data, cloud computing and the Internet of Things is gathering pace in the military field. International military competition is undergoing historic changes.”

This push, IISS believes, will have “critical implications for the future of security and stability of the Asia-Pacific and beyond”.

In response to China’s trajectory and a need to focus its own AI research, the US established the Joint Artificial Intelligence Centre (JAIC) to streamline the development and adoption of AI. Washington also issued guidance across the Department of Defence for staff to think about AI integration everywhere from the back office to the frontlines.

Could China leapfrog US capability in the virtual domain?

The need to win the race to deploy AI across the military is not lost on the private sector, either. Paolo Palumbo, director of F-Secure’s Artificial Intelligence Centre of Excellence, told us: “I’d say it is very important not only to gain an early advantage but also in terms of starting the immense integration work as soon as possible. Having AI in the control room will be the first step, but then we will see integration closer to the battlefield, and being able to reach that phase quickly could make all the difference.”

Róbert Vass, founder and president of the Globsec think tank, echoed Esper’s warning of China “leapfrogging” the US in capability when he spoke to us ahead of the NATO Summit in December last year. Vass explained that while the US has enjoyed dominance in conventional capabilities for quite some time, China is approaching a point where it could jump into the lead, a play that would render conventional forces near obsolete.

“We need to make sure that NATO is not preparing for a conflict of yesterday but we are preparing for the conflict of tomorrow, especially when China is heavily investing in artificial intelligence,” he said. “They will never be able to come to the level of the United States when it comes to a traditional army and defence, but they can do a leapfrog because, with new technologies, all of our equipment can become obsolete.”

Vass added that the push for AI was part of a wider sea change in defence and security. “We are moving [away] from a traditional domain to cybersecurity and disinformation,” he said.

“And even I would say ‘hyper war’, which is a combination of traditional means with cyber [and] disinformation, and the scale and the levels of domains that this is impacting will be just mind-blowing.”

Europe also occupies a strategically important position in the race for AI. Vass explained that even if the US beats China to its deployment, it could spell risks for cooperation with the continent as European countries suddenly find their equipment is no longer compatible with that of their ally across the Atlantic.

One challenge faced by most nations in the development of AI is that much of the technology required already exists, but the difficulty lies in integrating it with defence systems. In an interview discussing AI development in the US military last year, the US Air Force’s service acquisition executive Dr Will Roper told us: “If you look across this technology space, I think the core components of what is needed already exist – this is as much of, if not more than, an integration problem as a technological one.”

It appears that Chinese industry has already put at least some of these pieces together. As Esper pointed out during his speech at the conference in November, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes”.

Chinese UAV manufacturer Ziyan says its Blowfish A-2 system is capable of completing autonomous precision strikes. Image: Ziyan UAV.

Redefining military power

With these capabilities already in Beijing’s hands, proliferation on a large scale could shift the balance of power not only in the South China Sea but on a global scale. After all, it is cheaper to produce low-cost attritables en masse than it is to build an aircraft carrier, destroyer or fighter jet. A military force equipped with a large fleet of AI-enabled drones, for instance, could deploy at a pace simply unseen in modern times, and at a cost far lower than current norms.

AI development has emerged as the new arms race, but this time with a much more advanced toolkit. The stakes in this race are higher than ever, but also often misunderstood. After all, the world is still used to seeing aircraft carriers and fleets of tanks, rather than unmanned systems, as markers of military power.

For the US to maintain the dominance it has enjoyed for decades as the world transitions to AI-enabled forces, as Esper put it, it has to get there first.

[*] posted on 23-1-2020 at 08:04 PM

Pentagon will start figuring out AI for lethality in 2020

By: Kelsey D. Atherton

Dana Deasy, Department of Defense chief information officer, hosts a roundtable discussion on the enterprise cloud initiative with reporters, Aug. 9, 2019. (Air Force Staff Sgt. Andrew Carroll)

The Pentagon is eager to plug artificial intelligence into lethality. How the benefits of modern information processing, so far mostly realized in the commercial sector, will be applied to the use of weapons in war remains unclear, but it is a problem the military is interested in solving.

“We are ready to start our first lethality project next year in the joint war fighter targeting space,” Department of Defense Chief Information Officer Dana Deasy said in December in an exclusive interview with sister brand Defense News.

This vision will be carried out by the Joint Artificial Intelligence Center, the military’s AI coordinating and developing organ. As for the specifics of how, exactly, it will bring the benefits of algorithmic processing to the fight, JAIC is still too early in the process to have much concrete information on offer.

The project will be part of a mission initiative under JAIC called Joint Warfighting.

While joint war fighting could in theory encompass every part of combat that involves more than one branch of the military, JAIC spokesperson Arlo Abrahamson clarified that the initiative encompasses, somewhat more narrowly, “Joint All-Domain Command and Control; autonomous ground reconnaissance and surveillance; accelerated sensor-to-shooter timelines; operations center workflows; and deliberate and dynamic targeting solutions.”

In other words, when the JAIC pairs AI with tools that aid in the use of force, it will come through a communication tool, scout robots, battlefield targeting tools, workforce management software or other targeting tools.

“The JAIC is participating in dialogue with a variety of commercial tech firms through industry days and other industry engagement activities to help accelerate the Joint Warfighting initiative,” said Abrahamson. “Contracting information for this mission initiative is under development.”

And while the JAIC is still figuring out whether the first lethality project will be a robot, a sensor system or logistics software, it is explicitly interested in making sure that, whatever the use of AI, it ultimately serves the interests of the humans relying on it in a fight.

As plainly as the JAIC can put it, the initiative is looking for “AI solutions that help manage information so humans can make decisions safely and quickly in battle,” said Abrahamson.

Humans, then, will still be the authors of any lethal action. Those humans will just have some AI help.

[*] posted on 13-2-2020 at 04:45 PM

War on Autopilot? It Will Be Harder Than the Pentagon Thinks

By Patrick Tucker
Technology Editor

February 12, 2020

Northrop Grumman

Despite defense contractors’ glittering demonstrations, difficult realities are challenging the military’s race to network everything.

MCLEAN, Virginia — Everything is new about Northrop Grumman’s attempt to help the military link everything it can on the battlefield. One day, as planners imagine it, commanders will be able to do things like send autonomous drones into battle, change attack plans midcourse, and find other ways to remove humans and their limitations from decision chains that increasingly seem to require quantum speed. Northrop’s Innovation Center in McLean, Virginia, looks so new it could have sprung up in a simulation. Its Washington metro rail stop doesn’t even appear on many maps yet.

Northrop is hardly alone. Over the last few months, various weapons makers have begun showing off all sorts of capabilities to reporters, while military officials detail their own efforts to link up jets, tanks, ships, and soldiers. As they describe it, it’s a technological race to out-automate America’s potential adversaries.

But real questions remain about the Pentagon’s re-imagining of networked warfare. Will it ever become more than glitzy simulations? And have military leaders thought through the implications if it does?

Today, the military’s ability to run a battlefield — its command-and-control doctrine and gear — depends partly on large-crewed, non-stealthy planes like the 1980s-designed E-8 Joint Surveillance Target Attack Radar System, or JSTARS, and other aircraft, ships, and ground facilities. In the modern era, the Pentagon worries that these airborne control centers have become giant, fragile targets. An advanced adversary will aim to blind and blunt a U.S. attack by neutralizing these planes, or perhaps just their on-board communications. The military is also too dependent on aging network links that differ across planes, sensors, and weapons; and that don’t offer the bandwidth that modern combat demands.

Military officials think that the idea of a networked arsenal will materialize into a whole new command-and-control regime across the services by 2028. Along the way, there will be incremental improvements and new conversations between individual pieces of hardware, like what Northrop is developing nine miles west of the Pentagon.

Northrop has dubbed its next-gen platform Distributed Autonomy/Responsive Control, or DA/RC. Although it’s a new program, company officials say they’ve been working on the problem for 15 years. They realized that using unmanned planes for combat would require ground crews and sensor data analysts who would take up too much precious space on nearby aircraft carriers. That would limit what role unmanned planes could play in missions, said Scott Winship, vice president for Advanced Programs at Northrop Grumman Aerospace Systems. Northrop’s work on the Navy’s X-47B project — an effort to build an autonomous drone for attack or air defense — showed that the path to autonomy was allowing drones to “see” the battlespace by sharing data. That, in turn, would enable one person to control a lot more weapons, and do so, potentially, from positions inside the range of enemy air defenses.

Northrop believes DA/RC can underpin another project called the Advanced Battle Management System, or ABMS, a proposed digital architecture to connect a wide variety of weapons, not just aircraft. The Pentagon’s budget request released this week seeks $302 million for the project in 2021, up from the $144 million enacted this year. ABMS is part of a broader Pentagon vision called Joint All-Domain Command & Control. JADC2 represents an effort to create a networked nervous system for warfare. It aims to link every ship, soldier, and jet, so that ground, air, sea, space, and cyber assets can share the exact same data and can be used almost interchangeably to take out targets, even in environments where communication is being heavily jammed or where adversaries have advanced air defenses.

It’s more than just hype. JADC2 is essentially the military’s recipe for defeating a highly advanced adversary like Russia or China. Many of the military’s big new spending priorities — autonomy, advanced AI, hypersonics, etc. — are in service to the idea.

More than three years ago, Northrop began to conduct experiments toward this new battlefield web. First, they connected unmanned submarines to manned ships. They began work on a command-and-control dashboard to enable commanders to see every vehicle, aircraft, and other weapon in their arsenal, as well as all the threats on the battlefield between planes and their targets, based on sensed data from those weapons. They programmed it to automatically update when circumstances on the ground change, and even to adjust battle plans — either offering the commander recommendations or, if set to do so, sending out new tasking orders, dispatching jets to strike targets and drones to escort them to jam defenses along the way.

It’s a picture of war on autopilot.
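The recommend-or-task behavior described above can be reduced to a simple sketch: rebuild the plan from the latest sensed data, then either present it to a commander or push it out automatically. This is a toy illustration of the concept only, not Northrop's DA/RC software; every asset name, threat field and function here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    strikes: dict = field(default_factory=dict)  # target id -> striking asset
    escorts: dict = field(default_factory=dict)  # striker   -> jamming escort

def dispatch(plan):
    # Stand-in for sending tasking orders to the field.
    for target, asset in plan.strikes.items():
        print(f"tasking {asset} -> strike {target}")

def replan(assets, threats, auto_task=False):
    """Rebuild the tasking picture from the latest sensed data.

    Returns recommendations; only dispatches them when auto_task is set,
    mirroring the recommend-vs-task switch described above."""
    plan = Plan()
    free = list(assets)
    # Handle the highest-priority threats first.
    for threat in sorted(threats, key=lambda t: -t["priority"]):
        if not free:
            break
        striker = free.pop(0)
        plan.strikes[threat["id"]] = striker
        # Defended targets get a jamming escort, as in the scenario above.
        if threat.get("air_defended") and free:
            plan.escorts[striker] = free.pop(0)
    if auto_task:
        dispatch(plan)
    return plan

# Circumstances change -> call replan() again with the new threat picture.
picture = replan(
    assets=["jet-1", "jet-2", "drone-1"],
    threats=[{"id": "sam-site", "priority": 9, "air_defended": True},
             {"id": "radar", "priority": 4}],
)  # recommendations only; nothing is dispatched
```

The contentious design question in the article is exactly the `auto_task` flag: whether the loop stops at a recommendation for a human, or closes itself.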

That concept worries some watchdogs. Northrop officials emphasized that commanders will be able to direct every piece on the battlefield to comply with military doctrine, rules of combat, and laws of war. But, ultimately, the commander will be able to decide which rules and doctrine he or she wishes to follow. Want to tell a drone to strike a target even if the communications are cut off? The JADC2 could allow a commander to give a drone a mission and send it on its way.

Missiles, of course, are used in this fashion, but drones, with the ability to send imagery of the target area back to humans for review and approval, are generally not. The notion cuts close to the Pentagon’s central rule for the robot age: that humans will never be removed from decisions to kill.

Northrop’s concept appeared very similar to a software suite that Lockheed Martin showed reporters in November. Winship says Northrop aims less to tell the Air Force what its new ABMS system should look like than to engineer more autonomy into things that service officials are already buying.

ABMS is a massive project made up of smaller projects that bear fruit incrementally. The Air Force plans to continue testing every four months to demonstrate new links between different weapons and vehicles, with the next test planned for April. Officials hope to bring new capabilities into operation as fast as they emerge, said Maj. Gen. John Pletcher, deputy assistant secretary of the Air Force, on Monday. “Now it’s just a matter of going to individual combatant commanders and figuring out what are the next things they want to test.”

Can One Service Connect Them All?

Last May, the Pentagon began to organize around the JADC2 concept at the insistence of Gen. Paul Selva, then vice chairman of the Joint Chiefs of Staff. The Joint Staff stood up a cross-functional team, basically a group of people with varied expertise, under the J6 (the Joint Staff’s branch for advising all things related to command, control and cyber). Their role is to bring all the military services together in one big data loop, according to Army Lt. Gen. Eric Wesley, director of the Futures Concepts Center at Army Futures Command. The Air Force is doing much of the initial work, but in January it invited leaders from the Army and other services to a classified JADC2 conference at Nellis Air Force Base, Nevada.

Wesley said Army leaders worry that their own experiments, ideas, and standards for data and hardware will be discarded under the Air Force-run effort, and that JADC2 will ultimately privilege air assets over ground ones.

“What we would argue is that within this Advanced Battle Management construct, you need to figure out all of the work and the weapon systems that we are building out within the Army into the edge portion of the framework,” he said, meaning, the front lines on the ground.

Wesley’s question for the folks leading the effort is: “Am I building ABMS so that the Army can plug into it? And Army, am I building weapon systems with an ABMS backbone? Both have to accommodate that.”

He said he hopes the Air Force is on the same page. “I think we’re at the early stages of that. And what [the event at] Nellis did was it allowed us to be very clear about what we want them to build. I think they heard us. It was a good, transparent conversation. It would be too early to say they aren’t building it because it is not built out yet.”

Wesley said the aim is to realize the vision of network-centric warfare by 2028.

“I do think it’s going to take five to 10 years to build a viable joint system on the scale that we are describing,” he said.

Linking the battlefield means linking the troops on it as well, and that’s a big job, Gen. Mike Murray, head of U.S. Army Futures Command, said on Monday.

“You can’t discount the scale that comes with the Army. This is about much more than linking 200 planes. This is about linking hundreds of thousands of sensors, especially when you get into working things like IVAS where every soldier is going to be a sensor,” he said, referring to the digital goggles formally known as the Integrated Visual Augmentation System. “As we look toward the future, I clearly see a day where everything on the battlefield can be a sensor and should be a sensor.”

Murray is also worried about how well NATO allies will be able to fit into the picture, especially when it comes to coordinating weapons with shared radar data. “You can’t discount the absolute necessity of having allies and partners being part of this equation. They have to be part of this architecture and have to be networked in,” he said.

‘It Will Be Harder Than They Think’

In the military, new systems like ABMS and JADC2 require new doctrine and concepts of operation. What will machines actually be allowed to do on their own?

More than they do now, Will Roper, assistant Air Force secretary for acquisition, technology, and logistics, told reporters last month. Evolving circumstances and technological improvements may force a reconsideration of the requirement to give a human veto power over every decision a machine might make in war.

“The idea of the machine taking on a lethal decision? That’s against the [Defense] Department’s policy. We do have exceptions where we have automatic action for self-defense. I would imagine that if we build out ABMS, we will allow greater flushing out of that policy,” Roper said. “A lot of progress in government is really earning your way to a better problem. Right now our problem is, really: we have a lot of data and it doesn’t get to people who can make decisions based on it. We want to shed that problem and get to ‘our information now gets to people who can make decisions; do we let them?’ Or do we allow machines to make choices on their behalf?”

That might sound like a provocative change, but Paul Scharre, senior fellow and director of the Technology and National Security Program at the Center for a New American Security, or CNAS, says that it’s in keeping with current doctrine, which allows for lethal autonomy after a review process.

But, Scharre said, official statements often leave that policy unclear. “The formal policy guidance, DOD Directive 3000.09, gives DOD leaders the option of building lethal autonomous weapons. They can choose not to exercise that option and even say that they would flat out reject any such weapon system. But it’s confusing when senior DOD leaders refer to such a ‘policy.’ It’s often unclear whether they’re simply misstating what the directive is or they are referring to some unwritten, informal policy against lethal autonomy.”

To Scharre, what Roper is talking about makes sense. “I think the goal of getting the right information to the right people to make timely, informed decisions is the right goal for DOD,” he said. “For situations where you always want a certain action in response to certain data or environmental stimuli, then automation may make sense.”
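Scharre's distinction, automate only where the desired action never varies with the stimulus, and escalate everything else to a human, amounts to a fixed rule table with a human fallback. A minimal sketch, with entirely hypothetical stimuli and actions:

```python
# Fixed stimulus -> response rules: the narrow case where automation fits,
# because the desired action never depends on human judgment.
AUTO_RULES = {
    "incoming_missile": "deploy_countermeasures",
    "radar_lock": "alert_crew",
}

def respond(stimulus):
    """Return (action, stimulus); unknown stimuli are escalated to a human."""
    action = AUTO_RULES.get(stimulus)
    if action is None:
        return ("escalate_to_human", stimulus)  # anything else needs judgment
    return (action, stimulus)

print(respond("incoming_missile"))  # prints ('deploy_countermeasures', 'incoming_missile')
print(respond("unknown_contact"))   # prints ('escalate_to_human', 'unknown_contact')
```

The rule table is the automated part; the `None` branch is where the human stays in the loop.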

One defense contractor who is working on the JADC2 concept but whose company did not authorize him to speak on the record, said the Defense Department should probably manage its expectations. The challenge is larger than Pentagon officials are willing to acknowledge or admit. It goes beyond human control over machines. It requires rethinking the entire organizational and rules structure over who gets to be in control of any battlefield decisions.

For example, will pilots be given the command authority to order the sorts of strikes and operations that today only high-level commanders can approve? The Air Force has already begun to change the way it talks about doctrine. The traditional theme of centralized command-and-control and decentralized execution is giving way to something else. That “something else” is not fully formed.

“What they’ve now changed, and I’ve seen it on multiple slides, they’ve changed ‘centralized command-and-control and decentralized execution’ to ‘centralized command, distributed control, and decentralized execution’. So they’ve put a wedge between command and control,” the contractor said. “From a practical perspective, I do not know how to wrap my head around that.”

The Air Force aims to give that command-and-control function to the single pilots in stealthy F-22 and F-35 jets. But, said the contractor, that will entail specific changes that service leaders haven’t yet addressed.

“The Air Force does not want wing commanders making these decisions; they want a one-star or a two-star [general] in the region making those decisions, even though they say they want to do it,” said the contractor.

“They have to deconflict it, and that’s just within the Air Force.” Other services are pushing back on changes, said the contractor.

“Assume you go to war in Southeast Asia. How is contested battle management really going to work? Even if we have these highly exquisite assets that can talk to one another, we still don’t have the network to put in place. Roper talks about it like it’s Uber. That’s not how war works. It’s not like there are 40 different F-35s flying overhead and I can just open my app and say, ‘You have bombs; you go get it’,” said the contractor.

Of course, creating exactly that app is what the Air Force and defense contractors are so busy working toward.

“I get the motivation, and you can reduce planning cycles a little bit, but there’s a physical limitation in moving shit from A to B,” said the contractor. These sorts of problems, extending beyond technology but encompassing it, are why “air tasking orders,” the precisely crafted orders that a senior air commander distributes to all the planes under his or her command during an operation, take days to develop. In the future, those decisions will need to occur within seconds, as Northrop was already demonstrating at its DA/RC demonstration.

Ready or not, everyone will have to adapt. Said the contractor, “They’ll be faced quickly and crudely with the realities of life.”


Powered by XMB 1.9.11
XMB Forum Software © 2001-2017 The XMB Group