The Fifth Column Forum
Author: Subject: VIEWPOINT: Trustworthy AI - Why Does It Matter?

Posts: 25414
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 20-11-2019 at 01:52 PM
VIEWPOINT: Trustworthy AI - Why Does It Matter?


By Nathan Michael


All technology demands trust, especially technology that is new or unprecedented. We’ve seen it across time for disruptive technologies: the combustion engine, the airplane and the automobile all required some element of trust in order for society to adopt and embrace the new system. Trust that the technology would be reliable. Trust that the technology would be safe. Trust that the technology would be used appropriately and contribute to the betterment of society.

Such is the case for artificial intelligence and robotics. From a science and engineering perspective, artificially intelligent robotic systems are simply engineered systems. No different than a car or a bridge, these systems are based on the theory and underlying principles of math and science. Therefore, like all other engineered systems, AI systems must adhere to certain performance expectations for us, as humans, to begin to trust them. Trust is about the system operating as expected, in a consistent manner, time and time again.

The more that the system is perceived to reliably work as expected, the more trust we build in it.

Conversely, if the system starts behaving erratically or failing unexpectedly, we lose trust. This response makes sense and feels obvious. What is more nuanced about trust as it relates to AI systems is that if the system works as designed, but in a manner that does not align with human expectations, we will tend to distrust it. This observation implies that trust in AI requires not only a system that performs as designed with high reliability, but also a system that human observers can understand.

The role of human expectations in the trust of artificial intelligence stems from the fact that the human understanding of correct performance is not always technically right.

This is because human expectations, intuition and understanding do not always translate to optimal performance. People tend to optimize their behavior to conserve effort, based on the innate biological drive to conserve energy, whereas artificially intelligent systems are engineered to optimize their behavior against explicit performance criteria. It follows that when an AI system is built to optimize for something other than the conservation of energy, such as maximizing speed or accuracy, misalignments arise between the robot’s behavior and what a person would consider the correct action.

The idea of developing AI systems that humans can understand, and therefore trust, is captured in the concept of “Explainable AI.” Explainable AI, sometimes called “Interpretable AI” or “Transparent AI,” refers simply to AI technology that can be easily understood, such that a human observer can interpret why the system arrived at a specific decision. Establishing human-operator expectations is particularly challenging when working with resilient intelligent robotic systems, because these technologies are built to introspect, adapt and evolve to yield increasingly superior performance over time. Thus, to develop AI systems humans can understand, we must consider how to enable the operator to work with the system and to understand how the system is improving through experience.

This concept is addressed through the development of interfaces that, within the context of artificial intelligence, refer to the development of capabilities that enable machines to engage effectively with human operators. Effective interfaces not only help humans understand the behavior of robots, but also allow for a robot to account for an operator’s needs.

Interfaces allow humans to build trust in robotic systems — and for human interaction with the robot to be personalized or guided, or for the robot to augment the user’s ability.

The significance of effective interfaces becomes evident when considering why it is important to build trust in AI systems and how increased trust will translate to increased reliance on robotic systems. With increased reliance on AI, humans will be able to offload lower-level tasks to these systems in order to focus on more important, higher-level processes. In doing so, artificial intelligence can and will be used to amplify, augment and enhance human ability.

Development of these interfaces is already underway. Today, we are developing robots that can create models that allow them to intuit some of a user’s intentions. These models make it possible for humans to engage with the robot and to achieve much higher levels of performance with less effort. When the operator recognizes this behavior, the operator starts to grow more confident that the robot “gets” them, that the robot understands what it is that they want to achieve and is working with them to achieve a common objective.

The concept of acting as a team evolves: rather than simply utilizing the robot as a tool, the operator works with it toward a common objective.

This relationship becomes particularly important as we consider multi-robot systems, swarming and teaming. A human operating a large group of robots will struggle to perceive and understand everything that is happening while several robots simultaneously perform complex actions. Given the complexity of the operation, an operator may make a mistake, such as asking the system to perform a task counter to what they are actually trying to achieve. A system that can model the intent of the user will serve to improve and augment the overall performance.

When an artificially intelligent system models the intent behind an operator’s requested task, it becomes possible for the system to anticipate, mitigate and adapt in order to overcome user errors, including problematic, unsafe and suboptimal requests. This modeling requires no deep insight by the system into what the operator wants, only insight into how the operator has engaged in the past.
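In code, history-based intent modeling can be as simple as counting how an operator has behaved in similar conditions before. The following is a minimal sketch in Python; it is a hypothetical illustration of the general idea, not any vendor's implementation, and the `IntentModel` class, contexts and commands are all invented here:

```python
from collections import Counter, defaultdict

class IntentModel:
    """Predicts an operator's likely command from past engagement,
    and flags requests that deviate sharply from that history."""

    def __init__(self):
        # context -> counts of commands previously issued in that context
        self.history = defaultdict(Counter)

    def observe(self, context, command):
        """Record one past engagement."""
        self.history[context][command] += 1

    def predict(self, context):
        """Return the command the operator most often issues in this context."""
        counts = self.history[context]
        if not counts:
            return None
        return counts.most_common(1)[0][0]

    def is_anomalous(self, context, command, threshold=0.2):
        """Flag a request that is rare for this operator in this context."""
        counts = self.history[context]
        total = sum(counts.values())
        if total == 0:
            return False  # no history yet; nothing to compare against
        return counts[command] / total < threshold

model = IntentModel()
for _ in range(9):
    model.observe("low_battery", "return_to_base")
model.observe("low_battery", "continue_mission")

print(model.predict("low_battery"))                                    # return_to_base
print(model.is_anomalous("low_battery", "continue_mission", 0.2))      # True
```

A real system would use far richer context and probabilistic models, but even this frequency count captures the point above: the system needs no insight into what the operator wants, only into how the operator has engaged before.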

It’s interesting to observe how these human-robot interactions affect trust. When humans interact with systems they understand, and with systems built to model their operator’s intent, these characteristics make a tremendous difference. It’s the difference between a person walking up and engaging with a system immediately and a person requiring extensive training to learn how to interact with that system and its nuances.

When the system adapts to the experience of the individual, it enables anyone to engage with it, having never worked with it before, and to very quickly perform as an expert. That ability to amplify the expertise of the operator is another mechanism by which trust is earned.

One of the greatest challenges with artificial intelligence is that there is an overwhelming impression that magic underlies the system. But it is not magic, it’s mathematics.

What is being accomplished by AI systems is exciting, but it is also simply theory and fundamentals and engineering. As the development of AI progresses, we will see, more and more, the role of trust in this technology. Trust will play a role in everything from the establishment of reliability standards to the improvement of society’s understanding of the technology to the adoption of AI products in our day-to-day lives to discussions of the ethical considerations.

Every member of society has a responsibility to contribute to this discussion; industry, academia, researchers and the general public all have voices to be heard in the discussion of not only what the future of AI could look like, but what the future of AI should look like.

Nathan Michael is chief technology officer of Shield AI.

[*] posted on 21-11-2019 at 05:12 PM

The problem with the Army’s ‘Go’ metaphor — besides being 2,500 years old

By: Kelsey D. Atherton   12 hours ago

Go is a way to think about territory and maneuver space. Like all simulations and abstracted rulesets, it has deep limitations. (Fábio Emilio Costa via Wikimedia Commons (CC BY-SA 2.0))

When it comes to plotting the future of artificial intelligence, the military has a metaphor problem. Not a metaphorical problem, but a literal one based on the language used to describe both the style and the structure of the AI threat from abroad.
The problem, narrowly put, is an over-reliance on the board game “Go” as a metaphor for China’s approach to AI.

The board game Go, first played in ancient China at least 2,500 years ago, is about positioning identical pieces on a vast board, with the aim of supporting allies and capturing rivals. Just as chess can be seen as a distillation of combined arms on the battlefield, Go’s strength is in how it simulates a longer campaign over territory.

Also, like chess, Go has become a popular way to demonstrate the strength of AI.

The Google-funded AlphaGo project beat a professional human player without handicap for the first time in 2015, and beat a world-class champion 4-1 in a five-game match in 2016. That AlphaGo took longer to create than the chess-playing Deep Blue speaks mostly to the complexity of possible board states in the respective games; that Go has 361 points while chess has 64 squares is no small factor in this.

For AI researchers, building machines to master games with fixed pieces in a fixed space is a useful way to demonstrate learning in constrained problem sets. But there is little in the overall study of these games that informs strategic thinking at anything more than a rudimentary level, and that’s where the problem with the metaphor could lead to bad policy.

At the 2019 Association of the United States Army symposium on AI and Autonomy in Detroit, multiple speakers on Nov. 20 referenced Go as a way to understand the actions of China, especially in light of strategic competition on the international stage. Acting Under Secretary of the Army James McPherson discussed Go as an insight into China’s strategic thinking in his keynote, and that sentiment was echoed later by Brandon Tseng, Shield AI’s chief operating officer.

“The Chinese are playing Go, which is about surrounding, taking more territory and surrounding your adversary,” said Tseng, speaking on a panel about AI and autonomous capabilities in support of competition.

Tseng went on to describe the role of AI as an answer to the problem of remotely piloted vehicles in denied environments. Finding a way for robots to move around electromagnetically denied environments is an undeniable part of the drive behind modern military AI and autonomy.

But we don’t need a Go board to explain that, or to cling to the misunderstood strategic thinking of the past. Thinking that Go itself will unlock China’s strategy is a line pushed by figures ranging from former House Speaker Newt Gingrich to former Secretary of State Henry Kissinger. The notion that the United States is playing chess (or, less charitably, checkers) while its rivals play Go has been expressed by think tanks, but it’s hardly a new idea. The notion that Go itself informed the strategy of rivals to U.S. power was the subject of a book published in 1969, as an attempt to understand how American forces were unable to secure victory in Vietnam.

In the intervening decades since Vietnam, humans and algorithms have gotten better at playing Go, but that narrow AI application has not translated into strategic insight. Nor should it. What is compelling about AI for maneuvering is not an unrelated AI in an unrelated field tackling a new game. What is compelling is the way in which AI can give opportunities to commanders on battlefields, and for that, there’s a whole host of games to study instead.

If the Army or industry wanted to, it could look instead at the limited insights from how AI is tackling StarCraft. But when it makes that leap, it should see it as a narrow artificial intelligence processing a game, not a game offering a window into a whole strategic outlook.

[*] posted on 28-11-2019 at 06:36 PM

The Pentagon’s AI lead needs a cloud integrator

By: Andrew Eversden   16 hours ago

The Joint Artificial Intelligence Center is looking to industry to establish a hybrid, multi-cloud environment. (Zapp2Photo/Getty Images)

The Pentagon’s lead artificial intelligence office is seeking a cloud integrator to help launch its hybrid, multi-cloud environment.

The Defense Information Systems Agency released two source solicitations Nov. 22 on behalf of the Defense Department’s Joint Artificial Intelligence Center, seeking small and large businesses that can provide JAIC with system engineering and system integration services during the deployment and maintenance of the hybrid, multi-cloud environment.

The cloud environment is an important piece of JAIC’s Joint Common Foundation, an enterprisewide AI platform under development by JAIC. The foundation will provide tools, shared data, frameworks and computing capability to components across the Pentagon.

JAIC is responsible for accelerating, scaling and synchronizing AI efforts across the Pentagon.

“The concept is to provide AI project teams with a set of established processes, tools and delivery methodologies that can facilitate the delivery of mission capabilities and integration into operational mission capabilities,” the solicitation read.

Any company chosen should expect to work within Microsoft’s cloud environment, as the tech giant recently won the Pentagon’s enterprise cloud contract known as the Joint Enterprise Defense Infrastructure, or JEDI.

Lt. Gen. Jack Shanahan, head of the JAIC, has continuously asserted that JAIC would be further along in its cloud capabilities if it had an enterprise cloud. The JEDI effort has been delayed by more than six months due to several protests.

According to the solicitation, the request for quote is expected to be released in the late second quarter of fiscal 2020, with an award in the late fourth quarter of the fiscal year.

[*] posted on 17-12-2019 at 11:16 AM

Artificial Intelligence to be Used for Charting, Intel Collection

(Source: US Department of Defense; issued Dec. 13, 2019)

Nautical, terrain and aeronautical charting is vital to the Defense Department mission. This job, along with collecting intelligence, falls to the National Geospatial-Intelligence Agency.

Mark D. Andress, NGA's chief information officer, and Nand Mulchandani, chief technology officer from DOD’s Joint Artificial Intelligence Center, spoke yesterday at the AFCEA International NOVA-sponsored 18th Annual Air Force Information Technology Day in Washington.

The reason charts are so vital is that they enable safe and precise navigation, Andress said. They are also used for such things as enemy surveillance and targeting, as well as precision navigation and timing.

This effort involves a lot of data collection and analysis, which is processed and shared through the unclassified, secret or top secret networks, he said, noting that AI could assist them in this effort.

The AI piece would involve writing smart algorithms that could assist data analysts and leader decision making, Andress said.

He added that the value of AI is that it will give analysts more time to think critically and advise policymakers while AI processes lower-order analysis that humans now do.

There are several challenges to bringing AI into NGA, he observed.

One challenge is that networks handle a large volume of data that includes text, photos and livestream. The video streaming piece is especially challenging for AI because it's so complex, he said.

Andress used the example of an airman using positioning, navigation and timing, flying over difficult terrain at great speed and targeting an enemy. "An algorithm used for AI decision making that is 74% efficient is not one that will be put into production to certify geolocation because that's not good enough," he said.

Another problem area is that NGA inherited a large network architecture from other agencies that merged into NGA. They include these Defense Mapping Agency organizations:
-- DMA Hydrographic Center
-- DMA Topographic Center
-- DMA Hydrographic/Topographic Center
-- DMA Aerospace Center

The networks of these organizations were created in the 1990s and are vertically designed, he said, meaning they are not easily interconnected. That would prove a challenge because AI would need to process information from all of these networks to be useful.

Next, all of these networks need to continuously run since DOD operates worldwide 24/7, he said. Pausing the network to test AI would be disruptive.

Therefore, Andress said AI prototype testing is done in pilots in isolated network environments.

However, the problem in doing the testing in isolation is the environments don't represent the real world they'll be used in, he said.

Nonetheless, the testing, in partnership with industry, has been useful in revealing holes and problems that might prevent AI scalability.

Lastly, the acceptance of AI will require a cultural shift in the agency. NGA personnel need to be able to trust the algorithms. He said pilots and experimentation will help them gain that trust and confidence.

To sum up, Andress said AI will eventually become a useful tool for NGA, but incorporating it will take time. He said the JAIC will play a central role in helping the agency get there.

Mulchandani said the JAIC was set up last year to be DOD's coordinating center to help scale AI.

Using AI for things like health records and personnel matters is a lot easier than writing algorithms for things that NGA does, he admitted, adding that eventually it will get done.

Mulchandani said last year, when he came to DOD from Silicon Valley, the biggest shock was having funding for work one day and then getting funding pulled the next due to continuing resolutions. He said legislators need to fix that so that AI projects that are vital to national security are not disrupted.


[*] posted on 19-12-2019 at 01:22 PM

Pentagon's Ambitious Vision and Strategy for AI Not Yet Backed by Sufficient Visibility or Resources

(Source: Rand Corp.; issued Dec. 17, 2019)

The U.S. Department of Defense has articulated an ambitious vision and strategy for artificial intelligence (AI) with the Joint Artificial Intelligence Center as the focal point, but the DoD has yet to provide the JAIC with the visibility, authorities and resource commitments needed to scale AI and its impact across the department, according to a new RAND Corporation report.

The DoD's AI strategy also lacks baselines and metrics to meaningfully assess progress, researchers concluded.

“The DoD recognizes that AI could be a game-changer and has set up organizational structures focusing on AI,” said Danielle C. Tarraf, lead author of the report and a senior information scientist at RAND, a nonprofit, nonpartisan research organization. “But currently the JAIC doesn't have the authorities or resources it needs to carry out its mission. The authorities and resources of the AI organizations within the Services are also unclear.”

If the Pentagon wants to get the maximum benefit from artificial intelligence-enhanced systems it will need to improve its posture along multiple dimensions, according to the report. The study assesses how well the defense department is positioned to build/acquire, test and sustain—on a large scale—technologies falling under the broad umbrella of AI.

The study frames its assessment in terms of three categories of DoD AI applications: enterprise AI, such as AI-enabled financial or personnel management systems; operational AI, such as AI-enabled targeting capabilities that might be embedded within an air defense system such as PATRIOT; and mission-support AI applications, such as Project Maven, which aims to use machine learning to assist humans in analyzing large quantities of imagery from full-motion video data collected by drones.

The field is evolving quickly, with the algorithms that drive the current push in AI optimized for commercial, rather than Defense Department use. However, the current state of AI verification, validation and testing is nowhere close to ensuring the performance and safety of AI applications, particularly where safety-critical systems are concerned, researchers found.

“Many different technologies underpin AI,” Tarraf said. “The current excitement, and hype, are due to leap-ahead advances in Deep Learning approaches. However, these approaches remain brittle and artisanal—they are not ready yet for prime time in safety-critical systems.”

The department lacks clear mechanisms for growing, tracking and cultivating personnel who have AI skills, even as it faces a tight job market. The department also faces multiple data challenges, including the lack of data. “The success of Deep Learning is currently predicated on the availability of large, labeled data sets. Pursuing AI on a department-wide scale will require DoD to fundamentally transform its culture into a data-enabled one,” Tarraf said.

Tarraf and her colleagues offer a set of 11 strategic and tactical recommendations. Among them: The department should adapt AI governance structures that align authorities and resources with the mission of scaling AI. Also, the JAIC should develop a five-year strategic roadmap—backed by baseline measurements—to execute the mission of scaling AI and its impact.

DoD also should advance the science and practice of verification and testing of AI systems, working in close partnership with industry and academia. The department also should recognize data as critical resources, continue to create practices for their collection and curation, and increase sharing while resolving issues in protecting the data after sharing and during analysis and use.

The report recommends that DoD pursue opportunities to leverage new advances in AI, with particular attention to verification, validation, testing and evaluation, and in line with ethical principles. However, it is important for the department to maintain realistic expectations for both performance and timelines in going from demonstrations of the art of the possible to deployments at scale, researchers said.

The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous.

The full report (187 PDF pages) is available on the RAND Corporation website.


[*] posted on 27-12-2019 at 12:52 PM

JAIC outlines strategic cohesiveness, tactical capabilities as near-term objectives

Carlo Munoz, Washington, DC - Jane's International Defence Review

26 December 2019

The Department of Defense’s (DoD’s) Joint Artificial Intelligence Centre (JAIC), tasked with harnessing future artificial intelligence (AI) applications to support US national security priorities, is aiming to bring cohesiveness to the department’s approach to AI integration while rapidly pushing the technologies down to the operational and tactical level, a senior defence industry official told Jane’s.

These two priorities were among several outlined by senior JAIC officials as the centre’s prime objectives for the coming fiscal year, said Graham Gilmer, a principal at Booz Allen Hamilton focusing on AI, machine learning, and high-performance computing. Those objectives were laid out by centre officials during a closed-door industry day hosted by JAIC in November, which included approximately 300 defence industry and information technology companies, as well as nearly 100 “government representatives” from various US federal agencies, Gilmer said in a 13 December interview.

(141 of 983 words)

[*] posted on 11-1-2020 at 05:55 PM

Algorithmic Warfare: Interview with NDIA’s Senior Fellow for AI


By Yasmin Tadjdeh


The National Defense Industrial Association recently tapped Shane Shaneman, the strategic director of national security and defense at Carnegie Mellon University, to be its new senior fellow for artificial intelligence. He spoke with National Defense to discuss his thoughts on AI and his goals as senior fellow.

Shaneman’s views are his own and not necessarily those of Carnegie Mellon. This interview has been edited for brevity and clarity.

How and when did you start working on artificial intelligence technologies?

It really started when I transitioned from the Air Force Research Lab into Carnegie Mellon back in the summer of 2016. ... The role that I was playing for the Air Force Research Lab was basically helping to connect some of their research within cross-domain solutions to the operational community and the combatant commands.

Later, I learned more about the opportunity with Carnegie Mellon and, given the pace of innovation that was occurring with machine learning and artificial intelligence, I saw the immediate linkage that is going to be needed to be able to turn around and leverage those technologies, to both enhance our national security as well as to maintain our technological superiority.

Since you joined Carnegie Mellon, how have you seen AI transform?

It’s been fairly tremendous. … With some of the current advances that have taken place in parallelization, machine learning is now 100 times faster than it was just two years ago. And you’ve seen continued evolutions of both the algorithms and the framework and also new styles of machine learning. Of course, going from both the traditional supervised learning into new areas of both unsupervised as well as reinforcement learning.

At Carnegie Mellon, what does your portfolio look like?

My current focus is basically to help link up researchers with requirements across national security and defense and to maximize the value and impact that they have for the United States.

As it relates to defense and national security, what is the promise of AI?

One of the first and key areas that we’re focused on is how it can augment the warfighters. ... If you look at tasks that require very tremendous and tedious focus and involvement from human operators, the ability to … use machine learning and AI as a means to turn around and automate some of those functions and provide additional insights to the warfighter to aid decision making, or to enable them to actually shift what they’re spending their time on to something that’s higher value or more strategically important.

What are the biggest issues that are slowing down innovation in AI development?

There are cultural changes that we have to look at … such as the concept of algorithmic agility — the algorithms are going to continue to evolve. So, this is going to be an ongoing process of how do we look at the newest algorithms and integrate them — not once or twice a year, but really getting to a point where almost we’re doing that multiple times a day.

Algorithmic agility … is not just getting an algorithm and implementing it and going, “Oh, we’re done.” This is going to be something that becomes part of our culture.

Do you think the Defense Department is doing enough with industry and academia to better leverage artificial intelligence?

I’ve been involved with the Department of Defense since I graduated out of ROTC back in the early ‘90s, and I’m seeing the Department of Defense do some things that are truly very, very innovative through the Defense Innovation Unit, the Defense Digital Service and things that have been stood up to look at how do we basically evolve and embrace innovation as part of our overall processes and procedures.

The challenge that I think that we’re going to see is how do we innovate for impact and how do we turn around and look at transitioning [AI technology]. … We’re definitely putting a large focus on operational prototyping, but we have to be able to convert those and sustain those as part of our programs of record. And that really becomes hard because if you think about it, even though we … began focusing on software engineering back in the ‘80s and ‘90s, we’re just now getting used to — from an acquisition and sustainment standpoint — being able to separate out systems as hardware and software and the different processes that we go through with that. But now the world’s changed again and it is no longer just hardware and software, it is hardware, software, data and algorithms.

What are some of your goals as NDIA’s AI senior fellow?

The senior fellow role is really looking at … from a strategic level … what are those major changes in areas that we need to drive and influence, especially from a policy standpoint?

One of the key areas that we’re looking at is how do we take some of the areas that NDIA has been very, very successful in — and I’ll highlight the Special Operations Forces Industry Conference and the impact and the role that it plays for the special operations community — and leverage a similar type of an approach around artificial intelligence for the Department of Defense and contribute to the mission — whether it’s the Joint AI Center or the DoD writ large.

Another area is looking at this concept of crafting the new “Arsenal of Democracy” as we look at artificial intelligence, and that’s a very nebulous concept of we’ve got tons of startups and entrepreneurs that are coming into the area — how do we tap into all of that capability and entice them as part of the defense industrial base? … We’ve got to understand that this is not the 1940s and ‘50s, that this is a global marketplace.

[*] posted on 14-1-2020 at 09:53 PM

14 January 2020

Raytheon starts work on machine learning technology development

Raytheon has started work on the development of machine learning technology in order to create trust between human operators and artificial intelligence (AI) systems.

The company is developing the technology under a $6m contract awarded by the Defense Advanced Research Projects Agency for the Competency Aware Machine Learning programme.

As part of the deal, Raytheon will develop systems that can communicate their competence and the reasoning behind their decisions to human operators.

Ilana Heintz, principal investigator for CAML at Raytheon BBN Technologies, said: “The CAML system turns tools into partners. It will understand the conditions where it makes decisions and communicates the reasons for those decisions.”

The system uses a process similar to a video game: instead of a set of rules, it is given a list of choices and a goal. It will repeatedly play the game and learn the most effective way to achieve the goal.

It will also record and explain the conditions and strategies used to come up with successful outcomes.
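The learning loop described above, replaying a game against a goal rather than a rulebook, is the basic shape of reinforcement learning. Here is a minimal tabular Q-learning sketch in Python; it illustrates the general technique only, not Raytheon's CAML, and the corridor task, rewards and visit-count competence proxy are all assumptions made for this example:

```python
import random

# A tiny "game": walk a 6-cell corridor from cell 0 to the goal at cell 5.
# The agent is given choices (step left or right) and a goal, not rules,
# and learns by replaying the game which choice works best in each state.
N_STATES, GOAL = 6, 5
ACTIONS = (-1, +1)  # step left, step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
visits = {s: 0 for s in range(N_STATES)}  # crude per-condition competence proxy
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    s = 0
    while s != GOAL:
        visits[s] += 1
        # epsilon-greedy: usually exploit the best-known choice, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else -0.01  # reaching the goal pays; steps cost
        # Q-learning update toward reward plus best estimated future value
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned strategy: in every non-goal state, stepping right is best.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)  # {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}
```

Recording which conditions and strategies led to successful outcomes, as the article describes, would build on exactly these learned values; the visit counts are one simple way a system could qualify its own competence in a given condition.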

Heintz added: “People need to understand an autonomous system’s skills and limitations to trust it with critical decisions.”

After the system develops these skills, it will be applied to a simulated search and rescue mission by the team.

Users will create the conditions surrounding the mission, while the system makes recommendations and gives them information about its competence in those particular conditions.

Last December, Raytheon introduced a military training simulator as a proposed solution to meet the requirements of the US Army’s Synthetic Training Environment.
View user's profile View All Posts By User

Posts: 25414
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 20-1-2020 at 07:30 PM

20 January 2020 Analysis

Could China dominate the AI arms race?

By Harry Lye

Beijing is rapidly gaining an edge in the development of military artificial intelligence (AI) technology by leveraging its control over domestic research facilities. Harry Lye finds out what the country’s progress means for rivals such as the US, and why winning the AI arms race matters.

Image: Shutterstock.

As the world reckons with the fact that warfare is moving to a hybrid domain, where space and cyberspace become increasingly important, the race to apply artificial intelligence to military technology is in full swing. Whoever achieves AI proliferation first will be leagues ahead of the competition, adversary or ally.

At a session of the Politburo in 2018, China’s president Xi Jinping said China must “ensure that our country marches in the front ranks where it comes to theoretical research in this important area of AI, and occupies the high ground in critical and core technologies.”

US Secretary of Defence Mark Esper referenced this statement in his speech at the National Security Commission on Artificial Intelligence Public Conference in November 2019, adding: “For instance, improvements in AI enable more capable and cost-effective autonomous vehicles. The Chinese People’s Liberation Army is moving aggressively to deploy them across many warfighting domains. While the US faces a mighty task in transitioning the world’s most advanced military to new AI-enabled systems, China believes it can leapfrog our current technology and go straight to the next generation.”

Esper added: “Advances in AI have the potential to change the character of warfare for generations to come. Whichever nation harnesses AI first will have a decisive advantage on the battlefield for many, many years.”

Emphasising the US’s sense of urgency in AI development, he said: “We have to get there first.”

Beijing’s top priority

With AI set to be a critical component of future warfighting, China is throwing all its might into winning the race, having identified AI as a key area for modernisation. A valuable tool in this push is the country’s ability to draft Chinese industry and academia into supporting its government-led efforts, Esper explained.

AI development plays into Xi’s long-held ambitions for China, as the country has already stepped out of the shadows to become an economic superpower, and now hopes to replicate this in the cyber domain.

The International Institute for Strategic Studies (IISS) notes in its Asia Pacific Regional Security Assessment: “As China pursues a strategy for development that concentrates on advancing innovation, the contestation of leadership in next-generation information technologies – particularly artificial intelligence (AI) – is also a core priority.”

Beijing’s own national defence white paper makes plain this push for cyber superiority, saying: “Driven by the new round of technological and industrial revolution, the application of cutting-edge technologies such as artificial intelligence, quantum information, big data, cloud computing and the Internet of Things is gathering pace in the military field. International military competition is undergoing historic changes.”

This push, IISS believes, will have “critical implications for the future of security and stability of the Asia-Pacific and beyond”.

In response to China’s trajectory and a need to focus its own AI research, the US established the Joint Artificial Intelligence Centre (JAIC) to streamline the development and adoption of AI. Washington also issued guidance across the Department of Defence for staff to think about AI integration everywhere from the back office to the frontlines.

Could China leapfrog US capability in the virtual domain?

The need to win the race to deploy AI across the military is not lost on the private sector, either. Paolo Palumbo, director of F-Secure’s Artificial Intelligence Centre of Excellence, told us: “I’d say it is very important not only to gain an early advantage but also in terms of starting the immense integration work as soon as possible. Having AI in the control room will be the first step, but then we will see integration closer to the battlefield, and being able to reach that phase quickly could make all the difference.”

Róbert Vass, founder and president of the Globsec think tank, echoed Esper’s warning of China “leapfrogging” the US in capability when he spoke to us ahead of the NATO Summit in December last year. Vass explained that while the US has enjoyed dominance in conventional capabilities for quite some time, China is approaching a point where it could jump into the lead, a move that would render conventional forces near obsolete.

“We need to make sure that NATO is not preparing for a conflict of yesterday but we are preparing for the conflict of tomorrow, especially when China is heavily investing in artificial intelligence,” he said. “They will never be able to come to the level of the United States when it comes to a traditional army and defence, but they can do a leapfrog because, with new technologies, all of our equipment can become obsolete.”

Vass added that the push for AI was part of a wider sea change in defence and security. “We are moving [away] from a traditional domain to cybersecurity and disinformation,” he said.

“And even I would say ‘hyper war’, which is a combination of traditional means with cyber [and] disinformation, and the scale and the levels of domains that this is impacting will be just mind-blowing.”

Europe also occupies a strategically important position in the race for AI. Vass explained that even if the US beats China to its deployment, it could spell risks for cooperation with the continent as European countries suddenly find their equipment is no longer compatible with that of their ally across the Atlantic.

One challenge faced by most nations in the development of AI is that much of the technology required already exists, but the difficulty lies in integrating it with defence systems. In an interview discussing AI development in the US military last year, the US Air Force’s service acquisition executive Dr Will Roper told us: “If you look across this technology space, I think the core components of what is needed already exist – this is as much of, if not more than, an integration problem as a technological one.”

It appears that Chinese industry has already put at least some of these pieces together. As Esper pointed out during his speech at the conference in November, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes”.

Chinese UAV manufacturer Ziyan says its Blowfish A-2 system is capable of completing autonomous precision strikes. Image: Ziyan UAV.

Redefining military power

With these capabilities already in Beijing’s hands, proliferation on a large scale could shift the balance of power not only in the South China Sea but on a global scale. After all, it is cheaper to produce low-cost attritables en masse than it is to build an aircraft carrier, destroyer or fighter jet. A military force equipped with a large fleet of AI-enabled drones, for instance, could deploy at a pace unseen in modern times, and at a cost far lower than current norms.

AI development has emerged as the new arms race, but this time with a much more advanced toolkit. The stakes in this race are higher than ever, but also often misunderstood. After all, the world is still used to seeing aircraft carriers and fleets of tanks, rather than unmanned systems, as markers of military power.

For the US to maintain the dominance it has enjoyed for decades as the world transitions to AI-enabled forces, as Esper put it, it has to get there first.
View user's profile View All Posts By User

Posts: 25414
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 23-1-2020 at 08:04 PM

Pentagon will start figuring out AI for lethality in 2020

By: Kelsey D. Atherton

Dana Deasy, Department of Defense chief information officer, hosts a roundtable discussion on the enterprise cloud initiative with reporters, Aug. 9, 2019. (Air Force Staff Sgt. Andrew Carroll)

The Pentagon is eager to plug artificial intelligence into lethality. How the benefits of modern information processing, so far mostly realized in the commercial sector, will be applied to the use of weapons in war remains unclear, but it is a problem the military is interested in solving.

“We are ready to start our first lethality project next year in the joint war fighter targeting space,” Department of Defense Chief Information Officer Dana Deasy said in December in an exclusive interview with sister brand Defense News.

This vision will be carried out by the Joint Artificial Intelligence Center, the military’s AI coordination and development organ. As for the specifics of how, exactly, it will bring the benefits of algorithmic processing to the fight, the JAIC is too early in the process to have much concrete information on offer.

The project will be part of a mission initiative under JAIC called Joint Warfighting.

While joint war fighting could in theory encompass every part of combat that involves more than one branch of the military, JAIC spokesperson Arlo Abrahamson clarified that the initiative encompasses, somewhat more narrowly, “Joint All-Domain Command and Control; autonomous ground reconnaissance and surveillance; accelerated sensor-to-shooter timelines; operations center workflows; and deliberate and dynamic targeting solutions.”

In other words, when the JAIC pairs AI with tools that aid in the use of force, it will come through communication tools, scout robots, battlefield targeting tools, workforce management software, or other targeting tools.

“The JAIC is participating in dialogue with a variety of commercial tech firms through industry days and other industry engagement activities to help accelerate the Joint Warfighting initiative,” said Abrahamson. “Contracting information for this mission initiative is under development.”

And while the JAIC is still figuring out whether the first lethality project will be a robot, a sensor system, or logistics software, it is explicitly interested in making sure that whatever the use of AI, it ultimately serves the interests of the humans relying on it in a fight.

As plainly as the JAIC can put it, the initiative is looking for “AI solutions that help manage information so humans can make decisions safely and quickly in battle,” said Abrahamson.

Humans, then, will remain the authors of any lethal action. Those humans will just have some AI help.
View user's profile View All Posts By User

Posts: 25414
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 13-2-2020 at 04:45 PM

War on Autopilot? It Will Be Harder Than the Pentagon Thinks

By Patrick Tucker
Technology Editor

February 12, 2020

Northrop Grumman

Despite defense contractors’ glittering demonstrations, difficult realities are challenging the military’s race to network everything.

MCLEAN, Virginia — Everything is new about Northrop Grumman’s attempt to help the military link everything it can on the battlefield. One day, as planners imagine it, commanders will be able to do things like send autonomous drones into battle, change attack plans midcourse, and find other ways to remove humans and their limitations from decision chains that increasingly seem to require quantum speed. Northrop’s Innovation Center in McLean, Virginia, looks so new it could have sprung up in a simulation. Its Washington metro rail stop doesn’t even appear on many maps yet.

Northrop is hardly alone. Over the last few months, various weapons makers have begun showing off all sorts of capabilities to reporters, while military officials detail their own efforts to link up jets, tanks, ships, and soldiers. As they describe it, it’s a technological race to out-automate America’s potential adversaries.

But real questions remain about the Pentagon’s re-imagining of networked warfare. Will it ever become more than glitzy simulations? And have military leaders thought through the implications if it does?

Today, the military’s ability to run a battlefield — its command-and-control doctrine and gear — depends partly on large-crewed, non-stealthy planes like the 1980s-designed E-8 Joint Surveillance Target Attack Radar System, or JSTARS, and other aircraft, ships, and ground facilities. In the modern era, the Pentagon worries that these airborne control centers have become giant, fragile targets. An advanced adversary will aim to blind and blunt a U.S. attack by neutralizing these planes, or perhaps just their on-board communications. The military is also too dependent on aging network links that differ across planes, sensors, and weapons; and that don’t offer the bandwidth that modern combat demands.

Military officials think that the idea of a networked arsenal will materialize into a whole new command-and-control regime across the services by 2028. Along the way, there will be incremental improvements and new conversations between individual pieces of hardware, like what Northrop is developing nine miles west of the Pentagon.

Northrop has dubbed its next-gen platform Distributed Autonomy/Responsive Control, or DA/RC. Although it’s a new program, company officials say they’ve been working on the problem for 15 years. They realized that using unmanned planes for combat would require ground crews and sensor data analysts who would take up too much precious space on nearby aircraft carriers. That would limit what role unmanned planes could play in missions, said Scott Winship, vice president for Advanced Programs at Northrop Grumman Aerospace Systems. Northrop’s work on the Navy’s X-47B project — an effort to build an autonomous drone for attack or air defense — showed that the path to autonomy was allowing drones to “see” the battlespace by sharing data. That, in turn, would enable one person to control a lot more weapons, and do so, potentially, from positions inside the range of enemy air defenses.

Northrop believes DA/RC can underpin another project called the Advanced Battle Management System, or ABMS, a proposed digital architecture to connect a wide variety of weapons, not just aircraft. The Pentagon’s budget request released this week seeks $302 million for the project in 2021, up from the $144 million enacted this year. ABMS is part of a broader Pentagon vision called Joint All-Domain Command & Control. JADC2 represents an effort to create a networked nervous system for warfare. It aims to link every ship, soldier, and jet, so that ground, air, sea, space, and cyber assets can share the exact same data and can be used almost interchangeably to take out targets, even in environments where communication is being heavily jammed or where adversaries have advanced air defenses.

It’s more than just hype. JADC2 is essentially the military’s recipe for defeating a highly advanced adversary like Russia or China. Many of the military’s big new spending priorities — autonomy, advanced AI, hypersonics, etc. — are in service to the idea.

More than three years ago, Northrop began to conduct experiments toward this new battlefield web. First, they connected unmanned submarines to manned ships. They began work on a command-and-control dashboard to enable commanders to see every vehicle, aircraft, and other weapon in their arsenal, as well as all the threats on the battlefield between planes and their targets, based on sensed data from those weapons. They programmed it to automatically update when circumstances on the ground change, and even to adjust battle plans — either offering the commander recommendations or, if set to do so, sending out new tasking orders, dispatching jets to strike targets and drones to escort them to jam defenses along the way.

It’s a picture of war on autopilot.

That concept worries some watchdogs. Northrop officials emphasized that commanders will be able to direct every piece on the battlefield to comply with military doctrine, rules of combat, and laws of war. But, ultimately, the commander will be able to decide which rules and doctrine he or she wishes to follow. Want to tell a drone to strike a target even if the communications are cut off? The JADC2 could allow a commander to give a drone a mission and send it on its way.

Missiles, of course, are used in this fashion, but drones, with the ability to send imagery of the target area back to humans for review and approval, are generally not. The notion cuts close to the Pentagon’s central rule for the robot age: that humans will never be removed from decisions to kill.

Northrop’s concept appeared very similar to a software suite that Lockheed Martin showed reporters in November. Winship says Northrop aims less to tell the Air Force what its new ABMS system should look like than to engineer more autonomy into things that service officials are already buying.

ABMS is a massive project made up of smaller projects that bear fruit incrementally. The Air Force plans to continue testing every four months to demonstrate new links between different weapons and vehicles, with the next test planned for April. Officials hope to bring new capabilities into operation as fast as they emerge, said Maj. Gen. John Pletcher, deputy assistant secretary of the Air Force, on Monday. “Now it’s just a matter of going to individual combatant commanders and figuring out what are the next things they want to test.”

Can One Service Connect Them All?

Last May, the Pentagon began to organize around the JADC2 concept at the insistence of Gen. Paul Selva, then vice chairman of the Joint Chiefs of Staff. The Joint Staff stood up a cross-functional team, basically a group of people with varied expertise, under the J6 (the Joint Staff’s branch for advising all things related to command, control and cyber). Their role is to bring all the military services together in one big data loop, according to Army Lt. Gen. Eric Wesley, director of the Futures Concepts Center at Army Futures Command. The Air Force is doing much of the initial work, but in January, they invited leaders from the Army and other services to a classified JADC2 conference at Nellis Air Force Base, Nevada.

Wesley said Army leaders worry that their own experiments, ideas, and standards for data and hardware will be discarded under the Air Force-run effort, and that JADC2 will ultimately privilege air assets over ground ones.

“What we would argue is that within this Advanced Battle Management construct, you need to figure out all of the work and the weapon systems that we are building out within the Army into the edge portion of the framework,” he said, meaning, the front lines on the ground.

Wesley’s question for the folks leading the effort is: “Am I building ABMS so that the Army can plug into it? And Army, am I building weapon systems with an ABMS backbone? Both have to accommodate that.”

He said he hopes the Air Force is on the same page. “I think we’re at the early stages of that. And what [the event at] Nellis did was it allowed us to be very clear about what we want them to build. I think they heard us. It was a good, transparent conversation. It would be too early to say they aren’t building it because it is not built out yet.”

Wesley said the aim is to realize the vision of network-centric warfare by 2028.

“I do think it’s going to take five to 10 years to build a viable joint system on the scale that we are describing,” he said.

Linking the battlefield means linking the troops on it as well, and that’s a big job, Gen. Mike Murray, head of U.S. Army Futures Command, said on Monday.

“You can’t discount the scale that comes with the Army. This is about much more than linking 200 planes. This is about linking hundreds of thousands of sensors, especially when you get into working things like IVAS where every soldier is going to be a sensor,” he said, referring to the digital goggles formally known as the Integrated Visual Augmentation System. “As we look toward the future, I clearly see a day where everything on the battlefield can be a sensor and should be a sensor.”

Murray is also worried about how well NATO allies will be able to fit into the picture, especially when it comes to coordinating weapons with shared radar data. “You can’t discount the absolute necessity of having allies and partners being part of this equation. They have to be part of this architecture and have to be networked in,” he said.

‘It Will Be Harder Than They Think’

In the military, new systems like ABMS and JADC2 require new doctrine and concepts of operation. What will machines actually be allowed to do on their own?

More than they do now, Will Roper, assistant Air Force secretary for acquisition, technology, and logistics, told reporters last month. Evolving circumstances and technological improvements may force a reconsideration of the requirement to give a human veto power over every decision a machine might make in war.

“The idea of the machine taking on a lethal decision? That’s against the [Defense] Department’s policy. We do have exceptions where we have automatic action for self-defense. I would imagine that if we build out ABMS, we will allow greater flushing out of that policy,” Roper said. “A lot of progress in government is really earning your way to a better problem. Right now our problem is, really: we have a lot of data and it doesn’t get to people who can make decisions based on it. We want to shed that problem and get to ‘our information now gets to people who can make decisions; do we let them?’ Or do we allow machines to make choices on their behalf?”

That might sound like a provocative change, but Paul Scharre, senior fellow and director of the Technology and National Security Program at the Center for a New American Security, or CNAS, says that it’s in keeping with current doctrine, which allows for lethal autonomy after a review process.

But Roper left that somewhat unclear, Scharre said. “The formal policy guidance, DOD Directive 3000.09, gives DOD leaders the option of building lethal autonomous weapons. They can choose not to exercise that option and even say that they would flat out reject any such weapon system. But it’s confusing when senior DOD leaders refer to such a ‘policy.’ It’s often unclear whether they’re simply misstating what the directive is or they are referring to some unwritten, informal policy against lethal autonomy.”

To Scharre, what Roper is talking about makes sense. “I think the goal of getting the right information to the right people to make timely, informed decisions is the right goal for DOD,” he said. “For situations where you always want a certain action in response to certain data or environmental stimuli, then automation may make sense.”

One defense contractor who is working on the JADC2 concept, but whose company did not authorize him to speak on the record, said the Defense Department should probably manage its expectations. The challenge is larger than Pentagon officials are willing to acknowledge. It goes beyond human control over machines. It requires rethinking the entire organizational and rules structure over who gets to be in control of any battlefield decisions.

For example, will pilots be given the command authority to order the sorts of strikes and operations that today only high-level commanders can approve? The Air Force has already begun to change the way it talks about doctrine. Traditional themes of centralized command-and-control and decentralized execution are giving way to something else. That “something else” is not fully formed.

“What they’ve now changed, and I’ve seen it on multiple slides, they’ve changed ‘centralized command-and-control and decentralized execution’ to ‘centralized command, distributed control, and decentralized execution’. So they’ve put a wedge between command and control,” the contractor said. “From a practical perspective, I do not know how to wrap my head around that.”

The Air Force aims to give that command-and-control function to the single pilots in stealthy F-22 and F-35 jets. But, said the contractor, that will entail specific changes that service leaders haven’t yet addressed.

“The Air Force does not want wing commanders making these decisions; they want a one-star or a two-star [general] in the region making those decisions, even though they say they want to do it,” said the contractor.

“They have to deconflict it, and that’s just within the Air Force.” Other services are pushing back on changes, said the contractor.

“Assume you go to war in Southeast Asia. How is contested battle management really going to work? Even if we have these highly exquisite assets that can talk to one another, we still don’t have the network to put in place. Roper talks about it like it’s Uber. That’s not how war works. It’s not like there are 40 different F-35s flying overhead and I can just open my app and say, ‘You have bombs; you go get it’,” said the contractor.

Of course, creating exactly that app is what the Air Force and defense contractors are so busy working toward.

“I get the motivation, and you can reduce planning cycles a little bit, but there’s a physical limitation in moving shit from A to B,” said the contractor. These sorts of problems, extending beyond technology but encompassing it, are why “air tasking orders,” the precisely crafted orders that a senior air commander distributes to all the planes under his or her command during an operation, take days to develop. In the future, those decisions will need to occur within seconds, as Northrop was already demonstrating at its DA/RC demonstration.

Ready or not, everyone will have to adapt. Said the contractor, “They’ll be faced quickly and crudely with the realities of life.”
View user's profile View All Posts By User

Posts: 25414
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 25-2-2020 at 02:42 PM

The Pentagon now has 5 principles for artificial intelligence

By: Nathan Strout

The Department of Defense now has broad principles outlining ethical use of artificial intelligence by the military.

DoD Chief Information Officer Dana Deasy announced Feb. 24 that he had been directed by Secretary of Defense Mark Esper to formally adopt five AI principles recommended by the Defense Innovation Board.

The announcement "lays the foundation for the ethical design, development, deployment, and the use of AI by the Department of Defense,” Deasy said at a Feb. 24 press conference at the Pentagon.

Lt. Gen. Jack Shanahan, the director of the Joint Artificial Intelligence Center, said the decision to adopt the principles separates the United States and its allies from adversaries whose use of AI is concerning.

“My conversations with our allies and partners in Europe reveal that we have much in common regarding principles relating to the ethical and safe use of AI-enabled capabilities in military operations,” said Shanahan. “This runs in stark contrast to Russia and China, whose use of AI technology for military purposes raises serious concerns about human rights, ethics and international norms.”

The five principles apply to both the combat and non-combat use of AI technologies, said Deasy.

The five principles are as follows:

Responsible. DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
Equitable. The department will take deliberate steps to minimize unintended bias in AI capabilities.
Traceable. The department’s AI capabilities will be developed and deployed so that staffers have an appropriate understanding of the technology, development processes, and operational methods that apply to AI. This includes transparent and auditable methodologies, data sources, and design procedure and documentation.
Reliable. The department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing.
Governable. The department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
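The "Governable" principle, in particular, describes a concrete engineering pattern: runtime monitoring with a disengage path. As an illustrative sketch only (this is not DoD code; the class, thresholds and envelope check are invented here), a deployed model can be wrapped in a monitor that watches for behavior outside its defined envelope and deactivates the system automatically:

```python
from collections import deque

class GovernableWrapper:
    """Wrap a model so it can be disengaged when it misbehaves."""

    def __init__(self, model, is_within_envelope, window=20, max_violations=3):
        self.model = model                      # callable: observation -> action
        self.is_within_envelope = is_within_envelope
        self.recent = deque(maxlen=window)      # rolling record of violations
        self.max_violations = max_violations
        self.engaged = True

    def act(self, observation):
        if not self.engaged:
            return None                         # deactivated: defer to humans
        action = self.model(observation)
        self.recent.append(not self.is_within_envelope(observation, action))
        if sum(self.recent) >= self.max_violations:
            self.engaged = False                # unintended behavior: disengage
            return None                         # and block the offending action
        return action

# Example: a "model" whose outputs drift out of bounds, and an envelope check.
wrapped = GovernableWrapper(
    model=lambda obs: obs * 2,
    is_within_envelope=lambda obs, act: act <= 10,
)
for obs in [1, 2, 3, 9, 9, 9, 9]:              # later inputs push actions past 10
    wrapped.act(obs)
print(wrapped.engaged)                          # prints False: monitor tripped
```

The design choice worth noting is that the wrapper both blocks the action that trips the threshold and refuses all subsequent ones, matching the principle's requirement for "the ability to disengage or deactivate deployed systems that demonstrate unintended behavior."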

The principles follow recommendations made by the Defense Innovation Board to Secretary of Defense Mark Esper in October following a 15-month process during which the board met with AI experts from industry, government and academia.

Shanahan described most of the differences in language between the DIB’s recommendations and the DoD’s final version as changes made by lawyers to make sure the language was appropriate for the department, but he maintained that the final language kept “the spirit and intent” of the DIB’s recommendations.

Some of these changes could be contentious for those concerned about the development of military AI.

For example, a point of debate in the board’s formulation of the “Governable” principle was whether to include an explicit requirement for AI systems to have a way for humans to deactivate or disengage them. The DIB’s ultimate recommendations included a compromise, calling “for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.” However, the final DoD language removed that wording, and requires AI systems to have “the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.”

However, Shanahan emphasized how the Pentagon’s final language went even further than the board’s recommendations. He pointed to the “Traceable” principle, where the adopted wording applies to all “relevant personnel,” which he said is broader than the “technical experts” language used by the board.

In a statement, Eric Schmidt, the board’s chair and former head of Google, praised the move.

“Secretary Esper’s leadership on AI and his decision to issue AI principles for the department demonstrates not only to DoD, but to countries around the world, that the U.S. and DoD are committed to ethics, and will play a leadership role in ensuring democracies adopt emerging technology responsibly,” he said.

The JAIC is expected to lead the effort to implement the principles. Shanahan said that he had followed through on his earlier promise to hire an AI ethicist within the JAIC, and that she and other JAIC staff would bring in AI leaders from across the department to hash out implementation.

“This will be a rigorous process aimed at creating a continuous feedback loop to ensure the department remains current on the emerging technologies and innovations in AI. Our teams will also be developing procurement guidance, technological safeguards, organizational controls, risk mitigation strategies and training measures,” said Shanahan.
View user's profile View All Posts By User
Super Administrator
Thread Moved
28-2-2020 at 11:28 AM

Posts: 25414
Registered: 13-8-2017
Location: Perth
Member Is Offline

[*] posted on 4-3-2020 at 05:58 PM

Algorithmic Warfare: DoD Seeks AI Alliance to Counter China, Russia


By Yasmin Tadjdeh

Facing growing threats from Russia and China, the Defense Department wants to increase its collaboration with European allies as it pursues new artificial intelligence technology.

Lt. Gen. John N.T. “Jack” Shanahan, director of the Joint Artificial Intelligence Center, said global security challenges and technological innovations are changing the world rapidly. That reality means partner nations must work more closely together in areas such as artificial intelligence.

“AI — like the major technology innovations of the past — has enormous potential to strengthen the NATO alliance,” he said in January during a call with reporters. “The deliberate actions we take in the coming years with responsible AI adoption will ensure our militaries keep pace with digital modernization and remain interoperable in the most complex and consequential missions.”

A stronger alliance between the United States and Europe on AI research is particularly important as Russia and China collaborate in their pursuit of new tech, he said.

Both Beijing and Moscow are cooperating on artificial intelligence in ways that threaten the United States and NATO’s shared values and risk accelerating digital authoritarianism, Shanahan said.

China is using the technology to strengthen its censorship over its citizens and quash freedom of expression and human rights, he said. It is also facilitating the sale of AI-enabled autonomous weapons in the global arms market, which lowers the barrier of entry for potential adversaries and could place AI systems in the hands of non-state actors, he added.

“Perhaps most concerning, Chinese technology companies, including Huawei, are compelled to cooperate with its Communist Party’s intelligence and security services no matter where the company operates,” Shanahan said.

Meanwhile, Russia has shown a “greater willingness to disregard international ethical norms and to develop systems that pose destabilizing risks to international security,” he said. Moscow is using automation for global disinformation campaigns and to develop lethal autonomous weapon systems, he noted.

“These security challenges and the technological innovations that are changing our world should compel likeminded nations to shape the future of the international order in the digital age, and vigorously promote AI for our shared values,” he said.

However, he recognized that on both sides of the Atlantic there are concerns about the military application of artificial intelligence.

“AI is capable of being used for good or for bad,” Shanahan said. “For the U.S. and our allies, our most valuable contributions will come from how we use AI to make better and faster decisions and optimize human-machine teaming.”

At a time when allied nations need to keep pace with global adversaries, the United States is concerned that some countries in Europe are at risk of becoming “immobilized by debates about regulation and the ethics of the military use of AI,” he said.

Michael Kratsios, the White House’s chief technology officer, echoed a similar sentiment in an op-ed he penned for Bloomberg in January.

“Governments elsewhere are co-opting companies and deploying their AI technology in the service of the surveillance state, where they monitor and imprison dissidents, activists and minorities,” he said. “The best way to counter this dystopian approach is to make sure America and our allies remain the top global hubs of AI innovation. Europe and our other international partners should adopt similar regulatory principles that embrace and shape innovation and do so in a manner consistent with the principles we all hold dear.”

Shanahan noted that the U.S. government is taking a “light-touch” approach for AI regulation wherever possible.

“The last thing we want to do in this field of emerging technology moving as fast as it is, is to stifle innovation,” he said. “Over-regulating artificial intelligence is one way to stifle innovation and do it very quickly.”

Self-regulation won’t always work, so the Defense Department is mulling over what is the right combination of self-regulation and government-enforced regulation, as well as how it can work with NATO and European Union allies to find common ground, he said.

“Just in the discussions we’ve had in the last two days, there are far more commonalities than there are differences, especially when we talk about principles of artificial intelligence and the ethical and safe lawful use of it,” Shanahan said while in Brussels for meetings.

But despite an eagerness to work more closely, there are some potential roadblocks. For example, the United Kingdom recently announced it will allow Huawei to build the country’s next generation of super-fast 5G wireless networks, which has been a point of contention between London and Washington, with U.S. officials fearing the Chinese company could pose national security risks.

Shortly before the United Kingdom’s decision, Shanahan was cautious to say the agreement could undermine U.S.-U.K. artificial intelligence cooperation.

“5G and AI will have a future that will be inextricably linked,” he said. “What our concerns are is access to data. … If you have access to data, you basically have access to algorithms and can defeat the models. And then how is the data being shared, who is it being shared with?”

If the United Kingdom were to move forward with Huawei, discussions would need to be held at both the policy and technical levels to understand the ramifications of having the company in a network that was interacting with allied and partner systems, he added.

“There are a lot of unknowns about this right now,” Shanahan said. “What safeguards could be put in place if that were to happen? And if there weren’t sufficient safeguards, what could we do to ensure that technology wasn’t stolen and given away to an adversary without even us understanding how it took place?”

[*] posted on 7-3-2020 at 04:18 PM

The intelligence community is developing its own AI ethics

By: Nathan Strout   7 hours ago

While less public than the Pentagon's Joint Artificial Intelligence Center, the intelligence community has been developing its own set of principles for the ethical use of artificial intelligence.

The Pentagon made headlines last month when it adopted its five principles for the use of artificial intelligence, marking the end of a months-long effort with significant public debate over what guidelines the department should employ.

Less well known is that the intelligence community is developing its own principles governing the use of AI.

“The intelligence community has been doing its own work in this space as well. We’ve been doing it for quite a bit of time,” said Ben Huebner, chief of the Office of Director of National Intelligence’s Civil Liberties, Privacy, and Transparency Office, at an Intelligence and National Security Alliance event March 4.

According to Huebner, ODNI is making progress in developing its own principles, although he did not give a timeline for when they would be officially adopted. They will be made public, he added, noting there likely wouldn’t be any surprises.

“Fundamentally, there’s a lot of consensus here,” said Huebner, who noted that ODNI had worked closely with the Department of Defense’s Joint Artificial Intelligence Center on the issue.

Key to the intelligence community’s thinking is focusing on what is fundamentally new about AI.

“Bluntly, there’s a bit of hype,” said Huebner. “There’s a lot of things that the intelligence community has been doing for quite a bit of time. Automation isn’t new. We’ve been doing automation for decades. The amount of data that we’re processing worldwide has grown exponentially, but having a process for handling data sets by the intelligence community is not new either.”

What is new is the use of machine learning for AI analytics.

Instead of being explicitly programmed to perform a task, machine learning tools are fed data to train them to identify patterns or make inferences before being unleashed on real world problems. Because of this, the AI is constantly adapting or learning from each new bit of data it processes.

That is fundamentally different from other IC analytics, which are static.

“Why we need to sort of think about this from an ethical approach is that the government structures, the risk management approach that we have taken for our analytics, assumes one thing that is not true anymore. It generally assumes that the analytic is static,” explained Huebner.
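
The contrast Huebner draws can be sketched in a few lines. This is an illustrative toy, not any real IC system — the class names, the ship-speed framing, and the thresholds are all invented for the example:

```python
# Illustrative only: a static, rule-based analytic versus a learning analytic
# whose decision rule updates with each new report it ingests.

class StaticAnalytic:
    """A fixed rule, written once and never changed -- the behavior the
    existing risk-management structures assume."""
    def __init__(self, threshold):
        self.threshold = threshold

    def flag(self, value):
        return value > self.threshold

class LearningAnalytic:
    """Re-estimates its threshold from the data it has seen, so its answer
    tomorrow depends on what it ingested today."""
    def __init__(self):
        self.values = []

    def ingest(self, value):
        self.values.append(value)

    def flag(self, value):
        if not self.values:
            return False
        mean = sum(self.values) / len(self.values)
        return value > 2 * mean  # the threshold drifts as the data drifts

static = StaticAnalytic(threshold=10.0)
learner = LearningAnalytic()
for v in [1.0, 2.0, 3.0]:
    learner.ingest(v)

print(static.flag(5.0))   # always the same answer for the same input
print(learner.flag(5.0))  # answer depends on the data seen so far
```

The point of the sketch: reviewing `StaticAnalytic` once is enough, while `LearningAnalytic` has to be re-reviewed as its ingested data changes.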

To account for that difference, AI requires the intelligence community to think more about explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic.

“If we are providing intelligence to the president that is based on an AI analytic and he asks, as he does, how do we know this, that is a question we have to be able to answer,” said Huebner. “We’re going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient.”
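
One way to make the two concepts concrete is a toy weighted-sum analytic. Every feature name and weight below is invented for illustration; the point is only the distinction — explainability describes the mechanism in general, interpretability attributes one particular result to its inputs:

```python
# Hypothetical toy analytic: a weighted sum over made-up ship-tracking features.
WEIGHTS = {"port_calls": 0.6, "ais_gap_hours": 0.3, "flag_changes": 0.1}

def score(features):
    """The analytic itself: a stand-in for a real model."""
    return sum(WEIGHTS[k] * features[k] for k in WEIGHTS)

def explain_model():
    """Explainability: describe how the analytic works overall
    (which inputs it weighs most heavily)."""
    return sorted(WEIGHTS.items(), key=lambda kv: -kv[1])

def interpret(features):
    """Interpretability: for one result, identify which input
    contributed most to that particular score."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return max(contributions, key=contributions.get)

obs = {"port_calls": 2.0, "ais_gap_hours": 8.0, "flag_changes": 1.0}
print(score(obs))       # the analytic's output for this observation
print(explain_model())  # global: which inputs matter most in general
print(interpret(obs))   # local: which input drove this specific score
```

Answering the president’s “how do we know this” requires the `interpret` side; auditing the analytic before deployment requires the `explain_model` side.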

ODNI is also building an ethical framework to help employees implement those principles in their daily work.

“The thing that we’re doing that we just haven’t found an analog to in either the public or the private sector is what we’re referring to as our ethical framework,” said Huebner. “That drive for that came from our own data science development community, who said ‘We care about these principles as much as you do. What do you actually want us to do?’”

In other words, how do computer programmers apply these principles when they’re actually writing lines of code? The framework won’t provide all of the answers, said Huebner, but it will make sure employees are asking the right questions about ethics and AI.

And because of the unique dynamic nature of AI analytics, the ethical framework needs to apply to the entire lifespan of these tools. That includes the training data being fed into them. After all, it’s not hard to see how a data set with an underrepresented demographic could result in a higher error rate for that demographic than the population as a whole.

“If you’re going to use an analytic and it has a higher error rate for a particular population and you’re going to be using it in a part of the world where that is the predominant population, we better know that,” explained Huebner.
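
The error-rate check Huebner describes is simple to sketch. The groups, counts, and results below are fabricated purely for illustration:

```python
# Hypothetical sketch: measure an analytic's error rate per demographic group,
# the check an operator would want before deploying it in a given region.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, was_correct) pairs."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# A group underrepresented in the training data often shows a higher error rate.
results = [("majority", True)] * 90 + [("majority", False)] * 10 \
        + [("minority", True)] * 6 + [("minority", False)] * 4

rates = error_rates_by_group(results)
print(rates)  # e.g. {'majority': 0.1, 'minority': 0.4}
```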

The IC wants to avoid those biases due to concerns over privacy, civil liberties, and frankly, accuracy. And if biases are introduced into an analytic, intelligence briefers need to be able to explain that bias to policy makers so they can factor that into their decision making. That’s part of the concepts of explainability and interpretability Huebner emphasized in his presentation.

And because they are constantly changing, these analytics will require some sort of periodic review as well as a way to catalog the various iterations of the tool. After all, an analytic that was reliable a few months ago could change significantly after being fed enough new data, and not always for the better. The intelligence community will need to continually check the analytics to understand how they’re changing and compensate.
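
In minimal form, the periodic review and cataloging described above might look like a version log with a regression check. The version IDs, accuracy figures, and tolerance here are invented for the sketch:

```python
# Hypothetical sketch: catalog each retrained iteration of an analytic and
# flag versions whose measured accuracy regressed past a tolerance.
versions = []  # catalog of (version_id, accuracy) from scheduled evaluations

def record_version(version_id, accuracy):
    versions.append((version_id, accuracy))

def flag_regressions(tolerance=0.05):
    """Return version IDs that lost more than `tolerance` accuracy versus
    the previous catalogued iteration -- candidates for rollback or review."""
    flagged = []
    for (prev_id, prev_acc), (vid, acc) in zip(versions, versions[1:]):
        if prev_acc - acc > tolerance:
            flagged.append(vid)
    return flagged

record_version("2020-01", 0.92)
record_version("2020-02", 0.93)  # new data helped
record_version("2020-03", 0.81)  # new data hurt: needs review
print(flag_regressions())        # ['2020-03']
```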

“Does that mean that we don’t do artificial intelligence? Clearly no. But it means that we need to think about a little bit differently how we’re going to sort of manage the risk and ensure that we’re providing the accuracy and objectivity that we need to,” said Huebner. “There’s a lot of concern about trust in AI, explainability, and the related concept of interpretability.”

[*] posted on 2-4-2020 at 06:45 PM

Panel wants to double federal spending on AI

Aaron Mehta

12 hours ago

A commission has recommended the U.S. double its non-defense AI investments. (File)

A congressionally mandated panel of technology experts has issued its first set of recommendations for the government, including doubling the amount of money spent on artificial intelligence outside the Defense Department and elevating a key Pentagon office to report directly to the Secretary of Defense.

Created by the National Defense Authorization Act in 2018, the National Security Commission on Artificial Intelligence is tasked with reviewing “advances in artificial intelligence, related machine learning developments, and associated technologies,” for the express purpose of addressing “the national and economic security needs of the United States, including economic risk, and any other associated issues.”

The commission issued an initial report in November, at the time pledging to slowly roll out its actual policy recommendations over the course of the next year. Today’s report represents the first of those conclusions — 43 of them in fact, tied to legislative language that can easily be inserted by Congress during the fiscal year 2021 budget process.

Bob Work, the former deputy secretary of defense who is the vice-chairman of the commission, said the report is tied into a broader effort to move DoD away from a focus on large platforms.

“What you’re seeing is a transformation to a digital enterprise, where everyone is intent on making the DoD more like a software company. Because in the future, algorithmic warfare, relying on AI and AI enabled autonomy, is the thing that will provide us with the greatest military competitive advantage,” he said during a Wednesday call with reporters.

Among the key recommendations:

- The government should “immediately double non-defense AI R&D funding” to $2 billion for FY21, a quick cash infusion which should work to strengthen academic centers and national labs working on AI issues. The funding should “increase agency topline levels, not repurpose funds from within existing agency budgets, and be used by agencies to fund new research and initiatives, not to support re-labeled existing efforts.” Work noted that he recommends this R&D funding double again in FY22.
- The commission leaves open the possibility of recommendations for increasing DoD’s AI investments as well, but said it wants to study the issue more before making such a request. In FY21, the department requested roughly $800 million in AI developmental funding and another $1.7 billion in AI enabled autonomy, which Work said is the right ratio going forward. “We’re really focused on non-defense R&D in this first quarter, because that’s where we felt we were falling further behind,” he said. “We expect DoD AI R&D spending also to increase” going forward.
- The Director of the Joint Artificial Intelligence Center (JAIC) should report directly to the Secretary of Defense, and should continue to be led by a three-star officer or someone with “significant operational experience.” The first head of the JAIC, Lt. Gen. Jack Shanahan, is retiring this summer; currently the JAIC falls under the office of the Chief Information Officer, who in turn reports to the secretary. Work said the commission views the move as necessary in order to make sure leadership in the department is “driving” investment in AI, given all the competing budgetary requirements.
- The DoD and the Office of the Director of National Intelligence (ODNI) should establish a steering committee on emerging technology, tri-chaired by the Deputy Secretary of Defense, the Vice Chairman of the Joint Chiefs of Staff, and the Principal Deputy Director of ODNI, in order to “drive action on emerging technologies that otherwise may not be prioritized” across the national security sphere.
- Government microelectronics programs related to AI should be expanded in order to “develop novel and resilient sources for producing, integrating, assembling, and testing AI-enabling microelectronics.” In addition, the commission calls for articulating a “national strategy for microelectronics and associated infrastructure.”
- Funding for DARPA’s microelectronics program should be increased to $500 million. The commission also recommends the establishment of a $20 million pilot microelectronics program to be run by the Intelligence Advanced Research Projects Activity (IARPA), focused on AI hardware.
- The establishment of a new office, tentatively called the National Security Point of Contact for AI, and encouragement of allied governments to do the same in order to strengthen coordination at an international level. The first goal for that office would be to develop an assessment of allied AI research and applications, starting with the Five Eyes nations and then expanding to NATO.

One issue identified early by the commission is the question of ethical AI. The commission recommends mandatory training on the limits of artificial intelligence in the AI workforce, which should include discussions around ethical issues. The group also calls for the Secretary of Homeland Security and the director of the Federal Bureau of Investigation to “share their ethical and responsible AI training programs with state, local, tribal, and territorial law enforcement officials,” and track which jurisdictions take advantage of those programs over a five-year period.

Missing from the report: any mention of the Pentagon’s Directive 3000.09, a 2012 order laying out the rules about how AI can be used on the battlefield. Last year C4ISRNet revealed that there was an ongoing debate among AI leaders, including Work, on whether that directive was still relevant.

While not reflected in the recommendations, Eric Schmidt, the former Google executive who chairs the commission, noted that his team is starting to look at how AI can help with the ongoing COVID-19 coronavirus outbreak, saying, “We’re in an extraordinary time… we’re all looking forward to working hard to help any way that we can.”

The full report can be read here.

Mike Gruss contributed to this report.

[*] posted on 1-5-2020 at 04:52 PM

Artificial Intelligence Outperforms Human Intel Analysts In a Key Area


APRIL 29, 2020


A Defense Intelligence Agency experiment shows AI and humans have different risk tolerances when data is scarce.

In the 1983 movie WarGames, the world is brought to the edge of nuclear destruction when a military computer using artificial intelligence interprets false data as an imminent Soviet missile strike. Its human overseers in the Defense Department, unsure whether the data is real, can’t convince the AI that it may be wrong. A recent finding from the Defense Intelligence Agency, or DIA, suggests that in a real situation where humans and AI were looking at enemy activity, those positions would be reversed.

Artificial intelligence can actually be more cautious than humans about its conclusions in situations when data is limited. While the results are preliminary, they offer an important glimpse into how humans and AI will complement one another in critical national security fields.

DIA analyzes activity from militaries around the globe. Terry Busch, the technical director for the agency’s Machine-Assisted Analytic Rapid-Repository System, or MARS, on Monday joined a Defense One viewcast to discuss the agency’s efforts to incorporate AI into analysis and decision-making.

Earlier this year, Busch’s team set up a test between a human and AI. The first part was simple enough: use available data to determine whether a particular ship was in U.S. waters.

“Four analysts came up with four methodologies; and the machine came up with two different methodologies and that was cool. They all agreed that this particular ship was in the United States,” he said. So far, so good. Humans and machines using available data can reach similar conclusions.

The second phase of the experiment tested something different: conviction. Would humans and machines be equally certain in their conclusions if less data were available? The experimenters severed the connection to the Automatic Identification System, or AIS, which tracks ships worldwide.

“It’s pretty easy to find something if you have the AIS feed, because that’s going to tell you exactly where a ship is located in the world. If we took that away, how does that change confidence and do the machine and the humans get to the same end state?”

In theory, with less data, the human analyst should be less certain in their conclusions, like the characters in WarGames. After all, humans understand nuance and can conceptualize a wide variety of outcomes. The researchers found the opposite.

“Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kinds of things, or references to the ship being in the United States — so everyone had access to the same data. The difference was that the machine, and those responsible for doing the machine learning, took far less risk — in confidence — than the humans did,” he said. “The machine actually does a better job of lowering its confidence than the humans do. … There’s a little bit of humor in that because the machine still thinks they’re pretty right.”
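
Busch’s finding has a simple statistical flavor: an analytic that reasons probabilistically will report lower confidence from fewer reports even when the balance of evidence is unchanged. A minimal sketch, assuming a Beta-Bernoulli model — an illustration, not DIA’s actual method:

```python
# Hypothetical sketch: confidence that "the ship is in U.S. waters" under a
# uniform Beta(1, 1) prior, given counts of supporting vs. contradicting reports.

def posterior_confidence(supporting, contradicting):
    """Beta(1 + s, 1 + c) posterior mean. With fewer total reports, the
    estimate is pulled toward the 0.5 prior -- the machine 'lowers its
    confidence' as sources are removed."""
    return (1 + supporting) / (2 + supporting + contradicting)

print(posterior_confidence(40, 10))  # many sources (e.g. AIS on): high confidence
print(posterior_confidence(4, 1))    # feed cut: same 4:1 ratio, lower confidence
```

Note that both estimates stay above 0.5 — the model still “thinks it’s pretty right” — but the second is measurably closer to uncertain, which is the behavior the experiment observed.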

The experiment provides a snapshot of how humans and AI will team for important analytical tasks. But it also reveals how human judgement has limits when pride is involved.

Humans, particularly experts in specific fields, have a tendency to overestimate their ability to correctly infer outcomes when given limited data. Nobel-prize winning economist and psychologist Daniel Kahneman has written on the subject extensively. Kahneman describes this tendency as the “inside view.” He cites the experience of a group of Israeli educators assigned to write a new textbook for the Ministry of Education. They anticipated that it would take them a fraction of the amount of time they knew it would take another similar team. They couldn’t explain why they were overconfident; they just were. Overconfidence is human and a particular trait among highly functioning expert humans, one that machines don’t necessarily share.

The DIA experiment offers an important insight for military leaders, who hope AI will help make faster and better decisions, from inferring enemy positions to predicting possible terror plots. The Pentagon has been saying for years that the growing amount of intelligence data that flows from an ever-wider array of sensors and sources demands algorithmic support.

DIA’s eventual goal is to have human analysts and machine intelligence complement each other, since each has a very different approach to analysis, or as Busch calls it, “tradecraft.” On the human side, that means “transitioning the expert into a quantitative workflow,” he says. Take that to mean helping analysts produce insights that are never seen as finished but that can change as rapidly as the data used to draw those insights. That also means teaching analysts to become data literate to understand things like confidence intervals and other statistical terms. Busch cautioned that the experiment doesn’t imply that defense intelligence work should be handed over to software. The warning from WarGames is still current. “On the machine side, we have experienced confirmation bias in big data. [We’ve] had the machine retrain itself to error…That’s a real concern for us.”

[*] posted on 19-5-2020 at 12:45 PM

Booz Allen Hamilton wins massive Pentagon artificial intelligence contract

Andrew Eversden

5 hours ago

Booz Allen Hamilton won a five-year, $800 million task order to provide artificial intelligence services to the Department of Defense’s Joint Artificial Intelligence Center (JAIC).

Under the contract award, announced by the General Services Administration and the JAIC on May 18, Booz Allen Hamilton will provide a “wide mix of technical services and products” to support the JAIC, a DoD entity dedicated to advancing the use of artificial intelligence across the department.

The contracting giant will provide the JAIC with “data labeling, data management, data conditioning, AI product development, and the transition of AI products into new and existing fielded programs,” according to the GSA news release.

“The delivered AI products will leverage the power of DoD data to enable a transformational shift across the Department that will give the U.S. a definitive information advantage to prepare for future warfare operations,” the release said.

The contract will support the JAIC’s new joint warfighting mission initiative, launched earlier this year. The initiative includes “Joint All-Domain Command and Control; autonomous ground reconnaissance and surveillance; accelerated sensor-to-shooter timelines; operations center workflows; and deliberate and dynamic targeting solutions,” JAIC spokesperson Arlo Abrahamson told C4ISRNET in January.

The joint warfighting initiative is looking for “AI solutions that help manage information so humans can make decisions safely and quickly in battle,” Abrahamson said. The award to Booz Allen Hamilton will push that effort forward, Lt. Gen. Jack Shanahan, the center’s director, said in a statement.

“The Joint Warfighting mission initiative will provide the Joint Force with AI-enabled solutions vital to improving operational effectiveness in all domains. This contract will be an important element as the JAIC increasingly focuses on fielding AI-enabled capabilities that meet the needs of the warfighter and decision-makers at every level,” Shanahan said.

DoD CIO Dana Deasy told Defense News in December that the JAIC would embark on its first lethality project in 2020, which Abrahamson said would be part of the joint warfighting initiative. According to an April blog post from the JAIC, the initiative’s first RFP released in March included the ethical principles DoD adopted this year, an effort to quell concern about how the Pentagon uses artificial intelligence.

The award to Booz Allen Hamilton was made by the GSA through its Alliant 2 Government-wide Acquisition Contract, a vehicle designed to provide artificial intelligence services to the federal government. The GSA and JAIC have been partners since last September, when the pair announced that they were teaming up as part of the GSA’s Centers of Excellence initiative, a program meant to accelerate modernization with agencies across government.

“The CoE and the JAIC continue to learn from each other and identify lessons that can be shared broadly across the federal space,” said Anil Cheriyan, director of the GSA’s Technology Transformation Services office, which administers the Centers of Excellence program. “It is important to work closely with our customers to acquire the best in digital adoption to meet their needs.”

[*] posted on 20-5-2020 at 02:00 PM

DoD developing ‘best practices’ for AI programs

Aaron Mehta

8 hours ago

The DoD has many AI programs, but has yet to develop a cohesive set of standards for them. (metamorworks/Getty Images)

The Pentagon’s research and engineering office is developing a series of technical standards and best practices for the department’s artificial intelligence efforts, according to Mark Lewis, director of research and engineering for modernization.

While running through the top technical priorities under his purview during a Tuesday event hosted by the trade association AFCEA, Lewis highlighted the challenges of trying to corral the artificial intelligence programs spread throughout the Department of Defense.

“[T]here is so much going on in the department right now in artificial intelligence, it’s kind of difficult to get a handle on it,” Lewis said.

As an example, he explained that weeks ago he had tasked Jill Crisman, a former top official with the Joint Artificial Intelligence Center who is now working as the technical director on AI inside the R&E team, to look at the Pentagon-wide efforts on AI and provide “an evaluation” of where everything stood.

“She came back and said: ‘You know, there are so many hundreds of programs that we really couldn’t do a fair evaluation of each individual activity,’ ” Lewis said. So instead, the team had to pivot, and Crisman is now working to establish a “series of standards” for best practices in AI engineering that can be applied to every Pentagon project involving AI.

“One of the things we want to do is break down stovepipes and activities across the department, artificial intelligence, be able to share databases, able to share applications to best figure out what are the artificial intelligence applications that will have the biggest impact on the war fighter,” Lewis added. “In some cases that means getting [the technologies] in the hands of the war fighter and having them play with them, experiment with them, and figure out what makes their job more effective, what makes your job easier. And frankly, to enable them to discard the things that don’t buy their way into the war fight.”

Questions about how to organize the many AI programs running throughout the department have existed for several years, with no clear answer as to who is the point person, particularly between the JAIC and R&E.

Each of the armed services has ongoing AI programs, as do various offices in the so-called fourth estate, including the Defense Advanced Research Projects Agency. Meanwhile, there are a number of AI efforts outside the department that could be relevant to defense products.

Potentially complicating the situation: uncertainties about defense spending in the wake of the COVID-19 pandemic, which could lead to cuts to research and development efforts in the next budget cycle.

In terms of what the current economic situation might mean for the fiscal 2022 R&D budget, Lewis said he has “no indication” of potential cuts, but acknowledged “that’s got to be in the back of everyone’s mind right now.” He also stressed the need to protect small key nodes in the supply chain for R&E priorities, such as specialty shops that produce components for directed energy or hypersonic weapons.

That protection means both financial support and keeping an eye on foreign nations that may try to sweep in and “take over pieces of the supply chain that would put us at some risk” — a concern raised by several top defense acquisition officials over the last two months, particularly in terms of Chinese investments.

[*] posted on 20-5-2020 at 03:37 PM

The Pentagon’s $800M Effort to Embed AI In Decisions in ‘All Tiers’


12:19 AM ET


That's the goal of a five-year task order from the Joint Artificial Intelligence Center to Booz Allen Hamilton.

Through its partnership with the General Services Administration’s Centers of Excellence, the Defense Department’s central artificial intelligence program signed an $800 million contract with Booz Allen Hamilton for AI-powered warfighter support tools.

The Joint Artificial Intelligence Center, or JAIC, issued a five-year task order—awarded through GSA’s Alliant 2 governmentwide acquisition contract—to “deliver artificial intelligence-enabled products to support warfighting operations and be instrumental in embedding AI decision-making and analysis at all tiers of DoD operations,” according to a release Monday.

Booz Allen Hamilton will focus on identifying and integrating advanced analytical tools with existing DOD datasets to create “a definitive information advantage to prepare for future warfare operations.” The work will include “data labeling, data management, data conditioning, AI product development, and the transition of AI products into new and existing fielded programs and systems across the DoD.”

“The Joint Warfighting mission initiative will provide the Joint Force with AI-enabled solutions vital to improving operational effectiveness in all domains,” JAIC Director Lt. Gen. Jack Shanahan said in the statement. “This contract will be an important element as the JAIC increasingly focuses on fielding AI-enabled capabilities that meet the needs of the warfighter and decision-makers at every level.”

The contract is the direct result of a partnership established in September between the JAIC and GSA’s then-newly created AI Center of Excellence. The broader Centers of Excellence program was established in 2017 to act as a modernization advisory service, helping federal agencies identify needs across specific areas like cloud services or customer experience and develop acquisition strategies.

“The CoE and the JAIC continue to learn from each other and identify lessons that can be shared broadly across the federal space,” said Anil Cheriyan, director of GSA’s Technology Transformation Service, which oversees the CoE program. “It is important to work closely with our customers to acquire the best in digital adoption to meet their needs.”

GSA’s Federal Systems Integration and Management, or FEDSIM, office also played a role in the contract award.

[*] posted on 29-5-2020 at 12:35 PM

How Army Futures Command plans to grow soldiers’ artificial intelligence skills

By: Aaron Mehta   5 hours ago

Gen. John Murray, right, the head of Army Futures Command, listens to innovators during a visit to Capital Factory in Austin, Texas, on Sept. 30, 2018. (Courtesy of the U.S. Army)

WASHINGTON — With artificial intelligence expected to form the backbone of the U.S. military in the coming decades, the Army is launching a trio of new efforts to ensure it doesn’t get left behind, according to the head of Army Futures Command.

While speaking at an event Wednesday hosted by the Defense Writers Group, Gen. Mike Murray was asked about areas that need more attention as his command works to modernize the force.

Murray pointed to a change in how the service does long-term planning, as well as two personnel efforts that could pay off in the long run.

The first is something Murray has dubbed “Team Ignite,” which he described as “ad hoc, right now,” with a hope to formalize the process in the future. In essence, this means bringing in the teams that write the concept of operations for the military and having them work next to the technologists driving research and development efforts so that everything is incorporated early.

“It has occurred to me for a long time that when we prepare concepts about how we will fight in the future, they are usually not informed by scientists and what is potentially out there in terms of technology,” Murray said. “And when we invest in technologies, rarely do we consult the concept writers to understand what type of technology will fundamentally change the way we fight in the future.”

In Murray’s vision, this means soon there will be “a concept writer saying, ‘If only I could [do something we can’t do now], this would fundamentally change the way we would fight,’ and a scientist or technologist saying, ‘Well, actually we can, you know, another 10-15 years,’ and then vice versa,” he said. “Really using that to drive where we’re investing our science and technology dollars, so that in 10 or 15 years we actually can fundamentally change the way we’re going to fight.”

The Futures Command chief also laid out two new efforts to seed understanding of AI throughout the force, saying that “a key component of the Army moving more and more into the area of artificial intelligence is the talent that we’re going to need in the formation to do that.”

Murray described a “recently approved” master’s program to be run through Carnegie Mellon University, focused on bringing in “young officers, noncommissioned officers and warrant officers” to teach them about artificial intelligence. The course features four to five months of classroom learning, followed by five or six months working for the Army’s AI Task Force. After that, the officers are sent back to the force, bringing their AI experience with them.

Additionally, Murray is in the early stages of standing up what he described as a “software factory” to try and identify individual service members who have some computer skills, pull them out of their normal rotations and give them training on “basic coding skills” before sending them back to the force.

“We’re going to need a lot of these types of people. This is just [the] beginning, to seed the Army with the types of talent we’re going to need in the future if we’re going to take advantage of data, if we’re going to take advantage of artificial intelligence in the future,” he said.

[*] posted on 11-6-2020 at 03:19 PM

The Army AI task force takes on two ‘key’ projects

Andrew Eversden

6 hours ago

Army Futures Command is developing AI tools for target recognition. (monsitj)

The Army’s artificial intelligence task force is working on two key projects, including one that would allow unmanned vehicles in the air to communicate with autonomous vehicles on the ground, after securing new funding, a service official said June 10.

Gen. Mike Murray, commander of Army Futures Command, said during a June 10 webinar hosted by the Association of the United States Army that the task force has moved forward on the projects through its partnership with Carnegie Mellon University, launched in late 2018.

First, the team is working on programs dedicated to unmanned-unmanned teaming, or developing the ability of air and ground unmanned vehicles to talk to one another.

The other effort underway is on a DevSecOps environment to develop future algorithms to work with other Army systems, Murray said. He did not offer further detail.

The task force has fewer than 15 people, Murray said, and fiscal 2021 will be the first year that it receives appropriated funds from Congress. Much of the task force’s work so far has been building the team.

In response to an audience question, Murray said that the task force is not yet working on defending against adversarial machine learning, but added that leaders recognize that’s an area the team will need to focus on.

“We’re going to have to work on how do we defend our algorithms and really, how do we defend our training data that we’re using for our algorithms,” Murray said.

In order to train effective artificial intelligence, the team needs significant amounts of data. One of the first projects for the task force was collecting data to develop advanced target recognition capabilities. For example, Murray said, being able to identify different types of combat vehicles. When the work started, the training data for target recognition didn’t exist.

“If you’re training an algorithm to recognize cats, you can get on the internet and pull up hundreds of thousands of pictures of cats,” Murray said. “You can’t do that for a T-72 [a Russian tank]. You can get a bunch of pictures, but are they at the right angles, lighting conditions, vehicle sitting camouflaged to vehicle sitting open desert?”
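When real imagery is this scarce, one common workaround in the field (the Army task force has not described its actual pipeline) is synthetic augmentation: generating brightness- and orientation-varied copies of each source photo so a recognizer sees more of the conditions Murray lists. A minimal, purely illustrative sketch in Python, with toy pixel lists standing in for real images:

```python
import random

def augment(image, brightness=1.0, flip=False):
    """Return a transformed copy of a grayscale image (list of pixel rows).

    brightness scales pixel values (clamped to 0-255) to mimic different
    lighting; flip mirrors the image horizontally to mimic a different
    viewing angle.
    """
    rows = [[min(255, int(px * brightness)) for px in row] for row in image]
    if flip:
        rows = [row[::-1] for row in rows]
    return rows

def expand_dataset(images, n_variants=4, seed=0):
    """Generate n_variants randomized augmentations per source image."""
    rng = random.Random(seed)
    out = []
    for img in images:
        for _ in range(n_variants):
            out.append(augment(img,
                               brightness=rng.uniform(0.5, 1.5),
                               flip=rng.random() < 0.5))
    return out

# A 2x3 stand-in for a vehicle photo; real work would use actual imagery.
sample = [[10, 120, 240], [30, 60, 90]]
augmented = expand_dataset([sample], n_variants=4)
print(len(augmented))  # 4 variants generated from one source image
```

A production pipeline would also add rotations, occlusion, and camouflage-like noise, and would operate on real image arrays rather than nested lists; the point here is only that each scarce photo can be multiplied across lighting and orientation conditions.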

Murray also said he recognizes the Army needs to train more soldiers in data science and artificial intelligence. He told reporters in late May that the Army and CMU have created a master’s program in data science that will begin in the fall. He also pointed to the “software factory,” a six- to 12-week course to teach soldiers basic software skills. That factory will be based in Austin, where Futures Command is located, and will work with the local tech industry.

“We have got to get this talent identified. I’m convinced we have it in our formations,” Murray said.

[*] posted on 9-7-2020 at 04:29 PM

Pentagon AI center shifts focus to joint warfighting operations

Nathan Strout

9 hours ago

The Joint Artificial Intelligence Center is focusing its efforts on building AI for the Department of Defense's Joint All-Domain Command and Control concept. (Getty)

The Pentagon’s artificial intelligence hub is shifting its focus to enabling joint warfighting operations, developing artificial intelligence tools that will be integrated into the Department of Defense’s Joint All-Domain Command and Control efforts.

“As we have matured, we are now devoting special focus on our joint warfighting operation and its mission initiative, which is focused on the priorities of the National Defense Strategy and its goal of preserving America’s military and technological advantages over our strategic competitors,” Nand Mulchandani, acting director of the Joint Artificial Intelligence Center, told reporters July 8. “The AI capabilities JAIC is developing as part of the joint warfighting operations mission initiative will use mature AI technology to create a decisive advantage for the American war fighter.”

That marks a significant change from where JAIC stood more than a year ago, when the organization was still being stood up with a focus on using AI for efforts like predictive maintenance. That transformation appears to be driven by the DoD’s focus on developing JADC2, a system of systems approach that will connect sensors to shooters in near-real time.

“JADC2 is not a single product. It is a collection of platforms that get stitched together — woven together ― into effectively a platform. And JAIC is spending a lot of time and resources focused on building the AI component on top of JADC2,” said the acting director.

According to Mulchandani, the fiscal 2020 spending on the joint warfighting operations initiative is greater than JAIC spending on all other mission initiatives combined. In May, the organization awarded Booz Allen Hamilton a five-year, $800 million task order to support the joint warfighting operations initiative. As Mulchandani acknowledged to reporters, that task order exceeds JAIC’s budget for the next few years and it will not be spending all of that money.

One example of the organization’s joint warfighting work is the fire support cognitive system, an effort JAIC was pursuing in partnership with the Marine Corps Warfighting Lab and the U.S. Army’s Program Executive Office Command, Control and Communications-Tactical. That system, Mulchandani said, will manage and triage all incoming communications in support of JADC2.

Mulchandani added that JAIC was about to begin testing its new flagship joint warfighting project, which he did not identify by name.

“We do have a project going on under joint warfighting which is actually going to go into testing,” he said. “They are very tactical edge AI, is the way I’d describe it. That work is going to be tested. It’s actually promising work — we’re very excited about it.”

“As I talked about the pivot from predictive maintenance and others to joint warfighting, that is probably the flagship project that we’re sort of thinking about and talking about that will go out there,” he added.

While left unnamed, the acting director assured reporters that the project would involve human operators and full human control.

“We believe that the current crop of AI systems today [...] are going to be cognitive assistance,” he said. “Those types of information overload cleanup are the types of products that we’re actually going to be investing in.”

“Cognitive assistance, JADC2, command and control—these are all pieces,” he added.

[*] posted on 11-7-2020 at 03:13 PM

Where it Counts, U.S. Leads in Artificial Intelligence

(Source: US Department of Defense; issued July 09, 2020)

When it comes to advancements in artificial intelligence technology, China does have a lead in some places — like spying on its own people and using facial recognition technology to identify political dissenters. But those are areas where the U.S. simply isn't pointing its investments in artificial intelligence, said the director of the Joint Artificial Intelligence Center. Where it counts, the U.S. leads, he said.

"While it is true that the United States faces formidable technological competitors and challenging strategic environments, the reality is that the United States continues to lead in AI and its most important military applications," said Nand Mulchandani, during a briefing at the Pentagon.

The Joint Artificial Intelligence Center, which stood up in 2018, serves as the official focal point of the department's AI strategy.

China leads in some places, Mulchandani said. "China's military and police authorities undeniably have the world's most advanced capabilities, such as unregulated facial recognition for universal surveillance and control of their domestic population, trained on Chinese video gathered from their systems, and Chinese language text analysis for internet and media censorship."

The U.S. is capable of doing similar things, he said, but doesn't. It's against the law, and it's not in line with American values.

"Our constitution and privacy laws protect the rights of U.S. citizens, and how their data is collected and used," he said. "Therefore, we simply don't invest in building such universal surveillance and censorship systems."

The department does invest in systems that both enhance warfighter capability, for instance, and also help the military protect and serve the United States, including during the COVID-19 pandemic.

Several service members wearing face masks check the contents of a case of canned goods.

The Project Salus effort, for instance, which began in March of this year, puts artificial intelligence to work helping to predict shortages for things like water, medicine and supplies used in the COVID fight, said Mulchandani.

"This product was developed in direct work with [U.S. Northern Command] and the National Guard," he said. "They have obviously a very unique role to play in ensuring that resource shortages ... are harmonized across an area that's dealing with the disaster."

Mulchandani said what the Guard didn't have was predictive analytics on where such shortages might occur, or real-time analytics for supply and demand. Project Salus — named for the Roman goddess of safety and well-being — fills that role.

"We [now have] roughly about 40 to 50 different data streams coming into project Salus at the data platform layer," he said. "We have another 40 to 45 different AI models that are all running on top of the platform that allow for ... the Northcom operations team ... to actually get predictive analytics on where shortages and things will occur."
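The architecture Mulchandani describes (many data streams feeding a platform, with models running on top to flag predicted shortages) can be sketched in miniature. Everything below is a hypothetical stand-in, not Project Salus itself: the stream names, the simple moving-average forecast, and the threshold are assumptions for illustration only.

```python
from statistics import mean

# Hypothetical data streams: recent daily availability counts per resource.
streams = {
    "hospital_beds": [120, 110, 95, 80],
    "bottled_water": [500, 460, 400, 330],
}

def trend_forecast(history, horizon=3):
    """Extrapolate the average recent daily change over `horizon` days."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    step = mean(deltas)
    return [history[-1] + step * (i + 1) for i in range(horizon)]

def shortage_alerts(streams, threshold=0):
    """Run the forecast model over every stream; flag predicted shortages."""
    alerts = {}
    for name, history in streams.items():
        forecast = trend_forecast(history)
        if min(forecast) <= threshold:
            alerts[name] = forecast
    return alerts

# Hospital beds are projected to dip below 50 within three days; water is not.
print(shortage_alerts(streams, threshold=50))
```

The real system runs dozens of streams and dozens of models far richer than a moving average, but the shape is the same: ingest at a data layer, forecast per resource, and surface only the streams heading toward shortage.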

As an AI-enabled tool, he said, Project Salus can be used to predict traffic bottlenecks, hotel vacancies and the best military bases to stockpile food during the fallout from a damaging weather event.

As the department pursues joint all-domain command and control, or JADC2, the JAIC is working to build in the needed AI capabilities, Mulchandani said.

"JADC2 is ... a collection of platforms that get stitched together and woven together [effectively into] a platform," Mulchandani said. "The JAIC is spending a lot of time and resources focused on building the AI components on top of JADC2. So if you can imagine a command and control system that is current and the way it's configured today, our job and role is to actually build out the AI components both from a data, AI modeling and then training perspective and then deploying those."

When it comes to AI and weapons, Mulchandani said the department and JAIC are involved there too.

"We do have projects going on under joint warfighting, which are actually going into testing," he said. "They're very tactical-edge AI, is the way I describe it. And that work is going to be tested. It's very promising work. We're very excited about it."

While Mulchandani didn't mention specific projects, he did say that while much of the JAIC's AI work will go into weapons systems, none of those right now are going to be autonomous weapons systems. The concepts of a human-in-the-loop and full human control of weapons, he said, "are still absolutely valid."


[*] posted on 11-7-2020 at 06:52 PM

Pentagon AI Gains ‘Overwhelming Support’ From Tech Firms – Even Google

Despite past battles over Project Maven and other military uses of AI, “Google and many others” are now working with the Pentagon’s Joint Artificial Intelligence Center, its new acting director says.


on July 10, 2020 at 11:11 AM

Nand Mulchandani, Acting Director of the Joint Artificial Intelligence Center, holds his first Pentagon press conference, July 8, 2020.

WASHINGTON: Despite some very public blow-ups, “we have had overwhelming support and interest from tech industry in working with the JAIC and the DoD,” the new acting director of the Joint Artificial Intelligence Center, Nand Mulchandani, said Wednesday in his first-ever Pentagon press conference. Speaking two years after Google very publicly pulled out of the AI-driven Project Maven, Mulchandani said that, today, “[we] have commercial contracts and work going on with all of the major tech and AI companies — including Google — and many others.”

Mulchandani is probably better positioned to sell this message than his predecessor, Lt. Gen. Jack Shanahan, an Air Force three-star who ran Project Maven and then founded the Joint AI Center in 2018. While highly respected in the Pentagon, Shanahan’s time on Maven and his decades in uniform created some static in Silicon Valley. Mulchandani, by contrast, has spent his life in the tech sector, joining JAIC just last year after a quarter-century in business and academe.

But relations with the tech world are still tricky at a time when the two-year-old JAIC is moving from relatively uncontroversial uses of artificial intelligence such as AI-driven predictive maintenance, disaster relief and COVID response, to battlefield uses. In June, it awarded consulting firm Booz Allen Hamilton a multi-year contract to support its Joint Warfighting National Mission Initiative, with a maximum value of $800 million, several times the JAIC’s annual budget. For the current fiscal year, Mulchandani said, “spending on joint warfighting is roughly greater than the combined spending on all of the other JAIC mission initiatives.”

But tech firms should not have a problem with that, Mulchandani said, and most of them don’t, because AI in the US military is governed far more strictly than it is in rival nations like China or Russia. “Warfighting” doesn’t mean Terminators, SkyNet, or other scifi-style killer robots. It means algorithmically sorting through masses of data to help human warfighters make better decisions faster.

Wait, one reporter asked, didn’t Shanahan say shortly before his retirement that the military was about to field-test its first “lethal” AI?

“Many of the products we work on will go into weapons systems,” Mulchandani said. “None of them right now are going to be autonomous weapons systems.”

“Now, we do have products going on under joint warfighting which are actually going into testing,” he went on. “As we pivot [to] joint warfighting, that is probably the flagship product … but it will involve operators, human in the loop, human control.”

For example, JAIC is working with the Army’s PEO-C3T (Command, Control, & Communications – Tactical) and the Marine Corps Warfighting Lab (MCWL) on a Fire Support Cognitive Assistant, software to sort through incoming communications such as calls for artillery or air support. It’s part of a much wider push, led by the Air Force, to create a Joint All-Domain Command & Control (JADC2) mega-network that can coordinate operations by all five armed services across land, sea, air, space, and cyberspace.

Multi-Domain Operations, or All Domain Operations, envisions a new collaboration across land, sea, air, space, and cyberspace (Army graphic)

[*] posted on 21-7-2020 at 01:01 PM

Could this software help users trust machine learning decisions?

Nathan Strout

6 hours ago

BAE Systems says its new software will essentially audit machine learning systems, providing human users with more context about the systems' output.

WASHINGTON - New software developed by BAE Systems could help the Department of Defense build confidence in decisions and intelligence produced by machine learning algorithms, the company claims.

BAE Systems said it recently delivered its new MindfuL software program to the Defense Advanced Research Projects Agency in a July 14 announcement. Developed in collaboration with the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory, the software is designed to increase transparency in machine learning systems—artificial intelligence algorithms that learn and change over time as they are fed ever more data—by auditing them to provide insights into how they reach their decisions.

“The technology that underpins machine learning and artificial intelligence applications is rapidly advancing, and now it’s time to ensure these systems can be integrated, utilized, and ultimately trusted in the field,” said Chris Eisenbies, product line director of the company’s Autonomy, Control, and Estimation group. “The MindfuL system stores relevant data in order to compare the current environment to past experiences and deliver findings that are easy to understand.”

While machine learning algorithms show promise for DoD systems, determining how much users can trust their output remains a challenge. Intelligence officials have repeatedly noted that analysts cannot rely on black box artificial intelligence systems that simply produce a decision or piece of intelligence—they need to understand how the system came to that decision and what unseen biases (in the training data or otherwise) might be influencing that decision.

MindfuL is designed to help address that gap by providing more context around those outputs. For instance, the company says its program will issue statements such as: “The machine learning system has navigated obstacles in sunny, dry environments 1,000 times and completed the task with greater than 99 percent accuracy under similar conditions;” or “The machine learning system has only navigated obstacles in rain 100 times with 80 percent accuracy in similar conditions; manual override recommended.” Those types of statements can help users evaluate how much confidence they should place in any individual decision produced by the system.
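The kind of competency audit those statements imply, tallying a system's track record per operating condition and recommending manual override where experience is thin, can be sketched as follows. The class and method names here are assumptions for illustration; BAE Systems has not published MindfuL's API.

```python
from collections import defaultdict

class CompetencyAudit:
    """Track an ML system's track record per operating condition."""

    def __init__(self, min_trials=500, min_accuracy=0.95):
        self.min_trials = min_trials
        self.min_accuracy = min_accuracy
        # condition -> [trials, successes]
        self.records = defaultdict(lambda: [0, 0])

    def log(self, condition, success):
        """Record one trial outcome under a named condition."""
        rec = self.records[condition]
        rec[0] += 1
        rec[1] += int(success)

    def advise(self, condition):
        """Emit a human-readable statement about competency in a condition."""
        trials, successes = self.records[condition]
        accuracy = successes / trials if trials else 0.0
        verdict = ("sufficient experience"
                   if trials >= self.min_trials and accuracy >= self.min_accuracy
                   else "manual override recommended")
        return (f"System has operated {trials} times in '{condition}' "
                f"conditions with {accuracy:.0%} accuracy; {verdict}.")

audit = CompetencyAudit(min_trials=100, min_accuracy=0.9)
for _ in range(1000):
    audit.log("sunny", success=True)
for i in range(100):
    audit.log("rain", success=(i % 5 != 0))  # 80% success rate in rain
print(audit.advise("sunny"))
print(audit.advise("rain"))
```

The design choice worth noting is that the audit layer sits outside the model: it needs only logged outcomes, not access to the model's internals, which is what lets such a tool wrap an otherwise black-box system.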

This is the first release of the MindfuL software as part of a $5 million, three-year contract under DARPA’s Competency-Aware Machine Learning (CAML) program. BAE Systems plans to demonstrate its software in both simulation and prototype hardware later this year.