
Live Blog: AAAI Fall Symposium on 'Artificial Intelligence in Government and Public Sector'


Scroll down for updates

CSRI Co-Director Prof Alun Preece is in Arlington, Virginia, USA as a co-organiser of the AAAI Fall Symposium on “Artificial Intelligence in Government and Public Sector”.

Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the AI field. AAAI has hosted its Fall Symposia series for over 25 years on the east coast of the USA. Arlington, VA, adjacent to Washington DC, is an ideal venue for an event examining the progress, opportunities and challenges in applying AI and machine learning in government and public sector domains. Prof Preece will be updating this live blog over the three days of the symposium.


Lynne Parker of the White House Office of Science and Technology Policy (OSTP) opened the Symposium with a close look at the US Government’s AI R&D Strategic Plan https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf which emphasises two areas where CSRI's joint US/UK DAIS programme is making advances:

  1. Developing effective methods for human-AI collaboration (rather than human replacement);

  2. Ensuring the safety and security of AI systems (trustworthy, safe, explainable AI).

Dr Parker also referenced the 2018 White House memorandum “Artificial Intelligence for the American People” https://www.whitehouse.gov/briefings-statements/artificial-intelligence-american-people/ which emphasises the US Government’s commitment to pursuing international AI R&D collaborations, highlighting the first-ever Science and Technology (S&T) agreement between the United States and the United Kingdom.


Gavin Pearson, CSRI’s close collaborator from Dstl / Ministry of Defence, spoke on “A Systems Approach to Achieving the Benefits of Artificial Intelligence in UK Defence”. Gavin and his co-authors identify issues which currently block UK Defence from fully benefiting from AI technology, setting these in the context of a systems reference model for the AI Value Train.

Key issues highlighted by Gavin included addressing achievable use cases to show benefit, improving the availability of defence-relevant data, and enhancing defence ‘know how’ in AI.

The pre-proceedings of the Symposium, including Gavin’s paper, are published at https://arxiv.org/abs/1810.06018


Alun Preece presented the first of four papers co-authored by the CSRI DAIS team. “Hows and Whys of Artificial Intelligence for Public Sector Decisions: Explanation and Evaluation” https://arxiv.org/abs/1810.02689 is based on the outputs of a workshop hosted at CSRI in May 2018 in collaboration with Y Lab http://ylab.wales where we brought together AI technologists and public sector “challenge owners” to explore the potential for AI and machine learning in their organisations. The paper distils a set of issues and strategies, framed in terms of the differing elements that need to be emphasised when creating and evaluating AI solutions for public sector decision applications. The talk highlighted the importance of explanation facilities and their relationship to verification and validation.
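
As a loose, generic illustration of what an “explanation facility” can look like in practice (a minimal sketch of standard permutation importance, not a method from the paper), the snippet below scores each input feature by how much a model’s accuracy drops when that feature is shuffled; the toy model and data are purely illustrative:

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic explanation: shuffle one feature at a time and
    measure the score drop; bigger drops suggest heavier reliance."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # permute column j in place
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances.append(float(np.mean(drops)))
    return importances

class MeanThresholdModel:
    """Toy 'model': predicts 1 when feature 0 exceeds its training mean."""
    def __init__(self, X):
        self.mu = X[:, 0].mean()
    def predict(self, X):
        return (X[:, 0] > self.mu).astype(int)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 matters
model = MeanThresholdModel(X)
accuracy = lambda y_true, y_pred: float(np.mean(y_true == y_pred))
print(permutation_importance(model, X, y, accuracy))
# Expect a large score for feature 0, near-zero for features 1 and 2.
```

Per-feature scores like these can be reported alongside a system’s predictions, and because they are computed against held-out data they also feed naturally into verification and validation.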


Richard Tomsett of IBM Research UK, our collaborator on the DAIS ITA programme, presented a paper co-authored by CSRI’s Alun Preece and Dan Harborne. “Stakeholders in Explainable AI” https://arxiv.org/abs/1810.00184 expands on our earlier “Interpretable to Whom?” framework by considering the problem of explaining the decision-making behaviour of modern machine learning systems from the perspective of four communities: developers, theorists, ethicists and users. The paper argues that, of these four communities, users currently have the least representation in the literature; however, the success or failure of AI as a technology in the near term is critically dependent on meeting users’ expectations.


Dr Tien Pham, the US Army Research Lab’s senior campaign scientist for information sciences, presented an overview of ARL’s artificial intelligence and machine learning essential research area. Tien, who is also US Government technical area leader for DAIS ITA, emphasised the challenges of developing AI and ML technologies that are robust in complex data environments, and can work effectively in resource-constrained “edge” environments. He highlighted the value to ARL of working in partnership with academia and industry, including our DAIS work. A key innovation to facilitate collaboration is the ARL Open Campus in Adelphi, Maryland, which recently played host to three of CSRI’s PhD students.


An emerging theme of the symposium has been robustness of AI/ML systems in the face of “unknowns”. Lance Kaplan of the US Army Research Laboratory presented an approach to directly addressing this problem, from the paper “Uncertainty Aware AI ML: Why and How” https://arxiv.org/abs/1809.07882 co-authored by CSRI’s Federico Cerutti and Alun Preece. The key point highlighted by Lance is that it is critical for AI/ML systems to express uncertainty in novel situations, i.e., to be aware of “known unknowns”. This allows them to work more effectively in collaboration with humans, whose broader knowledge and direct experience of the world may allow them to fill in the gaps in the machine’s knowledge. This is seen as an exciting research area, and is an active theme in the DAIS ITA project “Anticipatory Situational Understanding”.
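
To make the “known unknowns” point concrete, here is a deliberately simple sketch of the general idea (an entropy-thresholded abstaining classifier, not the uncertainty machinery developed in the paper); the threshold value and the softmax-style inputs are illustrative assumptions:

```python
import numpy as np

def classify_or_abstain(probs, threshold=0.5):
    """Return the predicted class, or "UNKNOWN" when predictive entropy
    is high, flagging a 'known unknown' for a human collaborator
    instead of guessing."""
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    normalised = entropy / np.log(len(probs))  # 0 = certain, 1 = uniform
    if normalised > threshold:
        return "UNKNOWN", normalised
    return int(np.argmax(probs)), normalised

print(classify_or_abstain([0.96, 0.02, 0.02]))  # confident: class 0
print(classify_or_abstain([0.36, 0.33, 0.31]))  # ambiguous: UNKNOWN
```

Raw softmax scores are often badly calibrated on genuinely novel inputs, which is precisely why the more principled uncertainty quantification discussed in the paper matters.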


Presenting on behalf of CSRI PhD student Iain Barclay, Alun Preece gave the paper “Defining the Collective Intelligence Supply Chain” https://arxiv.org/abs/1809.09444 which argues for a more principled way of building collective intelligence systems that exploit data from humans, e.g., via crowd-sourcing. The paper proposes a “supply chain” model backed by decentralised blockchain distributed ledgers that makes the provenance of data traceable back to providers. This enables tracking of source validity — for example, is the source qualified or trustworthy? — and fair reward for crowd-worker efforts. The latter is an important issue, as a step towards “ethically sourced” collective intelligence products such as an AI/ML system that uses training data produced by human labellers.
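
As a rough sketch of the provenance idea only (a toy hash-linked chain with hypothetical field names such as "provider" and "labeller-42", not the design proposed in the paper, and omitting the signatures and consensus a real distributed ledger would need), the snippet below links each record to its predecessor so a consumer can trace a label back to its source and detect tampering:

```python
import hashlib
import json
import time

def make_record(payload, prev_hash):
    """Append-only provenance record, hash-linked to its predecessor."""
    record = {"payload": payload, "prev_hash": prev_hash, "ts": time.time()}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def chain_is_valid(records):
    """Recompute every hash and check each link to its predecessor."""
    for i, rec in enumerate(records):
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected:
            return False
        if i > 0 and rec["prev_hash"] != records[i - 1]["hash"]:
            return False
    return True

label = make_record({"provider": "labeller-42", "item": "img-001",
                     "label": "cat"}, prev_hash="0" * 64)
review = make_record({"reviewer": "qa-7", "action": "verified"},
                     prev_hash=label["hash"])
print(chain_is_valid([label, review]))  # True; edit either record -> False
```

Because each record names its (pseudonymous) provider, the same chain that supports validity checks can also support fair attribution and reward for crowd-worker contributions.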


Chuck Howell of Mitre Corp presented the summary of the Symposium at the plenary wrap-up session:

  1. “Get it out of the lab” — several speakers argued that in order to make progress it’s time to move AI and ML beyond theory-driven concerns and really start engaging with transitioning the technology into messy real-world contexts.

  2. “Get beyond admiring the problem” — another related recurring theme was the need to move beyond examining the (admittedly large!) problem space of what can go badly wrong in AI/ML and to focus on pragmatic near-term solutions: what does incremental progress look like?

  3. “Oh, it’s real now” — the Symposium has now been running for four years and it was very evident this year that we’ve seen progression from “opportunities” to “pilots” to acceptance of AI/ML as “business as usual”.

  4. A distinguishing feature of this Symposium is a shared tacit understanding that technical and governance (human) issues go hand-in-hand in government and public sector AI/ML application domains.

  5. An explicit theme of this year’s Symposium was considering what makes the government and public sector domains “special”: there is a wealth of publicly-owned data, though access needs to be strictly overseen; compared to industry, there is a lack of compute resources; the consequences of failure are very serious, and risk tolerance is relatively conservative; nevertheless, there is a deeply-rooted culture of accountability and transparency in our domains — and strong interdisciplinary working across engineering, social sciences and humanities — which arguably puts this community in a prime position to address areas such as bias mitigation, safety, and human+computer co-working.


ENDS
