News

What's been going on @ COMPASS

Steering AI

Lewis Griffin appeared on the Steering AI podcast to discuss how Large Language Models could be used for political influence.

16-02-2024

Artificial Intelligence: The Criminal Threat

Lewis Griffin and Kimberly Ton Tran contributed to the BBC File on 4 documentary "Artificial Intelligence: The Criminal Threat".

26-11-2023

Generative AI and Homeland Security

COMPASS organised a two-day workshop, 'Generative AI and Homeland Security: rethinking risk and response', 9-10 November 2023.

10-11-2023

Malicious uses of AI

Lewis Griffin presented on 'Malicious uses of AI' at the 1st AI Forum for Law Enforcement in Arab Countries,
co-organized by NAUSS and UNICRI.

05-10-2023

AI in Wargaming

Lewis Griffin presented on 'AI and Wargaming' at Connections UK 2023. [Slides, Audio]

06-09-2023

BBC Inside Science

Professor Lewis Griffin was a guest on BBC Inside Science talking about voice deepfakes.

17-08-2023

Generative AI Policy Workshop

COMPASS, together with the Dawes Centre for Future Crime, is organising a workshop for the UK Home Office to build understanding of the technology and the threat, enhance direct industry engagement, and begin articulating future policy positions on Generative AI.

25-07-2023

Large Language Models & Influence

Report (id DSTL/TR149009) completed for UK Defence S&T Futures Programme.

15-05-2023

Susceptibility to Influence of Large Language Models

Our new pre-print paper on Susceptibility to Influence of Large Language Models is now available on arXiv.

10-03-2023

Adversarial Camera Model Anonymization

Our new paper on Conditional Adversarial Camera Model Anonymization has been accepted at ECCV 2020 (Advances in Image Manipulation workshop).

10-08-2020

AI-enabled Future Crime

Our paper on AI-enabled Future Crime has been accepted and is in press at Crime Science.

30-06-2020

Limits on Transfer Learning

Our new paper "Limits on transfer learning from photographic image data to X-ray threat detection" has been published in the Journal of X-ray Science and Technology

02-01-2020

Multiple-Identity Image Attacks Against Face-based Identity Verification

Our new paper on Multiple Identity Images is now available online: "Multiple-Identity Image Attacks Against Face-based Identity Verification"

01-07-2019

AvSec World 2019

Lewis Griffin is an invited speaker at the "AvSec World 2019" conference in Miami, 26-28 February.

26-02-2019

AI & Future Crime

COMPASS is hosting an AI & Future Crime sandpit, 14-15 February 2019.

14-02-2019

Unexpected item in the bagging area

Our new Anomaly Detection paper for IEEE Transactions on Information Forensics and Security is now online: "Unexpected item in the bagging area".

16-11-2018

Invited Talk at ADSA18

Dr Lewis Griffin will give an invited talk at the 18th workshop on Advanced Development for Security Applications (ADSA18).

24-03-2018

FASS Phase 2 project award

COMPASS awarded FASS phase 2 funding for project 'Next-generation automated image analysis for security: semantics and anomalies'.

26-01-2018

AI & Future Crime

COMPASS awarded project from Dawes Foundation on 'AI & Future Crime'.

23-09-2017

Defence Science and Technology Laboratory

COMPASS & Renzoni lab awarded DSTL project on 'Machine learning aided electromagnetic imaging with atomic magnetometers'.

29-07-2017

The Team

Meet the members of the COMPASS team

Prof. Lewis Griffin

Group Leader | Principal Investigator

Maximilian Mozes

PhD Student

Kimberly Tran

PhD Student

Former Members

Dr. Nicolas Jaccard

Former Research Associate

Dr. Thomas W. Rogers

Former PhD Student

Mark Ransley

Former PhD Researcher

Dr. Thomas Tanay

Former PhD Student

Dr. Jerone T. Andrews

Former Research Associate

Research Projects

Projects that COMPASS is currently working on

Large Language Models (e.g. OpenAI's GPT-4) have uses in Strategic Influence as Author, Vector, Target, Subject and Gauge. Understanding the potential of these roles will support detection of, and defence against, adversarial influence.

Security staff inspect x-ray images in two ways: threat detection, where they look for particular items (e.g. knives, detonators); and anomaly detection, where they look for deviations from the normal. This project will automate anomaly detection, for which no systems currently exist. For firearms, anomaly detection is particularly important for ISO containers and vehicles, where the fabric of the container or vehicle provides opportunities for concealment. Firearms concealed in this way may not be visible as such, and hence not detectable by threat detection methods, but may still be noticed by security staff who spot a darkening out of place (e.g. in the roof of an ISO container) or a shape that is not quite right (e.g. the engine block of a car). The system we develop for anomaly detection, like experienced security staff, will 'know' what is normal so that it can spot such deviations.
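
As a rough illustration of how a system can 'know' what is normal, the sketch below trains an autoencoder on benign image patches only, and at inspection time uses the reconstruction error as an anomaly score. This is an assumption-laden toy (PyTorch, placeholder patch size and data), not the project's actual system.

```python
# Illustrative sketch only (not the COMPASS system): an autoencoder trained on
# benign-only patches reconstructs "normal" well, so a high reconstruction
# error flags something out of place.
import torch
import torch.nn as nn

class PatchAutoencoder(nn.Module):
    def __init__(self, dim=64 * 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PatchAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

benign_patches = torch.rand(512, 64 * 64)   # placeholder for flattened benign x-ray patches

for _ in range(20):                          # learn to reconstruct normal appearance only
    optimiser.zero_grad()
    loss = loss_fn(model(benign_patches), benign_patches)
    loss.backward()
    optimiser.step()

def anomaly_score(patch):
    """Reconstruction error for one patch: high values mean 'not like normal'."""
    with torch.no_grad():
        return loss_fn(model(patch), patch).item()
```

A threshold on this score, calibrated on held-out benign images, would decide which scans get passed up to an operator.
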
This project will develop firearm detection algorithms using the most recent methods of Computer Vision. Specifically, we will use Convolutional Neural Networks (CNNs), with parameters deep-learnt from training images. CNNs approximate the action and connections of neurons in the human brain: 'deep' because of the many layers of artificial neurons they employ; 'learning' because only the broad architecture of the network is engineered, with its detailed parameters learnt by exposing it to relevant images. Across the two phases of the project, we will develop a single algorithm for detection of firearms in x-ray images, with performance validated as being at least at human level, applicable to images from scanners of any modality and manufacturer after a one-off automated tuning process for operation on a new scanner type, requiring only a sample dataset of benign images.
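
For illustration only, the sketch below shows the engineered-architecture / learnt-parameters split described above: a small PyTorch CNN whose weights are fitted to labelled x-ray crops. The layer sizes, input resolution and two-class labels are assumptions for the example, not the project's validated detector.

```python
# Minimal sketch of a deep-learnt detector: the architecture is engineered,
# the weights are learnt from labelled x-ray images (firearm / no firearm).
import torch
import torch.nn as nn

class SmallFirearmCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, H, W) greyscale x-ray crops
        return self.classifier(self.features(x).flatten(1))

model = SmallFirearmCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for labelled training images.
images = torch.rand(8, 1, 128, 128)
labels = torch.randint(0, 2, (8,))

optimiser.zero_grad()
loss = loss_fn(model(images), labels)           # 'learning': parameters adjusted from examples
loss.backward()
optimiser.step()
```

On this sketch, the one-off automated tuning for a new scanner type could amount to briefly re-training such a network on a sample of images from that scanner; the project's actual adaptation procedure is not described here.
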
Since 9/11, commercial flights have been attacked by groups with political aims. Their strategic aim is to provoke fear, leading to pressure for political change. Reactive security measures, which visibly guard against a repeat attack, achieve the purpose of the attack, since they constantly remind us of the existence of the adversaries, who are confirmed as national enemies rather than mere criminals. Understanding this, the terrorists readily vary their tactics. Each variation provokes a new measure layered on top of existing measures. The apparent potency of the adversaries is hugely magnified: we walk unshod, and have strangers touch our 'junk' and taste our children's food. To exit this cycle a novel approach is needed, one capable of detecting the next attack rather than the previous one, and of doing so invisibly, so that fear is not further magnified. In this project, we will apply the latest Computer Science to automate Anomaly Detection (noticing the suspicious or unusual), as practised by experienced, trained security staff. Deep Learning algorithms, which mimic the operation of the human brain, make this feasible: they already recognise faces better than humans do. If automated Anomaly Detection at human levels of performance can be achieved, then computers calculating out of sight can pore over every luggage and baggage scan and every airport CCTV feed, as if a thousand trained, experienced, unflagging security staff were constantly employed in every airport, looking for oddities and passing them up to human security staff when found.

In 2013 the Renzoni lab at UCL demonstrated, for the first time, the possibility of imaging using atomic magnetometers in the Magnetic Induction Tomography modality. This opened up a new realm for electromagnetic imaging, given the extreme sensitivity of atomic magnetometers at low frequency. Electromagnetic imaging has potential in many security domains where x-rays are not applicable, such as universal fast parcel screening. COMPASS is working with the Renzoni lab, applying machine learning methods to produce tomographic images from magnetometer measurements that depend non-linearly on the scene. The Machine Learning approach provides an alternative to the standard Inverse Problem approach, which requires many Finite Element simulations to produce each image. By shifting the computational burden onto a training stage, we permit inversion fast enough to be applicable in high-throughput security applications.
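
A minimal sketch of the general recipe, assuming a toy non-linear forward model and an off-the-shelf scikit-learn regressor (neither drawn from the actual project): a regression model is fitted offline on simulated measurement/image pairs, so that at run time a tomographic image is produced by a single fast prediction rather than an iterative finite-element inversion.

```python
# Illustrative learned inversion: fit a regressor on simulated
# (magnetometer measurements -> image) pairs offline, then invert new
# measurements with one cheap prediction instead of iterative FEM fitting.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_train, n_sensors, img_side = 2000, 32, 16
true_maps = rng.random((n_train, img_side * img_side))          # simulated conductivity maps
forward_operator = rng.normal(size=(img_side * img_side, n_sensors))
measurements = np.tanh(true_maps @ forward_operator)            # toy non-linear forward model

# Training stage: this is where the computational burden is shifted.
inverse_model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300)
inverse_model.fit(measurements, true_maps)

# Run time: one fast prediction per scan, suitable for high throughput.
new_measurement = np.tanh(rng.random((1, img_side * img_side)) @ forward_operator)
reconstructed_image = inverse_model.predict(new_measurement).reshape(img_side, img_side)
```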

Publications

Selection of recent COMPASS research output

Year | Title | Journal/Conf. | Authors | Type
2023 | Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities | arXiv | Maximilian Mozes, Xuanli He, Bennett Kleinberg, Lewis Griffin | Language Models
2023 | Warning: Humans Cannot Reliably Detect Speech Deepfakes | PLoS One (in press) | Kimberly Mai, Sergi Bray, Toby Davies, Lewis Griffin | Deepfakes, Speech, Audio
2023 | Large Language Models respond to Influence like Humans | SICon 2023 | Lewis Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly Mai, Maria Vau, Matthew Caldwell, Augustine Mavor-Parker | Language Models
2023 | Susceptibility to Influence of Large Language Models | arXiv | Lewis Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly Mai, Maria Vau, Matthew Caldwell, Augustine Mavor-Parker | Language Models
2023 | Warning: Humans Cannot Reliably Detect Speech Deepfakes | arXiv | Kimberly Mai, Sergi Bray, Toby Davies, Lewis Griffin | Deepfakes, Speech, Audio
2020 | Conditional Adversarial Camera Model Anonymization | ECCV | Jerone Andrews, Yidan Zhang, Lewis Griffin | Object Detection, Adversarial Machine Learning
2020 | AI-enabled Future Crime | Crime Science | Matthew Caldwell, Jerone Andrews, Thomas Tanay, Lewis Griffin | Review, Future Crime
2019 | Limits on transfer learning from photographic image data to X-ray threat detection | Journal of X-ray Science and Technology | Matthew Caldwell, Lewis Griffin | Transfer Learning, Object Detection
2019 | Multiple-Identity Image Attacks Against Face-based Identity Verification | arXiv | Jerone Andrews, Thomas Tanay, Lewis Griffin | Face Morphing, Data Poisoning, Adversarial Machine Learning
2018 | Unexpected item in the bagging area: Anomaly Detection in X-ray Security Images | TIFS | Lewis Griffin, Matthew Caldwell, Jerone Andrews, Helene Bohler | Anomaly Detection
2018 | Machine Learning Based Localization and Classification with Atomic Magnetometers | PRL | Cameron Deans, Lewis Griffin, Luca Marmugi, Ferruccio Renzoni | Electromagnetic Imaging
2017 | Transferring x-ray based automated threat detection between scanners with different energies and resolution | SPIE D+S | Matthew Caldwell, Mark Ransley, Thomas Rogers, Lewis Griffin | Object Detection
2017 | Representation-learning for anomaly detection in complex x-ray cargo imagery | SPIE D+S | Jerone Andrews, Nicolas Jaccard, Thomas Rogers, Thomas Tanay, Lewis Griffin | Anomaly Detection
2017 | L2 Regularization and the Adversarial Distance | ICML | Jerone Andrews, Nicolas Jaccard, Thomas Rogers, Thomas Tanay, Lewis Griffin | Anomaly Detection, Adversarial Machine Learning
2017 | A deep learning framework for the automated inspection of complex dual-energy x-ray cargo imagery | SPIE D+S | Thomas Rogers, Nicolas Jaccard, Edward Morton, Lewis Griffin | Object Detection
2016 | A boundary tilting perspective on the phenomenon of adversarial examples | arXiv | Thomas Tanay, Lewis Griffin | Adversarial Machine Learning
2016 | Anomaly Detection for Security Imaging | DSDS | Jerone Andrews, Nicolas Jaccard, Thomas Rogers, Thomas Tanay, Lewis Griffin | Anomaly Detection
2016 | Automated detection of smuggled high-risk security threats using Deep Learning | ICDP | Nicolas Jaccard, Thomas Rogers, Edward Morton, Lewis Griffin | Object Detection
2016 | Detection of concealed cars in complex cargo X-ray imagery using deep learning | JXST | Nicolas Jaccard, Thomas Rogers, Edward Morton, Lewis Griffin | Object Detection
2016 | Automated X-ray Image Analysis for Cargo Security: Critical Review and Future Promise | JXST | Thomas Rogers, Nicolas Jaccard, Lewis Griffin | Review, Object Detection, Anomaly Detection, Image Pre-Processing
2016 | Measuring and correcting wobble in large-scale transmission radiography | JXST | Thomas Rogers, James Ollier, Edward Morton, Lewis Griffin | Image Pre-Processing
2016 | Threat Image Projection (TIP) into X-ray images of cargo containers for training humans and machines | IEEE ICCST | Thomas Rogers, Nicolas Jaccard, Emmanouil Protonotarios, James Ollier, Edward Morton, Lewis Griffin | Object Detection
2016 | Transfer Representation-Learning for Anomaly Detection | ICML | Jerone Andrews, Thomas Tanay, Edward Morton, Lewis Griffin | Anomaly Detection
2016 | Tackling the x-ray cargo inspection challenge using machine learning | SPIE D+S | Nicolas Jaccard, Thomas Rogers, Edward Morton, Lewis Griffin | Review, Object Detection, Image Pre-Processing
2016 | Detecting Anomalous Data Using Auto-Encoders | IJMLC | Jerone Andrews, Edward Morton, Lewis Griffin | Anomaly Detection
2015 | Using deep learning on X-ray images to detect threats | DSDS | Nicolas Jaccard, Thomas Rogers, Edward Morton, Lewis Griffin | Object Detection
2015 | Detection of cargo container loads from X-ray images | IET ICISP | Thomas Rogers, Nicolas Jaccard, Edward Morton, Lewis Griffin | Object Detection
2014 | Labelling images without classifiers | YDS | Theodore Boyd, Lewis Griffin | Object Detection
2014 | Automated detection of cars in transmission X-ray images of freight containers | IEEE AVSS | Nicolas Jaccard, Thomas Rogers, Lewis Griffin | Object Detection
2014 | Reduction of Wobble Artefacts in Images From Mobile Transmission X-ray Vehicle Scanners | IEEE ICIST | Thomas Rogers, James Ollier, Edward Morton, Lewis Griffin | Image Pre-Processing

Partners and Funding

Rapiscan Systems
EPSRC
Home Office
DfT
Defence and Security Accelerator

Contact us

Lewis D. Griffin
Department of Computer Science
University College London
Gower Street
London
WC1E 6BT
Telephone: +44 20 3108 7107
E-mail: l.griffin@cs.ucl.ac.uk
