What's a hackathon?

A hackathon is an event, usually lasting a few days, during which participants are invited to use their programming skills to collaborate on one or more projects.

Program

Welcome session + project leaders' presentations: 10:00 - 11:00, Jeremy Bentham room - UCL main campus
Coffee and lunch breaks during the day
Closing session: 16:30 - 17:00, Jeremy Bentham room - UCL main campus
Socializing: 17:00 - all night, Function room - 90 High Holborn

List of Projects

A list of the currently proposed projects for the hackathon, with links to each proposer and their research group for more background.

We are no longer looking for more projects because we've had so many great submissions! If you have an idea that you didn't get to propose, then consider being on next year's organising committee.

Lung and placenta united
Project leaders: Ashkan Pakzad, Paddy Slator. Research groups: Satsuma, MIG.
This project will transfer knowledge, ideas, and techniques between the similar but different domains of lung and placenta imaging. We'll apply AirQuant to derive lower-dimensional representations of synthetic placental vascular trees, segment vessels in placental microCT with techniques designed for airways, calculate susceptibility maps from airway segmentations, and calculate flow and transport maps in the lungs.

Deploying Cloud ML Models Using Microsoft InnerEye
Project leaders: Tom Dowrick, Haroon Chugthai. Research groups: COMPASS, ARC/CMIC MIRSG.
Microsoft InnerEye provides a suite of tools for training and developing medical imaging algorithms using either local computing resources or Azure cloud services. Many algorithms/models developed within CMIC would benefit from compatibility with InnerEye, allowing more widespread deployment.
During this hackathon project, participants will convert existing models developed within CMIC to use PyTorch Lightning, and then deploy them on the cloud using InnerEye. Participants will also work to deploy a local InnerEye environment using existing CMIC computing resources, providing a testing/development environment that can be used in the longer term to prepare models before they are deployed on the cloud.

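As a rough starting point, here is a minimal sketch of what wrapping an existing model in PyTorch Lightning might look like; the wrapped model, loss, and hyperparameters are placeholders rather than any particular CMIC or InnerEye code:

# Hypothetical sketch: wrapping an existing segmentation model in a
# PyTorch Lightning module so it can be trained locally or in the cloud.
import torch
import pytorch_lightning as pl

class SegmentationModule(pl.LightningModule):
    def __init__(self, model: torch.nn.Module, lr: float = 1e-3):
        super().__init__()
        self.model = model                       # an existing model, unchanged
        self.loss = torch.nn.CrossEntropyLoss()  # placeholder loss
        self.lr = lr

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        images, labels = batch
        logits = self(images)
        loss = self.loss(logits, labels)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

A module like this can then be trained with a standard pl.Trainer locally before any cloud deployment is attempted.
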
Towards Robust Machine Learning: Debiasing Neural Networks via Mixtures of Experts
Project leader: Moucheng Xu. Research group: Satsuma.
Safety and robustness of machine learning models are very important in applications such as medical imaging. However, current machine learning models are vulnerable to out-of-distribution noise (e.g. the test data differs from the training data: healthy scans for training and patient scans at test time, or data from different scanners or populations). In this project, we will have two adversarial objectives. One objective will focus on how to design "worse" out-of-distribution noise to attack the networks (e.g. extreme class imbalance tailored to the data distribution, extreme augmentation, etc.). The other objective will focus on how to design "better" mixture-of-experts neural network models (e.g. probabilistic or stochastic mixtures) to protect the networks. Given the time limit, we will evaluate progress on a synthetic image segmentation task based on MNIST. The project will be based on Python and PyTorch.

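For orientation, here is a minimal sketch of a gated mixture-of-experts layer in PyTorch; the layer sizes, gate, and number of experts are illustrative, not the project's actual design:

# Hypothetical sketch: a softmax gate weights the outputs of several small
# expert networks for each input.
import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, in_features: int, out_features: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_features, 64), nn.ReLU(),
                          nn.Linear(64, out_features))
            for _ in range(n_experts)
        ])
        self.gate = nn.Linear(in_features, n_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)              # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=-1)  # (batch, out, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)
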
Simulating diffusion microstructural models
Project leader: Leevi Kerkela. Research group: Developmental Imaging and Biophysics Section, UCL GOSH ICH.
Numerical simulations play an essential part in developing and validating diffusion-weighted MRI data acquisition and analysis methods. Furthermore, microstructural parameter estimation using supervised machine learning requires large amounts of training data. This project aims to develop an efficient simulator for generating signals from a given acquisition protocol and microstructural model. We will write the simulator in Python using tools like NumPy, JAX, PyTorch, pytest, and Sphinx. If the resulting software package is of sufficient quality, the hackathon could be followed by a paper submission to, for example, the Journal of Open Source Software.

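To make the shape of such a simulator concrete, here is a minimal NumPy sketch that generates signals for a simple ball-and-stick model from a protocol of b-values and gradient directions, with optional Rician noise; the model choice and parameter values are illustrative only:

# Hypothetical sketch: simulate DWI signals for a ball-and-stick model.
import numpy as np

def simulate_ball_and_stick(bvals, bvecs, f=0.3, d=2.0e-3,
                            fibre=(0.0, 0.0, 1.0), snr=None, rng=None):
    # bvals: (N,) in s/mm^2; bvecs: (N, 3) unit gradient directions.
    # f: stick volume fraction; d: diffusivity in mm^2/s.
    fibre = np.asarray(fibre, dtype=float)
    fibre /= np.linalg.norm(fibre)
    cos2 = (np.asarray(bvecs) @ fibre) ** 2
    signal = f * np.exp(-bvals * d * cos2) + (1 - f) * np.exp(-bvals * d)
    if snr is not None:  # optionally add Rician noise at the given SNR
        if rng is None:
            rng = np.random.default_rng()
        sigma = 1.0 / snr
        signal = np.sqrt((signal + rng.normal(0, sigma, signal.shape)) ** 2
                         + rng.normal(0, sigma, signal.shape) ** 2)
    return signal

# Example protocol: 30 directions at b = 1000 s/mm^2.
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(30, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(30, 1000.0)
signals = simulate_ball_and_stick(bvals, bvecs, snr=20, rng=rng)
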
Pond/Off: Comparing Alzheimer's progression models
Project leader: Cameron Shand. Research group: POND.
Are Alzheimer's disease progression models comparable? This hackathon project is part hacking, part research. Disease progression modelling has become increasingly popular, with numerous groups developing computational models and releasing the source code. The code can be challenging to run, which may explain why a systematic comparison of selected models is yet to be conducted. This project will provide the code for multiple disease progression models, along with pre-prepared data from the Alzheimer's Disease Neuroimaging Initiative, among others. The challenge will be to get the code running, document the steps necessary for each, and perform a direct comparison of results from competing disease progression models.

ADNI@CMIC
Project leader: Neil Oxtoby. Research group: POND.
The aim is to create a centralised resource for ADNI (Alzheimer's Disease Neuroimaging Initiative) data on the CS cluster. We already have some scripts/pipelines for converting data into BIDS format, but so far we have collated MRI data only. This project will scale up this effort: creating scripts for updating the data upon new releases, organising/selecting different data modalities, and creating more containers for running BIDS apps (we currently have a Singularity image for FreeSurfer). Requirements are authorised data access to ADNI and familiarity with Linux and the HPC environment. You do not need to have previously created containers, though some familiarity would help.

Deep learning of quantitative MRI parameters
Project leader: Chris Parker. Research group: CIG.
Unsupervised deep learning promises fast and efficient estimation of quantitative MRI parameters. Yet current approaches are sub-optimal for data with low SNR, as they incorrectly assume that image noise is Gaussian.
This project aims to incorporate a Rician-distribution-based likelihood into the cost function of deep learning algorithms. We will demonstrate the approach using the IVIM diffusion model, which relies on low-SNR DWI data, as an example. If there is time, we can also extend the approach to a Bayesian framework.

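As a concrete illustration of the loss involved, here is a minimal PyTorch sketch of the IVIM signal equation and a Rician negative log-likelihood that could replace a Gaussian (mean-squared-error) term; the parameter names and noise handling are illustrative rather than the project's final design:

# Hypothetical sketch: IVIM forward model and Rician negative log-likelihood.
import torch

def ivim_signal(b, s0, f, d_star, d):
    # Bi-exponential IVIM model: perfusion fraction f, pseudo-diffusion
    # coefficient d_star, tissue diffusivity d (b in s/mm^2, d in mm^2/s).
    return s0 * (f * torch.exp(-b * d_star) + (1 - f) * torch.exp(-b * d))

def rician_nll(measured, predicted, sigma):
    # Negative log-likelihood of measurements under Rician noise of scale sigma.
    # log I0(x) is computed as log(i0e(x)) + x for numerical stability.
    x = measured * predicted / sigma**2
    log_i0 = torch.log(torch.special.i0e(x)) + x
    ll = (torch.log(measured / sigma**2)
          - (measured**2 + predicted**2) / (2 * sigma**2)
          + log_i0)
    return -ll.mean()

Using torch.special.i0e (the exponentially scaled Bessel function) keeps the log I0 term stable for large arguments, which matters at higher signal levels.
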
Simulating susceptibility-induced distortion fields
Project leader: Antoine Legouhy. Research group: CIG.
Echo planar imaging (EPI) is the most common approach for diffusion and functional MRI, but it produces images with severe geometric distortions due to susceptibility-induced B0 field inhomogeneities. The state-of-the-art tool to correct these distortions is FSL TOPUP, which is slow. Recently, deep-learning techniques have been developed that reduce processing times tremendously. However, the multifarious acquisition settings of EPI lead to a large variety of possible contrasts, and those methods generalise very poorly. The use of generative models to produce synthetic MR images from segmentations has been proposed to produce training sets with a large variety of contrasts (even beyond the scope of realistic ones!), thus allowing contrast-agnostic models. But for distortion correction, we also need to produce synthetic distortion fields.
Some algorithms are capable of producing whole-head segmentations, notably differentiating soft tissue from hard tissue and air from tissue. The idea of this project is to see how we could train a model to learn the relationship between those segmentations and the associated distortion fields, in order to produce synthetic distortion fields.

Optimising a normative modelling pipeline to map neuroanatomical heterogeneity
Project leaders: Serena Verdi, Sophie Martin. Research group: MANIFOLD.
We will optimise current data processing pipelines for a novel neuroanatomical normative modelling technique. Data from the National Alzheimer's Coordinating Centre will be processed to compare against a large brain MRI dataset of healthy controls (n=58k). This project aims to reveal neuroanatomical heterogeneity in Alzheimer's Disease at the individual level.

Advanced Visualisation for Augmented Reality Surgery
Project leaders: Stephen Thompson, Tom Dowrick, Miguel Xochicale. Research group: SciKit-Surgery.
Successful deployment of augmented reality into surgery depends on visualisation methods that can accurately convey essential information without distracting the surgeon. We have developed SciKit-SurgeryVTK to provide visualisation for our research and teaching in augmented reality surgery. SciKit-SurgeryVTK is open-source software and is being adopted by other research groups.
In this project we will implement advanced rendering methods within SciKit-SurgeryVTK to support our ongoing research. Specifically, we want to implement an outline renderer similar to this example, as we have previously shown this to be effective during augmented keyhole surgery. We would also welcome your suggestions for visualisation methods.
In this project you will have the opportunity to gain familiarity with the widely used VTK library and contribute to research efforts in augmented reality for surgery.

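For a flavour of what an outline/silhouette overlay looks like in plain VTK (not the SciKit-SurgeryVTK API itself), here is a minimal sketch using vtkPolyDataSilhouette; the sphere geometry and styling are placeholders for a surgical model surface:

# Hypothetical sketch: render only the view-dependent silhouette of a surface.
import vtk

sphere = vtk.vtkSphereSource()           # placeholder geometry
sphere.SetThetaResolution(48)
sphere.SetPhiResolution(48)

renderer = vtk.vtkRenderer()
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

silhouette = vtk.vtkPolyDataSilhouette()
silhouette.SetInputConnection(sphere.GetOutputPort())
silhouette.SetCamera(renderer.GetActiveCamera())

silhouette_mapper = vtk.vtkPolyDataMapper()
silhouette_mapper.SetInputConnection(silhouette.GetOutputPort())
silhouette_actor = vtk.vtkActor()
silhouette_actor.SetMapper(silhouette_mapper)
silhouette_actor.GetProperty().SetColor(1.0, 0.0, 0.0)
silhouette_actor.GetProperty().SetLineWidth(3)

renderer.AddActor(silhouette_actor)
window.Render()
interactor.Start()
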
Metrics Reloaded stress testing and GUI development
Project leader: Carole Sudre.
The Metrics Reloaded consortium has recently provided a list of reference metrics to use in the evaluation and validation of classification, object detection, instance segmentation and semantic segmentation. Adequate implementation of these evaluation processes and metrics is essential to ensure appropriate use and promote best practice. Most of the code for such an evaluation suite has already been developed but requires some stress-testing of edge cases. Another aspect of this project will be the creation of a GUI to help in the final choice of metrics to be employed in an evaluation setting.
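
To illustrate the kind of edge cases worth pinning down, here is a minimal pytest sketch; the evaluation suite's own API is not shown here, so a stand-alone Dice implementation is used as a stand-in:

# Hypothetical sketch: stress-testing a Dice metric on degenerate masks.
import numpy as np
import pytest

def dice_score(pred, ref, empty_value=1.0):
    # Dice similarity coefficient for binary masks; the both-empty case is a
    # convention (here 1.0) that the tests below pin down explicitly.
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return empty_value
    return 2.0 * np.logical_and(pred, ref).sum() / denom

@pytest.mark.parametrize("pred, ref, expected", [
    (np.zeros((4, 4)), np.zeros((4, 4)), 1.0),   # both empty
    (np.ones((4, 4)), np.zeros((4, 4)), 0.0),    # empty reference
    (np.zeros((4, 4)), np.ones((4, 4)), 0.0),    # empty prediction
    (np.ones((4, 4)), np.ones((4, 4)), 1.0),     # perfect overlap
])
def test_dice_edge_cases(pred, ref, expected):
    assert dice_score(pred, ref) == pytest.approx(expected)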

Location

The hackathon is a purely in-person event, which will take place at:
Jeremy Bentham Room (JBR), Wilkins Building (main building),
UCL main campus, Gower St, London WC1E 6AE.

At the end of the day, participants are invited to socialize with food and drinks at:
Function room and reception area, 1st floor,
90 High Holborn, London WC1V 6LJ.

Register!

Registration is now open!