A-AFMA-Detection: Automatic amniotic fluid detection from ultrasound video¶
The A-AFMA challenge was part of ISBI 2021, with the workshop taking place on April 13, 2021. We are no longer accepting submissions but are busy working on the next stage of the A-AFMA Challenge, which will be running in 2022.
Thank you to all the participants and congratulations to the winning teams:
Task 1 Detection:
1st Sen
2nd Golden Retriever
3rd MUSIC_SZU
***
The A-AFMA Challenge Workshop is on 13th April, 1.30pm - 3.00pm (CET)¶
Workshop timetable¶
- Welcome
- Clinical context of amniotic fluid measurement
- The A-AFMA Challenge tasks
- Algorithm presentation: Task 1 winning team
- Algorithm presentation: Task 2 winning team
- Algorithm presentation: Chair’s choice winning team
- Analysis of results from different metrics
- Discussion
Introduction to the Challenge video¶
The aim of this task is to automatically detect two anatomies within each frame of an ultrasound video: amniotic fluid and the maternal bladder.
The ultrasound video in this task is taken during a prenatal scan and is made up of a single sweep of the ultrasound probe over the maternal abdomen (see Figure 1).
Figure 1: Linear sweep protocol
Within ultrasound images, fluid appears dark (black), while solid areas such as the uterus, placenta, fetus or umbilical cord appear in shades of grey and white. Amniotic fluid (AF), the maternal bladder and acoustic shadows all appear black (see Figure 2), so distinguishing between them is not a trivial task. Being able to accurately differentiate between them in an ultrasound frame is crucial, as it is the first step in measuring the amount of amniotic fluid. The clinician annotating each video has drawn two rectangular ‘bounding boxes’ around areas in the frame containing AF and the maternal bladder (see Figure 2).
Figure 2: Bounding boxes in an example frame from an ultrasound video clip. The orange box indicates amniotic fluid; the green box indicates the maternal bladder.
Task¶
The bounding boxes are the ground truth for this task, and the challenge is to predict the coordinates of a bounding box for each anatomy (one for AF, one for the bladder). A bounding box should appear in a frame of the US video only if the corresponding anatomy is visible, so each frame may contain zero, one, two or more bounding boxes.
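The exact submission format is not specified in this section; purely as a hypothetical illustration (the `Detection` record, field names and `min_score` threshold below are assumptions, not the official format), per-frame predictions for the two classes might be represented as a list of labelled boxes, which can legitimately be empty when neither anatomy is visible:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical record type for illustration only; the official
# submission format is defined by the challenge organisers.
@dataclass
class Detection:
    label: str                        # "amniotic_fluid" or "bladder"
    score: float                      # detector confidence in [0, 1]
    box: Tuple[int, int, int, int]    # (x1, y1, x2, y2) in pixel coordinates

def frame_predictions(detections: List[Detection],
                      min_score: float = 0.5) -> List[Detection]:
    """Keep only confident detections for one frame; the result may
    contain zero, one, two or more boxes, matching the task spec."""
    return [d for d in detections if d.score >= min_score]
```

For example, a frame with one confident amniotic-fluid box and one low-confidence bladder box would yield a single prediction after thresholding.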
Before accessing the data, each team must send a registration email to the organiser at noble_pm@eng.ox.ac.uk, including the team name, team members' names, affiliations and grand-challenge usernames.
Evaluation¶
The evaluation measure is the per-frame mean average precision (mAP@0.5) for the two-class detection task, computed from the predicted bounding boxes and their anatomical class labels. The AP for each anatomical category carries equal weight in the overall score.
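To make the metric concrete, here is a minimal sketch of mAP@0.5 in the standard detection-metric style: predictions are matched to ground-truth boxes by intersection-over-union (IoU) at a 0.5 threshold, an all-point-interpolated AP is computed per class, and the two class APs are averaged with equal weight. This is an illustration of the general mAP@0.5 definition, not the organisers' exact evaluation code, and the function names and input layout are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """AP for one class.

    preds: list of (frame_id, score, box), pooled over all frames.
    gts:   dict mapping frame_id -> list of ground-truth boxes.
    """
    n_gt = sum(len(boxes) for boxes in gts.values())
    matched = {f: [False] * len(b) for f, b in gts.items()}
    recalls, precisions = [], []
    tp = fp = 0
    for frame, _score, box in sorted(preds, key=lambda p: -p[1]):
        # match against the best-overlapping ground-truth box in this frame
        best, best_i = 0.0, -1
        for i, g in enumerate(gts.get(frame, [])):
            overlap = iou(box, g)
            if overlap > best:
                best, best_i = overlap, i
        if best >= iou_thr and not matched[frame][best_i]:
            matched[frame][best_i] = True   # true positive
            tp += 1
        else:
            fp += 1                         # miss or duplicate detection
        recalls.append(tp / n_gt)
        precisions.append(tp / (tp + fp))
    # precision envelope: interpolated precision at each recall level
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

def map_at_05(preds_by_class, gts_by_class):
    """Unweighted mean of per-class AP (here: amniotic fluid and bladder)."""
    aps = [average_precision(preds_by_class[c], gts_by_class[c])
           for c in gts_by_class]
    return sum(aps) / len(aps)
```

For instance, a detector that localises every amniotic-fluid box perfectly but misses every bladder would score 1.0 on one class and 0.0 on the other, giving mAP@0.5 = 0.5, since the two classes are weighted equally.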
Update¶
The training dataset has now been released.
The test dataset has now been released.