Microsoft AI for Accessibility is funding the Object Recognition for Blind Image Training (ORBIT) project, led by City, University of London's Dr Simone Stumpf.
Currently, the project is recruiting blind and low vision users in the UK to record videos of things that are important to them. The collected video data will enable the team to construct a large data set from users who are blind or have low vision, which can be used for training and testing AI models that personalise object recognition - and ultimately help build better AI for everyone.
For this purpose, the project team, comprising researchers from City, Microsoft Research and the University of Oxford, has built the ORBIT iPhone app for collecting the videos, including guidance for users on how to film the things they want to have recognised.
Collecting this video data from blind and low vision users is, as Dr Stumpf notes, a "tricky process", because it "must be simultaneously easy for blind users to record the videos and the data must be useful for machine learning."
Experience from a pilot study showed that users were able to take videos in different settings in their homes using different filming techniques. Common items videoed included their own white canes, keys, glasses, remote controls, bags and headphones.
Commenting on the research process, Dr Stumpf, a Senior Lecturer in the Centre for Human-Computer Interaction Design in the School of Mathematics, Computer Science & Engineering (SMCSE), says:
"Our research is focused on providing training and testing data so that new algorithms can be developed quickly and rigorously evaluated. We anticipate that our dataset might be useful for any implementations in existing apps as well as novel wearable systems."
The data set will be made publicly available for download in two phases: Phase 1 will include about 100 users in the UK and thousands of videos, while Phase 2 will gather data on a global scale from about 1,000 users and contain more than 10,000 videos.