In a study published in the journal Nature Machine Intelligence, two scientists conclude that artificial intelligence engineers should borrow ideas and know-how from a wide range of social science domains, including those using qualitative approaches, in order to minimize the potential harm of their designs and to better serve humanity as a whole.
“There is mounting evidence that AI can exacerbate inequality, perpetuate discrimination, and inflict harm,” state Mona Sloane, a research fellow at New York University’s Institute for Public Knowledge, and Emanuel Moss, a doctoral candidate at the City University of New York.
They add, “To achieve socially just technology, we need to include the broadest possible notion of social science, one that includes disciplines that have developed methods for grappling with the vastness of the social world and that help us understand how and why AI harms emerge as part of a large, complex, and emergent techno-social system.”
The authors outline ways in which social science methods, with their many qualitative techniques, can broadly improve the value of AI while avoiding documented pitfalls. Studies have demonstrated that search engines may discriminate against women of color, while numerous analysts have raised questions about how self-driving cars will make socially suitable choices in crash situations (for example, avoiding humans rather than fire hydrants).
Sloane, also an adjunct faculty member at NYU’s Tandon School of Engineering, and Moss acknowledge that AI engineers are currently seeking to instill “value-alignment”—the idea that machines should act in keeping with human values—in their creations, but note that “it is exceptionally difficult to define and encode something as fluid and contextual as ‘human values’ into a machine.”
To address this shortcoming, the authors offer a blueprint for incorporating the social sciences into AI through a series of recommendations:
- Qualitative social research can help us comprehend the categories through which we make sense of social life and which are being applied in AI. “For example, technologists are not trained to understand how racial categories in machine learning are reproduced as a social construct that has real-life effects on the organization and stratification of society,” Sloane and Moss observe. “But these questions are discussed in depth in the social sciences, which can help create the socio-historical backdrop against which the…history of ascribing categories like ‘race’ can be made explicit.”
- Qualitative data-collection methods can establish protocols that help reduce bias. “Data always reflects the biases and interests of those doing the collecting,” the researchers note. “Qualitative research is explicit about the data collection, whereas quantitative research practices in AI are not.”
- Qualitative research usually requires researchers to reflect on how their interventions affect the world in which they make their observations. “A quantitative approach does not require the researcher or AI designer to locate themselves in the social world,” they explain, “and therefore does not require an assessment of who is included in vital AI design decisions, and who is not.”
“As we move forward with weaving together the social, cultural, and technological elements of our lives, we must integrate different types of knowledge into technology development,” Sloane and Moss conclude. “A more socially just and democratic future for AI in society cannot merely be calculated or designed; it must be lived in, narrated, and drawn from deep understandings about society.”