Although my background is in computer science and artificial intelligence, I am conducting my doctoral studies in a (soon to become) Faculty of Education, and my field of expertise is Multimodal Learning Analytics. In this field, the human component has always been of the highest relevance. In the prototypes we develop, it is typically a human who uses the system, and a human (e.g. a teacher or learner) who is the subject of investigation.
This human centrality, however, is not the norm in AI research. Last week, while visiting IJCAI2019, the AI conference in Macao, I got to see this in more detail. In most AI research, the algorithms (their effectiveness, precision, etc.) are more central than their effects on humans. Not surprising, I have to admit, since we are talking about computer science, not human intelligence.
However, I believe there is something alarming in this. AI is the fastest-expanding research field, attracting more and more researchers from different disciplines. Yet the yardstick for good AI research is not the impact that AI has on humans, but rather outperforming state-of-the-art results in classification accuracy, precision, recall, etc. The development of recurrent and deep neural networks has followed this same line. These algorithms perform surprisingly well at classifying or predicting targets, but they fall short on explainability and interpretability.
For this reason, I believe that to prevent the AI field from wandering further in its own direction, we need more applied AI and more multidisciplinary research. We need to place greater value on studies that scientifically measure the impact AI systems have on society and, ultimately, on humans. This is clearly a much bigger effort for the research community, because it pushes us to think about the bigger picture of why we use AI, rather than about the specific tasks AI can perform.