Categories: Research visits

Research visit at the University of Bari

In January and February 2024, I undertook a research visit at the University "Aldo Moro" of Bari, Italy, in the Department of Informatics (DIB). It was at this university that I completed my undergraduate studies in computer science, graduating in 2014 with a thesis on Learning Analytics for formative assessment. It was in that department that my interest in education technologies began.

Returning to that tall glass building after 10 years, this time as a senior researcher, was a strange feeling...

The visit involved the Department of Informatics (DIB) and the Department of Education, Psychology, and Communication Science (ForPsiCom). My primary contact during my stay was Dr Gabriella Casalino, an assistant professor at the DIB. For this reason, I was based in the CILAB group, led by Prof Dr Giovanna Castellano, to which Dr Casalino belongs.

The overall scope of the research visit in Italy was to establish multiple bridges between my research institute (the DIPF) and the University “Aldo Moro” of Bari.

The first bridge, established with CILAB (the Computational Intelligence Laboratory), was a presentation about my research. On Monday, 22nd January 2024, I delivered my talk as part of the seminar on “Information Technology Outlook,” a seminar series within the PhD programme in Computer Science and Mathematics. My talk was entitled “Intelligent Tutors, Learning Analytics and Multimodal Technologies for Feedback Augmentation.”

The talk emphasised the importance of Artificial Intelligence (AI) in providing personalised and immediate feedback to students in online learning environments, especially when no human experts are available. AI's constant feedback availability facilitates self-paced learning, including cognitive and physical tasks, through immersive technologies such as Augmented and Virtual Reality. The presentation gave a brief overview of my research on AI and Multimodal Learning Analytics (MMLA), focusing on "Multimodal Tutors". It showed how MMLA improves online teaching by providing personalised feedback, illustrated with relevant applications that integrate AI and immersive technologies.

The talk sparked interest in my research among the group members. Afterwards, I arranged several smaller, individual sessions with doctoral students to support them in improving their research designs. I also participated in two research events of the CILAB group. The first was the GNCS meeting, which gathered researchers from the CNR, the University of Bari, and the University of Padua to discuss the results of the joint project on “Computational methods, based on fuzzy logic, for eXplainable Artificial Intelligence (XAI)”. The second was a CILAB brainstorming meeting held to address the latest and most challenging research problems the group members face.

The collaboration with ForPsiCom focused on research transfer and exchange with the existing groups in Bari that deal with education technologies: the first led by Prof Beatrice Ligorio, the second by Prof Loredana Perla and Prof Michele Baldassarre.

On the 20th of February 2024, I presented to Prof Ligorio's group. The talk, again entitled “Intelligent Tutors, Learning Analytics and Multimodal Technologies for Feedback Augmentation”, this time had a much more humanistic focus than the one given at the DIB. The presentation lasted 90 minutes and was followed by discussion and questions. In the afternoon, I attended four PhD presentations by students from Ligorio’s group and gave the candidates feedback on their progress and on what they could improve in their next steps. Moreover, we discussed my possible involvement in a research project on emotions and learning.

On the 28th of February 2024, I presented to Prof Perla’s and Prof Baldassarre’s group. In preparation for the presentation, I asked the students to listen to a recent podcast episode I recorded, entitled “The Future is Multimodal”, part of the AI_ducation podcast series. The meeting with Perla’s group was particularly successful, as it allowed the participants (mostly PhD students) to share their views on AI in education, specifically the support that generative AI and large language models can offer in education.

Categories: Journal article

From the Automated Assessment of Student Essay Content to Highly Informative Feedback: A Case Study

How can we provide students with highly informative feedback on their essays using natural language processing?

Check out our new paper, led by Sebastian Gombert, where we present a case study on using GBERT and T5 models to generate feedback for educational psychology students.

In this paper:

➡ We implemented a two-step pipeline that segments the essays and predicts codes from the segments. The codes are used to generate feedback texts that inform the students about the correctness of their solutions and the content areas they need to improve.

➡ We used 689 manually labelled essays as training data for our models. We compared GBERT, T5, and bag-of-words baselines for scoring the segments and the codes. The results showed that the transformer-based models outperformed the baselines in both steps.

➡ We evaluated the feedback using a randomised controlled trial. The control group received essential feedback, while the treatment group received highly informative feedback based on our pipeline. We used a six-item survey to measure the perception of feedback.

➡ We found that highly informative feedback had positive effects on helpfulness and reflection. The students in the treatment group reported higher levels of satisfaction, usefulness, and learning than the students in the control group.

➡ Our paper demonstrates the potential of natural language processing for providing highly informative feedback on student essays. We hope that our work will inspire more research and practice in this area.
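To make the two-step idea above concrete, here is a minimal sketch of such a pipeline. The segmentation rule, code labels, feedback templates, and the toy keyword classifier are all hypothetical placeholders standing in for the paper's actual GBERT/T5 models, which require trained weights.

```python
import re
from typing import Callable

# Hypothetical code-to-feedback mapping (illustrative, not the paper's codebook).
FEEDBACK_TEMPLATES = {
    "correct_definition": "You defined the concept correctly.",
    "missing_example": "Consider adding a concrete example to support this point.",
}

def segment_essay(essay: str) -> list[str]:
    """Step 1: split the essay into sentence-level segments."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", essay) if s.strip()]

def generate_feedback(essay: str, predict_code: Callable[[str], str]) -> list[str]:
    """Step 2: predict a content code per segment, then map codes to feedback texts.

    In the paper's setting, predict_code would be a fine-tuned transformer
    classifier; here it is injected so any model can be plugged in.
    """
    feedback = []
    for segment in segment_essay(essay):
        code = predict_code(segment)
        feedback.append(FEEDBACK_TEMPLATES.get(code, "No feedback for this segment."))
    return feedback

# Toy stand-in for the trained classifier: a keyword heuristic.
def toy_classifier(segment: str) -> str:
    return "correct_definition" if "for example" in segment.lower() else "missing_example"

feedback = generate_feedback(
    "Reinforcement strengthens behaviour. For example, praise after a task.",
    toy_classifier,
)
```

The point of the sketch is the structure, not the classifier: because feedback texts are generated from predicted codes rather than free-form, the feedback stays controllable and tied to the content areas students need to improve.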

You can read the full paper here.

https://link.springer.com/article/10.1007/s40593-023-00387-6

Categories: Journal article

How to improve Knowledge Tracing with hybrid machine learning techniques


Knowledge Tracing is a well-known problem in AI for Education. It consists of monitoring how the student's knowledge changes during the learning process and accurately predicting their performance in future exercises. But how can we improve the current methods and overcome their limitations?
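As a point of reference, the classic formulation of this problem is Bayesian Knowledge Tracing (BKT), which maintains a probability that the student has mastered a skill and updates it after each observed answer. The sketch below implements the standard BKT update; the parameter values are purely illustrative, and this baseline is what the hybrid models surveyed in the review aim to improve upon.

```python
def bkt_update(p_known: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2, learn: float = 0.3) -> float:
    """One step of classic Bayesian Knowledge Tracing.

    First compute the posterior P(known) given the observed answer
    (accounting for slips and guesses), then apply the learning transition.
    Parameter values here are illustrative, not fitted.
    """
    if correct:
        num = p_known * (1 - slip)
        den = num + (1 - p_known) * guess
    else:
        num = p_known * slip
        den = num + (1 - p_known) * (1 - guess)
    posterior = num / den
    return posterior + (1 - posterior) * learn

def predict_correct(p_known: float, slip: float = 0.1, guess: float = 0.2) -> float:
    """Probability that the next answer is correct, given the mastery estimate."""
    return p_known * (1 - slip) + (1 - p_known) * guess

# Trace a student's mastery estimate across three observed answers.
p = 0.4
for observed_correct in [True, True, False]:
    p = bkt_update(p, observed_correct)
```

Note that vanilla BKT models each skill independently, one of the very limitations, along with unstable predictions across time steps, that motivates the hybrid approaches discussed below.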

In recent years, many advances have been made thanks to various machine learning and deep learning techniques. However, these approaches have pitfalls, such as modelling one skill at a time, ignoring the relationships between different skills, or producing inconsistent predictions, i.e. sudden spikes and drops across time steps.

In our recently published systematic literature review, we aim to illustrate the state of the art in this field. Specifically, we want to identify the potential and the frontiers in integrating prior knowledge sources in the traditional machine learning pipeline to supplement the normally considered data. We propose a taxonomy with three dimensions: knowledge source, knowledge representation, and knowledge integration. We also conduct a quantitative analysis to detect the most common approaches and their advantages and disadvantages.

Our work provides a comprehensive overview of the hybrid machine-learning techniques for Knowledge Tracing and highlights the benefits of incorporating prior knowledge sources in the learning process. We believe this can lead to more accurate and robust predictions of student performance and help design more effective and personalized learning interventions. However, we also acknowledge that many challenges and open questions still need to be addressed, such as how to select the most relevant and reliable knowledge sources, how to represent and integrate them in a meaningful way, and how to evaluate their impact on the learning outcomes.

We hope that our work can inspire more research and innovation in the field of Knowledge Tracing and AI for Education.

Zanellati, A., Di Mitri, D., Gabbrielli, M., & Levrini, O. (2023). Hybrid Models for Knowledge Tracing: A Systematic Literature Review. IEEE Transactions on Learning Technologies, 1–16. doi: 10.1109/TLT.2023.3348690

https://ieeexplore.ieee.org/document/10379123

Categories: Artificial Intelligence, Digital learning

Impact of AI Act on Affective Computing

As we get closer to enacting the #AIAct, I want to share a few thoughts on banning #emotionrecognition in education applications.
 
While certainly motivated by a good cause, this ban risks undoing much of the community's progress in affective computing in education.
As my colleague @deniziren puts it:
"Computational services lacking empathy or emotion-aware capabilities are merely blunt tools. How can we hope to address the human-AI alignment problem without enabling AI to understand human emotions?"
https://www.linkedin.com/pulse/impact-ai-act-affective-computing-deniz-iren-phd-pmp/ 

Technology-assisted emotion recognition is helpful in various contexts, such as supporting people with autism spectrum disorder (ASD) or Asperger syndrome.
Emotion recognition is not the only way to infer sensitive information about users; similar inferences can be made from speech, physiological data, etc. Do we need to ban them all? What will this mean for education research?

There are techniques, which we have been using, that enable emotion recognition while preserving user privacy (see here: https://link.springer.com/chapter/10.1007/978-3-031-16290-9_4 )

Ultimately, technology is never the problem per se; what is problematic is how it is used and with what intention. Banning a certain technology, such as emotion recognition, therefore also blocks well-intentioned initiatives.