Would you like to do your master's or bachelor's thesis, or a research internship, under my supervision? That is possible.
- The ideal duration of the internship is 3 to 6 months (5 to 15 ECTS).
- The internship takes place entirely in English.
- The target students are bachelor or master students in the fields of computer science, knowledge engineering, data science or information technology.
- Bachelor and master theses are connected to the Educational Technologies course at Goethe University. For more information, please visit the website.
- In some cases, research funding is possible through the German Hiwi scheme.
- The projects are divided between [RES] (research) and [DEV] (development).
- Please have a look at the list of open topics and reach out to me to discuss this further.
- Do you have a topic of your own that you wish to propose? That is also possible. You will have to fill in an Exposé of 2-4 pages using the template. In case you have any questions, please consult the FAQ or reach out to me.
List of Topics
********
1. [DEV] Smartwatch Experience Sampling 
Background. The permeation of connected wearable devices is increasing rapidly, and their number was forecasted to exceed one billion in 2022. Wearable devices are electronics that can be worn on the body, such as smartwatches, wristbands or earbuds. Typical commercial smartwatches embed powerful microchips and sensors and can connect via the cellular network, Wi-Fi, NFC or Bluetooth. From being primarily chosen by athletes and fitness enthusiasts, smartwatches are progressively being adopted by the general population. Smartwatches allow collecting physiological data, such as step counts, heart rate and sleep patterns. They also provide a hands-free interface, which enables the user to stream music, receive notifications and interact with conversational agents while keeping their hands free. Compared to smartphones, smartwatches have a smaller screen and a limited graphical user interface, which makes them less “task-dominant” in day-to-day tasks and thus less alienating than smartphones. This makes the smartwatch a better-suited device for supporting practical tasks such as fitness exercises or cooking.
Research questions. Building on the previous research conducted on the Learning Hub (Schneider, Di Mitri, Limbu, & Drachsler, 2018) for multimodal data collection and on the Visual Inspection Tool for data annotation (Di Mitri, Schneider, Klemke, Specht, & Drachsler, 2019) in this LAKathon challenge, we would like to explore how the new technological affordances introduced by smartwatches can be leveraged in education.
Expected outcomes. A conceptual design of a smartwatch application which can continuously (1) collect sensor data, (2) prompt the user for self-reports and (3) return valuable information to the user for optimising a particular task.
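To make the three-step loop concrete, here is a minimal, purely illustrative Python sketch. It is not a watch SDK; the sensor function, trigger rule and feedback message are all hypothetical placeholders.

import random
import time

def read_heart_rate():
    # Placeholder for a real watch sensor API (hypothetical)
    return random.randint(55, 130)

def sampling_loop(interval_s=60, hr_threshold=110):
    while True:
        hr = read_heart_rate()                                   # (1) collect sensor data
        record = {"timestamp": time.time(), "hr": hr}
        if hr > hr_threshold:                                    # hypothetical trigger rule
            record["rating"] = input("High effort detected; rate it 1-5: ")  # (2) ask for a user report
            print("Feedback: consider a short break.")           # (3) return information to the user
        # A real app would append the record to on-device storage here
        time.sleep(interval_s)

if __name__ == "__main__":
    sampling_loop()  # runs until interrupted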
2. [RES] Conversational Agents for Learning and Information Retrieval
Background. A chatbot is an artificial intelligence (AI) program that simulates interactive human conversation by using pre-calculated key user phrases and auditory or text-based signals. Chatbots are frequently used in basic customer service and marketing systems that frequent social networking hubs and instant messaging clients. They are also often included in operating systems as intelligent virtual assistants. A chatbot is also known as an artificial conversational entity (ACE), chat robot, talk bot, chatterbot or chatterbox.
Research task. In this internship you are asked to compile a literature review of conversational agents or chatbots used for information retrieval, e.g. for retrieving important documents, resources, names or events from a database.
Expected outcomes. A comprehensive literature review that summarises the relevant related research. You can find the template here.
3. [DEV] Using Microsoft Platform for Situated Intelligence for educational tasks
Background. Microsoft recently released the Platform for Situated Intelligence (PSI), an open, extensible framework for the development and research of multimodal, integrative-AI systems. Examples include multimodal interactive systems such as social robots and embodied conversational agents, systems for ambient intelligence and smart spaces, and applications based on small devices that work with streaming sensor data. In essence, any application that processes streaming sensor data (such as audio, video or depth), combines multiple AI technologies and operates under latency constraints can benefit from the affordances provided by the framework.
Research task. You will have to install the PSI platform and pre-select different sensors, such as a video camera, a depth camera (e.g. Kinect), a microphone, a smartwatch or an interactive pen. The application should gather data from one or more sensors.
Expected outcomes. A conceptual design and a proof of concept of the application using the PSI platform. You can ask me to borrow hardware such as the Azure Kinect.
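PSI itself is a C#/.NET framework, so the following is not its API; it is only a language-neutral Python sketch of the core idea the platform provides, namely fusing timestamped sensor streams by nearest timestamp. The stream names and rates are made up.

from bisect import bisect_left

def fuse_nearest(stream_a, stream_b):
    """Attach to each (timestamp, value) sample of stream_a the
    stream_b sample with the closest timestamp."""
    times_b = [t for t, _ in stream_b]
    fused = []
    for t, value_a in stream_a:
        i = bisect_left(times_b, t)
        neighbours = [j for j in (i - 1, i) if 0 <= j < len(stream_b)]
        j = min(neighbours, key=lambda k: abs(times_b[k] - t))
        fused.append((t, value_a, stream_b[j][1]))
    return fused

# Simulated streams: 10 Hz "camera" frames and 4 Hz "audio" levels
camera = [(i / 10, f"frame{i}") for i in range(20)]
audio = [(i / 4, 0.1 * i) for i in range(8)]
print(fuse_nearest(camera, audio)[:3])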
4. [DEV] Integrated Visual Inspection Tool
Background. The Visual Inspection Tool (VIT) is a web-based tool developed in JavaScript and HTML5 which allows the visual inspection and annotation of multimodal datasets encoded in the MLT-JSON data format. In the VIT, the expert can load the session files one by one to triangulate the video recording with the sensor data. The user can select and plot individual data attributes and inspect visually how they relate to a video recording. The VIT is also a tool for collecting expert annotations. In the case of the CPR Tutor, the annotations were given as properties of every single chest compression.
Research task. The VIT is meant to be used with the Multimodal Learning Hub and is able to load the MLT-JSON format. In its 2.0 version, the LearningHub is evolving its data format: instead of a batch-based approach, it will use a streaming approach based on Kafka and the Data Lake. This research task consists of re-engineering the VIT into a tool integrated with the back-end Kafka infrastructure.
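As a starting point, here is a minimal sketch of the consuming side using the kafka-python client; the topic name, broker address and the field names are hypothetical and would need to be replaced by the actual LearningHub 2.0 conventions.

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "mlt-sensor-frames",                      # hypothetical topic name
    bootstrap_servers="localhost:9092",       # hypothetical broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    frame = message.value
    # Buffer frames per recording session here before pushing them to the
    # VIT front-end, e.g. over a WebSocket; field names are illustrative.
    print(frame.get("applicationName"), frame.get("frameStamp"))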
Resources:
- Github Repository of the VIT https://github.com/dimstudio/visual-inspection-tool
- Research paper about the VIT: Di Mitri D., Schneider J., Specht M., Drachsler H. (2019) Read Between the Lines: An Annotation Tool for Multimodal Data for Learning. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge - LAK19 (pp. 51–60). New York, NY, USA: ACM. DOI: 10.1145/3303772.3303776
5. [DEV] Knowledge Tracing in Python
Background. Knowledge Tracing (KT), the act of tracing a student's knowledge from their learning history, is one of the most important problems in the field of Artificial Intelligence in Education (AIEd). Through KT, an Intelligent Tutoring System (ITS) can understand each student's learning behaviour and provide a learning experience adapted to each individual. Accordingly, a variety of methods, including Bayesian Knowledge Tracing (BKT), Deep Knowledge Tracing (DKT) and many more, have been developed. In this context, a debate over which methods are most effective for KT has emerged. KT is the task of modelling student knowledge over time so that we can accurately predict how students will perform on future interactions. Improvement on this task means that resources can be suggested to students based on their individual needs, and content which is predicted to be too easy or too hard can be skipped or delayed.
At the EduTec group, we have collected a number of different datasets from Moodle courses that we think can be analysed using a Knowledge Tracing approach. These datasets were collected during online courses and contain the activity data of the students, their scrolling behaviour and, in some cases, also sensor data collected from smartwatches and smartphones.
Research tasks. In this internship, you will be given access to various datasets and asked to experiment with which Knowledge Tracing algorithms can be applied to them. The expected outcome is a prototypical algorithm which uses KT to model students' knowledge, either predicting the answers to multiple-choice questions or some other relevant aspects of their learning progress. A minimal sketch of the BKT update equations follows the resource list below.
Resources. Some research papers about Knowledge Tracing:
- Deep Knowledge Tracing using Recurrent Neural Networks (link to PDF)
- Convolutional Knowledge Tracing (link to PDF)
- Baker et al. (2008), More Accurate Student Modeling Through Contextual Estimation of Slip and Guess Probabilities in Bayesian Knowledge Tracing (link to PDF)
- Python implementations of Bayesian Knowledge Tracing:
- pyBKT - Python implementation of the Bayesian Knowledge Tracing algorithm and variants, estimating student cognitive mastery from problem-solving sequences. (option 1)
- Bayesian Knowledge Tracing (option 2)
- Python implementations of Deep Knowledge Tracing:
- Deep Knowledge Tracing Implementation (option 1)
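As promised above, here is a minimal, self-contained sketch of the standard BKT update equations in plain Python. It is not tied to any of the implementations listed; the parameter values are hypothetical defaults that you would normally fit from data.

def bkt_trace(observations, p_init=0.3, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """Trace P(skill mastered) over a 0/1 answer sequence and return
    the predicted probability that the next answer is correct."""
    p_known = p_init
    for correct in observations:
        if correct:                       # Bayesian update after a correct answer
            num = p_known * (1 - p_slip)
            den = num + (1 - p_known) * p_guess
        else:                             # ... and after an incorrect one
            num = p_known * p_slip
            den = num + (1 - p_known) * (1 - p_guess)
        p_obs = num / den
        p_known = p_obs + (1 - p_obs) * p_learn   # learning transition
    return p_known * (1 - p_slip) + (1 - p_known) * p_guess

# Example: a student answers wrong, wrong, right, right on one skill
print(bkt_trace([0, 0, 1, 1]))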
6. [DEV] EDA with the Learning Pulse dataset
Background. In spring 2016, Learning Pulse took place at the Welten Institute, the research centre of the Open University. It was an exploratory study whose main aim was to discover whether physiological responses (heart rate) and physical activity (step count), when associated with data about the learning activity (use of software applications), are predictive of learning performance. Nine PhD students of the OU took part in the experiment, each wearing a Fitbit HR tracker and having their computer activity tracked. In addition, their geo-location and the outdoor meteorological conditions were tracked. To monitor the level of learning performance, the participants had to self-report, every hour during their working time, their Main Activity and their perceived levels of Productivity, Stress, Challenge and Abilities. The latter two indicators can be combined to calculate Flow, a famous construct in psychology which can be used as a learning performance indicator. The experiment lasted five weeks and produced a dataset of about 10,000 records. Each record represents a five-minute learning interval with ~430 attributes, the majority of which are sparse.
Research tasks. This master thesis topic consists of performing Exploratory Data Analysis (EDA) on the Learning Pulse dataset in order to:
- find meaningful patterns, interesting correlations and insights in the collected data
- find a structured approach to treating sparse data while preserving the time dependency
- learn statistical regression models which are able to predict at least one of the performance indicators
- find interesting visualisations of the data for self-awareness and reflection
Requirements. This topic requires familiarity with descriptive statistics and data analysis, and preferably with machine learning. To accomplish these tasks you are required to use a data analysis tool such as Python (preferably), R or Matlab. A possible starting point is sketched below.
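A minimal pandas sketch, assuming a hypothetical CSV export named learning_pulse.csv with one row per five-minute interval; all column names are illustrative placeholders, not the dataset's actual schema.

import pandas as pd

# Hypothetical CSV export: one row per five-minute learning interval
df = pd.read_csv("learning_pulse.csv", parse_dates=["interval_start"])

# How sparse is each attribute?
sparsity = df.isna().mean().sort_values(ascending=False)
print(sparsity.head(10))

# Correlations between a few indicators (column names are placeholders)
cols = ["heart_rate", "step_count", "productivity", "stress"]
print(df[cols].corr())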
7. [DEV] Affective computer interfaces with EEG
This topic concerns the creation of a library for the Multimodal LearningHub to work with the Emotiv Insight, a 5-channel EEG device which is able to track attention, focus, engagement, interest, excitement, affinity, relaxation and stress levels. The Emotiv Insight device can be borrowed from our lab equipment.
This information could be combined with self-reported measurements as well as with performance in quizzes or games; a minimal sketch of such a combination follows the expected results below.
Expected results
- A Windows server application in C# which is able to collect Emotiv sensor data through the LearningHub
- The design of a simple experimental application using Emotiv Insight
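The library itself is to be written in C#, but for the analysis side, here is a hedged Python sketch of aligning EEG-derived metrics with quiz events by nearest timestamp; the file names and columns (engagement, stress, correct) are hypothetical.

import pandas as pd

# Hypothetical exports: continuous EEG metrics and discrete quiz events
eeg = pd.read_csv("emotiv_metrics.csv", parse_dates=["timestamp"])
quiz = pd.read_csv("quiz_events.csv", parse_dates=["timestamp"])

# Attach to each answered question the nearest EEG reading within 5 seconds
merged = pd.merge_asof(
    quiz.sort_values("timestamp"),
    eeg.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("5s"),
)
print(merged.groupby("correct")[["engagement", "stress"]].mean())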
8. [DEV] Dynamic Time-series Visualisations
Sensor data are complex for humans to inspect. With the help of visualisation libraries such as D3.js or Plotly it is, however, possible to visualise time-based data; a minimal example follows the expected results below.
Expected results
- A dashboard with different types of visualisations for different time series, including data from Kinect, Empatica and Myo.
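As a starting point, a minimal sketch using Plotly's Python API (plotly.express); the CSV file and its columns are hypothetical stand-ins for an Empatica recording, and a real dashboard would combine several such figures.

import pandas as pd
import plotly.express as px

# Hypothetical export of one Empatica channel, one row per sample
df = pd.read_csv("empatica_eda.csv", parse_dates=["timestamp"])
fig = px.line(df, x="timestamp", y="eda_microsiemens",
              title="Empatica electrodermal activity over time")
fig.show()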
9. [DEV] Extracting and Re-enacting Kinematic Data
Using the video recordings of the LearningHub, we would like to extract body and facial features from the videos using deep learning libraries such as OpenPose (see the sketch after the expected results).
Expected results
- A tool which takes as input an MLT session from the LearningHub and translates the video into an OpenPose.json sensor file compliant with the MLT format.
- A tool which takes as input one OpenPose.json file and converts it into an animation in space.
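As a hedged sketch of the re-enactment step, the following Python snippet plots the skeleton of a single OpenPose output frame with matplotlib; the file name is hypothetical, while the flat [x, y, confidence] keypoint layout follows OpenPose's standard per-frame JSON output. Animating a whole session could build on matplotlib.animation.FuncAnimation by redrawing one frame file per step.

import json
import matplotlib.pyplot as plt

# Hypothetical file name for one OpenPose output frame
with open("frame_000000_keypoints.json") as f:
    data = json.load(f)

for person in data["people"]:
    kp = person["pose_keypoints_2d"]    # flat [x0, y0, c0, x1, y1, c1, ...]
    xs, ys = kp[0::3], kp[1::3]
    plt.scatter(xs, [-y for y in ys])   # flip y so the skeleton is upright
plt.title("Re-enacted OpenPose skeleton (single frame)")
plt.show()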
****
If you wish to propose a topic of your own that is not listed here but related to the topics mentioned above, feel free to reach out to me to discuss this further.