Would you like to do your master's or bachelor's thesis or a research internship under my supervision? That is possible.

The ideal duration of the internship is 3 to 6 months (7 to 15 ECTS).

The target students are bachelor or master students in the fields of computer science, knowledge engineering, data science, or information technology.

The internship can be carried out either at the Heerlen campus of the Open University or elsewhere (with remote supervision). If you want to work at the OU, you need to be available 32-40 hours a week; in that case, monthly scholarships are available.

The internship will be conducted entirely in English.

Please have a look at the list of open topics and reach out to me to discuss this further. 

All internship proposals should rely on the standardised approach we advocate for the Multimodal Pipeline. The approach uses its own data collection format, the Meaningful Learning Task (MLT) session.

 

The topics described here are:

  1. Using smartphones for Multimodal Sensor Recording
  2. Human activity recognition with sensor data
  3. Multimodal chess-playing
  4. Exploratory Data Analysis with the Learning Pulse dataset
  5. RaspberryPI + SenseHat + LearningHub 
  6. Affective computer interfaces with EEG 
  7. Cloud Learning Hub
  8. Dynamic Time-series Visualisations
  9. Extracting and Re-enacting Kinematic data
  10. User-classification from audio signals

 

1) Using smartphones for Multimodal Sensor Recording


Smartphones embed multiple sensors such as accelerometers, GPS, microphones, cameras, etc. For this project, the task is to develop an application that gathers smartphone data and stores it in the Multimodal Learning Hub using our in-house format, the Meaningful Learning Task (MLT). Examples of learning experiences are: dancing, gymnastics, martial arts, playing a musical instrument, public speaking, etc. The student can decide which type of learning task to record, and also needs to make some recordings in order to test the developed application. For research purposes, it would be preferable if the recordings capture both experts and novices performing the learning tasks.

Expected results:

  • Development of a program able to extract data from the sensors of a smartphone.
  • Development of a library class for smartphones that can connect to our multimodal recording tool via TCP and UDP socket connections (see the sketch after this list).
  • Design of the set-up for the recording of the learning task:
    • Which specific learning task will be recorded
    • Which characteristics of this learning task can and should be recorded
    • Which sensors of the smartphone are needed for the recording (we can provide additional sensors, such as a Kinect, a MYO armband or a Leap Motion, to improve the recording)
    • How the user should carry the smartphone while doing the recording
  • Creating a set of multimodal learning recordings.
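
A minimal sketch of the phone-to-hub streaming, in Python for illustration (a real smartphone app would more likely be written in Java/Kotlin or Swift); the host, port and JSON frame layout are assumptions, not the actual LearningHub protocol:

    import json
    import socket
    import time

    HUB_ADDRESS = ("192.168.1.10", 8080)  # hypothetical LearningHub endpoint

    def stream_accelerometer(read_sample, session_id="mlt-demo"):
        """Send timestamped accelerometer frames to the recording tool over TCP."""
        with socket.create_connection(HUB_ADDRESS) as sock:
            while True:
                x, y, z = read_sample()  # placeholder for the platform sensor API
                frame = {
                    "session": session_id,
                    "sensor": "accelerometer",
                    "timestamp": time.time(),
                    "values": [x, y, z],
                }
                sock.sendall((json.dumps(frame) + "\n").encode("utf-8"))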

 

2) Human activity recognition with sensor data


Connected to the previous topic on collecting sensor data from smartphones, this topic focuses on Human Activity Recognition (HAR) from sensor data: multivariate time-series classification using machine learning techniques, including deep neural networks.

In this internship, you would use the data collected with the Multimodal Learning Hub in our in-house format, the Meaningful Learning Task (MLT). You would also need to become acquainted with different machine learning and deep learning libraries such as scikit-learn, Keras and PyTorch; a minimal classification sketch follows the list below.

Expected results

  • A software package in Python or an equivalent language
  • Feature extraction from the raw sensor data
  • Implementations of different machine learning algorithms
  • Evaluation methods: AUC, accuracy, sensitivity and k-fold cross-validation
  • Optional: GUI of the software package
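
A minimal classification sketch, assuming the sensor streams have already been parsed out of the MLT session files into fixed-length windows; the window size, channel count and activity labels are illustrative stand-ins for real recordings:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def extract_features(windows):
        """Per-channel summary statistics as features for each time window."""
        return np.concatenate(
            [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)],
            axis=1,
        )

    # X_raw: (n_windows, n_samples, n_channels); y: one activity label per window
    X_raw = np.random.randn(200, 128, 3)  # stand-in for real MLT sensor data
    y = np.random.choice(["walking", "dancing"], size=200)

    X = extract_features(X_raw)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5, scoring="accuracy"))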

 

3) Multimodal chess-playing


The popular game of chess is an interesting learning scenario for investigating the true meaning of expertise in cognitively intense tasks. In artificial intelligence, the game of chess is usually treated as a search problem: find the optimal move, taking into account the opponent's reactions in all possible configurations.

Humans, however, are not able to keep track of all the combinatorial possibilities, and for this reason they adopt a search strategy that relies much more on heuristics and tactics. The scope of this multimodal application is to untangle the strategy of the players by means of sensor data and multimodal learning analytics. The nature of this task is highly explorative.

The multimodal application should be able to correlate the decisions taken by the players (i.e. a move on the chess board) in a particular state of the game with the observed sensor data. The analysis can also look at the different strategies adopted by different players and reason about the differences.

Expected results

  • A sensor application capturing play sessions with one sensor among the Emotiv Insight EEG headset, the Empatica E4 wristband or an eye-tracking device
  • Correlation of the sensor data with players' moves and board configurations (see the sketch after this list)
  • Analysis of recurrent patterns in the sensor data and of the peculiarities of each individual player
  • Possible extension:
    • Compare two or more players
    • Scale to multiple integrated sensors
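
A minimal sketch of correlating sensor readings with move events, assuming both arrive as timestamped tables; the column names and values are illustrative assumptions:

    import pandas as pd

    # Hypothetical inputs: one row per sensor sample and one row per chess move.
    sensors = pd.DataFrame({
        "timestamp": pd.to_datetime(["2024-01-01 10:00:01", "2024-01-01 10:00:07"]),
        "engagement": [0.42, 0.71],
    })
    moves = pd.DataFrame({
        "timestamp": pd.to_datetime(["2024-01-01 10:00:05", "2024-01-01 10:00:09"]),
        "move": ["e4", "Nf3"],
    })

    # Attach to each move the most recent sensor reading taken before it was played.
    aligned = pd.merge_asof(moves.sort_values("timestamp"),
                            sensors.sort_values("timestamp"),
                            on="timestamp", direction="backward")
    print(aligned)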

4) Exploratory Data Analysis with the Learning Pulse dataset


In spring 2016, the Welten Institute, the research centre of the Open University, ran Learning Pulse, an exploratory study whose main aim was to discover whether physiological responses (heart rate) and physical activity (step count), when associated with data about the learning activity (use of software applications), are predictive of learning performance. Nine PhD students of the OU took part in the experiment, each wearing a Fitbit HR tracker and having their computer activity tracked; in addition, their geolocation and the outdoor meteorological conditions were recorded. To monitor the level of learning performance, the participants had to self-report every hour during their working time their Main Activity and their perceived levels of Productivity, Stress, Challenge and Abilities. The latter two indicators can be combined to calculate Flow, a well-known construct in psychology which can be used as a learning performance indicator. The experiment lasted five weeks and produced a dataset of about 10,000 records. Each record represents a five-minute learning interval with ~430 attributes, the majority of which are sparse.

This master's thesis topic consists of performing Exploratory Data Analysis on the Learning Pulse dataset in order to:

  • find meaningful patterns, interesting correlations and insights in the collected data
  • find a structured approach to treating sparse data while preserving the time dependency
  • learn statistical regression models able to predict at least one of the performance indicators
  • find interesting visualisations of the data for self-awareness and reflection

This topic requires familiarity with descriptive statistics and data analysis, and preferably with machine learning. To accomplish these tasks you are required to use a data analysis tool such as R, SPSS or Python; a minimal regression sketch follows.
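
A minimal regression sketch in Python for one performance indicator; the file name, the target column and the purely numeric attributes are assumptions about how the dataset would be exported:

    import pandas as pd
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import TimeSeriesSplit, cross_val_score

    df = pd.read_csv("learning_pulse.csv")     # hypothetical export of the dataset
    y = df["flow"]                             # hypothetical target column
    X = df.drop(columns=["flow"]).fillna(0.0)  # naive handling of the sparse attributes

    # Time-aware cross-validation avoids leaking future intervals into training.
    model = Ridge(alpha=1.0)
    print(cross_val_score(model, X, y, cv=TimeSeriesSplit(n_splits=5), scoring="r2"))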

5) RaspberryPI + SenseHat + LearningHub 


This internship deals with the crafting and configuration of a Raspberry Pi device with a Sense HAT. The device is supposed to work as an actuator for feedback in educational settings. By configuring the LEDs of the HAT, it is possible to communicate visual messages and close the feedback loop with the student.

Expected results

  • A Windows server application in C# which is able to send feedback strings to the Raspberry Pi through the local network
  • A Raspbian client application that controls the Sense HAT (see the sketch after this list)
  • Integration with the Multimodal Learning Hub
  • Ideas on how the feedback could be used in learning scenarios
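
A minimal sketch of the Raspbian side: listen for feedback strings over TCP and scroll them on the Sense HAT LED matrix. The port and the newline-delimited message framing are assumptions, not the LearningHub protocol:

    import socket
    from sense_hat import SenseHat  # available on Raspbian with the Sense HAT installed

    sense = SenseHat()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind(("0.0.0.0", 9000))  # hypothetical feedback port
        server.listen(1)
        conn, _ = server.accept()
        with conn, conn.makefile("r", encoding="utf-8") as lines:
            for message in lines:
                # Scroll each received feedback string across the LED matrix.
                sense.show_message(message.strip(), scroll_speed=0.05)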

6) Affective computer interfaces with EEG 


This topic concerns the creation of a library for the Multimodal LearningHub to work with the Emotiv Insight, a 5-channel EEG device which is able to track levels of Attention, Focus, Engagement, Interest, Excitement, Affinity, Relaxation and Stress.

This information could be combined with self-reported measurements as well as with performance in quizzes or games.

Expected results

  • A Windows server application in C# which is able to collect Emotiv sensor data through the LearningHub (see the sketch after this list)
  • The design of a simple experimental application using Emotiv Insight
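
A hedged sketch of subscribing to the Emotiv performance-metrics stream via the Cortex API (JSON-RPC over a local WebSocket), shown in Python rather than C# for brevity. The method and stream names follow Emotiv's published Cortex API, but the token and session id are placeholders, and the local self-signed TLS certificate may require an extra SSL context; check the current Cortex documentation before relying on this:

    import asyncio
    import json
    import websockets  # third-party: pip install websockets

    CORTEX_URL = "wss://localhost:6868"  # default local Cortex endpoint

    async def subscribe_metrics(cortex_token, session_id):
        async with websockets.connect(CORTEX_URL) as ws:
            await ws.send(json.dumps({
                "jsonrpc": "2.0", "id": 1, "method": "subscribe",
                "params": {"cortexToken": cortex_token,
                           "session": session_id,
                           "streams": ["met"]},  # "met" = performance metrics
            }))
            async for raw in ws:
                print(json.loads(raw))  # forward to the LearningHub here

    # asyncio.run(subscribe_metrics("<token>", "<session-id>"))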

7) Cloud Learning Hub


The LearningHub is a system that collects sensor data from multiple sensor applications. At the current state of development, the LearningHub collects the data in batches on a local computer network, and the session file is stored in the Meaningful Learning Task sensor format. We would like to explore moving the LearningHub into the cloud so that we can shift from a batch to a streaming approach.

Expected results

  • A cloud application using Kafka and Spark to process sensor data streams (see the sketch below)
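
A minimal sketch of consuming sensor frames from Kafka with Spark Structured Streaming; the broker address, topic name and JSON schema are assumptions, and the job needs the spark-sql-kafka connector package on its classpath:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("CloudLearningHub").getOrCreate()

    frame_schema = StructType([
        StructField("sensor", StringType()),
        StructField("timestamp", DoubleType()),
        StructField("value", DoubleType()),
    ])

    stream = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "localhost:9092")
              .option("subscribe", "mlt-sensor-frames")  # hypothetical topic
              .load()
              .select(from_json(col("value").cast("string"), frame_schema).alias("f"))
              .select("f.*"))

    # Print the parsed stream to the console; a real deployment would persist it.
    stream.writeStream.format("console").start().awaitTermination()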

8) Dynamic Time-series Visualisations

Sensor data are hard for humans to inspect directly. With the help of visualisation libraries such as D3.js and Plot.ly it is, however, possible to visualise time-based data.

Expected results

  • A dashboard with different types of visualisations to work with different time series, including Kinect, Empatica and Myo data (see the sketch below)
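
A minimal time-series plot with Plotly's Python bindings as a starting point for the dashboard; the Empatica-style heart-rate column is an illustrative assumption:

    import pandas as pd
    import plotly.graph_objects as go

    df = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01 10:00", periods=60, freq="s"),
        "heart_rate": 70 + pd.Series(range(60)).mod(7),  # synthetic stand-in data
    })

    fig = go.Figure(go.Scatter(x=df["timestamp"], y=df["heart_rate"],
                               mode="lines", name="heart rate"))
    fig.update_layout(title="Empatica-style heart-rate stream",
                      xaxis_title="time", yaxis_title="bpm")
    fig.show()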

9) Extracting and Re-enacting Kinematic data


Using the video recordings of the LearningHub, we would like to run body and facial feature extraction on the videos using deep learning libraries such as OpenPose.

Expected results

  • A tool which takes as input an MLT session from the LearningHub and translates the video into an OpenPose.json sensor file compliant with the MLT format (see the sketch after this list)
  • A tool which takes one OpenPose.json file as input and converts it into an animation in space
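
A minimal sketch of reading one OpenPose keypoint file: OpenPose writes one JSON file per frame with a "people" list whose "pose_keypoints_2d" entry is a flat [x0, y0, c0, x1, y1, c1, ...] list; the file name below is an assumption:

    import json

    def load_keypoints(path):
        """Return a list of (x, y, confidence) triples per detected person."""
        with open(path, encoding="utf-8") as f:
            frame = json.load(f)
        people = []
        for person in frame.get("people", []):
            flat = person["pose_keypoints_2d"]
            people.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
        return people

    for person in load_keypoints("frame_000000_keypoints.json"):
        print(len(person), "keypoints; nose at", person[0][:2])  # BODY_25 index 0 = nose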

10) User-classification from audio signals


Imagine a working meeting in which you record the speech of a group. You can either have one microphone on the table for the whole group or, alternatively, several microphones, one for each member of the group. In the first case, n users talk into one single microphone, so the recorded audio signal mixes the speech of multiple users into one signal. In the second case, we expect n users with n microphones, so the users map one-to-one to the microphone signals. But what happens if someone doesn't turn on their microphone? For these scenarios, we need a user-classification layer. This layer assigns semantics to one or multiple audio signals to determine who talked in a particular interval of time.

Expected results

  • Collect a dataset with microphones in a meeting setting using the LearningHub (for example, meetings at the OU)
  • Design a classification algorithm to identify the correct speaker (see the sketch after this list)
  • Present the classification algorithms
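
A minimal speaker-classification sketch: MFCC features per short window of each speaker's audio, fed to a standard classifier. The file names and the two-speaker set-up are illustrative assumptions; real data would come from LearningHub sessions:

    import librosa
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    def mfcc_windows(path, label, win_s=1.0):
        """Cut one speaker's recording into windows and return mean-MFCC features."""
        y, sr = librosa.load(path, sr=16000)
        hop = int(win_s * sr)
        feats, labels = [], []
        for start in range(0, len(y) - hop, hop):
            mfcc = librosa.feature.mfcc(y=y[start:start + hop], sr=sr, n_mfcc=13)
            feats.append(mfcc.mean(axis=1))
            labels.append(label)
        return feats, labels

    X, y = [], []
    for path, who in [("alice.wav", "alice"), ("bob.wav", "bob")]:  # hypothetical files
        f, l = mfcc_windows(path, who)
        X.extend(f)
        y.extend(l)

    print(cross_val_score(SVC(), np.array(X), np.array(y), cv=5))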

****

If you wish to propose a topic of your own that is not listed here but related to the topics above, feel free to reach out to me to discuss it further.