In my responsible AI class, my students have to design an AI tool or functionality for education. This year, three of the five groups decided to look into LLMs such as ChatGPT.
So we discussed to what extent ChatGPT can be helpful in education. Three takeaways:
(1) is ChatGPT made for feedback?
With increasing multimodal capabilities, you can now chat in natural language with PDFs, images, and videos and extract salient properties or answers to questions.
If I upload a video of me running, will ChatGPT give me feedback about my running technique?
This seems improbable at the current stage: ChatGPT lacks a model of what constitutes correct running form, let alone the pedagogical knowledge needed to give proper feedback on it.
It is also far from supporting the exchange of perspectives, derivation of insights, and creation of knowledge that giving feedback requires.
(2) noise in training data remains a paramount issue.
Education generally relies on carefully curated content. ChatGPT is trained on data scraped from the web, which is anything but careful curation.
The obvious consequence is that it can steer users toward useless, misleading, or inappropriate content, or simply provide false information.
(3) the data science cycle (data collection, cleaning, annotation, model training, etc.) is no longer needed for AI application design.
Relying on LLMs and other foundation models means the AI generates content rather than classifying or predicting, so no task-specific model has to be built.
The focus has shifted to efficient prompt engineering and to priming the model to get the desired output.
The data science cycle risks becoming irrelevant, and thus pointless to teach.
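The shift from model training to prompt design can be illustrated with a small sketch. Assuming the widely used chat-message format (system/user/assistant roles), "priming" amounts to fixing the model's role and demonstrating the desired output before the real question is asked. The function and example texts below are hypothetical illustrations, not any vendor's API:

```python
# A minimal sketch of "priming" an LLM through the prompt rather than
# through the classic data science cycle. The message structure follows
# the common chat format (system / user / assistant roles); build_prompt
# and the example contents are hypothetical, for illustration only.

def build_prompt(question: str) -> list[dict]:
    """Assemble a primed chat prompt: a system instruction that fixes the
    model's role, plus one few-shot example showing the desired output."""
    return [
        # Priming: constrain the model's role and output style up front.
        {"role": "system",
         "content": "You are a tutor. Answer with a hint, not the solution."},
        # Few-shot example: demonstrate one desired input/output pair.
        {"role": "user", "content": "What is 12 * 13?"},
        {"role": "assistant",
         "content": "Hint: 12 * 13 = 12 * 10 + 12 * 3. Try adding those."},
        # The learner's actual question comes last.
        {"role": "user", "content": question},
    ]

messages = build_prompt("How do I factor x^2 - 9?")
```

All the "design" work here happens in the prompt text itself, which is precisely why the effort moves from data collection and training to prompt engineering.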