The use of artificial intelligence in schools and universities is a controversial topic, yet the many applications built on it are here to stay. But what about its use in children’s rooms? What can parents do when children start using ChatGPT for homework or as a chat partner in everyday life?
ChatGPT is a chatbot, a text-based dialogue system that relies on machine learning. It sounds fun at first, but ChatGPT also comes with some risks.
According to German professor Felicitas Macgilchrist, who specializes in media research, raising children’s awareness of the risks means first informing them about artificial intelligence and then engaging in dialogue with them: “Talk, talk, talk, and inform the child on the basis of the knowledge you have acquired,” says the expert.
Sandra Schulz, Professor of Computer Science Education at the University of Hamburg, also advises against approaching the subject with reluctance. It must be accepted that ChatGPT is part of the digital transformation, and it is therefore important to be informed about the subject to guide the child in using artificial intelligence.
Both Schulz and Macgilchrist advise against completely banning children from using ChatGPT and other programs. According to Macgilchrist, if a child uses ChatGPT despite the ban, it becomes even more difficult to talk to them about it.
According to Schulz, a ban can also be a disadvantage at school: “If it is allowed as an aid, children who are not permitted to use ChatGPT will be at a disadvantage,” she says.
According to Schulz, the first thing to do is figure out how ChatGPT actually works. The AI program writes texts based on probabilities: the system calculates the most likely next word or response based on the data it was trained on.
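The next-word idea Schulz describes can be made concrete with a toy sketch. This is not how ChatGPT is actually implemented (it uses a large neural network trained on vast amounts of text); the example below simply counts, in a tiny made-up corpus, which word most often follows each word, then “predicts” the likeliest continuation, illustrating the same underlying principle of choosing the most probable next word:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (invented for this sketch).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # prints "cat": it follows "the" twice, "mat"/"fish" once each
```

A real language model works with far longer contexts and learned statistical patterns rather than raw counts, but the output is still driven by probabilities, which is why, as Schulz notes below, the training data strongly shapes the answers.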
However, depending on the data that one provides to the system, the answers can be very different. “Of course, this can also have a significant influence on the formation of opinions,” says Schulz.
What may also hurt children’s opinion formation is that ChatGPT is trained on data that represents “mainstream” positions in society. “We know from internet research that it is predominantly young white men from the Global North or the West who write a lot – largely in English – on the internet,” says Macgilchrist.
Conversely, this means that the experiences, perspectives, and knowledge of other groups of people, for example, women, activists, or people from the Global South, are underrepresented or even missing.
In light of this, parents will need to actively work with their children and, together with them, critically analyze the perspectives in ChatGPT texts and consider which viewpoints are not mentioned. According to Macgilchrist, one could also talk to the child about what it means for people’s lives in the world if these positions are missing.
Another risk is that ChatGPT texts may contain fictitious information or even errors. “This can be dangerous if children and young people accept these texts as factual,” says Macgilchrist.
To make children aware of ChatGPT’s flaws, Macgilchrist recommends drawing on the child’s own knowledge. For example, the child can ask ChatGPT questions about his or her favorite series. This way, the child learns how to use the system and sees that it can produce fictitious content and errors.
You can never be sure whether a text comes from artificial intelligence or not. However, there are some clues. According to Macgilchrist, AI-generated texts often seem bland and formulaic.
It is also possible to check whether the vocabulary used in the text matches that of the child. Parents can ask themselves whether the words used are the same as those their child would use. “In the case of spelling and grammar, too, most parents probably have an idea whether these match the child’s level,” says Schulz.
When in doubt, it is recommended to ask the child why he or she thinks that way or where the information comes from. Schulz advises actively participating in the child’s learning process and encouraging them to explain their answers.
Even if parents notice that their child has completed a task exclusively with the help of ChatGPT, this does not have to be a negative thing. According to Macgilchrist, this, at the latest, is the moment to start a conversation with the child.
The expert also suggests a playful approach: let the child ask ChatGPT a question and then find out for themselves whether the answer is correct. In this way, the child not only learns how to use ChatGPT but also engages with the homework topic despite the help.
Parents can also ask their children why they are going to school and explain that the purpose of doing so is to learn. ChatGPT can work as a tool, but it should not and cannot do all the work, Schulz points out.