In preparation for the next academic year (which in Australia will begin in February) I am exploring new ways of assessing students in foreign language units, especially at first year level.
Last Friday I created a chatbot using Microsoft’s QnA Maker and the Azure cloud computing platform. You can have a play with it at the foot of this post.
It’s MichelSerresBot, a very simple Q&A chatbot that recognises four questions about the philosopher Michel Serres and is pre-loaded with appropriate answers:
- Qui est Michel Serres ? [Who is Michel Serres?]
- Un philosophe français. [A French philosopher.]
- Combien de livres Michel Serres a-t-il écrits ? [How many books has Michel Serres written?]
- Il a écrit plus de 50 livres. [He has written more than 50 books.]
- Pourquoi devrais-je lire les livres de Michel Serres ? [Why should I read Michel Serres’s books?]
- Vous devez lire ses livres si vous vous intéressez à l’écologie, à la littérature, à la philosophie comme moteur d’invention, à l’interdisciplinarité ou aux sciences. [You should read his books if you are interested in ecology, literature, philosophy as an engine of invention, interdisciplinarity, or the sciences.]
- Pourquoi Michel Serres n’est-il pas mieux connu ? [Why is Michel Serres not better known?]
- Parce qu’il n’est pas une « marque » comme d’autres philosophes. [Because he is not a “brand” like some other philosophers.]
Interestingly, it also understands variations on those questions, as you can see in the screenshot below.
The bot tries to guess what question you are asking, and you can tinker with its tolerance levels. It can misinterpret questions that resemble those it recognises. For example, with the default tolerance setting “Pourquoi Michel Serres?” [Why Michel Serres?] gives the answer “Un philosophe français” [A French philosopher].
I dare say that bots of this sort could have many uses in undergraduate teaching, from transforming the FAQs on a unit’s LMS page, through teaching students how to formulate questions, to framing small research tasks, but in this post I want to focus on the use of chatbots in assessment. Not that the chatbot assesses the students (please, no!), but that the students use the capabilities of a bot to sift, structure, present and discover information. Here’s a sketch of how it might work:
- A first year cohort is divided into groups of five or six.
- Each group either chooses or is assigned a research topic relevant to the unit, perhaps from a list provided by the unit coordinator or perhaps by each group suggesting a topic for approval. In my unit the topics would be within the field of French culture.
- The group then has to research their topic and distinguish between important and incidental information, prioritising the most important things to know in the area.
- They condense those most important areas into a list of questions and answers (say a minimum of forty and a maximum of fifty), formulated in the foreign language.
- They submit the list of questions and answers in a text file to the unit coordinator, in a format readable by QnA Maker (question [tab] answer [new line]: no other markup needed).
- The coordinator then turns each of the lists into a chatbot (this can be done quite quickly, I found today), and hosts each bot either on the LMS or on a website like this one.
- Each group is then assigned the chatbot of another group and has to interact with it in order to find out as much information as they can about the area in question. Azure keeps a record of all the questions asked and answers given.
- The mark for the assignment is part peer assessment and part coordinator assessment. The group seeking to extract information from the chatbot marks it against a set of criteria including comprehensibility, ease of use and so forth. The unit coordinator marks the first group’s question and answer document against criteria covering both language and content, and marks the second group on how adeptly its questions elicit the information that document contains.
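Before importing each group’s file into QnA Maker, the coordinator could run a quick sanity check that it follows the format described above (one question [tab] answer pair per line, forty to fifty pairs). Here is a minimal sketch of such a check; the function name and its exact behaviour are my own illustration, not part of QnA Maker:

```python
def check_submission(text, min_pairs=40, max_pairs=50):
    """Check a tab-separated Q&A file: one 'question<TAB>answer' per line,
    no other markup, and between min_pairs and max_pairs pairs in total.
    Returns (ok, problems), where problems is a list of human-readable issues."""
    problems = []
    lines = [line for line in text.splitlines() if line.strip()]
    for number, line in enumerate(lines, start=1):
        parts = line.split("\t")
        # Each line must split into exactly a non-empty question and answer.
        if len(parts) != 2 or not parts[0].strip() or not parts[1].strip():
            problems.append(f"line {number}: expected 'question<TAB>answer'")
    if not min_pairs <= len(lines) <= max_pairs:
        problems.append(f"{len(lines)} pairs found (need {min_pairs}-{max_pairs})")
    return (not problems, problems)
```

A file that passes this check should drop straight into the QnA Maker import without further editing; a file that fails can be returned to the group with the list of problems.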
It appears that the language of the interface can be changed from English to French, but that’s for another day. What Friday’s experiment showed me was that there is enough potential in chatbots for me to give them more thought as I plan my teaching for next year.
So here is MichelSerresBot. Remember, it’s a prototype and only recognises four questions…