2026
- Ph.D. Thesis Multimodal Group Emotion Recognition In-the-wild: towards a privacy-safe non-individual approach
Anderson Augusma
Université Grenoble Alpes, France, January 2026
@phdthesis{Augusma2026, title = {{Multimodal Group Emotion Recognition In-the-wild: towards a privacy-safe non-individual approach}}, author = {Augusma, Anderson}, school = {{Universit{\'e} Grenoble Alpes}}, address = {Universit{\'e} Grenoble Alpes, France}, year = {2026}, month = {January}, type = {Ph.D. thesis in Computer Science}, }
2023
- Proceedings Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features
A. Augusma, D. Vaufreydaz and F. Letué
ICMI ’23: International Conference on Multimodal Interaction, pp. 750-754, Paris, France, October 2023
PDF
DOI
HAL
@inproceedings{augusma:hal-04325815, title = {{Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features}}, author = {Augusma, Anderson and Vaufreydaz, Dominique and Letu{\'e}, Fr{\'e}d{\'e}rique}, booktitle = {{ICMI '23: International Conference on Multimodal Interaction}}, hal_version = {v1}, hal_id = {hal-04325815}, pdf = {https://hal.science/hal-04325815v1/file/MgEmoR-pcf-Emotiw2023.pdf}, keywords = {Transformer networks ; Group emotion recognition in-the-wild ; Multimodal ; Privacy safe}, doi = {10.1145/3577190.3616546}, month = {October}, year = {2023}, pages = {750-754}, publisher = {{ACM}}, address = {Paris, France}, url = {https://hal.science/hal-04325815}, abstract = {This paper explores privacy-compliant group-level emotion recognition "in-the-wild" within the EmotiW Challenge 2023. Group-level emotion recognition can be useful in many fields, including social robotics, conversational agents, e-coaching and learning analytics. This research restricts itself to global features, avoiding individual ones, i.e. all features that can be used to identify or track people in videos (facial landmarks, body poses, audio diarization, etc.). The proposed multimodal model is composed of video and audio branches with cross-attention between modalities. The video branch is based on a fine-tuned ViT architecture. The audio branch extracts Mel-spectrograms and feeds them through CNN blocks into a transformer encoder. Our training paradigm includes a generated synthetic dataset to increase the sensitivity of our model to facial expressions within the image in a data-driven way. Extensive experiments show the significance of our methodology. Our privacy-compliant proposal performs well on the EmotiW challenge, with 79.24% and 75.13% accuracy respectively on the validation and test sets for the best models. Notably, our findings highlight that it is possible to reach this accuracy level with privacy-compliant features using only 5 frames uniformly distributed over the video.}, }
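The abstract above describes a two-branch model fused by cross-attention between video and audio, but gives no implementation details. As a minimal illustrative sketch only (not the paper's code: the backbones, dimensions, and weights below are invented), single-head cross-attention in which video tokens query audio tokens can be written in NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(video_tokens, audio_tokens, Wq, Wk, Wv):
    """Video tokens attend to audio tokens (single head, no masking).

    video_tokens: (Tv, d) queries; audio_tokens: (Ta, d) keys/values.
    Returns a (Tv, d) fused representation.
    """
    Q = video_tokens @ Wq                      # (Tv, d)
    K = audio_tokens @ Wk                      # (Ta, d)
    V = audio_tokens @ Wv                      # (Ta, d)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])    # (Tv, Ta) scaled dot products
    return softmax(scores, axis=-1) @ V        # weighted sum of audio values

# Toy shapes: 5 video frames (as in the abstract), 20 audio frames, d = 8.
rng = np.random.default_rng(0)
d = 8
video = rng.standard_normal((5, d))            # stand-ins for ViT features
audio = rng.standard_normal((20, d))           # stand-ins for CNN/encoder features
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
fused = cross_attention(video, audio, Wq, Wk, Wv)
print(fused.shape)  # (5, 8)
```

In a full model the fused tokens would feed a classification head; here only the attention arithmetic is shown.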
2022
- Journal Analyser automatiquement les signaux de l’enseignement : Une approche d’apprentissage social fondée sur les preuves
R. Laurent, P. Dessus and D. Vaufreydaz
A.N.A.E. Approche neuropsychologique des apprentissages chez l’enfant, pp. 29-36, 2022
PDF
HAL
@article{laurent:hal-03599280, title = {{Analyser automatiquement les signaux de l'enseignement : Une approche d'apprentissage social fond{\'e}e sur les preuves}}, author = {Laurent, Romain and Dessus, Philippe and Vaufreydaz, Dominique}, journal = {{A.N.A.E. Approche neuropsychologique des apprentissages chez l'enfant}}, hal_version = {v2}, hal_id = {hal-03599280}, pdf = {https://hal.univ-grenoble-alpes.fr/hal-03599280v2/file/ANAE-HAL.pdf}, keywords = {Social Learning ; Machine Learning ; Signal Processing and Analysis ; Pedagogy ; Evidence-based Education ; Apprentissage machine ; Traitement et analyse du signal ; P{\'e}dagogie ; {\'E}ducation fond{\'e}e sur les preuves ; Apprentissage social}, year = {2022}, pages = {29-36}, number = {176}, publisher = {{St{\'e} Artemis [1989] - J. Libbey Eurotext [1990-1993] - PDG Communication [1994-2002] - Pleiomedia [2003-....]}}, url = {https://hal.univ-grenoble-alpes.fr/hal-03599280}, abstract = {Recent advances in signal processing and analysis have made it possible to create new ways of instrumenting the observation and the analysis of educational events, and thus to gather new kinds of evidence on teaching and learning practice. This article identifies some of these, based on a “social learning” framework, which posits that pedagogy is a social activity embedded in everyday life, and relies on certain innate human capacities.}, }
- Journal L’instrumentation intelligente des salles de classe au service de l’observation des interactions enseignant-apprenants
R. Laurent, P. Dessus and D. Vaufreydaz
Revue internationale de communication et socialisation, vol. 9, no. 2, pp. 247-258, 2022
PDF
HAL
@article{laurent:hal-03985556, title = {{L'instrumentation intelligente des salles de classe au service de l'observation des interactions enseignant-apprenants}}, author = {Laurent, Romain and Dessus, Philippe and Vaufreydaz, Dominique}, journal = {{Revue internationale de communication et socialisation}}, hal_version = {v2}, hal_id = {hal-03985556}, pdf = {https://hal.univ-grenoble-alpes.fr/hal-03985556v2/file/Laurent%20et%20al-RICS_final-2.pdf}, keywords = {computational observation ; Teacher-Student Relationship (TSR) ; classroom ecology ; Relation enseignant-apprenants ; observation computationnelle ; {\'e}cologie de la salle de classe}, year = {2022}, pages = {247-258}, number = {2}, volume = {9}, publisher = {{J.C. Kalubi}}, url = {https://hal.univ-grenoble-alpes.fr/hal-03985556}, abstract = {The quality of the relationship between teacher and learners is a key factor in improving learning. While this relationship remains observable through several classical methods (self- and hetero-reported), the recent introduction of computer vision into classrooms is likely to considerably widen the investigation of its interactional component, the most obvious and immediate dimension of relationships between learners and teachers (Pianta, 1999). However, the deployment of cameras feeding artificial intelligence processes in classrooms divides the scientific community. Between detractors worried about data surveillance and enthusiasts praising the adaptive prospects of a teacher informed in real time of learners' states, even hidden ones, it seems to us possible to draw a line of demarcation along which the impact of such so-called computational instrumentations would be questioned and negotiated, with regard to preserving the classroom ecology.}, }
2021
- Proceedings Comment instrumenter l’observation et l’analyse de la REE ?
P. Dessus
Les systèmes éducatifs québécois et français sous l’angle de la relation enseignant-apprenants : enjeux et impacts, Montréal, Canada, September 2021
HAL
LINK
@inproceedings{dessus:hal-03359291, title = {{Comment instrumenter l'observation et l'analyse de la REE ?}}, author = {Dessus, Philippe}, booktitle = {{Les syst{\`e}mes {\'e}ducatifs qu{\'e}b{\'e}cois et fran{\c c}ais sous l'angle de la relation enseignant-apprenants : enjeux et impacts}}, hal_version = {v1}, hal_id = {hal-03359291}, keywords = {Salles ambiantes ; Syst{\`e}mes d'observation ; Relations enseignant-{\'e}l{\`e}ve}, month = {September}, year = {2021}, organization = {{S{\'e}verine Ha{\"i}at and Annie Charron}}, address = {Montr{\'e}al, Canada}, url = {https://hal.science/hal-03359291}, note = {https://hal.archives-ouvertes.fr/hal-03359291}, abstract = {Cette intervention passe en revue les outils qui peuvent aider à l'observation et l'analyse de la relation enseignant-apprenants (REE). Nous verrons comment informatiser la saisie d'observations d'événements scolaires, puis comment l'enregistrement vidéo peut apporter des éléments complémentaires, à la fois du point de vue de l'observation que du développement professionnel des enseignants. Des dispositifs plus élaborés, comme l'oculométrie et les salles sensibles au contexte seront ensuite détaillés. En termes d'analyse, nous définirons l'analyse sémantique des codages d'événements, ainsi que l'analyse en réseaux sociaux, pour terminer avec de plus récentes avancées en apprentissage machine. Une réflexion sur l'éthique et la vie privée, essentielle vu le contexte, sera également menée.}, }
- Journal Sciences sociales et apprentissage machine pour l’interaction
D. Vaufreydaz
Interstices, September 2021
HAL
@article{vaufreydaz:hal-03363875, title = {{Sciences sociales et apprentissage machine pour l'interaction}}, author = {Vaufreydaz, Dominique}, journal = {{Interstices}}, hal_version = {v1}, hal_id = {hal-03363875}, keywords = {Apprentissage machine deep learning ; Sciences humaines \& sociales ; Interactions Homme-Machine ; Robot}, month = {September}, year = {2021}, publisher = {{INRIA}}, url = {https://inria.hal.science/hal-03363875}, abstract = {Le machine learning a aujourd'hui fait preuve de son efficacité : on peut produire, à partir d'une grande masse d'informations, des Intelligences Artificielles capables de répondre à de nombreux besoins, comme le montrent les progrès en vision par ordinateur ou en traduction automatique ces dernières années. Pour autant, cette technique a des limites, vis-à-vis des secteurs ne disposant pas de suffisamment de données, vis-à-vis de certaines questions éthiques, et vis-à-vis de son explicabilité. Pour pallier ces problèmes dans les applications où le Machine Learning seul n'est pas efficient, les sciences humaines peuvent apporter des solutions et de la précision aux systèmes automatiques. À l'aide de deux exemples concrets, Dominique Vaufreydaz illustre comment les apports des sciences humaines peuvent nourrir et améliorer un programme informatique dédié aux interactions avec les humains.}, }
2020
- Proceedings Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach
A. Petrova, D. Vaufreydaz and P. Dessus
EmotiW2020 workshop of the 22nd ACM International Conference on Multimodal Interaction (ICMI2020), Utrecht, Netherlands, October 2020
PDF
DOI
HAL
LINK
@inproceedings{petrova:hal-02937871, title = {{Group-Level Emotion Recognition Using a Unimodal Privacy-Safe Non-Individual Approach}}, author = {Petrova, Anastasia and Vaufreydaz, Dominique and Dessus, Philippe}, booktitle = {{EmotiW2020 workshop of the 22nd ACM International Conference on Multimodal Interaction (ICMI2020)}}, hal_version = {v1}, hal_id = {hal-02937871}, pdf = {https://inria.hal.science/hal-02937871v1/file/main.pdf}, keywords = {EmotiW 2020 ; audio-video group emotion recognition ; Deep Learning ; affective computing ; privacy}, doi = {10.48550/arXiv.2009.07013}, month = {October}, year = {2020}, address = {Utrecht, Netherlands}, url = {https://inria.hal.science/hal-02937871}, note = {https://hal.inria.fr/hal-02937871}, abstract = {This article presents our unimodal, privacy-safe and non-individual proposal for the audio-video group emotion recognition subtask at the Emotion Recognition in the Wild (EmotiW) Challenge 2020. This sub-challenge aims to classify in-the-wild videos into three categories: Positive, Neutral and Negative. Recent deep learning models have shown tremendous advances in analyzing interactions between people, predicting human behavior and affective evaluation. Nonetheless, their performance comes from individual-based analysis, which means summing up and averaging scores from individual detections, which inevitably leads to some privacy issues. In this research, we investigated a frugal approach towards a model able to capture the global mood from the whole image without using face or pose detection, or any individual-based feature as input. The proposed methodology mixes state-of-the-art and dedicated synthetic corpora as training sources. With an in-depth exploration of neural network architectures for group-level emotion recognition, we built a VGG-based model achieving 59.13% accuracy on the VGAF test set (eleventh place in the challenge). Given that the analysis is unimodal, based only on global features, and that the performance is evaluated on a real-world dataset, these results are promising and let us envision extending this model to multimodality for classroom ambiance evaluation, our final target application.}, }
- Journal Design spatial sociotechnique : le rôle des classes sensibles au contexte
R. Laurent, P. Dessus and D. Vaufreydaz
Distances et Médiations des Savoirs, vol. 30, pp. 1-8, July 2020
PDF
DOI
HAL
LINK
@article{laurent:hal-02883770, title = {{Design spatial sociotechnique : le r{\^o}le des classes sensibles au contexte}}, author = {Laurent, Romain and Dessus, Philippe and Vaufreydaz, Dominique}, journal = {{Distances et M{\'e}diations des Savoirs}}, hal_version = {v1}, hal_id = {hal-02883770}, pdf = {https://hal.science/hal-02883770v1/file/DMS-v-5.7.pdf}, keywords = {{\'E}thique et vie priv{\'e}e ; Salles de classes sensibles au contexte ; Espace ; Enseignement sup{\'e}rieur ; Ing{\'e}nierie p{\'e}dagogique ; Design de l'enseignement}, doi = {10.4000/dms.5228}, month = {July}, year = {2020}, pages = {1-8}, volume = {30}, publisher = {{CNED-Centre national d'enseignement {\`a} distance}}, url = {https://hal.science/hal-02883770}, note = {https://hal.archives-ouvertes.fr/hal-02883770}, abstract = {La recherche en ingénierie éducative (instructional design) a jusqu’à présent été riche en théories et applications d’une grande puissance prescriptive et centrées principalement sur l’enseignant. En revanche elle paraît manquer encore de travaux rendant compte de l’activité de l’enseignant et de l’apprenant en contexte, donc avec une dimension descriptive plus importante. Ce que nous nommons « design spatial sociotechnique » peut devenir une activité de design plus globale que celles précédemment mises au jour. Nous montrons comment l’essor récent des salles de classe sensibles au contexte, ou « salles intelligentes » peut autoriser l’émergence de tels modèles, et à quelles conditions, en prenant des exemples dans l’enseignement universitaire.}, }
- Journal Ethical Teaching Analytics in a Context-Aware Classroom: A Manifesto
R. Laurent, D. Vaufreydaz and P. Dessus
ERCIM News, pp. 39–40, January 2020
PDF
HAL
LINK
@article{laurent:hal-02438020, title = {{Ethical Teaching Analytics in a Context-Aware Classroom: A Manifesto}}, author = {Laurent, Romain and Vaufreydaz, Dominique and Dessus, Philippe}, journal = {{ERCIM News}}, hal_version = {v1}, hal_id = {hal-02438020}, pdf = {https://hal.science/hal-02438020v1/file/ERCIM%20News%20No120_FC4-img.pdf}, keywords = {ethics and privacy ; learning analytics ; teaching analytics ; teacher cognition ; machine learning ; ubiquitous computing ; ambient classroom}, month = {January}, year = {2020}, pages = {39--40}, number = {120}, publisher = {{ERCIM}}, url = {https://hal.science/hal-02438020}, note = {https://hal.archives-ouvertes.fr/hal-02438020}, abstract = {Should Big Teacher be watching you? The Teaching Lab project at Grenoble Alpes University proposes recommendations for designing smart classrooms with ethical considerations taken into account.}, }
2018
- Proceedings A Framework for a Multimodal Analysis of Teaching Centered on Shared Attention and Knowledge Access
P. Dessus, L. Aubineau, D. Vaufreydaz and J. L. Crowley
Grenoble Workshop on Models and Analysis of Eye Movements, Grenoble, France, June 2018
PDF
HAL
LINK
@inproceedings{dessus:hal-01811092, title = {{A Framework for a Multimodal Analysis of Teaching Centered on Shared Attention and Knowledge Access}}, author = {Dessus, Philippe and Aubineau, Louise-H{\'e}l{\'e}na and Vaufreydaz, Dominique and Crowley, James L.}, booktitle = {{Grenoble Workshop on Models and Analysis of Eye Movements}}, hal_version = {v1}, hal_id = {hal-01811092}, pdf = {https://hal.science/hal-01811092v1/file/eye-mov-18.pdf}, keywords = {Teacher cognition ; Joint Attention ; Classroom Observation ; Eye tracking}, month = {June}, year = {2018}, address = {Grenoble, France}, url = {https://hal.science/hal-01811092}, note = {https://hal.archives-ouvertes.fr/hal-01811092}, abstract = {The effects of teaching on learning are mostly uncertain, hidden, and not immediate. Research investigating how teaching can have an impact on learning has recently been given a significant boost by signal processing devices and data mining analyses. We devised a framework for the study of teaching and learning processes which posits that lessons are composed of episodes of joint attention and access to the taught content, and that the interplay of behaviors like joint attention, actional contingency, and feedback loops composes different levels of teaching: teaching by social tolerance, which occurs when learners (Ls) have no attentional problems but their access to the taught knowledge depends on the teacher (T); teaching by opportunity provisioning, when Ls can be aware of the taught content but lack access to it (e.g., lack of understanding), and T builds ad hoc situations in which Ls are provided with easier content; and teaching by stimulus or local enhancement, when Ls have full access to the content but lack attention toward it, so T explicitly shows content to Ls, slows down her behaviors, and tells and acts in an adapted way (e.g., motherese). A variety of devices installed in a classroom will capture and automatically characterize these events. T's and Ls' utterances and gazes will be recorded through low-cost cameras installed on 3D-printed glasses, and T will wear a mobile eye tracker and a mobile microphone. Instructional material is equipped with QR codes so that Ls' and T's video streams can be processed to determine where people are looking, and to infer the corresponding teaching levels. This novel framework will be used to analyze instructional events in ecological situations, and will be a first step toward building a "pervasive classroom", where eye-tracking and sensor-based devices analyze a wide range of events in a multimodal and interdisciplinary way.}, }