PRIM Project: What Contributions in the Field of Disabilities?

Céline Jost*, Justin Debloos, Brigitte Le Pévédic, Agnès Piquard-Kipffer, Caroline Barbot-Bouzit, Gérard Uzan

CHArt laboratory, Paris 8 University, Saint-Denis 93200, France

Lab-STICC laboratory, South Brittany University, Vannes 56000, France

Grhapes laboratory, INSHEA, Suresnes 92150, France

Corresponding Author Email: celine.jost@univ-paris8.fr
Pages: 1-6
DOI: https://doi.org/10.18280/mmc_c.831-401
Received: 15 September 2022
Accepted: 3 December 2022
OPEN ACCESS

Abstract: 

Through the PRIM project, we aim to give people the power to create Scenagrams (interaction scenarios between a human and digital devices) without needing to be trained in programming or to ask computer scientists for help. In this project, software design follows an unconventional approach, far from classical conventions, to embody human thinking (based on interactions) instead of computer logic (based on algorithms). The main idea rests on a new time representation using a PRIM-specific timeline instead of a standardized timeline. We evaluated the acceptability and cognitive compatibility of this new timeline with 50 participants, and the results are very promising. In this paper, we present qualitative evaluation results about the interest of such software in the field of disability.

Keywords: 

scenagram, timeline metaphor, software, programming for all

1. Introduction

The PRIM project, which stands for Playing and Recording with Interactivity and Multisensoriality, aims to gather an interdisciplinary community to conceive an original software tool allowing users to quickly and simply create Scenagrams. This relatively new word conveys the idea that interaction is central and means "interaction scenario between a human and digital devices" [1, 2]. It meets the need, felt by a significant part of the population, to be autonomous in creating interactive activities. Indeed, it is currently necessary to use programming languages to access connected objects' functionalities and make them collaborate with the user. This means that these people have to choose between learning how to program and outsourcing programming to computer scientists. In both cases, creation is either considerably slowed down or totally impossible. Yet numerous domains need to create Scenagrams. For example, education needs them to create activities for differentiated instruction; health for rehabilitation or cognitive stimulation exercises; art for digital artworks that evolve according to audience actions; theater for producers who want to define interactions between an actor on stage and digital devices; cinema for creating interactive 4D movies; research for creating experimental conditions to explore the impact of technology on humans; and so on.

Nowadays, no software tool allows for creating Scenagrams without programming [3, 4]. The solution that appears to be the simplest is to use visual programming languages such as Choregraphe for the Nao robot [5], Blockly for programming simulated objects [6, 7], or Scratch, which is widely used to teach programming to children [8]. Visual programming was a revolution in the field of programming because it complemented textual programming and allowed more people to learn how to program [9]. However, even if visual programming has opened some doors, it remains too complicated to allow everyone to implement their ideas [10].

What makes the PRIM project original is that it seeks to change classical paradigms by providing a software tool based on human thinking rather than on computer operations. Thus, users are expected to be able to easily create interactions based on their mental model, without needing to translate their ideas into computer logic. We wish to provide a very simple system perceived as natural. The software tool will obviously offer a graphical language, which may resemble programming but is based on human thinking (i.e., interaction) instead of the thinking specific to computer design and thus to programming languages (i.e., algorithmics, the discipline that teaches how to think in order to produce algorithms). The main goal is therefore to represent the point of view of humans instead of that of machines (as is the case in programming languages based on algorithmics). For that purpose, section 2 explains the context, limits, and expectations of the project; it details the main ideas that constitute the basis of the software tool to be designed and then introduces the main scientific obstacle to be faced. Section 3 introduces a software prototype implemented to offer a solution to this obstacle and to evaluate the acceptability of the future software tool, in order to verify whether it is relevant to continue this project. Section 4 presents the methodology of the evaluation we conducted to validate our proposal and to check whether users could project themselves into future uses. Section 5 presents users' opinions about the utility of such a tool (utility being one of the three dimensions of the ergonomic approach: utility, usability, and acceptability). Finally, section 6 discusses the results and concludes this paper.

2. Ideas and Concepts

2.1 Context: Scenagram

Since our objective is to create a language that is different from a programming language, as simple as possible, and based on a different logic, there is no question of copying what programming languages already do. It is important to keep in mind that our objective is to create what we called Scenagrams [2], which are defined as "a series of actions performed by the end-user and/or by digital devices, alternately, to reach a common goal based on cognitive stimulation". This definition is really important to lay the foundations of our main idea. It means that Scenagrams use the existing functionalities of the sensors and actuators present on the connected objects. Our aim is not to give the user the possibility to create new functionalities, but only to use existing ones and to define the interactions, that is, what will happen between the human and the system.
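
To make this definition concrete, a minimal Python sketch is given below. It is purely illustrative and does not reflect the PRIM project's actual data model: it simply represents a Scenagram as an ordered series of actions performed alternately by the human and by digital devices, each action relying on an existing functionality.

# Hypothetical representation of a Scenagram as alternating human/device actions.
from dataclasses import dataclass
from typing import Literal

@dataclass
class Action:
    actor: Literal["human", "device"]  # who performs the action
    uses: str                          # an existing device functionality or an expected user input

# An ordered series of actions pursuing a common goal based on cognitive stimulation.
memory_game = [
    Action("device", "display a picture of a fruit"),
    Action("human", "press the key matching the fruit's name"),
    Action("device", "play a congratulation sound"),
]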

2.2 Main ideas

A literature review [10] showed that visual programming languages seem to owe their success mainly to the visual design of their interfaces. Their common particularity is that they are composed of several well-identified areas that support programming: an area containing programming elements (components), an area to build the program, an area to configure components, and an area to execute the program. The same areas can be found in video editing and musical conception software tools, which are also easy-to-use tools that allow creation.

Visual programming languages also seem to owe their success to the easy manipulation of components, which can be moved from one area to another and which graphically provide clues or tips that help programming, for example through their shapes and/or colors. Once again, this particularity is shared by video editing and musical conception tools.

For this reason, the PRIM project aims at drawing inspiration not only from visual programming languages but also from video editing tools [11-16] and musical conception tools [17], which together share the same strengths but take different approaches to letting users create.

2.3 Main challenge: Time

We believe that the biggest difference between visual programming languages and the two other types of software tools is time representation and management. Indeed, the former are based on relative, event-based time in which each action is triggered one after the other. Some actions can happen at any time, while others may simply never happen. This omnipresent uncertainty is representative of interaction with humans; it is impossible to model some interaction scenarios without it, and it is totally absent from the two other types of software tools. The latter are based on real time: each action is triggered at a precise moment for a precise duration, and time flows and never stops. It is thus impossible to schedule uncertain actions. These two temporalities, which are incompatible by nature, exist separately (either in different software tools or in different areas of the same software tool, as in Choregraphe for example).
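
The following short Python sketch (an illustration under our own assumptions, not code from any of the tools cited above) contrasts the two temporalities: a real-time track whose actions start at fixed moments for fixed durations, and event-based rules whose actions may fire in any order or never fire at all.

# Illustrative contrast between the two temporalities (simplified sketch).
import time

# Real time (video/music editing): each action starts at a fixed moment for a fixed duration.
real_time_track = [(0.0, 2.0, "show title"), (2.0, 3.0, "play jingle")]  # (start, duration, action)

# Event time (visual programming): each action waits for a trigger that may never happen.
event_rules = {"key_pressed": "show next picture", "timeout": "play reminder sound"}

def run_real_time(track):
    start = time.time()
    for begin, duration, action in track:
        time.sleep(max(0.0, begin - (time.time() - start)))  # time flows and never stops
        print(f"{action} (for {duration}s)")

def run_event_time(rules, incoming_events):
    for event in incoming_events:          # order and occurrence are uncertain
        if event in rules:
            print(rules[event])

run_real_time(real_time_track)
run_event_time(event_rules, ["key_pressed"])   # "timeout" never arrives: its action never runs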

However, there seems to be no software tool where both temporalities co-exist. In the majority of cases, time is represented by a line which serves as a time axis. In video editing or musical conception software tools, the line on which the user directly builds her/his video or music is always displayed. It is different for visual programming languages: the timeline (horizontal or vertical) does not visually exist as a time axis; it is the construction of the program that gradually builds it. Thus, in the first case, the timeline is explicit and seems to directly represent the user's mental model, while in the second case the timeline is rather implicit and the user must reconstruct it mentally, which seems cognitively more complex.

As part of the PRIM project, we hypothesize that video editing and musical conception software tools are easier to use because they are based on a timeline. On the one hand, a timeline is easy to use and to manipulate. On the other hand, it avoids the cognitive complexity of mentally translating human thinking to computer thinking and vice versa.

Our proposal faces a major scientific obstacle: are users able to get used to and accept a timeline that manipulates event time while looking similar to the timelines of video editing and musical conception software tools? Indeed, its use may require a different cognitive effort and deconstruct the habits acquired with a timeline based on absolute time (i.e., real temporality).

3. ScenaProd: First Prototype

In order to tackle the issues introduced in section 2.3, ScenaProd (for Scenagrams Production) is a prototype conceived to make the event timeline exist. Figure 1 shows a general presentation of our prototype, which contains a menu to create, play, or stop a Scenagram as well as three easily identifiable areas (component palette, configuration, and edition). Four components are implemented in the prototype. They represent the most common uses, to help users imagine and project themselves into a future complete software tool. Users can choose to play a sound, display text, display a picture, or wait until the user presses a key on the keyboard. When playing a Scenagram, texts and pictures are shown in a small window that allows visualizing the running Scenagram. Four components are enough to put users in context to use and understand the timeline.
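
As an illustration of how few building blocks are involved, the hypothetical Python sketch below mimics the four prototype components and chains them into a small Scenagram; the function names are ours, not ScenaProd's.

# Minimal sketch of the four prototype components (illustrative only).
def play_sound(path): print(f"[sound] {path}")
def display_text(text): print(f"[text] {text}")
def display_picture(path): print(f"[picture] {path}")
def wait_for_key(): input("[wait] press Enter to continue... ")

# A simple Scenagram built only from these existing functionalities.
scenagram = [
    (display_text, "Welcome!"),
    (display_picture, "cat.png"),
    (wait_for_key,),
    (play_sound, "meow.wav"),
]
for component, *args in scenagram:
    component(*args)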

Regarding the timeline, the prototype displays a dashed line that represents discontinuous time. The line works like a music score, wrapping to the next line to avoid infinite horizontal scrolling, which is more difficult to manipulate than vertical scrolling. Each time a user drops a component on the timeline, a black cross appears next to it. This cross is a contextual menu that lets the user make some timeline alterations. For example, in this prototype, it is possible to duplicate a timeline, meaning that a second line appears. Both lines are autonomous and played at the same time. Figure 1 shows four duplicated lines. This is the point of our evaluation. Indeed, duplicated timelines (perceptible in the figure through the vertical dashed line connecting them) are autonomous, which means there is no temporal synchronization between them while playing Scenagrams. This differs from video editing software, where there is vertical temporal synchronization: each component located on the same vertical axis is displayed or played at the same time. In the case of ScenaProd, there is no relation to time: components located on the same vertical axis can be played at different moments, as in music scores.
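
The sketch below (again hypothetical, using Python threads only for illustration) mimics this behavior: two duplicated lines start together but each advances at its own pace, so there is no vertical synchronization between them.

# Sketch of "duplicated" autonomous timelines: both lines start together but are never
# re-synchronized afterwards.
import threading, time

def play_line(name, steps):
    for label, blocking in steps:
        if blocking:                      # e.g. waiting for a key press: duration is unknown
            time.sleep(1.5)               # simulate an uncertain wait
        print(f"{name}: {label}")

line_a = [("display instructions", False), ("wait for key press", True), ("show picture", False)]
line_b = [("play background sound", False), ("display a hint", False)]

threads = [threading.Thread(target=play_line, args=(n, s))
           for n, s in (("line A", line_a), ("line B", line_b))]
for t in threads: t.start()
for t in threads: t.join()
# Components sitting at the same vertical position may run at different moments,
# because each line advances at its own pace.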

This particular point constitutes the major scientific obstacle we identified. Can users accept this unusual desynchronization, which runs contrary to their habits? Can they imagine using such software?

Figure 1. ScenaProd screenshot

4. Evaluation

The objective of this evaluation was to examine timeline acceptability and ScenaProd utility through participants' projection into this kind of software. Each session was conducted remotely through the Zoom video communication tool and lasted 30 minutes at most; the inclusion criteria were therefore having access to a computer and an Internet connection. To conduct the evaluation, participants remotely took control of a computer (running macOS). The experimenter attended throughout the session to give instructions, answer questions, or help participants if needed.

Table 1. Ten questions asked about software, timeline, and participants' projection

Q1. I experienced difficulties performing the requested task.
Q2. I think that ScenaProd looks like video editing software.
Q3. I think that computing skills are required to use ScenaProd.
Q4. I think it is difficult to understand how to place components on the timeline.
Q5. I think that ScenaProd can be useful for my professional activity.
Q6. After clicking on "playing the scenagram", I think that the progression in the scenagram is visually easy to understand.
Q7. I think that time management is destabilizing.
Q8. I think that it is difficult to understand that each timeline has its own time.
Q9. I think that it is easy to duplicate a timeline.
Q10. I think that I can create new Scenagrams without help.

The evaluation was divided into three steps: (1) Participants (or their parents in the case of underage participants) had to sign a consent form and were informed that there was no recording, that the data would be anonymized, and that they could stop at any time. (2) Participants had to follow the experimenter's instructions, as in a tutorial, to create three Scenagrams of increasing difficulty, one after the other. The tasks had been chosen so that all participants experienced the same Scenagram playback, which was the subject of our study. The third Scenagram exposed them to different simultaneous stimuli that were not visually represented on the same X axis. The experimenter's objective was to ensure that each participant had seen and experienced playing this complex Scenagram. (3) Participants had to fill out a questionnaire. The first part of the questionnaire was the French version of the System Usability Scale (F-SUS) [18, 19], with the aim of verifying whether the graphical interface was easy to use without being a bias or an obstacle to evaluating the timeline. The second part was composed of 10 questions using the same scale as the SUS (a 5-point Likert scale) and asking specific questions about the software, the timeline, and participants' projection (see Table 1). The third part collected information about participants and their opinions through open and closed questions.
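
For readers unfamiliar with the SUS, the standard scoring procedure (which, to our knowledge, the F-SUS follows as well) is summarized by the short Python sketch below; the example answers are invented for illustration.

# Standard SUS scoring: odd items contribute (answer - 1), even items contribute (5 - answer);
# the sum of contributions is multiplied by 2.5, giving a score between 0 and 100.
def sus_score(answers):  # answers: 10 Likert responses, each from 1 to 5
    assert len(answers) == 10
    contributions = [(a - 1) if i % 2 == 0 else (5 - a)   # items 1, 3, 5, ... sit at even indices
                     for i, a in enumerate(answers)]
    return 2.5 * sum(contributions)

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))  # example respondent -> 92.5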

In addition, the experimenter was in charge of noting the total duration of the session, the number of questions asked, the number of times participants were blocked, and any comments from participants.

Before this evaluation, we had conducted a preliminary evaluation with 5 participants to test the protocol and to ensure that the session lasted less than 30 minutes.

5. Results

5.1 Participants

We enrolled 50 participants from different French regions (31 women, 19 men; mean age: 34.5 years; standard deviation: 15.4; range: 12 to 75 years). Men were between 12 and 52 years old (mean age: 28.9 years; standard deviation: 12.3) while women were between 12 and 75 years old (mean age: 38 years; standard deviation: 15.8).

The majority of participants were in working life (33). The others were junior high school students (4), high school students (2), university students (9), or pensioners (2). Among the participants in working life, 17 had an intermediate occupation, 12 were middle managers or in intellectual professions, 2 were employees, 1 was a craftsperson, and 1 was retraining.

Among the participants, 11 had an activity related to health, 12 to IT, and 5 to teaching. Thirty participants had already used video editing software before. At the end of the evaluation, 41 participants did not know any tool similar to ScenaProd; the 9 others named educational software, Microsoft PowerPoint, or video editing software.

Each participant manipulated ScenaProd for a mean of 18 minutes per session (median: 17.5; standard deviation: 5.1; min: 9; max: 33). Only five participants had a mental block that required the experimenter's help (8 times in total, with a maximum of three for the same person), while 18 participants asked a total of 47 questions out of curiosity or to get validation of the action to be done (with a maximum of 8 for the same person).

5.2 Timeline acceptability

ScenaProd scored 84 on the F-SUS (standard deviation: 8.12), which is interpreted as almost excellent acceptability. This result indicates that ScenaProd provided close to the best possible conditions to evaluate the timeline (the evaluation of the prototype itself is out of the scope of this paper). It is interesting to note that only 18% of the participants knew a tool similar to ScenaProd, although 60% of them recognized an inspiration coming from existing tools.

Figure 2 shows the results for the ten questions presented in Table 1. They are promising since 92% of the participants felt able to redo a Scenagram on their own (Q10), 86% found that time management was not destabilizing (Q7), and 80% found that it was not difficult to understand that each line had its own time (Q8), knowing that all participants understood time management well by the end of the evaluation.

In addition to these questions, the results of the opinion questions indicated that 96% of participants understood that the timeline represented the progression of the Scenagram and not real time as in a video; 98% thought it was easy to understand and become familiar with this time management; and 84% were not disturbed by the absence of vertical time synchronization.

This evaluation has shown that vertical synchronization can be removed without disturbing users, allowing us to provide a relative/event-based timeline in our future final software tool.

In the following, the paper focuses on participants' perceptions of ScenaProd's utility in the future and in the context of their professional activity.

Figure 2. Answers to users’ feedback questions. “R" indicates questions for which the coding has been reversed. Thus, results should be interpreted as if the question was positive.

5.3 ScenaProd utility

First, participants were asked to give their opinion about ScenaProd's utility, imagining what it could be once development is finished. In total, 48 participants gave their opinion, using 546 words, from which we obtained 78 proposals; the other two participants indicated that they had no opinion. These 78 proposals could be classified into 5 categories: 17 related to creation, 11 to perspectives in the field of disability, 20 to learning assistance, 18 to communication assistance, and 12 varied proposals. This last category contained two isolated and therefore unclassifiable proposals, as well as 10 proposals that seemed to be inspired by the evaluation instructions and were therefore possibly influenced responses (7 concerning video editing and 3 concerning the creation of Scenagrams).

5.4 Examples of use in professional activity

Second, participants were asked to give, if they could, examples of using ScenaProd in a context specific to their activity. In total, 41 participants gave their opinion, using 456 words, from which we obtained 55 proposals. Among the 9 other participants, 3 indicated that they had examples without giving details and 6 indicated that they had no example in mind. These 55 proposals could be classified into 4 categories, similar to the previous question: 9 related to creation, 10 to perspectives in the field of disability, 14 to learning assistance, and 19 to communication assistance. Figure 3 shows the total number of proposals per category, combining the question on ScenaProd's long-term utility and the question on examples of use.

Figure 3. Number of proposals by category and by question

5.5 Additional comments

Finally, each participant was asked whether she/he had additional comments. The answer was negative for 33 participants. The other 17 made comments, with a total of 237 words: 8 participants complimented ScenaProd, 3 expressed curiosity to see the final tool, 2 indicated that they would need a bit more time to be able to answer, 2 made comments related to ergonomics, and 2 made comments about practical concerns.

6. Discussion and Conclusion

6.1 Main results

Results show that most participants imagined using ScenaProd in the future. Indeed, 96% of the participants named one or several use cases for the future ScenaProd and 88% thought it would be useful in the context of their professional activity. This is a very positive result which confirms that, even if the proposed prototype was very simple, it was complete enough to allow users to think of future uses. It also confirms that ScenaProd is simple enough to allow everyone to use it quickly. After an average of only 18 minutes, participants spontaneously thought of 4 possible fields for ScenaProd through 118 opinions. Of these 118 opinions, 32.2% imagined ScenaProd as a tool to communicate or make presentations, 28.8% as a training and learning tool, 21.2% as creation software, and 17.8% as a solution for problems in the field of disability.

6.2 Communication and presentations

Of the 32.2% of opinions dealing with communication or presentations, 60% mentioned the possibility of using ScenaProd to make presentations and 18% even imagined it as an alternative for making slideshows. The other proposals were less unanimous: 3 people thought that ScenaProd would allow communication on social networks, and 5 other people suggested, respectively, that it would serve as an interface with a robot, complement verbal information, be a suitable presentation tool, offer interactive resources, or create talking family albums.

6.3 Training and learning

As regards training and learning, it is interesting to note that participants thought of three uses. First, they considered ScenaProd as a teacher assistance tool helping various audiences during training (44.1%). They also imagined ScenaProd as a self-learning platform to be used independently (38.2%). Finally, several participants saw the potential of our tool to stimulate users, either through its multisensory nature or through its playful quality (17.7%).

6.4 Creation

Knowing that creation is at the heart of the PRIM project, it is interesting to note that 26.9% of the comments mentioned the possibility of video, photo, or animation editing, while 26.9% mentioned programming (classic programming, industrial machines, or home automation). However, other disciplines were also cited, which shows the capability of ScenaProd to stimulate the creative process: comments mentioned interactive stories (26.9%), artistic creations (11.6%), and video games (7.7%).

6.5 Disability

This last category is quite transversal. On the one hand, there are 21 comments specific to the issue of disability; on the other hand, there are also proposals related to disability in the three other categories. By grouping them and removing duplicates, we can list 4 sub-categories: beneficiaries, disciplines, personal development, and assistance tools.

Regarding beneficiaries, participants highlighted ScenaProd's utility for people with autism, for the elderly, and for children. In addition to being a general compensation tool for autism spectrum disorder, ScenaProd would enable communication with pictograms and sounds and the visualization of daily rituals. For the elderly, it would be useful for creating daily assistance scenarios, and for children it would be a good tool for designing early learning activities and stimulating their creativity.

Regarding disciplines, participants listed the following three: occupational therapy, computer science, and home automation. Our prototype would make possible the creation of activities, occupations, or task scenarios, which is the basis of the occupational therapy approach. It is also seen as a tool to teach programming or to give people with disabilities the possibility to create small programs. Finally, it is seen as a tool to easily control home automation systems or connected objects.

With respect to personal development, participants saw in ScenaProd a potential for adapted learning, memory rehabilitation or relearning, stimulation or training (cognitive, sensory, creative), and follow-up care.

Finally, regarding assistance tools, participants saw ScenaProd as a communication facilitator. On the one hand, it would enable people in disability situations to communicate (using different strategies); on the other hand, it would also allow others to communicate through virtual tours, personalized guides, and home assistance systems.

6.6 Perspectives

Results show a very strong potential for ScenaProd to stimulate creativity and learning and to help people with disabilities. In the future, the PRIM project will therefore explore its utility in the field of specialized education, where there are many challenges to face. For several years, robots have been increasingly used in this field, but robots show reliability limits. In order to fill in the robot's gaps, teachers often need to modify scenarios to adapt the learning situation. These modifications can be time-consuming or even impossible if they have not been anticipated in the educational remediation schedule [15]. In this context, it would be very relevant to have software that allows people to model and describe interactions between a person and a digital system without needing to learn programming. The teacher or re-educator would thus be able to easily adapt the actions of connected objects to the child's needs. The latter could press contactors to trigger actions related to multisensoriality. For example, by touching a picture of a ready-cooked dish, a deaf child could trigger the vocalization of the name of the dish or the associated gesture in LSF (French Sign Language, as a video), together with a blast of hot air (for a hot dish) or cold air (for an ice cream, for example). The same associations could be used in the field of emotions: the video of a face or of a bodily attitude could be associated with sounds or appropriate music.
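
As a purely hypothetical illustration of this last example, the sketch below maps the child's action (touching the picture of a dish) to the corresponding multisensory device actions; all component names are invented and do not correspond to actual ScenaProd components.

# Hypothetical Scenagram for the deaf-child example described above.
dish_scenagram = {
    "trigger": "touch picture of ready-cooked dish",
    "actions": [
        ("play_video", "dish_name_in_LSF.mp4"),  # sign-language naming of the dish
        ("blow_air", "hot"),                     # "cold" for an ice cream, for instance
    ],
}

def run(scenagram, event):
    # The device actions only occur if the (uncertain) human action happens.
    if event == scenagram["trigger"]:
        for device_function, argument in scenagram["actions"]:
            print(f"{device_function}({argument!r})")

run(dish_scenagram, "touch picture of ready-cooked dish")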

7. Conclusion

Although still incomplete, ScenaProd received strong support from the 50 participants in the evaluation. The 4 categories that emerged show that our proposal achieved our goals. First, ScenaProd is seen as a communication tool similar to the PowerPoint software, which is a very positive comparison, knowing that it relates to ease of use and that this simplicity was one of the first design criteria of our interface, a second being the possibility to create multisensory exercises beyond text, image, and standard audio-visual content. Second, it is seen as a learning tool, which is the goal of Scenagrams; we therefore consider that we achieved our goal for this version. Indeed, creating a Scenagram means "programming interactions between a human and digital devices with a common goal based on cognition". Third, our prototype inspires creation activities, which is also one of our objectives and shows that participants perceived our inspirations. For the moment, we consider that our prototype succeeds in being a hybrid of visual programming languages, video editing tools, and musical design editors. Fourth, ScenaProd is widely considered as a tool to help people with disabilities, which is very encouraging for providing them with an additional tool.

These results must, however, be qualified because, despite the attention given to the writing of the instructions and to the facilitation of the sessions, certain biases specific to the evaluation tools used (pre-suggestion, reminders, a facilitation/mirror expectation effect on participants, and the opening, channeling, or blocking effect of the illustrative example) may have occurred due to the instructions or to the example Scenagrams. Thereby, the answers collected in this evaluation may remain biased: we observed that participants seemed to have mainly imagined use cases related to their own activity and to have looked for examples among what they already knew. However, given the diversity of participants' activities, this also highlights the multidisciplinarity of Scenagrams and the applicative scope of our approach.

As perspectives for the PRIM project, we will organize workshops and seminars with health and education professionals to define the needs to be covered in the next releases of ScenaProd. This will require showing participants a collection of different Scenagrams to avoid channeling their imagination in a single direction.

Acknowledgment

We would like to thank Paris 8 University, which funded this work through several project calls on its own funds and with a doctoral grant from the CLI doctoral school. We also thank all the members of the PRIM project, all the participants in the evaluation, and all the students who joined us for an internship and participated in the reflection. Finally, we thank Severine Maillet and Jessica Lament for their help in translating this paper.

  References

[1] Jost, C., Debloos, J., Archambault, D., Le Pévédic, B., Sagot, J., Sohier, R., Tijus, C.A., Truck, I., Uzan, G. (2021). PRIM Project: Playing and Recording with Interactivity and Multisensoriality. In ACM International Conference on Interactive Media Experiences, pp. 223-227. https://doi.org/10.1145/3452918.3465487

[2] Jost, C., Le Pévédic, B., Uzan, G. (2021). Using Multisensory Technologies to Stimulate People: a Reflexive Paper on Scenagrams. In Proceedings of the 1st Workshop on Multisensory Experiences-SensoryX'21. SBC. https://doi.org/10.5753/sensoryx.2021.15686

[3] Jost, C., Le Pévédic, B., El Barraj, O., Uzan, G. (2019). MulseBox: Portable multisensory interactive device. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 3956-3961. https://doi.org/10.1109/SMC.2019.8913987

[4] Jost, C., Pévédic, B.L., Uzan, G. (2019). MulseBox: new multisensory interaction device. In Proceedings of the 31st Conference on l'Interaction Homme-Machine, pp. 1-13. https://doi.org/10.1145/3366550.3372251

[5] Pot, E., Monceaux, J., Gelin, R., Maisonnier, B. (2009). Choregraphe: a graphical tool for humanoid robot programming. In RO-MAN 2009-The 18th IEEE International Symposium on Robot and Human Interactive Communication, pp. 46-51. https://doi.org/10.1109/ROMAN.2009.5326209

[6] Fraser, N. (2015). Ten things we've learned from Blockly. In 2015 IEEE Blocks and Beyond Workshop (Blocks and Beyond), pp. 49-50. https://doi.org/10.1109/BLOCKS.2015.7369000

[7] Pasternak, E., Fenichel, R., Marshall, A.N. (2017). Tips for creating a block language with blockly. In 2017 IEEE blocks and beyond workshop (B&B), pp. 21-24. https://doi.org/10.1109/BLOCKS.2017.8120404

[8] Malan, D.J., Leitner, H.H. (2007). Scratch for budding computer scientists. ACM Sigcse Bulletin, 39(1): 223-227. https://doi.org/10.1145/1227504.1227388

[9] Coronado, E., Mastrogiovanni, F., Indurkhya, B., Venture, G. (2020). Visual programming environments for end-user development of intelligent and social robots, a systematic review. Journal of Computer Languages, 58: 100970. https://doi.org/10.1016/j.cola.2020.100970

[10] Debloos, J., Jost, C., Le Pévédic, B., Uzan, G. Création de scénagramme: critères d'un logiciel «idéal» utilisable par des non informaticiens. In Uzan, G., Morère, Y. (Eds.), pp. 51-56. https://ifrath.fr/wp-content/uploads/2022/03/jcjc2021.pdf#page=60.

[11] Waltl, M., Timmerer, C., Hellwagner, H. (2009). A test-bed for quality of multimedia experience evaluation of sensory effects. In 2009 International Workshop on Quality of Multimedia Experience, pp. 145-150. https://doi.org/10.1109/QOMEX.2009.5246962

[12] Waltl, M., Timmerer, C., Rainer, B., Hellwagner, H. (2012). Sensory effect dataset and test setups. In 2012 Fourth International Workshop on Quality of Multimedia Experience, pp. 115-120. https://doi.org/10.1109/QoMEX.2012.6263841

[13] Waltl, M., Rainer, B., Timmerer, C., Hellwagner, H. (2013). An end-to-end tool chain for Sensory Experience based on MPEG-V. Signal Processing: Image Communication, 28(2): 136-150. https://doi.org/10.1016/j.image.2012.10.009

[14] Saleme, E.B., Santos, C.A.S. (2015). PlaySEM: a platform for rendering MulSeMedia compatible with MPEG-V. In Proceedings of the 21st Brazilian Symposium on Multimedia and the Web, pp. 145-148. https://doi.org/10.1145/2820426.2820450

[15] de Mattos, D.P, Muchaluat-Saade, D.C. (2018). Steve: A hypermedia authoring tool based on the simple interactive multimedia model. In Proceedings of the ACM Symposium on Document Engineering 2018, pp. 1-10. https://doi.org/10.1145/3209280.3209521

[16] de Mattos, D.P., Muchaluat-Saade, D.C., Ghinea, G. (2021). Beyond multimedia authoring: On the need for mulsemedia authoring tools. ACM Computing Surveys (CSUR), 54(7): 1-31. https://doi.org/10.1145/3464422

[17] Todea, D. (2015). The Use of the MuseScore Software in Musical E-Learning. Virtual Learn, 88.

[18] Gronier, G., Baudet, A. (2021). Psychometric evaluation of the F-SUS: creation and validation of the French version of the system usability scale. International Journal of Human–Computer Interaction, 37(16): 1571-1582. https://doi.org/10.1080/10447318.2021.1898828

[19] Joubert, O.R. (2015). L’enfant autiste, le robot, et l’enseignant: une rencontre sociétale. Enfance, 1(1): 127-140. https://doi.org/10.3917/enf1.151.0127