Part 5: Communication

A Machine Learning Assistive Solution for Users with Dysarthria
Davide Mulfari, Gabriele Meoni, Marco Marini and Luca Fanucci
Department of Information Engineering, University of Pisa, Italy

Background: With the rapid advancement of virtual assistant technology, many human-computer interfaces exploit automatic speech recognition (ASR) to enable verbal interaction with smart devices. Such interfaces currently meet the demands of users without speech disabilities, while people with dysarthria, a neuromotor speech disorder that leads to poor intelligibility of the user’s speech, cannot benefit from these services because of their communication disorder. Dysarthria is also characterized by extreme variability of the speech (between and within speakers) and often co-occurs with severe motor disabilities. As a result, people with dysarthria and reduced motor skills cannot use their speech to ease their interaction with computing devices, for example in smart home scenarios.

Method: To address these issues, we employ machine learning to recognize speech commands (keyword spotting) in dysarthric speech. Our effort is to build ASR models based on deep neural networks, and we collaborate with people with communication disorders who wish to share their utterances in order to train such models. To this aim, we have developed a mobile app that allows end users to record themselves while pronouncing given speech commands. Several application scenarios may benefit from our speech models. For instance, we are currently working on an accessible smart home controller especially designed for people with dysarthria, which integrates our custom keyword recognizer with openHAB, an open source software framework for smart homes. With this solution, users with dysarthria can use personalized keywords to perform basic actions within their smart environments (such as controlling plugs or a TV).

Key results: With the collaboration of four native Italian speakers with dysarthria, an initial training dataset covering a small vocabulary (11 Italian keywords) was collected. Using Google’s TensorFlow on a GPU-enabled machine, we trained a convolutional neural network with two convolutional layers and a softmax classifier. In this early stage, we performed 18,000 training steps, running a test on a validation set every 500 steps. After the training stage, the same collaborators with dysarthria were involved in testing activities aimed at recognizing given keywords in their utterances: our model showed an acceptable accuracy level (58%) over the 11 labels. The method is not tied to a given language, and we plan to extend it to multiple languages.
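
For illustration, a network of the kind described, with two convolutional layers and a softmax output over 11 keyword labels, can be sketched in a few lines of TensorFlow. The input shape, filter sizes, and optimizer below are assumptions made for the sketch, not details reported by the authors:

```python
# Minimal keyword-spotting sketch (not the authors' exact model):
# two convolutional layers and a softmax classifier over 11 labels,
# assuming 49x40 MFCC feature maps as input.
import tensorflow as tf

NUM_KEYWORDS = 11  # size of the Italian keyword vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),          # MFCC frames x coefficients
    tf.keras.layers.Conv2D(64, (20, 8), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (10, 4), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_KEYWORDS, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# Training would then proceed as in the abstract: 18,000 steps,
# evaluating on a validation set every 500 steps.
```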

Conclusion: In the field of artificial intelligence, we present an ASR system for users with dysarthria. The machine learning approach is designed to detect only a small number of predefined keywords within a reduced vocabulary. Initial experiments showed promising results thanks to a dedicated training procedure based on the TensorFlow framework. In future work, we plan to investigate the proposed application further with the collaboration of many people with dysarthria who wish to share their utterances.

Keywords: Automatic Speech Recognition, Machine Learning, Speech, Dysarthria, Smart Home.

Corresponding author. E-mail: davide.mulfari@ing.unipi.it

Development of Controllable Electrolarynx Controlled by Neck Myoelectric Signal
Katsutoshi Oe, Sho Oda and Shoya Uno
Department of Mechanical Systems Engineering, Faculty of Engineering, Daiichi Institute of Technology, 1-10-2 Kokubu chuo, 899-4332, Kirishima, Japan

Background: Presently, there are many patients who have lost their voice to laryngectomy, laryngeal injury, and similar causes. The voice is a very important means of human communication, and losing it often causes mental distress. For these patients, research on speech production substitutes has been carried out; however, these substitutes have problems regarding voice quality, articulation, and intonation. For example, the electrolarynx offers good voice continuity, sound volume, and ease of acquisition, but poor voice articulation because of its uncontrollable pitch frequency. To solve this problem, much research on control methods for the electrolarynx is being conducted, all of it focused on controlling the pitch frequency. Our research aims to realize an electrolarynx with high-level controllability. We focus on using the myoelectric signal of the sternohyoid muscle (SH) to control the electrolarynx. The SH serves to relax the vocal cords and activates during the utterance of low-tone voice. Therefore, the pitch frequency of an electrolarynx can be controlled using the myoelectric signal of the SH.

Method: Our proposed controllable electrolarynx uses the myoelectric signal measured by an electrode attached to the neck near the SH. To obtain a stable signal, a ground electrode was attached to the test subject’s wrist. The measured signal is processed by a computer and output as the control signal. The most important issue is determining the function that converts the myoelectric signal into the control signal. Our previous research confirmed that exponential and quadratic functions perform better than a linear one. To evaluate the controllability of the pitch frequency, the subject’s intended pitch was compared to the calculated pitch frequency, and the errors were counted to calculate an error rate. The test subject indicated “High”, “Mid”, or “Low” with a mouse pointer according to his intention. At the same time, he consciously generated the myoelectric signal, and the pitch height indicator lit up according to the generated signal. Errors were counted from captured video. The test subjects were three able-bodied adult males.

Key results: From the averaged error rates, it was clear that the error rates of the linear relationship (22%) and the quadratic relationship (21%) are higher than that of the exponential relationship (13%). To evaluate control stability, the number of times the subject failed to keep a constant tone was counted. The exponential relationship produced the fewest failures (1), against 7 each for the other two relationship functions. Therefore, the exponential function proved to be the best control function.

Conclusion: In this report, we proposed a control method for a controllable electrolarynx using the neck myoelectric signal. The results clarified that an exponential relationship function between the RMS value of the myoelectric signal and the pitch frequency of the vocalized sound gives the best control. Using this function, the controllability of the electrolarynx was improved in terms of the number of errors.
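
To make the reported mapping concrete, the sketch below converts the RMS of a myoelectric window into a pitch frequency through an exponential function. The pitch range, normalization, and the particular exponential form are assumptions for illustration, not the parameters used in the study:

```python
# Illustrative RMS-to-pitch conversion (parameter values are assumptions,
# not those of the study): an exponential mapping from the RMS of the
# neck myoelectric signal to the electrolarynx pitch frequency.
import numpy as np

F_MIN, F_MAX = 80.0, 160.0   # assumed controllable pitch range in Hz
RMS_MAX = 1.0                # assumed full-scale normalized RMS

def rms(window: np.ndarray) -> float:
    """Root-mean-square amplitude of one analysis window."""
    return float(np.sqrt(np.mean(window ** 2)))

def pitch_from_rms(r: float) -> float:
    """Exponential mapping: f(r) = F_MIN * (F_MAX / F_MIN) ** (r / RMS_MAX)."""
    r = min(max(r, 0.0), RMS_MAX)  # clamp to the calibrated range
    return F_MIN * (F_MAX / F_MIN) ** (r / RMS_MAX)

window = np.random.default_rng(0).normal(0.0, 0.3, 256)  # stand-in EMG window
print(f"pitch = {pitch_from_rms(rms(window)):.1f} Hz")
```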

Keywords: Speech production substitutes, Electrolarynx, Myoelectric signal, Sternohyoid muscle.

Corresponding author. E-mail: k-ooe@daiichi-koudai.ac.jp

How do We Provide Necessary Support to Enable Remote Communication for People with Communication Difficulties?
Margret Buchholz(a,b), Ulrika Ferm(b) and Kristina Holmgren(a)
(a) Department of Health and Rehabilitation, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Göteborg, Sweden
(b) DART Centre for AAC and AT, Queen Silvia Children’s Hospital, Sahlgrenska University Hospital, Göteborg, Sweden

Background: To participate in today’s society, one needs to be able to handle digital remote communication (i.e., communication with someone who is not physically in the same place). This includes phone calls, texting, e-mail, chat, social media, and other online services for communication. Use of remote communication may facilitate social contacts, self-determination, and participation. Unintelligible speech, in combination with difficulties in reading and writing, makes remote communication difficult. Research has pointed out several obstacles to accessing remote communication, and users’ right to communicate is not always met. There is a need for increased support, but how this support should be provided has not been researched.

Method: The purpose of this study was to explore the support necessary to enable access to remote communication for people with communication difficulties. A qualitative design using focus group methodology was chosen to understand support persons’ views and thoughts on their role in supporting users of remote communication. The participants were support persons to people with communicative and cognitive disabilities that, at some level, interfered with using remote communication in daily life. They were family members and/or staff working in sheltered housing, in schools, or as personal assistants. Five focus groups with 21 support persons in total were conducted. The focus groups were recorded and transcribed, and the data was analyzed qualitatively using focus group analysis methodology.

Key results: The participants saw a need for support for the users and their networks and identified it as crucial in enabling remote communication for people with communication difficulties. They described a need for increased support on several levels. The users need lifelong support and individual training on a long-term basis. There is a need for greater competency concerning remote communication among staff in the users’ daily lives, and for better coordination between the different professional efforts and interventions. The support persons lack access to information about remote communication and technology and to expert advice. They also pointed out that it would be useful to have all information in one place, suggesting a digital platform gathering all information about remote communication for persons with disabilities. This platform might contain instructional videos with tutorials, examples, advice, and recommendations by experts, and might also offer opportunities for people with communicative and cognitive disabilities, support persons, and professionals to share experiences and “life hacks.” Further, it could be a forum for connecting with experts for online consultation.

Conclusion: Support is crucial to enabling remote communication and participation in society for people with communication difficulties. Support needs to be provided on several levels: 1) individual support to the user, 2) information and training for staff, 3) coordination of the services providing support, and 4) access to expert knowledge and knowledge sharing. A digital platform gathering all supportive functions is suggested.

Keywords: Remote communication, Social media, Communicative and cognitive disabilities, Training, Support.

Corresponding author. E-mail: margret.buchholz@vgregion.se

Let’s Stay in Touch! Remote Communication for People with Communicative and Cognitive Disabilities
Margret Buchholz(a,b), Ulrika Ferm(b) and Kristina Holmgren(a)
(a) Department of Health and Rehabilitation, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Göteborg, Sweden
(b) DART Centre for AAC and AT, Queen Silvia Children’s Hospital, Sahlgrenska University Hospital, Göteborg, Sweden

Background: Remote communication is communication between people who are not physically in the same place. Everyday technology like smartphones, tablets, and computers, including services for calls, messaging, video calls, and social media, is a common means of communication in contemporary society. Digital communication is increasingly required for social interaction and for daily activities such as contact with healthcare, insurance companies, or banks, and has therefore become a prerequisite for participation in society. Despite this widespread use of the Internet and social media, several groups of people do not have access to remote communication. Being able to use remote communication requires either functional speech (phone calls and video calls) or the ability to read and write (texting, e-mailing, or chatting). Several conditions, including congenital, acquired, and progressive disorders, can affect communication abilities. People with a combination of communicative and cognitive disabilities may have limited speech and comprehension abilities as well as restricted reading and writing skills, which means that both their spoken and written communication are affected.

Method: The aim was to explore and describe remote communication for people with communicative and cognitive disabilities in relation to self-determination and participation, from the perspectives of users, professionals, and support persons. The research project is based on four studies: three qualitative (I, III, IV) and one mixed-methods (II). In study I, semi-structured interviews were conducted with seven professionals after an intervention project on texting with pictures and speech synthesis. In study II, semi-structured interviews were conducted with 11 users about their experiences of remote communication, using Talking Mats, a pictorial communication tool. Studies III and IV involved focus groups with 21 support persons on their experiences of and views on remote communication for the target group.

Key results: Professionals described how text messaging with pictures and speech could increase independence and participation, and how individual assessments and user-friendly technology were important. People with communicative and cognitive disabilities described how remote communication related to self-determination. Having a choice between types of remote communication and levels of independence was important, and technological limitations forced them to find their own communication strategies. Support persons discussed how remote communication enabled users to have more control and feel safer while increasing self-determination and participation. The results suggested that communicative rights were not being met and that there was a need for better provision of technology and support.

Conclusion: Access to remote communication is crucial for participation in today’s society, where an increasing part of human interaction is carried out through digital channels. Access to remote communication can increase independence, self-determination, and participation in daily life for people with communicative and cognitive disabilities. It can also affect health and safety: the findings suggest that people with communicative and cognitive disabilities may encounter difficulties in accessing emergency calls and e-health services. There is a need for further technology development so that remote communication technology becomes accessible to all. People with disabilities and their networks need improved support.

Keywords: Augmentative and alternative communication, Assistive technology, Remote communication, Digital communication, Social media.

Corresponding author. E-mail: margret.buchholz@vgregion.se

Development of an AAC System for a Student with Speech Impairment and Spastic Quadriplegia
Francesco Carbone(a), Francesco Davide Cascone(a), Antonio Gloria(b), Massimo Martorelli(a), Dan Natan-Roberts(c) and Antonio Lanzotti(a)
(a) Department of Industrial Engineering, Fraunhofer JL Ideas, University of Naples Federico II, Naples, Italy
(b) Institute of Polymers, Composites and Biomaterials, National Research Council of Italy, Naples, Italy
(c) Department of Industrial & System Engineering, San Jose State University, San Jose, CA, USA

Background: In complex cases of disability, custom Augmentative and Alternative Communication (AAC) systems should be developed, because existing standardized commercial products often leave user requirements unmet. AAC systems are personalized to reduce this mismatch between technology and user needs. A semi-custom solution was developed by the Department of Industrial Engineering of the University of Naples Federico II by adapting commercial devices, 3D-printing mockups for evaluation, and designing new solutions for mounting on wheelchairs. Several studies were conducted, including usability evaluation, learning curve estimation, software and hardware optimization, and cognitive assessment of actuators and layouts.

Method used: A custom system was developed for a student with complex communication needs caused by a traumatic injury that led to motor impairment and a speech disorder. A usability test and interface optimization are discussed, undertaken to fix usability issues, highlight critical areas for improvement, and design new prototypes. Performance measures, specifically learnability curves of the product, were obtained; learnability can lead to a reduction in design and product development times. Improvements in communication rate, number of operations per sentence, and error rate were found. Additional central topics include gaps in current product development and methods of eliminating barriers to access.

Key results: A usability index was obtained from the usability data. An Analytic Hierarchy Process and Multiple-Criteria Decision Analysis were used to identify, prioritize, and investigate the usability functions of the product that can be improved. A learnability curve showing incremental performance gains over 8 weeks was also obtained.
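
As a sketch of the prioritization step, the Analytic Hierarchy Process derives weights for competing usability criteria from a pairwise-comparison matrix via its principal eigenvector. The criteria and comparison values below are invented for illustration and are not the study’s data:

```python
# AHP priority weights from a pairwise-comparison matrix (toy example).
import numpy as np

# Pairwise comparisons of three hypothetical usability criteria:
# communication rate vs. error rate vs. learnability.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                 # normalized priority vector
print(np.round(w, 3))           # approx. [0.648 0.230 0.122]
```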

Conclusion: Guidelines were developed through data collection and decision criteria. The guidelines are generic and can be applied to usability heuristics. The main benefit of implementing Interaction Design (IxD), usability, and ergonomics is the possibility of extending the results to other users through a parametric approach and of generating general-purpose tools and methodologies that can be applied more widely.

Keywords: Augmentative and Alternative Communication, User Needs, 3D-Printing Mockups.

Corresponding author: fcarbone@unina.it

Augmentative and Alternative Gesture Interface (AAGI): Multi-Modular Gesture Interface for People with Severe Motor Dysfunction
Ikushi Yoda(a), Tsuyoshi Nakayama(b), Kazuyuki Itoh(b), Yuki Ariake(c), Satoko Mihashi(c), Hiroyuki Awazawa(c) and Youko Kobayashi(c)
(a) Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-1-1 Umezono, Tsukuba, Ibaraki 305-8560, Japan
(b) Research Institute, National Rehabilitation Center for Persons with Disabilities (NRCD), 4-1 Namiki, Tokorozawa, Saitama 359-8555, Japan
(c) Department of Physical Rehabilitation Medicine, National Center of Neurology and Psychiatry (NCNP), 4-1-1 Ogawa-higashi-machi, Kodaira, Tokyo 187-8551, Japan

Background: Individuals with severe motor dysfunction are unable to use existing computer interfaces due to spasticity, involuntary movements, and the like. The interfaces these individuals can use, if any, are limited to customized switch interfaces, which makes it difficult to operate a computer with any degree of ease; for individuals who can use only simple switch-based devices, more sophisticated operations are all but impossible. It is very expensive to develop an interface able to respond to changes in an individual’s movements caused by physical deterioration, so the main requirement is technology that can easily be customized for a diverse range of users at low cost.

Method: We have developed a switch-type gesture interface that utilizes a commercially available RGB-D camera. The system software recognizes gestures from 2D and 3D images, so the system can be customized to each user more easily than hardware systems can. Furthermore, the software is easier to apply daily and over the long term than hardware. The 3D images can specify gestures by shape information, enabling application to more varied environments and types of gesture than 2D images alone.

We used the RGB-D camera to gather data on the types of gestures that individuals with severe quadriplegia want to use in an interface. The data included both moving RGB images and depth (range) images. A total of 211 gestures were collected from 55 individuals with motor dysfunction, and the voluntary movements were classified on the basis of body part. We developed all recognition algorithms in-house, using only a few basic camera libraries to obtain the 2D and 3D images. If the RGB-D camera is discontinued, we can easily port all software to another camera.

Key results: We developed seven recognition modules based on body part and two recognition modules independent of body part. Among the latter, the Front object module recognizes the region closest to the camera, which is useful for movements of hands, arms, and toes, and the Slight movement module recognizes slight movement within a region of interest. We ran a long-term experiment (over three months) with five testers using four recognition modules (Finger, Head, Foot, and Slight movement). The user of the Foot recognition module now uses the system daily: our software and the RGB-D camera have been installed on his own PC, and he operates Word and a web browser by foot gesture every day.
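
The idea behind a Front-object-style module can be illustrated in a few lines: threshold the depth image around its nearest valid reading and watch that region for change. The function names, band width, and trigger threshold below are invented for this sketch and are not AAGI’s actual API:

```python
# Hedged sketch of a "Front object" style depth module (names and
# thresholds are hypothetical, not AAGI's implementation).
import numpy as np

def front_object_mask(depth_mm: np.ndarray, band_mm: float = 50.0) -> np.ndarray:
    """Boolean mask of pixels within band_mm of the nearest valid depth."""
    valid = depth_mm > 0                 # 0 = no reading on many RGB-D cameras
    if not valid.any():
        return valid
    nearest = depth_mm[valid].min()
    return valid & (depth_mm <= nearest + band_mm)

def region_changed(prev_mask: np.ndarray, curr_mask: np.ndarray,
                   threshold: float = 0.2) -> bool:
    """Crude switch trigger: fraction of changed pixels in the union region."""
    union = prev_mask | curr_mask
    return bool(union.any() and (prev_mask ^ curr_mask)[union].mean() > threshold)

depth = np.random.default_rng(1).integers(400, 2000, (240, 320)).astype(float)
print("front-object pixels:", int(front_object_mask(depth).sum()))
```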

Conclusion: To enable a low-cost interface, we utilized commercially available RGB-D cameras and developed a non-contact, non-constraining interface. We collected 211 gestures from 55 individuals with motor dysfunction and classified the voluntary movements on the basis of body part. We developed seven body-part-dependent recognition modules along with two independent ones and carried out long-term experiments with five users. For one user, we fully succeeded in bringing foot-gesture control into practical daily use.

We call this software the Augmentative and Alternative Gesture Interface (AAGI) and will release it incrementally. All software will be supplied freely from our home page.

Keywords: gesture interface, support for the disabled, 3D image recognition, AAC, human sensing.

Corresponding author. E-mail: i-yoda@aist.go.jp

Presentation Matters: A Design Study of Different Keyboard Layouts to Investigate the Use of Prediction for AAC
Rolf Black, Annalu Waller and Conor McKillop
School of Science and Engineering, University of Dundee, Perth Road, Dundee DD1 4HN, Scotland, UK

Background: Augmentative and Alternative Communication (AAC) applications on mobile technology typically use on-screen keyboards for text entry. AAC users with limited dexterity tend to use single finger typing, leading to slow text entry rates. Although word and phrase prediction have the potential to increase rates, users tend to keep typing rather than selecting predictions. It is hypothesized that the need to scan lists of predicted words or phrases necessitates an undesired shift of gaze, resulting in missed predictions.

Method: We propose new ways of presenting predicted words and phrases on screen and report early results from a multiple single-user study using three different keyboard layouts. The keyboard layout designs were informed by a literature review, an online questionnaire, and focus group activities with participants who use AAC. Three on-screen keyboards were implemented on a touchscreen tablet: a standard keyboard and two layouts that display predictions closer to where the user’s visual attention is already focused. The Standard Layout (SL) displays a row of four predicted words above the keyboard; the ‘Above Typed Layout’ (TL) displays up to four predicted words in a 2 × 2 grid above the typed letter; and the ‘Above Predicted Layout’ (PL) displays up to four predicted words above the next predicted letters. Ten participants with neurological impairments affecting hand function were invited to copy-type a number of short memorable phrases. All participants used single-finger typing apart from one participant, who used eye gaze with mouse pointer control for access. Due to physical limitations we were only able to record the gaze of four participants during the typing exercise using an eye gaze tracker. A semi-structured interview was conducted after the activity. For the four participants with gaze data, we analysed text entry rate, error rate, and personal preference using videos of the typing interaction, screen captures with gaze plots, and verbal feedback.

Key results: Participants expressed different keyboard preferences after the experiment, ranging from a preference for the standard layout (SL) due to its familiarity to a preference for PL due to its perceived accommodation of finger and gaze movement. Entry rate was highest using SL. Participants achieved the highest keystroke savings with PL and significantly lower savings with TL. Participants missed predictions using all layouts. Interview feedback sometimes conflicted with observed entry rates (e.g., preferring a slower keyboard layout while perceiving it to have a faster entry rate).
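
For readers unfamiliar with the keystroke savings metric reported above, it is conventionally the percentage of key presses avoided relative to typing every character; a minimal illustration with invented numbers, not the study’s data:

```python
# Keystroke savings as commonly defined in text-entry research:
# KS% = (characters produced - keystrokes used) / characters produced * 100
def keystroke_savings(chars_produced: int, keystrokes_used: int) -> float:
    return 100.0 * (chars_produced - keystrokes_used) / chars_produced

# e.g. a 20-character phrase entered with 13 key presses thanks to
# word prediction:
print(f"{keystroke_savings(20, 13):.0f}% saved")  # -> 35% saved
```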

Conclusion: Preliminary results suggest that displaying predictions closer to the next letter to be typed may increase the selection of predictions. Although text entry rates were higher using SL, this may change with longer use of PL. Further studies, including the integration of phrase prediction into the PL keyboard and extended use, are being undertaken.

Keywords: AAC, prediction, keystroke savings, communication rate, eye gaze.

Acknowledgments: We thank Zulqarnain Rashid and Shuai Li for their part in coding the prototypes, Chris Norrie for data collection support, TobiiDynavox for supplying eyegaze technology and most importantly the participants of the study. This research is funded by the Engineering and Physical Sciences Research Council, Grant Ref. EP/N014278/1.

Corresponding author. E-mail: r.black@dundee.ac.uk

Communication Partners’ Perspective on the Use of an AAC Application Oriented to Just-in-time Language Acquisition
Tetsuya Hirotomi
Institute of Science and Engineering, Academic Assembly, Shimane University, 1060 Nishikawatsu-cho, Matsue, Shimane 690-8504, Japan

Background: Visual aids, such as pictorial symbols and photographs, are widely used for exchanging messages between children and their communication partners in augmentative and alternative communication (AAC). A variety of AAC apps have been developed, but in many cases the vocabulary must be selected before the words can be presented to children. The selected vocabulary is often insufficient to respond to children’s interests, needs, and actions as they arise during interactions.

Method: We developed a mobile application running on Android, named STalk2, oriented to “just-in-time” language acquisition. In other words, it is designed to increase the use of visual aids in the dynamic process of interaction between children and their communication partners. It can recognize voices and present visual aids stored in a local database and/or retrieved by an image search on the web; it also monitors communication activities. We conducted a longitudinal study (mean = 6 months, SD = 4 months) with 25 adults, including parents, school teachers, and staff of after-school day service centers, and thirteen children with complex communication needs (CCN). The study evaluated the use of STalk2 in everyday communication. At its end, the adults completed a questionnaire consisting of ten statements, originally written in Japanese, the native language of all participants. Each statement used a five-point scale ranging from “strongly disagree” to “strongly agree” together with an open-ended question asking for the reason. One of the statements concerned overall satisfaction, i.e., whether needs were met using STalk2. The Spearman correlation coefficient was used to describe the relationships between overall satisfaction and the responses to the other statements. Statistical significance was set at p < 0.01.

Key results: The top-two-box percentage for overall satisfaction, corresponding to those who agreed with the statement, was 50%. Responses to the following statements had strong relationships with overall satisfaction: A) “I thought the frequency of the child exhibiting problem behavior was reduced because he/she understood messages better” (rs = 0.739), B) “I thought the opportunities to present visual aids with verbal messages to the child were increased” (rs = 0.660), C) “I felt that the burden of communicating with the child was reduced” (rs = 0.576), and D) “I thought the communication on which the child focused his/her attention was increased” (rs = 0.572).
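
The reported rs values are plain Spearman rank correlations, which can be reproduced on five-point-scale responses along these lines (the response vectors below are invented for illustration, not the study’s data):

```python
# Spearman rank correlation between overall satisfaction and one other
# five-point-scale statement (illustrative data only).
from scipy.stats import spearmanr

overall     = [5, 4, 2, 4, 3, 5, 1, 4, 2, 3]  # "my needs were met using STalk2"
statement_a = [5, 5, 2, 3, 3, 4, 1, 4, 1, 3]  # "problem behavior was reduced"

rs, p = spearmanr(overall, statement_a)
print(f"rs = {rs:.3f}, p = {p:.4f}")  # study's significance threshold: p < 0.01
```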

Additional comments made by adults included, “the response time of the child for performing certain tasks was reduced and the child could express their own will,” “the child could accept rescheduling because they could use STalk2 to confirm the information by generated speech to reduce their anxiety,” and “STalk2 enabled me to rapidly search for and present appropriate illustrations.”

Conclusion: The results suggested that STalk2 reduced difficulties for half of the communication partners of children with CCN by enabling visual aids to be presented during conversation. By using STalk2, the partners presented visual aids more frequently, the children with CCN focused on communication and could understand messages better, and, as a result, their problem behavior was reduced.

Keywords: AAC App, Communication Partner, Just-in-time Language Acquisition.

Corresponding author. E-mail: hirotomi@cis.shimane-u.ac.jp

AsTeRICS Grid – a flexible web-based application for Augmentative and Alternative Communication (AAC), environmental and computer control
Benjamin Klaus, Benjamin Aigner and Christoph Veigl
Research Group Embedded Systems, UAS Technikum Wien, Höchstädtplatz 6, 1200 Vienna, Austria

Background: In the field of Augmentative and Alternative Communication (AAC), there are several tools for the creation of layered grids which show words and symbols and provide speech output. While most of these applications are designed for a single platform (e.g., iOS), in recent years cross-platform web-based solutions have also emerged. Beyond AAC functions, environmental or computer control capabilities can be useful for the target audience of such systems, yet all currently available applications are either restricted to a single platform or lack extended capabilities beyond AAC. AsTeRICS Grid is a flexible system for creating AAC grids, environmental control solutions, or alternative human-computer interfaces, combining the advantages of existing web-based and native applications.

Method: The requirements for the communication-related features of AsTeRICS Grid were defined in cooperation with an AAC expert. The application was then developed in multiple iterations (Kanban-based methodology), taking into account feedback from the AAC expert and from students who worked with the application in the course of academic projects. A user study is planned for the near future. From a technical perspective, AsTeRICS Grid is a single-page web application using various JavaScript libraries. Native capabilities such as environmental and computer control can be added using bindings to the existing AsTeRICS framework (https://www.asterics.eu/get-started/Overview.html) as a backend. AsTeRICS Grid is hosted as an open-source project on GitHub (https://github.com/asterics/AsTeRICS-Grid).

Key results: Compared to existing AAC solutions, AsTeRICS Grid provides the following unique characteristics: (1) grid layouts and element sizes are completely flexible; (2) environmental and computer control are possible from within the application; (3) it is a platform-agnostic web application that is usable offline, automatically synchronizes the configuration across devices, and at the same time maintains full privacy. Environmental and computer control capabilities are realized using the AsTeRICS framework as an optional backend. AsTeRICS includes different sensor and actuator plugins which allow a wide range of interaction capabilities targeting the native platform, including emulation of keyboard or mouse functions and control of external devices via HTTP, infrared, and home automation standards. Selecting an element in AsTeRICS Grid can, for example, trigger an action on a remote tablet, switch on lights, or change the TV channel. Although it is a web application, AsTeRICS Grid can also be used without an internet connection: once the website has been visited, the whole app is automatically stored locally using modern browser technologies and is afterwards usable offline. If users want to use AsTeRICS Grid on multiple devices, all configuration data can be automatically synchronized via the cloud, while the application remains fully functional without an internet connection on each device. Data protection is guaranteed by end-to-end encryption of all configuration data.
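
To make the environmental-control path concrete, an element action ultimately amounts to a request such as the one sketched below; the endpoint URL and payload here are hypothetical stand-ins for a local home-automation backend, not the actual AsTeRICS REST interface:

```python
# Illustrative only: the kind of HTTP call a grid element action could
# send to a home-automation backend (hypothetical endpoint and payload).
import urllib.request

def trigger_action(url: str, body: bytes) -> int:
    """POST a command to a device endpoint and return the HTTP status."""
    req = urllib.request.Request(url, data=body, method="POST")
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# e.g. switching on a light through a (hypothetical) local endpoint:
# trigger_action("http://localhost:8080/devices/living-room-light", b"ON")
```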

Conclusion: AsTeRICS Grid is a web-based tool for AAC that also provides environmental and computer control capabilities and is usable across all major platforms. This facilitates new forms of usage that were not possible before: users can easily switch their preferred input device and control external appliances from the AAC application. Future research will focus on user studies with persons who can benefit from these possibilities.

Keywords: AAC, Environmental Control, Web-based, Human-Computer Interface.

Corresponding author. E-mail: klaus@technikum-wien.at

Attitudes and Usage of AAC in Bulgaria: A Survey among Special Education Teachers
Evgeniya Hristova and Maurice Grinberg
Department of Cognitive Science and Psychology, New Bulgarian University, Montevideo 21, Sofia 1618, Bulgaria
ASSIST – Assistive Technologies Foundation, Razvigor 3A, Sofia 1618, Bulgaria

Background: Modern technologies for augmentative and alternative communication (AAC) form the basis of effective intervention for children with complex communication needs: children with physical disabilities, e.g. related to cerebral palsy, Rett syndrome, or neuromuscular dystrophies, but also children with autism spectrum disorders or intellectual disabilities. The knowledge and usage of AAC by the professionals working with these children is extremely important for the development of the children’s full potential and their capabilities for communication and participation in social life. As Bulgaria is an emerging AAC country, this paper studies two main topics: attitudes towards AAC and inclusive education, and the knowledge and usage of low-tech and high-tech AAC among special education teachers in Bulgaria.

Method: We developed a questionnaire consisting of two parts. The first part explores attitudes towards inclusive education and towards using AAC, by providing descriptions of children with two types of disabilities: severe forms of cerebral palsy (CP) and autism spectrum disorder (ASD). The second part explores teachers’ knowledge and usage of low-tech and high-tech AAC in their work with children with disabilities. The questionnaire was filled in by 88 special education teachers from Bulgaria. Data was collected anonymously using paper-and-pencil questionnaires.

Key results: The results show positive attitudes towards the use of AAC by children with disabilities, with more positive attitudes towards low-tech than towards high-tech AAC. Attitudes towards inclusive education are less positive: special education teachers rate specialized schools as more appropriate than inclusive schools for children with severe forms of CP or ASD. The results, however, show very low levels of knowledge and usage of AAC. Forty-six percent of the surveyed special education teachers are not familiar with low-tech AAC methods, and only about 30% use low-tech AAC. For high-tech AAC the results are even more disturbing: more than two thirds of the professionals have no knowledge of text-to-speech software, and less than 10% use it when working with children with special educational needs. Alternative access methods are unknown to 85% of the special education teachers, and only 2% use such assistive technologies with children with disabilities.

Conclusion: The results of the survey show alarmingly low levels of knowledge and usage of AAC among special education teachers in Bulgaria. At the same time, attitudes towards using AAC are mainly positive. The results are used to formulate steps for overcoming the existing barriers to the usage of augmentative and alternative communication in Bulgaria: raising awareness of the benefits of AAC; providing information about low-tech and high-tech AAC and their benefits for children with disabilities; and providing AAC training for special education teachers, both during their formal education and through on-the-job training courses.

Keywords: AAC, attitudes towards AAC, AAC knowledge.

Corresponding author. E-mail: jenihristova@gmail.com

Sign Language Recognition through Machine Learning by a New Linguistic Framework
Tsutomu Kimura(a) and Kazuyuki Kanda(b)
(a) National Institute of Technology, Toyota College, 2-1 Eiseicho, Toyota, Aichi, Japan
(b) National Museum of Ethnology, 10-1 Senri Expo Park, Suita, Osaka, Japan

Background: We have been developing a Japanese Sign Language dictionary system for several years. Because written Japanese is hard to read for many Deaf people, looking up the proper sign for a Japanese entry word in the present system poses a problem: the Japanese entry word itself is a barrier. What is needed is a system that works the other way around: a Deaf user signs in front of a camera, the system recognizes the sign, and it automatically shows candidates for the target sign, as in a thesaurus, or a Japanese translation.

Method: Our system learns signs through deep learning, creating a model that drives the sign recognition system. The data collected covered 101 signs from the Level 6 vocabulary of the Japanese Sign Language Proficiency Test, which is widely accepted in Japan. Twelve deaf and hearing informants repeated the recording experiments several times, yielding a total of 7,763 signing samples. OpenPose, a skeleton model, detects and represents human joints on single images. From its vector output, the trajectories along the X and Y axes were recorded frame by frame and stored in CSV format. The data was then analyzed with Sony’s Neural Network Console (NNC), using a Long Short-Term Memory (LSTM) network. Briefly, the system analyzed the joint vectors extracted from the signing videos by OpenPose through deep learning, and the resulting sign recognition accuracy was about 75%. Ways to raise this rate are discussed.
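
The classification stage of such a pipeline can be sketched as an LSTM over per-frame joint coordinates. The clip length, feature count (25 OpenPose body joints × X and Y), and network sizes below are assumptions for this sketch, not the NNC configuration used in the study:

```python
# Sketch of the recognition stage: an LSTM over OpenPose joint
# trajectories, classifying 101 sign labels (architecture details
# are assumptions, not the study's NNC setup).
import numpy as np
import tensorflow as tf

NUM_SIGNS = 101     # Level 6 JSL Proficiency Test vocabulary
NUM_FRAMES = 60     # assumed clip length
NUM_FEATURES = 50   # assumed: 25 joints x (X, Y) per frame, as in the CSV files

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FRAMES, NUM_FEATURES)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(NUM_SIGNS, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# ~90% of the 7,763 clips for training, the rest for evaluation, as reported
# (random placeholder data here, standing in for the OpenPose CSV files):
x = np.random.rand(7763, NUM_FRAMES, NUM_FEATURES).astype('float32')
y = np.random.randint(0, NUM_SIGNS, size=7763)
split = int(0.9 * len(x))
model.fit(x[:split], y[:split], validation_data=(x[split:], y[split:]), epochs=1)
```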

Key results: Approximately 90% of the 7,763 sign samples were used as training data, and the rest were used to evaluate the neural network. In cross-validation, the average accuracy was about 75%. Analysis of the false recognitions showed that they were caused by similarity between signs, and the most informative joints were identified.

Conclusion: The recognition rate is insufficient for full machine translation but is usable for a dictionary, because our dictionary offers candidate lexemes from which the user makes the final selection. We will expand the sign data in the future and expect a higher rate through deep learning. In our project, we also propose some new sign linguistic constituents, because electronic processing of sign language requires new concepts, in analogy to applying acoustics to phonology. We focused on the movements of signing as physically indicated by the trajectories and speeds of the body’s joints. The elbow and the wrist were found to play the most important role.

Keywords: Deep learning, Machine learning, Sign language, Sign recognition, OpenPose.

Corresponding author. E-mail: kim@toyota-ct.ac.jp

Access to Non-Verbal Aspects of Group Conversations for Blind Persons
Reinhard Koutny and Klaus Miesenberger
Institut Integriert Studieren, Johannes Kepler University Linz, Altenberger Straße 69, 4040 Linz, Austria

Background: Communication between multiple people talking to each other in person, as in face-to-face business meetings, does not consist only of spoken language and therefore requires more than the auditory channel for full participation. Blind people who take part in such meetings rely on the other participants to explicitly speak out any important non-verbal cues, such as deictic gestures, which has proven especially challenging in heated discussions, as people tend to fall back on their usual habit of using non-verbal communication (NVC) to support their arguments. Perceiving NVC is key to following and contributing successfully, particularly in these situations. Nowadays, a rather large set of non-verbal information can be captured by different means, including video-based and body-worn-sensor-based motion capture. However, the bandwidth of information blind people can perceive is limited in comparison to sighted people due to the lack of the visual channel. Therefore, a careful selection of relevant information is necessary to avoid cognitively overloading the blind person.

Method: This submission outlines NVC cues found to be helpful for blind people during face-to-face business meetings. Throughout the research, multiple meetings have been and will be analyzed, and interviews with people from the target group have been and will be undertaken, in order to iteratively develop an ontology of the relevant information space: clustering pieces of information, modelling relations between them, and helping to process this data effectively and efficiently so as to display a personalized stream of information to the blind user.

Key results: Analyses of meetings and interviews with blind people show that NVC is crucial in conversations, especially when more than two people are involved. Outcomes suggest a disparity between the NVC cues blind people consider helpful in a first interview and those they consider helpful after exposure to additional NVC cues. Observations also show that the set of NVC cues perceived as helpful is highly individual, underlining the need for a highly customizable stream of information conveying NVC cues. Creating an ontology that describes the information space is a continuous process. At the time of writing, 15 cues grouped into 5 main clusters of information have been identified, covering verbal communication, non-verbal communication, visual artefacts, digital artefacts, and spatial information about physical objects. Spatial aspects and non-verbal communication in particular, together with the relations between them, such as deictic gestures, show great potential for enhancing a blind person’s overall understanding of group conversations.
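
One possible way to picture the described structure is a map from the five named clusters to cue entries carrying relations; the cue instances below are invented examples, and the modelling is a toy sketch rather than the project’s ontology:

```python
# Toy sketch of cue clusters (cluster names from the abstract; cue
# instances and relations are invented for illustration).
from dataclasses import dataclass, field

@dataclass
class Cue:
    name: str
    related_to: list[str] = field(default_factory=list)  # e.g. gesture -> target

ontology: dict[str, list[Cue]] = {
    "verbal communication":     [Cue("speaker turn")],
    "non-verbal communication": [Cue("deictic gesture", related_to=["whiteboard"])],
    "visual artefacts":         [Cue("whiteboard")],
    "digital artefacts":        [Cue("shared slide")],
    "spatial information":      [Cue("seating position")],
}
```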

Conclusion: This research aims at identifying the information space relevant to blind people in group conversations. Non-verbal communication, spatial information, and other subdomains linked to them show great potential to significantly improve blind people’s overall understanding of group conversations, which is a crucial prerequisite for effective participation. Furthermore, observations and interviews suggest that a highly customizable way of accessing this kind of information is necessary, as needs are very individual and vary decisively from person to person, depending on capabilities and experience with this kind of information.

Keywords: Face-to-face Communication, Non-verbal communication, Ontology, Blind People.

Corresponding author. E-mail: Reinhard.koutny@jku.at

Selecting AAC Apps for Effective Communication in a Mainstream Classroom Setting: A New Framework of Where to Start
James Northridge
Centre for the Integration of Research, Teaching and Learning (CIRTL), University College Cork, College Rd, Cork, Ireland

Background: Innovative technologies such as iPads and other mobile devices are changing well-established areas of experience and practice, including medicine, business, and education. The field of speech-language pathology is no exception, and the accelerated rate of technological advancement is also affecting AAC as an area of study and practice. Current AAC assessment tools are not designed to address criteria for differentiating between AAC apps, leading most families and practitioners to download their selection without any robust guidance or assessment. This study sought to shift the focus from merely technology-orientated solutions to the key goal of enabling communication. It is therefore important to understand how these apps are initially recommended and obtained, and to gauge the impact this has on students and those who support them. The three-year project aimed to empower family members and practitioners to differentiate truly effective AAC apps from the many others that proliferate in today’s consumer-orientated market.

Method used: The project can be divided into two key areas. Key Area 1: an environmental scan of current AAC assessment tools was conducted to (1) evaluate which one(s) should be used to develop a tool for selecting among AAC apps and (2) identify current barriers in app selection. Two online surveys were used to collect data from educators and practitioners (N = 497). The surveys consisted of both qualitative and quantitative questions focused on identifying current usage of AAC, selection criteria, technical support requirements, and perceived benefits and challenges from prior use of AAC. Four focus group sessions were facilitated, two with professionals and two with parents, to gather further information and gain insight into how they currently select AAC apps and the barriers they experience in the selection process. Two sessions took place in the United States (N = 12) and two in Ireland (N = 12), one per group. Key Area 2: a formative evaluation of the results of Key Area 1 was conducted to develop the criteria that guided the design of a new AAC app selection model suited to today’s consumer-orientated market.

Key results: The information gained under Key Areas 1 and 2 was applied to develop a consumer-friendly app selection tool that (a) helps potential users identify potentially effective AAC apps and (b) assists them in acquiring the knowledge needed to overcome or bypass challenges to their effective use. To gain insight into its effectiveness, a participant feedback survey was developed to obtain final feedback and direction for future improvements to this early prototype of the AAC app selection model. The new AAC app selection tool is now available online at www.SelectingApps.com and is free to use.

Conclusion: The development of these new AAC app selection criteria will have fundamental and lasting effects on the lives of individuals with complex communication needs. Furthermore, it will empower AAC users to reach their full potential by maximising their communication ability and participation in day-to-day life.

Keywords: Augmentative and alternative communication, iPad apps, AAC selection, communication apps, participation, inclusion.

Corresponding author. E-mail: james.northridge@ucc.ie