Did you know that most states and territories have a program supporting the reuse of assistive technology and/or durable medical equipment? During 2018, over 68,500 devices were refurbished across the nation, many diverted from the scrap metal dumpsters of solid waste stations. These devices saved individuals and families more than $26,500,000! If you have durable medical equipment you or a loved one is no longer using, consider donating to your state reuse program before heading to the dump.
Reuse programs make it possible for seniors, adults, and children with disabilities to maintain their independence without having to wait for insurance approvals.
Reuse programs are also solutions for acquiring devices insurance won’t cover, including backup equipment.
They fill temporary needs while families and individuals with disabilities wait for new equipment to be approved, during the repair of a device, or during displacement following a disaster.
Across the country, power wheelchairs, rollators, standers, pediatric equipment, shower chairs and more are helping community members stay safe at home and active in their communities, often at no cost, with assistance from their state reuse programs.
Jule Ann Lieberman, MS CLVT/CATIS ATP, reviews a new app-based service for individuals with visual impairment. Lieberman is an Assistive Technology (AT) Specialist with TechOWL, Pennsylvania’s State AT Program.
Aira is a service that couples a mobile app with real-time sighted assistance from a live remote agent. I first discovered Aira at the California State University at Northridge (CSUN) Conference on Technology and Disability in 2016 while exploring the exhibit hall. At first, I was hesitant and not convinced this service would add value to my daily routines. I have an arsenal of tech tools on my smartphone, and I needed convincing that I could do more with Aira to justify the expense.
Then in 2017, I decided to sign up for an introductory plan. My incentive was a free access opportunity for an American Council of the Blind Legislative Conference. This offer meant I could try Aira without deducting from my allotment of purchased Aira minutes. To my surprise, I immediately found many tasks to accomplish with Aira and I have been building on them ever since.
Aira vs BeMyEyes
I have used various AT solutions for identification/reading tasks over the years, including video magnification and text-to-speech. However, as my visual acuity has decreased, these solutions have become less productive for me. Apps on my smartphone give me access to text-to-speech solutions and sighted remote agents, but they require extensive hardware navigation. They also sometimes put me at the mercy of a volunteer’s time.
For example, BeMyEyes is an excellent free app-based service. However, I may experience a delay, or no volunteers may be available at the time I need sighted assistance. Also, the volunteers are not trained in assisting a person with vision loss, nor have they been screened or bonded. This makes the service inadvisable for some tasks because of privacy and security concerns.
In comparison, Aira agents are friendly, trained professionals on call 24 hours a day, 7 days a week. Even when call loads are heavy, I have not waited more than a minute or two to reach an agent. Aira agents also have access to my profile of user preferences. These include how I would like directions and information provided to me (for example, clock positions vs. cardinal directions, and a whole menu vs. headings with details upon request).
Optional Aira Hardware
I have the Aira app installed on my iPhone; however, I prefer Aira’s Horizon system, which consists of an Aira-dedicated smartphone tethered to a pair of glasses equipped with a camera. The Horizon system makes connecting convenient. It also provides a more natural video experience. A single button calls up the phone’s digital assistant, which is used to reach an Aira agent. The camera in the glasses is positioned above my nose. (I can also just use the phone’s camera to access information with an agent.)
Object Identification and Reading
I break down tasks into two main categories: 1) object identification/reading and 2) orientation/mobility. Examples of object identification/reading tasks that I accomplish with Aira include:
Identify and read the thermostat in a hotel room (and then change the temperature).
Read a menu at a coffee shop.
Identify the newer prescription of a medication for a refill.
Of course, I could call housekeeping to come and adjust my room’s temperature, but I’d likely face a long wait. I could try using an app such as SeeingAI to locate and read the fine print on my prescription bottle, but rounded surfaces are a challenge for optical character recognition (OCR). I could also use an app to read the menu, but I’d hear every menu item as well as OCR errors. These solutions consume far more time and energy than simply calling an Aira agent for assistance.
Orientation and Mobility
I have experience with multiple GPS apps and, in general, I find they can be exhausting. Often, they provide too much information. Seeing Eye GPS or Nearby Explorer report points of interest, elevation, latitude/longitude, and more. This level of information can be distracting and toggling them off in preferences is difficult to do on the fly. Another problem is simply feeling at the mercy of a map’s accuracy.
Aira is a smoother solution. Once I was heading to a meeting in West Philadelphia. I had been to this location previously and followed a familiar route, counting street crossings and identifying my surroundings by sound and large objects. This works well for me unless the environment has changed or I am distracted and miscount my street crossings. On this occasion I miscounted. The environment became unfamiliar, and rather than navigating to and through a GPS app, I pulled out Aira and pushed the button to call an agent. The agent located me on the phone’s GPS, described my location on a map, and provided my compass direction. This allowed me to turn back on course. The agent stayed with me and, using the camera, suggested I turn onto a path that was visible to her and led to the building where my meeting was held. She described each turn of a winding path, and as I approached the door she let me know the number of stairs, the rail location, and which way the door opened. This was a better experience than the trial-and-error method of approaching buildings, where I rely on strangers to confirm a building and locate the door. Aira agents save me time and add a level of safety to my independent travel.
Aira is equally effective for interior wayfinding. This year Aira again offered the CSUN conference as a free access site. With the assistance of Aira agents, I was able to travel through the hotel and locate the exhibit hall and session rooms. The hotel map and exhibit hall map were available to the agents online and using my camera they could identify my location in the building and guide me turn by turn to my destination. At present, there is no interior wayfinding solution that competes. Most others rely on a physical beacon or other technology to communicate with an app or smartphone and very few institutions have made this investment.
Prior to Aira, I was a reluctant subscriber to Uber and Lyft. I feared I would not be able to recognize the driver and car and would make mistakes. Now that I have Aira, I take many more trips independently. No more bugging family or friends to take me to an appointment with my doctor or hair salon.
Aira has agreements with both Uber and Lyft: once you have created an account, an Aira agent can request rides on your behalf. The Aira agent looks up which service is quicker and the expected price. I choose, and the agent places the request. Since I have my home address already stored, often I just give the address of my destination. For my return, the Aira agent can once again locate me via the phone’s GPS.
I have the Aira agent stay on the phone until the driver arrives. The agent tells me where the car is located (clock position directions) and the driver’s name. On a recent trip, I encountered a driver who refused to allow me to enter the vehicle with my guide dog. I explained that he was violating company policy and the law. He simply drove away.
The Aira agent witnessed the event through my phone’s camera and heard the exchange between me and the driver. The agent offered to request another ride, and while we waited, he filed an online complaint with the ride service. Later that evening I received a call from the provider, who promised to investigate the incident. The Aira agent was my witness and could provide an immediate report when I was not in a position to do so from the street. The experience was jarring, but this time I had company and assistance.
Aira has various pricing plans based on minutes of usage. The basic plan is 30 minutes for under $29.95 (and does not include the Horizon equipment). This would be a great plan for a beginning user. In addition to the 30 minutes, users can call agents at no charge at free access sites such as Walgreens or Wegmans. My typical calls last three minutes or less; calls run longer when I am waiting for a ride service or navigating complex interior spaces.
Aira has added value to my daily routines. I find I never leave home without my Aira Horizon phone and glasses. Simply put, I have a trained, patient, sighted friend available to me at all times.
Welcome to part three in our series on easy approaches to creating accessible videos. Part one discussed the iOS app, Clips, for its live captioning feature. Part two discussed YouDescribe, the online audio description tool. And now part three builds on these lessons and demonstrates InstaMorph, a moldable plastic your AT3 Center News and Tips editor has always wanted to play with.
I trust that got your attention. No, we cannot make an accessible video with moldable plastic.
However, the video embedded below, “Demystifying InstaMorph: An Accessible Video,” was inspired by Eileen Belton of the Missouri AT Program (MoAT). Belton taught herself to create accessible videos using an older model iPad and an outdated copy of iMovie. (See “AT Users Make the Case for Web Accessibility”.)
Since iMovie resides neglected on my first generation iPad Mini, I thought I’d give it a try and make an InstaMorph pen grip in the process. My motivation was to use iMovie as Belton had, to ensure my video could be understood by individuals with visual impairments.
For MoAT’s video series, Belton used the iMovie feature that allows adding a second audio track. She used this feature to voice over her videos’ title slides. This was my plan too.
Clips App for Video Editing
I recorded my video using the Clips app on my phone (discussed in part one). I used Clips because I’d just learned to use it and knew I could easily record and stitch together multiple video segments, add title slides (“posters”), and rearrange them by selecting and dragging them around with my finger on my video project timeline. This was reasonably intuitive and quite fun.
YouTube Captions for Accessibility
I chose to forgo the Clips captioning feature, however, because I planned to use YouTube’s captioning tools. As mentioned at the conclusion of part two, YouTube’s closed captions are adjustable via YouTube preferences. This is an accessibility advantage for persons with certain visual impairments who are also deaf/hard of hearing and rely on captioning. Below is a screenshot of the subtitles options menu. Font, color, opacity and size, background color and opacity, window color and opacity, and character edge style may all be customized and retained as the viewer’s default.
Captioning at YouTube will be my last step, however.
Adding Audio Description with iMovie
Neither YouTube nor Clips can be used to add a second audio track for voice-over or audio description, only music. This is why it’s helpful to consider that copy of iMovie you may have downloaded at one time for your kid to play with (as in my case). I had not tried it before now.
Alternatively, audio description may be added using YouDescribe, as discussed in part two of this series. The advantage of using iMovie is that once complete, your fully accessible video can reside at YouTube, in your YouTube channel. The disadvantage is that iMovie cannot add extended audio tracks for long descriptions, only inline tracks (learn the difference in part two).
What I’ve learned, however, is that best practice is to script my demonstration video as descriptively as possible from the start and rely less on retrofitting audio description. This is a universal design approach and in this way, I avoid the need for extended audio tracks. I did, however, add two instances of inline audio description where I needed them, in addition to voice-over for my title slides. (See 2:27 and 4:08 below).
Adding a second audio track was easy with iMovie, and I found there was little risk of messing up the project. I selected where on the video timeline I wanted to record audio, tapped the microphone icon, tapped record, and a visual and auditory three-second countdown alerted me to exactly when I should commence speaking.
I ended my recordings with another tap and found I had the opportunity to review how I sounded before discarding or saving the track. I did try to vary my vocal style for these tracks so as to contrast with the video’s primary audio.
Once complete, I uploaded my described video to YouTube for auto-captions and captions editing. The wonderful thing about YouTube’s captions editor is that the auto-captions often need just a few corrections. For example, in a recent post I uploaded a YouTube video I had made of a man discussing pressure sores. In one location “pressure” had been captioned “precious.” This was easily corrected at the captions editor (learn more about editing YouTube captions).
To recap, here are the steps I followed to make “Demystifying InstaMorph: An Accessible Video”:
Planned my video mindful of description.
Filmed in segments using the Clips app on my iPhone (which has a superior camera compared with my outdated iPad Mini). Inserted title slides between segments using the “posters” feature.
Saved my Clips project and uploaded it to my iPad for use with iMovie (accomplished via Dropbox, though AirDrop might have worked; it’s an old iPad, however).
Opened my video in iMovie and added a second audio track for voice-over and audio description.
Saved my iMovie project and uploaded directly to YouTube.
Opened in YouTube to edit auto-captions for accuracy.
Embedded the video here for sharing on the blog.
Of course, I could have created my entire video in iMovie to begin with, but Clips looked easier and I liked that I had a title slide option. iMovie, however, allows for overlaying titles and provides different styles for transitions between segments, which Clips does not. I recommend exploring the features of each and considering the pros and cons of different methods of rendering a project accessible. Certainly, it’s an advantage to complete the project using one app and iMovie has a nice interface for the iPhone. Next time!
Reminder: the AT3 Center and the Administration on Community Living (ACL) make no endorsement, representation, or warranty expressed or implied for any product, device, or information set forth in this newsletter. AT3 Center and ACL have not examined, reviewed, or tested any product or device referred to in this newsletter.