Thursday, May 23, 2013

Born Digital, Born Accessible Learning Sprint - Day 1 (Toolbox)

This week a diverse group of educators, technologists, and accessibility specialists (30 of us!) gathered to envision and prototype end-to-end solutions for born accessible eBooks: from creating, to discovering, to learning from accessible, rich, interactive eBooks. We were there to learn from each other and sprint together to build prototypes while strengthening the collaborative possibilities between the groups.

For any readers unfamiliar with the term 'accessible', it means making material usable by as many people as possible, especially people with disabilities or special needs (see also the Wikipedia article on accessibility). Not only is accessibility critical in education to give every learner the ability to reach their potential, but the benefits of accessible content often extend to all learners, in the same way that curb cuts made roller bags possible. Here are some examples.
  • If someone is blind or has low vision, they are likely to use a screen reader. It is important that all controls are operable via the keyboard and that the structure of a document is easy to navigate. Videos important for learning need an audio description to convey important information from the visual field (which is different from captioning). Images need descriptions if they matter for the learning. All of this extra information benefits all learners because the text descriptions are searchable, which makes resources easier to find.
  • If someone has a very limited range of motion, then controls must be usable via a switch interface or voice commands. All learners benefit because the same hooks can be used as shortcuts and automations.
  • If someone is deaf or hearing impaired, audio content needs transcripts and videos need captions. Simulations need to make sure that information conveyed through sound is available in another way. In addition to added searchability, anyone in a noisy environment will benefit from these features.  
  • If someone has a reading or learning disability, assistive technologies can read text aloud and highlight it as it is read, but not if the text is embedded in images. That might seem rare, but mathematics is often presented only as an image. Although still in the research stage, mathematics that is text (rather than an image) will one day also be explorable: each part or term can be queried, annotated, and manipulated, benefiting all learners. A small markup sketch after this list shows what some of these hooks look like in practice.
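
To make these ideas concrete, here is a minimal sketch of what some of these hooks look like in ordinary HTML and MathML. The element names are standard, but the example content (file names, captions, numbers) is invented for illustration.

    <!-- Image with a text alternative that screen readers announce and search engines can index -->
    <img src="cell-diagram.png"
         alt="Diagram of a plant cell with the nucleus, chloroplasts, and cell wall labeled" />

    <!-- Data table with an explicit header row so screen readers can announce column names -->
    <table>
      <thead>
        <tr><th>Force (N)</th><th>Acceleration (m/s²)</th></tr>
      </thead>
      <tbody>
        <tr><td>10</td><td>2</td></tr>
      </tbody>
    </table>

    <!-- Math written as MathML text (x squared) rather than a picture, so it can be read aloud and searched -->
    <math xmlns="http://www.w3.org/1998/Math/MathML">
      <msup><mi>x</mi><mn>2</mn></msup>
    </math>

    <!-- Video with a caption track; an audio-description track would use kind="descriptions" -->
    <video controls>
      <source src="forces.mp4" type="video/mp4" />
      <track kind="captions" src="forces-captions.vtt" srclang="en" label="English captions" />
    </video>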
OERPUB (my organization, funded by the Shuttleworth Foundation), Benetech, and the Bill and Melinda Gates Foundation each helped to bring the sprint to fruition.


[Photo: participants watching demos]

The sprint was two and a half days long, and it is going to take a few posts to get all the information out, but I definitely want to share all of it because the sprint was incredibly informative and productive. The first morning, each of the groups showed off relevant tools, technologies, and processes. We wanted to know who was looking for help making their sites and teaching resources accessible, and who was bringing tools to make content more accessible. In a sense, this part of the meeting was about showing what we already have in the toolbox for accessibility.

Demos:
  • Accessible Authoring: OERPUB editor design: To break the ice, I showed features of the OERPUB editor designed to help authors create accessible content. We mark images that need descriptions and say 'thanks' when they are added. We create tables with a header row by default, and math is written in a format (MathML) that screen readers can read. I asked for help doing even more, especially with training authors while they are creating and with finding and including accessible videos and simulations. You can see what we have released so far at remix.oerpub.org, which also includes importers for Word, OpenOffice, LaTeX, Google Docs, and web pages.
  • Accessible Videos: YouDescribe: Owen Edwards from Smith Kettlewell showed YouDescribe, an experimental platform for crowdsourcing extended video descriptions. It is analogous to the Amara platform for crowdsourcing closed captions. Viewers pause videos and then record a narration of what they are seeing. Parents and relatives often do this already if someone in their family needs it, describing exactly what they know their relative needs to hear about the video. YouDescribe would be a way to make that work benefit many more people.
  • Accessible Simulations: Ariel Paul of PhET (simulations for math, chemistry, and physics that make the invisible, like electrons, visible) is creating HTML5 versions of their simulations and taking the opportunity to make them accessible to more learners. Ariel demoed an alpha version of an HTML5 tug-of-war simulation that shows the basics of forces and motion. The new simulation can be operated via keyboard, switch devices, or voice activation. The team is using the rewrite to HTML5 as a chance to really think through accessibility, and Ariel came to learn as much as possible from the accessibility experts at the sprint.
  • Learner Controls and Accessible Video: Yura Zenevich and Joanna Vass of the Inclusive Design Research Center demonstrated Learner Options (example: show display preferences), Speak.js, and an accessible video player. Learner Options is a JavaScript library that gives learners a set of controls to adjust text size, button and link size, spacing, font, contrast, text-to-speech, navigation, and layout. The video player has controls that are all keyboard operable, and it pulls in any corresponding captions it finds on Amara (the caption crowdsourcing platform).
  • Accessible Annotations: Jake Hartnell demonstrated Hypothes.is, a distributed, open-source platform for annotating the web. He asked for help in making annotations accessible -- both discovering annotations and creating them. Beyond making annotations themselves accessible, annotations are also a potentially powerful tool for accessible learning. Bookshare (an accessible online library) regularly receives requests for some way to take notes within books, and learners using accessible books need accessible ways to track their learning. Additionally, annotations might provide a way to request and receive help making resources useful to more learners: for instance, an annotation on an image with no description could supply one.
  • EBook Authoring: Phil Schatz of Connexions demonstrated github-book (code): an authoring system for books that uses the OERPUB editor for each chapter and automatically produces an EPUB ebook as a result. Versions of the book are all stored on GitHub, and people can easily make their own copy of a book and adapt it.
  • Accessible Math and Chemistry: Volker Sorge of the University of Birmingham and Google demonstrated three tools: ChromeVox, which can read mathematics on the web; a system that analyzes an image of a molecule and outputs three structured text alternates for it; and Maxtract, which converts PDFs produced with OCR (optical character recognition) to LaTeX or HTML.
  • Accessible eBook Reading, Image Captioning, and Text-to-Speech with Highlighting: Gerardo Capiel from Benetech demonstrated an accessible version of the Readium EPUB3 reader, POET (an image description tool), BeneSpeak, and accessibility metadata (a11ymetadata) for Schema.org. The Readium version uses special tags so you can navigate using Safari, IE 9 and 10, Firefox, and Chrome. With POET, an entire book in DAISY format is uploaded and then all of its images can be described or marked as decorative; soon POET will support books in EPUB3 format. BeneSpeak does word-level highlighting in conjunction with Chrome's text-to-speech API; the highlighting helps readers with learning disabilities like dyslexia, or readers learning a second language, follow along and comprehend. The accessibility metadata Capiel showed has been proposed to Schema.org, based on the a11ymetadata project, and will make it easier to find accessible education resources. For example, you can indicate that a resource can be used via keyboard only or via mouse only, or that a resource has described images, transcripts for video, and so on. A rough markup sketch follows this list.
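
To give a flavor of how that metadata might look, here is a rough sketch of a book description marked up with the proposed properties as Schema.org microdata. The property names follow the a11ymetadata proposal; the book itself and the specific values chosen are invented for illustration, and the final vocabulary could differ.

    <!-- Hypothetical book record using the proposed accessibility properties -->
    <div itemscope itemtype="http://schema.org/Book">
      <meta itemprop="name" content="Introductory Physics" />
      <!-- How the resource can be operated -->
      <meta itemprop="accessibilityControl" content="fullKeyboardControl" />
      <meta itemprop="accessibilityControl" content="fullMouseControl" />
      <!-- Accessible features it provides -->
      <meta itemprop="accessibilityFeature" content="alternativeText" />
      <meta itemprop="accessibilityFeature" content="transcript" />
      <meta itemprop="accessibilityFeature" content="captions" />
      <meta itemprop="accessibilityFeature" content="MathML" />
      <!-- Known hazards, if any -->
      <meta itemprop="accessibilityHazard" content="noFlashingHazard" />
    </div>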

