The Jupiter Speech System Makes Linux Accessible To The Blind

The Jupiter speech system captures console output in a large circular buffer and allows the blind user to read the accumulated text through a voice synthesizer. Several features separate this adaptive software from the many excellent packages on the Blinux archive (http://leb.net/pub/blinux).

1. Jupiter does not lock the blind user into a screen-oriented paradigm. It does have a screen review mode, which reads directly from screen memory, but it can also run in line mode, which captures the accumulated output of prior commands exactly as it was generated. Most adaptive systems restrict the user to the text that is currently on the screen. This makes sense for screen-oriented applications such as emacs or lynx. However, other applications, such as the standard Unix shell, present a command-response interface. Under these circumstances, an ongoing log of the interactive session is more appropriate. The user can review each line of text exactly as it was typed (input) or generated (output). Jupiter lets the user switch freely between line mode and screen mode.

2. Jupiter generates audio feedback through the PC's built-in speaker as text is displayed on the screen. The user quickly learns to recognize familiar messages simply by their sounds, so there is no need to read them, which saves quite a bit of time. The user can also issue several commands and passively monitor the computer's response while reading through the output of an earlier command. A simple click indicates a "$" prompt: all is well, so type the next command while your synthesizer continues to read an earlier message. This audio feedback isn't much help in screen mode; in fact, you might tire of the clicks and chirps as the screen is constantly refreshed. However, it is a simple matter to turn it off when running in screen mode and re-enable it for line mode. Those wishing to incorporate console audio feedback into their own adaptive software should visit my clicktty directory (http://eklhad.hispeed.com/linux/clicktty).

3. Jupiter allows a great deal of customization, well beyond simple key binding. The atomic speech commands are quite simple: start of line, start of word, back one character, read character, read word, start continuous reading, and so on. The user can string these together to make a composite command and bind the composite to a key. For instance, ^N eline for eline for read causes control-N to advance to the end of the line, move forward one space (onto the next line), advance to the end of that line, move forward one more space, and start reading. In other words, ^N skips down two lines and begins reading. In screen mode, composite commands might locate the visual cursor and read the character, word, or line it indicates. In a more contrived, yet not implausible, example, suppose you often read the second-to-last column of a spreadsheet and would like a single keystroke that does this. eline lspc back sword lspc back word might work, if the entries in your spreadsheet are simple enough. Other composites let you move up and down this all-important column, or switch between this column and the first column (the customer's name). You can usually create whatever commands you need in your work and bind them to single keystrokes.

4. Jupiter allows speech functions to be activated from the keyboard or by escape sequences produced by a running program. This can be used in several ways.

4A. Simple commands such as /bin/ls can be wrapped or rewritten to "read" their output as soon as it is generated. First issue the escape sequence that places the reading cursor at the end of the buffer, run the Linux command, then initiate reading with another escape sequence. This reads the output of the command without any additional keystrokes.
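For concreteness, here is a minimal shell sketch of such a wrapper for ls. The two variables are placeholders, not the real codes: JUPITER_CURSOR_END stands for the escape sequence that moves the reading cursor to the end of the buffer, and JUPITER_READ for the sequence that starts continuous reading. Take the actual byte sequences from the Jupiter documentation.

    #!/bin/sh
    # Wrapper around /bin/ls that speaks the listing as soon as it appears.
    # The two sequences below are placeholders; substitute the real
    # Jupiter escape sequences here.
    JUPITER_CURSOR_END='...'   # move the reading cursor to the end of the buffer
    JUPITER_READ='...'         # start continuous reading

    printf '%s' "$JUPITER_CURSOR_END"   # mark our place before any output appears
    /bin/ls "$@"                        # run the real command
    printf '%s' "$JUPITER_READ"         # read everything the command just printed

Saved under a name of your choosing somewhere in your PATH, this gives a single command that lists a directory and immediately reads the listing aloud.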
4B. A background job, running in another virtual terminal, can literally tell you when it is done, or give periodic updates on its progress. This will interrupt any reading you are doing in the foreground session, but that's probably what you want. If self-speaking applications are well designed, we almost have a true windowing environment for the blind. A minimal sketch of such a self-announcing job appears at the end of this article.

4C. One could imagine a comprehensive tutorial that reads itself. After all, how can the blind user learn to run a speech package if he cannot use the speech package to read the user's manual? A good tutorial would be fully interactive, teaching the user how to run the various speech functions, build composite commands, bind them to keys, respell words for improved pronunciation, and so on. An application running under Jupiter could do this, and much more. However, the tutorial that I envision is probably larger than the Jupiter speech system itself, and I don't have time to write it at present.

Jupiter is available from the following web site: http://eklhad.hispeed.com/linux/jupiter

You can contact the author, Karl Dahlke, by email at mail@eklhad.hispeed.com, or by phone at (USA) 248-524-1004 during regular business hours.
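Here is the sketch promised in 4B: a hypothetical self-announcing job, using the same placeholder variables for the Jupiter escape sequences described in 4A. The job itself (make, logging to build.log) is only an example.

    #!/bin/sh
    # Run a long job, then announce its completion through Jupiter.
    # The two sequences below are placeholders; substitute the real
    # Jupiter escape sequences here.
    JUPITER_CURSOR_END='...'   # move the reading cursor to the end of the buffer
    JUPITER_READ='...'         # start continuous reading

    make > build.log 2>&1               # the long-running job; just an example
    printf '%s' "$JUPITER_CURSOR_END"   # mark our place before the announcement
    echo "The build is finished."       # the message to be spoken
    printf '%s' "$JUPITER_READ"         # Jupiter reads the line above

Run in another virtual terminal, this interrupts whatever you are reading in the foreground session just long enough to tell you the build is done.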