
Homework 1 – xiongchu

September 30, 2011. Filed under cs491 mobile, fall 2011, postmortems.

My flashmob application asks the user a set of questions and stores their answers. I implemented a back button, which allows the user to return to previous questions and edit their input, and a save button that stores the input for the current question. I also created a menu with items to clear all inputs, summarize the user's inputs, and speak the inputs back. Finally, I implemented gesture recognition for question navigation.

The back and save button implementation was trivial. I created two arrays, one for the questions and one for the inputs, and I keep an index of the current question to display. I also use the index to keep track of where to store the input.
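In rough outline, inside the activity it looks something like this (the view ids and method names here are illustrative, not my exact code):

```java
// Parallel arrays plus an index into them (names are placeholders).
private String[] questions = {
  "What is your name?",
  "What is your quest?",
  "What is your favorite color?"
};
private String[] inputs = new String[questions.length];
private int current = 0;  // which question is currently displayed

// Save button handler: store the text typed for the current question.
public void onSave(View view) {
  EditText answerBox = (EditText) findViewById(R.id.answer);
  inputs[current] = answerBox.getText().toString();
}

// Back button handler: step to the previous question, if there is one.
public void onBack(View view) {
  if (current > 0) {
    --current;
    showQuestion();
  }
}

// Show the current question and restore any answer already saved for it.
private void showQuestion() {
  TextView questionLabel = (TextView) findViewById(R.id.question);
  questionLabel.setText(questions[current]);
  EditText answerBox = (EditText) findViewById(R.id.answer);
  answerBox.setText(inputs[current] == null ? "" : inputs[current]);
}
```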

The clear-inputs menu item goes through the array of inputs and clears them. The summary menu item concatenates all the inputs and displays them in an alert dialog; I used the same code we implemented together in class.
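A sketch of both menu actions, reusing the arrays from above (the helper names are mine):

```java
// Wipe every stored input and refresh the screen so the answer box empties too.
private void clearInputs() {
  for (int i = 0; i < inputs.length; ++i) {
    inputs[i] = null;
  }
  showQuestion();
}

// Concatenate all inputs and show them in an alert dialog.
private void showSummary() {
  StringBuilder summary = new StringBuilder();
  for (String input : inputs) {
    if (input != null) {
      summary.append(input).append('\n');
    }
  }
  new AlertDialog.Builder(this)
    .setTitle("Summary")
    .setMessage(summary.toString())
    .setPositiveButton("OK", null)
    .show();
}
```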

For the speak menu item, I did some research. I used an android.speech.tts.TextToSpeech instance. You first have to implement TextToSpeech.OnInitListener and add its abstract onInit method:
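Something along these lines; the status check and the Locale.US choice are the standard pattern rather than anything specific to my app:

```java
// The activity implements TextToSpeech.OnInitListener, and the field is
// created with: tts = new TextToSpeech(this, this);
private TextToSpeech tts;

@Override
public void onInit(int status) {
  if (status == TextToSpeech.SUCCESS) {
    // The engine is ready; pick a language before speaking anything.
    tts.setLanguage(Locale.US);
  }
}
```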

The method basically initializes the TextToSpeech instance. When you want to speak a string, you call the speak method with the message to speak:
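For example, assuming the instance is stored in a field named tts and the text lives in a String named message:

```java
// QUEUE_FLUSH interrupts anything already being spoken; the params map is optional.
tts.speak(message, TextToSpeech.QUEUE_FLUSH, null);
```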

One more important thing: you need to release the TextToSpeech instance when you are done with it, typically when the activity is destroyed:
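For instance, in the activity's onDestroy:

```java
@Override
protected void onDestroy() {
  // Stop any speech in progress and free the engine's resources.
  tts.stop();
  tts.shutdown();
  super.onDestroy();
}
```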

My last feature is the gesture recognition. You first create legal gestures using the emulator's Gestures Builder app, which saves a raw file on the emulator's virtual SD card. Copy this file into your raw resource folder. Next, you add a GestureOverlayView to your layout; this is the view that captures the user's gestures.
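Roughly, the wiring in onCreate looks like this (R.id.gestures and R.raw.gestures stand in for whatever your layout id and raw resource are actually named):

```java
private GestureLibrary gestureLibrary;

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.main);

  // Load the legal gestures that Gestures Builder exported to the raw resource.
  gestureLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures);
  gestureLibrary.load();

  // The overlay view in the layout captures the user's strokes; the activity
  // implements GestureOverlayView.OnGesturePerformedListener.
  GestureOverlayView overlay = (GestureOverlayView) findViewById(R.id.gestures);
  overlay.addOnGesturePerformedListener(this);
}
```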

How this works is that you create a gesture library holding all your legal gestures by initializing it with the raw gesture file. When the user executes a gesture, the onGesturePerformed method is called; it is the method of the GestureOverlayView.OnGesturePerformedListener interface. You then pass the user's gesture to your gesture library instance to match it against your list of legal gestures. This returns an array of predictions of the possible legal gestures. Each prediction has a score that represents how closely the user's gesture matches one of your legal gestures. I used this value to navigate between questions.
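Here is a sketch of that listener; the gesture names ("next", "previous"), the score threshold, and the navigation helpers are placeholders for whatever your app defines:

```java
@Override
public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
  // Ask the library how well the stroke matches each legal gesture.
  ArrayList<Prediction> predictions = gestureLibrary.recognize(gesture);
  if (!predictions.isEmpty()) {
    Prediction best = predictions.get(0);
    // Only act on reasonably confident matches; 1.0 is a rough guess.
    if (best.score > 1.0) {
      if (best.name.equals("next")) {
        nextQuestion();
      } else if (best.name.equals("previous")) {
        previousQuestion();
      }
    }
  }
}
```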