Brandon Muramatsu, Andrew McKinny, and Phillip Long presented at the EdTech Fair at MIT in Cambridge, MA on October 14, 2009. We provided an ongoing demonstration of the SpokenMedia project's automated lecture transcription, search, and playback functions.
Introduction

The SpokenMedia Project is developing a software application suite and web-based service that automatically creates transcripts from academic-style lectures and provides the basis for a rich media notebook for learning. The system takes lecture media in standard digital formats, such as .mp4 and .mp3, and processes it to produce a searchable archive of digital video- and audio-based learning materials. The system allows ad hoc retrieval of the media stream associated with the section of the audio track containing the target words or phrases. During playback, the system presents the transcript of the spoken words synchronized with the speaker's voice, with a cursor that follows along in sync with the lecture audio. The project's goal is to increase the effectiveness of web-based lecture media by improving the search and discoverability of specific, relevant media segments and by enabling users to interact with rich media segments in more educationally relevant ways.
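The search-and-seek behavior described above can be sketched in a few lines. This is a minimal illustration, not SpokenMedia's actual implementation: it assumes a hypothetical word-level time alignment (a `Word` record with start and end offsets in seconds, as an automatic transcriber might emit) and shows how a phrase query could be mapped back to playback positions in the media stream.

```python
from dataclasses import dataclass

@dataclass
class Word:
    """One word of a time-aligned transcript (offsets in seconds)."""
    text: str
    start: float
    end: float

def find_phrase(transcript, phrase):
    """Return (start, end) time spans where the phrase occurs in the transcript.

    A player could seek to each returned start offset, while the
    synchronized-transcript view highlights the matching words.
    """
    target = phrase.lower().split()
    words = [w.text.lower() for w in transcript]
    spans = []
    for i in range(len(words) - len(target) + 1):
        if words[i:i + len(target)] == target:
            spans.append((transcript[i].start,
                          transcript[i + len(target) - 1].end))
    return spans

# Toy transcript standing in for automated transcriber output.
transcript = [
    Word("welcome", 0.0, 0.4),
    Word("to", 0.4, 0.5),
    Word("linear", 0.5, 1.0),
    Word("algebra", 1.0, 1.6),
]

print(find_phrase(transcript, "linear algebra"))  # [(0.5, 1.6)]
```

In a real system the alignment would come from the speech recognizer rather than being hand-written, and the search would run over an indexed archive of many lectures, but the core idea is the same: the time alignment is what turns a text match into a seekable media segment.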