The OpenCourseWare Consortium has posted the video of our talk during OCWC Global 2010 in Hanoi, Vietnam.
Brandon Muramatsu presented on SpokenMedia at the OER10 Conference in Cambridge, UK on March 23, 2010.
Brandon Muramatsu, Andrew McKinney and Peter Wilkins presented on SpokenMedia at the NERCOMP 2010 Conference in Providence, Rhode Island on March 9, 2010.
During our trip to India in early January 2010, Brandon Muramatsu, Andrew McKinney and Vijay Kumar met with Prof. Mangala Sunder and the Indian National Programme on Technology Enhanced Learning (NPTEL) team at the Indian Institute of Technology-Madras.
The SpokenMedia project and NPTEL are in discussions to bring the automated lecture transcription process under development at MIT to NPTEL to:
- Radically reduce transcription and captioning time (from 26 hours to as little as 2 hours).
- Improve initial transcription accuracy via a research and development program.
- Improve search and discovery of lecture video via transcripts.
- Improve accessibility of video lectures for the diverse background of learners in India, and worldwide, via captioned video.
Brandon Muramatsu and Andrew McKinney presented on SpokenMedia at the Indian Institute for Human Settlements (IIHS) Curriculum Conference in Bangalore, India on January 5, 2010.
Brandon Muramatsu, Andrew McKinney and Phillip Long presented at the EdTech Fair at MIT in Cambridge, MA on October 14, 2009. We provided an ongoing demonstration of the automated lecture transcription, search, and playback functions of the SpokenMedia project.
Introduction

The SpokenMedia Project is developing a suite of software applications and web-based services that automatically create transcripts of academic-style lectures and provide the basis for a rich media notebook for learning. The system takes lecture media in standard digital formats, such as .mp4 and .mp3, and processes it to produce a searchable archive of digital video- and audio-based learning materials. It supports ad hoc retrieval of the media segment whose audio track contains the target words or phrases. During playback, the system presents a transcript of the spoken words synchronized with the speaker's voice, marked by a cursor that follows along with the lecture audio. The project's goal is to increase the effectiveness of web-based lecture media by improving the search and discoverability of specific, relevant media segments and by enabling users to interact with rich media segments in more educationally relevant ways.
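To make the search-and-seek idea concrete, here is a minimal sketch (not the project's actual code) of how a time-aligned transcript can drive retrieval: each word is paired with its start time, and a phrase search returns the offsets a player could seek to. The data structure and function names are illustrative assumptions, not part of SpokenMedia's API.

```python
# Illustrative sketch: phrase search over a word-aligned lecture transcript.
# Assumes the transcript is a list of (word, start_time_seconds) pairs, a
# hypothetical format standing in for the project's real transcript data.

from typing import List, Tuple

def find_phrase(transcript: List[Tuple[str, float]], phrase: str) -> List[float]:
    """Return the start times (in seconds) where `phrase` begins."""
    words = phrase.lower().split()
    tokens = [w.lower() for w, _ in transcript]
    hits = []
    for i in range(len(tokens) - len(words) + 1):
        # A match means the player can seek to transcript[i]'s timestamp.
        if tokens[i:i + len(words)] == words:
            hits.append(transcript[i][1])
    return hits

# A tiny example transcript: (word, start time in seconds)
transcript = [
    ("today", 0.0), ("we", 0.4), ("discuss", 0.6),
    ("signal", 1.2), ("processing", 1.7), ("and", 2.3),
    ("signal", 2.5), ("flow", 3.0),
]
print(find_phrase(transcript, "signal processing"))  # → [1.2]
```

In a real deployment the timestamps would come from the speech recognizer's forced alignment, and the returned offsets would be handed to the media player to cue playback at the matching segment.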
Brandon Muramatsu presented on SpokenMedia at the Open Education 2009 Conference in August 2009 in Vancouver, British Columbia, Canada.
In the uStream video below, the SpokenMedia presentation starts at about 19:30. The first part of the recording is Mara Hancock from UC Berkeley talking about Opencast Matterhorn. (Unfortunately, they forgot to start saving the stream, so the beginning of her talk is missing.)
The presentation to the IEEE-CS Bangalore Section was also the best of the three: this talk really wants to be an hour long, and we got great questions from the audience. Unfortunately, I forgot to record it; it would have made a great slidecast.
Embedded below is the presentation to the Technology for Education 2009 Conference, the one with a slidecast.
Brandon Muramatsu and Phillip Long presented at the NMC Summer Conference in Monterey, CA on June 12, 2009.