This site has been archived

How Google Translate Works

Google posted a high-level overview of how Google Translate works.

Source: Google

An interesting hack from Yahoo! Openhack India

Sound familiar?

Automatic, real-time closed captioning/translation for Flickr videos.

How?
We captured the audio stream coming out of the speaker and fed it as input to the mic. We used the Microsoft Speech API and Julius to convert the speech to text, and a GreaseMonkey script to sync the video with our transcription server (our local box) and display the transcribed text on the video. Before displaying the text, we translate it based on the user’s choice of language. (We used Google’s Translate API for this.)

Srithar, B. (2010). Yahoo! Openhack India 2010- FlicksubZ. Retrieved on July 28, 2010 from Srithar’s Blog Website: http://babusri.blogspot.com/2010/07/yahoo-openhack-india-2010-flicksubz.html

Check out the whole post.
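In outline, the hack chains speech recognition, optional translation, and caption display. Here's a hypothetical sketch of that pipeline; the recognizer and translator below are stand-ins, not the real Microsoft Speech API, Julius, or Google Translate calls:

```python
def recognize(audio_chunk):
    """Stand-in for the speech recognizer (Julius / Microsoft Speech API).
    For illustration we pretend the audio bytes are already text."""
    return audio_chunk.decode("utf-8")

def translate(text, target):
    """Stand-in for the Google Translate API call, backed by a toy lookup."""
    fake_dictionary = {("hello world", "fr"): "bonjour le monde"}
    return fake_dictionary.get((text, target), text)

def caption(audio_chunk, target_lang=None):
    """Transcribe an audio chunk, translating only if the user picked a language."""
    text = recognize(audio_chunk)
    if target_lang:
        text = translate(text, target_lang)
    return text

print(caption(b"hello world", "fr"))  # -> bonjour le monde
print(caption(b"hello world"))        # -> hello world
```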

Converting .sbv to .trans/continuous text

As a step in comparing our output with YouTube’s autocaptioning, we need to transform YouTube’s .sbv file into something we can use in our comparison tests (a .trans file). That means stripping the hours out of the timecode, dropping the end time, and bringing everything onto a single line.

Update: It turns out we needed a continuous text file. So these have been updated accordingly.
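A minimal sketch of that transformation, assuming the standard YouTube .sbv layout (a start,end timecode line followed by caption text, with blocks separated by blank lines):

```python
import re

# .sbv timecode lines look like "0:00:04.160,0:00:08.300" (start,end).
# For a continuous-text file we drop them entirely and join the
# remaining caption text onto a single line.
TIMECODE = re.compile(r"^\d+:\d{2}:\d{2}\.\d{3},\d+:\d{2}:\d{2}\.\d{3}$")

def sbv_to_continuous(sbv_text):
    """Flatten the caption text of an .sbv file into one line."""
    pieces = []
    for line in sbv_text.splitlines():
        line = line.strip()
        if not line or TIMECODE.match(line):
            continue  # skip blank lines and timecodes
        pieces.append(line)
    return " ".join(pieces)

sample = """0:00:00.599,0:00:04.160
welcome to the lecture

0:00:04.160,0:00:08.300
today we will discuss transcripts"""

print(sbv_to_continuous(sample))
# -> welcome to the lecture today we will discuss transcripts
```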


Caption File Formats

There’s been some discussion on the Matterhorn list recently about caption file formats, and I thought it might be useful to describe what we’re doing with file formats for SpokenMedia.

SpokenMedia uses two file formats: our original .wrd files, output by the recognition process, and Timed Text Markup Language (TTML). We also need to handle two other caption file formats: .srt and .sbv.

There is a nice discussion of the YouTube format at SBV file format for YouTube Subtitles and Captions, along with a link to a web-based tool to convert .srt files to .sbv files.
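As an illustration of the difference between the two formats, here's a small conversion sketch. It assumes only the basic differences: .sbv drops the numeric cue index, uses a period rather than a comma as the millisecond separator, and joins start and end times with a comma instead of an arrow:

```python
import re

# .srt timing line: "00:00:01,000 --> 00:00:04,000"
SRT_TIME = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
)

def srt_block_to_sbv(block):
    """Convert one .srt cue (index, timing, text) into .sbv form."""
    lines = block.strip().splitlines()
    # Drop the numeric cue index that .srt carries but .sbv does not.
    if lines and lines[0].strip().isdigit():
        lines = lines[1:]
    h1, m1, s1, ms1, h2, m2, s2, ms2 = SRT_TIME.match(lines[0]).groups()
    # .sbv: "0:00:01.000,0:00:04.000" -- unpadded hours, period decimals.
    timing = f"{int(h1)}:{m1}:{s1}.{ms1},{int(h2)}:{m2}:{s2}.{ms2}"
    return "\n".join([timing] + lines[1:])

def srt_to_sbv(srt_text):
    blocks = re.split(r"\n\s*\n", srt_text.strip())
    return "\n\n".join(srt_block_to_sbv(b) for b in blocks)

sample = """1
00:00:01,000 --> 00:00:04,000
Hello and welcome.

2
00:00:04,500 --> 00:00:08,000
Let's get started."""

print(srt_to_sbv(sample))
```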

We’ll cover our implementation of TTML in a separate post.

SpokenMedia at T4E 2010 Conference

Brandon Muramatsu presented on SpokenMedia at the Technology for Education 2010 Conference in Mumbai, India on July 1, 2010.

Cite as: Muramatsu, B., McKinney, A. & Wilkins, P. (2010, July 1). Implementing SpokenMedia for the Indian Institute for Human Settlements. Presentation at Technology for Education Conference: Mumbai, India. July 1, 2010. Retrieved July 14, 2010 from Slideshare Web site: http://www.slideshare.net/bmuramatsu/implementing-spokenmedia-for-the-indian-institute-for-human-settlements

Towards cross-video search

Here’s a workflow diagram I put together to demonstrate how we’re approaching the problem of searching over the transcripts of multiple videos and ultimately returning search results that maintain time-alignment for playback.

Preparing Transcripts for Search Across Multiple Videos
Source: Brandon Muramatsu


You’ll notice I included using OCR on lecture slides to help in search and retrieval–this is not an area we’re currently focusing on, but we have been asked about it. A number of researchers and developers have looked at this area–if/when we include it, we’d work with folks like Matterhorn (or perhaps others) to integrate the solutions they’ve implemented.
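The core of the approach in the diagram can be sketched as an inverted index that keeps word-level timing, so a hit can jump straight to the right moment in the right video. The transcript shape below is an assumption, loosely modeled on word/start-time pairs like those in our .wrd output:

```python
from collections import defaultdict

def build_index(transcripts):
    """Map each word to (video_id, start_time) postings.

    `transcripts` is assumed to be {video_id: [(start_seconds, word), ...]}.
    """
    index = defaultdict(list)
    for video_id, words in transcripts.items():
        for start, word in words:
            index[word.lower()].append((video_id, start))
    return index

def search(index, query):
    """Return time-aligned hits for a single-word query."""
    return index.get(query.lower(), [])

# Toy transcripts standing in for recognizer output across two videos.
transcripts = {
    "lecture01": [(0.5, "welcome"), (1.2, "to"), (1.4, "urban"), (1.9, "planning")],
    "lecture02": [(0.3, "urban"), (0.8, "settlements"), (1.5, "in"), (1.9, "India")],
}
index = build_index(transcripts)
print(search(index, "urban"))
# -> [('lecture01', 1.4), ('lecture02', 0.3)]
```

Each result carries enough information to seek the player to that timestamp, which is how time-alignment survives into the search results.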

Making Progress

In the last month or two we’ve made some good progress with getting additional parts of the SpokenMedia workflow into a working state.

Here’s a workflow diagram showing what we can do with SpokenMedia today.

SpokenMedia Workflow, June 2010
Source: Brandon Muramatsu


(The bright yellow indicates features we’ve gotten working in the last two months, the gray indicates features we’ve had working since December 2009, and the light yellow indicates features on which we’ve just started working.)


Video from OCWC Global Presentation

The OpenCourseWare Consortium has posted the video of our talk during OCWC Global 2010 in Hanoi, Vietnam.

Cite as: Muramatsu, B., McKinney, A. & Wilkins, P. (2010, May 5). Opening Up IIHS Video with SpokenMedia. Presentation at OCWC Global 2010: Hanoi, Vietnam, May 5, 2010. Retrieved May 6, 2010 from Vimeo Web site: http://vimeo.com/11969270

PageLayout as a step towards Rich Media Notebooks

During a meeting with our collaborators at ICAP, of the Université de Lyon 1 in France, Fabien Bizot demonstrated PageLayout, a Flash/AIR app he was working on for Spiral Connect.

When we launched the SpokenMedia project, we knew that we ultimately wanted to focus on how learners and educators use video accompanied by transcripts. Over the last year, we’ve focused on the automatic lecture transcription technology developed in the Spoken Lecture project–as a means to enable the notion of a rich media notebook we had been discussing.

Fabien Bizot’s work with PageLayout may be the first step to a user interface learners and educators might use to interact with video linked with transcripts.

PageLayout Towards a Rich Media Notebook
Source: Brandon Muramatsu/Fabien Bizot PageLayout


SpokenMedia at OCW Consortium Global 2010 Conference

Brandon Muramatsu presented on SpokenMedia at the OCW Consortium Global 2010 Conference in Hanoi, Vietnam on May 7, 2010.

Cite as: Muramatsu, B., McKinney, A. & Wilkins, P. (2010, May 5). Opening Up IIHS Video with SpokenMedia. Presentation at OCWC Global 2010: Hanoi, Vietnam, May 5, 2010. Retrieved May 6, 2010 from Slideshare Web site: http://www.slideshare.net/bmuramatsu/opening-up-iihs-video-with-spokenmedia

Creative Commons License Unless otherwise specified, the Spoken Media Website by the MIT Office of Digital Learning, Strategic Education Initiatives is licensed under a Creative Commons Attribution 4.0 International License.