MIT’s OpenCourseWare (OCW) uses MIT’s Google Search Appliance (GSA) to search its content, and MIT supports customization of GSA results through XSL transformation. This post describes how we plan to use the GSA to search lecture transcripts and return results containing the lecture videos in which the search terms appear. Since OCW publishes static content, it doesn’t incorporate an integral search engine; search is provided through the GSA.
The software that processes lecture audio into a textual transcript consists of a series of scripts that marshal input files and parameters to a speech recognition engine. Interestingly, since the engine is data-driven, its code seldom changes; improvements in performance and accuracy are achieved by refining the data it uses to perform its tasks.
There are two steps to produce the transcript. The first creates an audio file in the correct format for speech recognition. The second processes that audio file into the transcript.
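As a sketch of the first step, the function below builds an ffmpeg command to convert the lecture audio for recognition. The target format (16 kHz, mono, 16-bit PCM) is an assumption about what a speech recognition engine typically expects, not OCW’s actual setting, and the file names are invented.

```python
def build_conversion_command(src, dst, sample_rate=16000):
    """Return an ffmpeg command that downmixes and resamples lecture audio.

    The flags are standard ffmpeg options; the chosen format is an
    assumption about the recognition engine's input requirements.
    """
    return [
        "ffmpeg", "-i", src,
        "-ac", "1",               # downmix to mono
        "-ar", str(sample_rate),  # resample to the target rate
        "-sample_fmt", "s16",     # 16-bit PCM samples
        dst,
    ]

cmd = build_conversion_command("lecture.mp4", "lecture.wav")
```

The command list can then be handed to `subprocess.run` by the workflow script that drives the conversion.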
In any but a trivial implementation, searching lecture transcripts presents challenges not found in other search targets. Chief among them is that each transcript word requires its own metadata (start and stop times). Solr, a web application that derives its search muscle from Apache Lucene, has a query interface that is both rich and flexible. It doesn’t hurt that it’s also very fast. Properly configured, it provides an able platform to support lecture transcript searching. Although Solr is the server, the search itself is performed by Lucene, so much of the discussion will address Lucene specifically. The integration with the server will be discussed in a subsequent posting.
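To give a feel for Solr’s query interface, the sketch below assembles a query string using standard Solr request parameters (`q`, `fq`, `fl`, `hl`, `hl.fl`). The field names (`transcript`, `course`, `start_time`) are hypothetical, not our actual schema; a real request would be sent to a running Solr instance’s `/select` handler.

```python
from urllib.parse import urlencode

# Hypothetical field names for illustration; a real schema defines its own.
params = {
    "q": 'transcript:"linear algebra"',  # phrase query against a transcript field
    "fq": "course:18.06",                # filter query: restrict to one course
    "fl": "id,course,start_time",        # fields to return with each hit
    "hl": "true",                        # ask Solr to highlight matched terms
    "hl.fl": "transcript",               # field to highlight
}
query_string = urlencode(params)
```

The resulting string would be appended to the `/select` URL of the Solr server.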
We want to implement an automated workflow that can take a file containing all the words spoken in the lecture, along with their start and stop times, and persist them into a repository that will allow us to:
- Search all transcripts for a word, phrase, or keyword, with faceted search, word stemming, result ranking, and spelling correction.
- Include metadata in the query result that lets us show a video clip mapping the word to the place in the video where it is uttered.
- Allow a transcript editing application to modify the content of the word file, as well as the time codes, in real time.
- Dependably maintain the mapping between words and their time codes.
The stand-alone player allows users to view and search video transcripts without network access. Because of the technologies the player uses, it requires a small web server to work. These instructions describe how to package a video, its associated transcripts, and its supporting files into a stand-alone player. The package can be zipped into a single file, downloaded, unzipped, and run locally.
This package is what is downloaded when we publish a contributed video and its transcript.
This document is intended for those who will create these packages. A separate README describes how to deploy and run the package.
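A packaging script along these lines could assemble the pieces into the single zip file described above. The directory layout and file names are hypothetical; the actual package contents are listed in the README.

```python
import pathlib
import zipfile

def package_player(package_dir, zip_path):
    """Zip a package directory (video, transcripts, supporting files)
    into a single downloadable file, preserving the directory layout."""
    package_dir = pathlib.Path(package_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(package_dir.rglob("*")):
            if path.is_file():
                # Store paths relative to the parent so the zip unpacks
                # into one self-contained package directory.
                zf.write(path, path.relative_to(package_dir.parent))
```

Once unzipped, the package can be served by any small local web server, for example by running `python -m http.server` from the package directory.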
We have settled on an editing protocol for communication between our player/transcript editor and the service that stores transcripts and videos. The protocol document, in PDF format, is attached below.
The protocol conforms to the W3C’s proposed Timed Text Markup Language (TTML) 1.0 specification. We selected this specification because our primary data is time-aligned text and this specification is a standard used by our collaborators.
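For concreteness, here is what a minimal TTML fragment looks like and how its time codes can be read programmatically. The sample words and times are invented, and real documents would carry additional TTML metadata; only the `tt`/`body`/`div`/`p` structure and the `begin`/`end` attributes come from the TTML 1.0 specification.

```python
import xml.etree.ElementTree as ET

TTML_NS = "{http://www.w3.org/ns/ttml}"

# A minimal TTML 1.0 document: each <p> is a piece of time-aligned text.
SAMPLE = """<tt xmlns="http://www.w3.org/ns/ttml">
  <body>
    <div>
      <p begin="0.00s" end="0.45s">Welcome</p>
      <p begin="0.45s" end="0.90s">back</p>
    </div>
  </body>
</tt>"""

def timed_words(ttml_text):
    """Return (begin, end, text) triples for every <p> in a TTML document."""
    root = ET.fromstring(ttml_text)
    return [(p.get("begin"), p.get("end"), p.text)
            for p in root.iter(TTML_NS + "p")]
```

An editor exchanging this format can round-trip a word’s text and time codes without losing the mapping between them.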