Predator video data includes both "live" feeds and voice-annotated stored data.
We envision many applications for collaboration involving video and speech data. For example, the Atlanta bombing investigations involved many videos from various sources, with no good tools for collaboration or coordination.
- Speech is combined with gesture: e.g., "track this"
- Speech is used for collaboration: e.g., "Marcia, look at this"
- Speech is used for freeform annotation: e.g., "I think this is a tank, but I'm not sure," stored for later retrieval via topic-spotting technology
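The topic-spotting retrieval in the last bullet can be sketched as a simple keyword spotter over annotation transcripts. This is a minimal illustration only: the transcripts, topic lists, and function names are invented, and a real system would combine speech recognition with statistical topic spotting rather than exact keyword matching.

```python
import re

# Hypothetical stored voice annotations (clip name, timestamp, transcript).
annotations = [
    {"clip": "feed_03.mpg", "time": 142.5,
     "transcript": "I think this is a tank, but I'm not sure"},
    {"clip": "feed_07.mpg", "time": 310.0,
     "transcript": "Marcia, look at this convoy near the bridge"},
]

# Illustrative topic-to-keyword mapping.
TOPIC_KEYWORDS = {
    "armor": {"tank", "apc", "armor"},
    "logistics": {"convoy", "truck", "bridge"},
}

def spot_topics(transcript):
    """Return the set of topics whose keywords appear in the transcript."""
    words = set(re.findall(r"[a-z']+", transcript.lower()))
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws}

def find_by_topic(topic):
    """Return stored annotations matching the given topic."""
    return [a for a in annotations if topic in spot_topics(a["transcript"])]
```

A query like `find_by_topic("armor")` would return the "I think this is a tank" annotation, letting an analyst jump straight to that point in the stored video.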