Twelve Labs Raises $12 Million for AI-Powered Video Understanding Platform
Twelve Labs has raised $12 million in a seed extension round, taking its total funding to $17 million. The AI-powered video search and understanding platform lets developers build programs that can see, listen and understand the world.
The California-based start-up has built "foundation models" for multimodal video understanding, which can be accessed via a suite of application programming interfaces. These models support semantic search as well as tasks such as chapterisation, summary generation and video question answering.
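As a rough illustration of how an API-driven workflow like this might look, here is a minimal sketch of submitting a natural-language search query against an indexed video collection over HTTP. The endpoint URL, field names and response shape are assumptions made for illustration, not Twelve Labs' documented API.

```python
import requests

# Hypothetical endpoint and payload shape -- placeholders, not a documented API.
API_URL = "https://api.example.com/v1/search"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def semantic_video_search(index_id: str, query: str, limit: int = 5) -> list:
    """Send a natural-language query against an indexed video collection."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"index_id": index_id, "query": query, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    # Assume the service returns matching clips with timestamps and relevance scores.
    return response.json().get("results", [])

if __name__ == "__main__":
    for clip in semantic_video_search("demo-index", "goal celebration in the rain"):
        print(clip)
```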
About Twelve Labs
Twelve Labs offers a cloud-based, AI-powered platform for video search and understanding, giving developers the tools to build programs that can see, listen, and understand the world.
The platform uses AI to extract information from videos, such as movement, objects, and speech, and converts it into mathematical representations called "vectors". It also forms "temporal connections" between frames, enabling applications such as video scene search. The platform is currently in closed beta.
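To make the "vectors" idea concrete, the following is a minimal sketch of how an application might rank video segments against a text query once both have been embedded into a shared vector space. The embeddings here are random stand-ins; in practice they would come from a multimodal model such as the ones described above.

```python
import numpy as np

# Stand-in embeddings: in practice these would come from a multimodal model
# that maps both video segments and text queries into the same vector space.
rng = np.random.default_rng(0)
segment_vectors = rng.normal(size=(100, 512))   # 100 video segments, 512-dim each
query_vector = rng.normal(size=512)             # embedded text query

def cosine_similarity(matrix: np.ndarray, vector: np.ndarray) -> np.ndarray:
    """Cosine similarity between each row of `matrix` and a single vector."""
    matrix_norm = matrix / np.linalg.norm(matrix, axis=1, keepdims=True)
    vector_norm = vector / np.linalg.norm(vector)
    return matrix_norm @ vector_norm

# Rank segments by similarity to the query; the top hits are the "scene search" results.
scores = cosine_similarity(segment_vectors, query_vector)
top_segments = np.argsort(scores)[::-1][:5]
print("Top matching segment indices:", top_segments)
```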
Twelve Labs believes that understanding video is key to understanding the world. In the company's view, video is the most accurate way to capture the world's stories, and improving machines' ability to understand video brings us one step closer to true human-AI symbiosis.
The team says it is dedicated to advancing the field of video understanding and contributing to the eventual achievement of singularity. The new funding will help it continue that work and bring the platform to more developers and customers.
Learn more about Twelve Labs
Check out MLQ VC: Discover Recently-Funded Startups to get access to all 100+ venture funding deals this week as well as our database of 10,000+ VC-funded startups.