The second installment of IPV’s monthly workflow webinar series was live streamed last week and covered the importance of logging rich metadata, how AI can streamline the process, and why it’s critical for getting maximum ROI on video assets.
IPV Product Manager James Varndell and IPV Product Marketing Manager Ryan Hughes returned for the second iteration of MAM:Explained last week, this time focusing on metadata logging and the role that AI plays in the process. Hosted the first Thursday of each month, the all-new webinar series takes a closer look at media asset management through the lens of relevant use cases.
We invite you to watch the full webinar here and check out some highlights from the audience Q&A below. Don’t forget to register for the next MAM:Explained on December 3rd.
Can Curator import metadata from a different PAM or MAM system?
Yeah, definitely, it can. Curator includes a number of tools for doing that. Actually, it's a pretty common use case for us. You can import metadata from another PAM system. For example, if you have XML for every asset in that system, or you have XMP metadata, or CSV metadata, or EXIF data, you can ingest that into Curator. Curator also includes a REST API, so you could push data into Curator that way. We have a number of partners who are enabled to do exactly that kind of processing, taking data from one system and using our API to push it into Curator.
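To make the REST idea concrete, here is a minimal sketch of pushing a CSV metadata export from another MAM into an API. The endpoint URL, payload shape, and auth scheme below are hypothetical placeholders, not Curator's actual API; check the Curator REST API documentation for the real routes and schema.

```python
import csv
import io
import json
from urllib import request

# Hypothetical endpoint -- substitute the real Curator API route.
CURATOR_API = "https://curator.example.com/api/assets"

def rows_to_payloads(csv_text):
    """Turn a CSV metadata export from another MAM into JSON payloads.
    Assumes a 'title' column; everything else becomes custom metadata."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"title": row["title"],
         "metadata": {k: v for k, v in row.items() if k != "title"}}
        for row in reader
    ]

def push_asset(payload, api_url=CURATOR_API, token="..."):
    """POST one asset's metadata to the (hypothetical) endpoint."""
    req = request.Request(
        api_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    return request.urlopen(req)
```

A migration script would loop `push_asset` over `rows_to_payloads(...)`, with whatever retry and rate-limit handling the target API needs.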
Furthermore, if your PAM system is set up around a folder structure, with files sitting on shared storage or on disk, the folders themselves can carry metadata: project names, dates, or even the users who created the content. Curator can index that storage, get metadata from the folder structure, and ingest the assets as well.
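The folder-to-metadata mapping can be sketched in a few lines. The layout below (project/date/user) is an assumed example, not a Curator convention; adjust it to however your storage is actually organized.

```python
from pathlib import PurePosixPath

def metadata_from_path(path, layout=("project", "date", "user")):
    """Map the folder names above a file onto metadata fields, assuming
    a fixed layout like /<project>/<date>/<user>/clip.mxf."""
    parts = PurePosixPath(path).parts
    folders = [p for p in parts[:-1] if p != "/"]   # drop root and filename
    # Pair the deepest folders with the layout fields.
    return dict(zip(layout, folders[-len(layout):]))
```

An indexer would call this for every file it discovers and attach the resulting fields to the new asset record.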
Can Curator correct captions from speech-to-text data?
Yeah, absolutely, and that's actually what we do up front. When we get that data back from the speech-to-text service, we'll create a caption, and then we create sub-clips for each of the caption blocks. There are a couple of benefits to that. Firstly, I can convert that caption into an editing format, so I could edit it as an SRT file in Adobe, for example, or publish it as a WebVTT file to an OTT platform like Vimeo, or similar.
You can edit the closed captions in Curator as well. So if your speech-to-text service has led to caption generation in Curator, you can edit the caption text in the Logger UI before you use it in the edit, or before you publish it to its ultimate destination. That's pretty useful. Some speech-to-text services won't actually create captions for you. The more media-and-entertainment-centric services might generate a caption file; others only generate a transcription, which is effectively just a stream of text.
In that case, Curator includes some intelligence that breaks the transcription up into caption blocks based on regular sentence structure. So you can go from a speech-to-text result covering the entire audio track to usable captions that are nicely separated into caption blocks.
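The idea of turning a flat transcription into caption blocks can be sketched as follows. This is an illustration of the technique, not Curator's actual algorithm; it assumes the speech-to-text result arrives as word-level timings and splits blocks at sentence-ending punctuation, emitting SRT.

```python
import re

def transcript_to_srt(words):
    """Break a flat transcription into caption blocks at sentence
    boundaries. `words` is a list of (text, start_sec, end_sec) tuples."""
    blocks, current = [], []
    for w in words:
        current.append(w)
        if re.search(r"[.!?]$", w[0]):      # sentence-ending punctuation
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)

    def ts(t):  # seconds -> SRT timestamp HH:MM:SS,mmm
        ms = int(round(t * 1000))
        h, rem = divmod(ms, 3600000)
        m, rem = divmod(rem, 60000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    out = []
    for i, block in enumerate(blocks, 1):
        text = " ".join(w[0] for w in block)
        out.append(f"{i}\n{ts(block[0][1])} --> {ts(block[-1][2])}\n{text}\n")
    return "\n".join(out)
```

A real implementation would also cap block length and duration so captions stay readable, which is part of what "regular sentence structure" logic has to handle.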
Which AI service providers does Curator integrate with?
Most of the leading cloud providers, so AWS, Microsoft Azure, and Google Cloud Platform, are all providers that Curator supports. One of the interesting things that Curator does, and I think a really helpful way that it makes AI services accessible in your workflow, is that it will take your high-res content and create a proxy as standard. Then, Curator will send the proxy audio or the proxy video to the AI service. Some of those services will accept a standard MP4 file, but not a C300 camera card. Curator is standardizing the files that are being sent to the AI services, so you can capture media-specific high-res formats and still get your AI data out as a result.
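The "standardize before you send" step could look something like this outside of Curator: flattening a camera-native file into a plain MP4 with ffmpeg. The encoder settings here are an illustrative assumption, not Curator's actual proxy profile.

```python
import subprocess

def proxy_command(high_res_path, proxy_path="proxy.mp4"):
    """Build an ffmpeg command that turns a camera-native file into a
    plain 720p MP4 proxy, the kind of file an AI service will accept."""
    return [
        "ffmpeg", "-i", high_res_path,
        "-vf", "scale=-2:720",              # 720p, width kept even
        "-c:v", "libx264", "-preset", "fast",
        "-c:a", "aac",
        proxy_path,
    ]

def make_proxy(high_res_path, proxy_path="proxy.mp4"):
    """Run the transcode; raises if ffmpeg fails."""
    subprocess.run(proxy_command(high_res_path, proxy_path), check=True)
```

The proxy file, not the camera original, is then what gets uploaded to the AI service.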
Do the speech-to-text AI services support other languages such as Japanese, German, etc.?
Yeah, typically they do. I think one of the beauties of Curator is you can bring your own preferred AI service. If you prefer AWS, or Azure, or Google, or whoever it is, you can connect Curator to those services. Many of them do support different languages, and Curator ensures that the correct language tag is sent to the speech-to-text engine before it starts processing the data. Let's say you had an MXF with 32 audio tracks, where track five is English and track six is Spanish. Curator will send just audio track five to the speech-to-text service with a language tag that says “this is English” while it sends track six with a language tag that says “this is Spanish.”
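The per-track routing can be sketched as a small job builder: extract each audio track with ffmpeg and pair it with the language tag the speech-to-text engine should receive. This is an illustration under assumed conventions, not Curator's implementation; note that ffmpeg indexes audio streams from zero, so "track five" is `0:a:4`.

```python
def speech_to_text_jobs(mxf_path, track_languages):
    """Build one extract-and-transcribe job per audio track.
    `track_languages` maps zero-based track index -> language tag,
    e.g. {4: "en-US", 5: "es-ES"} for tracks five and six."""
    jobs = []
    for track, lang in sorted(track_languages.items()):
        wav = f"track{track}_{lang}.wav"
        cmd = ["ffmpeg", "-i", mxf_path,
               "-map", f"0:a:{track}",      # select just this audio track
               wav]
        jobs.append({"command": cmd, "audio": wav, "language_code": lang})
    return jobs
```

Each job's `audio` file would then be submitted to the chosen service along with its `language_code`.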
Can some metadata be retained by the file but removed from searchability? For example, if speech-to-text is used for captions, but the vocabulary is so broad, that it breaks the searchability of the file...
The answer really is yes: the metadata that's searchable in Curator does not have to be the same as the metadata that actually exists on the asset. Through permissions, for example, you can store a bigger vocabulary of metadata but only allow users to see a subset of it. Curator will take keyword labels from an AI engine and filter them down to the subset of words you've said you're specifically interested in. The original broad set of labels is still stored by Curator, but only that subset is presented for search.
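The store-everything-but-search-a-subset pattern is simple to illustrate. The function below is a sketch of the idea, with an assumed lowercase allowlist, not Curator's configuration model.

```python
def searchable_labels(all_labels, vocabulary):
    """Keep the full AI label set on the asset, but expose only an
    approved subset for search. `vocabulary` is a set of lowercase terms."""
    stored = list(all_labels)                       # everything is retained
    searchable = [l for l in stored if l.lower() in vocabulary]
    return {"stored": stored, "searchable": searchable}
```

Only the `searchable` list would be fed to the search index; the `stored` list stays on the asset if you later widen the vocabulary.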
I think when it comes to captions, there are some additional options as well. For example, if particular results come back with low confidence, Curator can filter those out or flag them to you. That's maybe the more relevant option in this case.
Can Watch Folders be set up so content coming from other sources can be automatically ingested?
Yes. I showed Curator Connect earlier in the demo, which is a drag-and-drop experience designed for finished content, and especially for camera card content. You can also ingest content through watch folders, or hot folders. In that case, Curator watches a folder, and as soon as a media file lands there, Curator can pick it up. It could be a growing file, say a feed you're recording that's being written out as it arrives, and Curator will create a growing proxy from it. You can see the growing proxy in Curator's web interfaces, you can log the growing proxy, and you can edit with it as well, and of course you can edit with a growing high-res if you have access to it. All of that can be detected by Curator through a watch folder.
It's also possible for Curator to ingest content in bulk from watch folders, or by indexing storage, and then you can enrich the metadata on that content using the Logger interface. It's a good use case for the Logger.
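At its simplest, a hot folder is just a repeated scan for files you haven't seen yet. The sketch below shows one scan pass; looping it with a sleep (or swapping in filesystem notifications) gives continuous watching. Curator's own watcher is richer, growing-file aware for instance, so this is only the core idea.

```python
from pathlib import Path

def scan_once(folder, seen, ingest, extensions=(".mxf", ".mov", ".mp4")):
    """One pass of a minimal hot-folder watcher: hand any media file we
    have not seen before to the `ingest` callback, and remember it."""
    new = []
    for path in sorted(Path(folder).glob("*")):
        if path.suffix.lower() in extensions and path not in seen:
            seen.add(path)
            ingest(path)
            new.append(path)
    return new
```

A driver would call `scan_once(folder, seen, ingest)` every few seconds with the same `seen` set, so each file is ingested exactly once.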
You mentioned IPV is running off Google Cloud. Does this work in the same way using small blocks such as LucidLink?
In terms of deployment, Curator can run in three models. The first is an on-premise deployment. The second is a hybrid cloud deployment, which is a really popular option at the moment, whereby you have a ground station with on-premise storage, while cloud services give you the search indexes, the proxies, and global accessibility of your content, which is fantastic if you're working remotely. There's also a full cloud option, whereby your high-res content lives in the cloud: you could have a workspace and clients in the cloud, or you can work locally, say on a MacBook Pro connecting to a cloud instance.
There are a number of technologies we can use to get content in and out of the cloud at the storage layer. The key point to emphasize here is that Curator is not tied to any specific storage technology, so you can make the choice that makes most sense to you. Curator sits on top of the storage layer and ingests content from that storage. If you wanted to use a technology such as LucidLink for getting content from on-premise into the cloud, or perhaps out of object storage onto block storage, Curator can sit on top of that and read those files in.
Questions? Let’s chat!
We want to get to know you and your business needs. Book time directly with Gabrielle below to see how Curator can help you take control of your video assets and produce quality video content faster than ever! 👇👇👇