If you work in media management, you know that metadata is great. Metadata brings visibility allowing you to create workflows and archives that are traceable and transparent. Without metadata, searching your archives for just the right image, or just the right clip, is basically impossible. The more metadata you have attached to an asset, the easier and more stress-free your searches are likely to be.
But there lies the trouble!
As we accelerate into the 2020s we have more metadata than ever and more ways to create it. Yet, while this can be liberating when we need to trawl our archives, it can create problems in its own right. Metadata takes time to create, and historically, the more metadata you add, the longer and more involved the ingestion process becomes. The price of more searchable archives has been slower ingestion.
However, metadata best practices are changing! There are new tools designed to improve the metadata you create and the way it can be accessed. Here, we are going to give you an update on all things metadata. This is an exciting time in video asset management, and metadata sits at the heart of that transformation.
Just so we’re all on the same page, let’s briefly recap what we mean by metadata in the world of Video Asset Management (VAM). A good analogy is a pet microchip. When you got your cat or dog, you likely had them microchipped so they could be traced if they ran away or got lost. That microchip carries a registry number from which anyone who finds your pet can glean your contact information, such as your name, address or telephone number. Think of your pet as the data and everything the microchip reveals about them as the metadata.
So, every video you create is the data, and everything you need to know about that video, such as the dates it was created and ingested, its filename, length and description, is the metadata.
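To make that distinction concrete, here is a minimal sketch of what a metadata record for a single video asset might look like. The field names are illustrative only and aren't tied to any particular VAM platform:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative metadata record for one video asset.
# Field names are hypothetical, not from any specific VAM platform.
@dataclass
class VideoMetadata:
    filename: str
    created: date        # date the footage was created
    ingested: date       # date it entered the archive
    length_seconds: int
    description: str
    tags: list[str] = field(default_factory=list)

clip = VideoMetadata(
    filename="lion_cub_savanna.mp4",
    created=date(2021, 3, 14),
    ingested=date(2021, 3, 15),
    length_seconds=42,
    description="Lion cub walking through the savanna",
    tags=["lion cub", "savanna", "walking"],
)
print(clip.description)  # Lion cub walking through the savanna
```

The video file itself is the data; everything in the record above is the metadata that makes the file findable without opening it.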
We live in an era where video content is more in demand than ever and in order to keep up with that demand, media managers are producing more and more content meaning that archives are growing at an exponential rate. Metadata is absolutely integral to keeping those archives manageable.
As metadata has become more widely used, it has (by necessity) grown more sophisticated and descriptive, going way beyond the date and time of creation and ingestion. Today’s metadata is designed to make large archives easier to parse and specific files easier to find. Recent years have seen greater use of descriptive metadata which tells anyone searching exactly what the file contains.
The easiest, and most commonly used way to do this is through the use of tags. Creating tags can help media managers find video files based on their content. However, this is something of a double-edged sword. Inconsistencies in how tags are used and formatted can undermine the efficacy of the whole system.
For example, let’s say you have four pieces of footage, each containing an image of a lion cub walking through the savanna. One media manager might label the first “Lion cub savanna”, another might label the next “Lion cub walking” and a third “Baby lion walking”. A fourth might come up with something totally different, like “cute animals”. Now let’s say a new media manager takes over at the same production house with access to all four files. Will a single search yield all of them? This lack of consistency makes it very hard to compare and group similar footage together.
The use of presets can mitigate inconsistencies to a degree. But missing or inconsistent tags can still make interrogating archives a painstaking and time-consuming process. All too often, media managers end up opening files in a media player just to find out exactly what a video contains.
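One common mitigation is to normalize tags against a controlled vocabulary at ingest, so the lion-cub variants above all collapse onto a single canonical tag. Here is a sketch of that idea; the SYNONYMS mapping is hypothetical and would be maintained by the archive’s own taxonomy:

```python
# Hypothetical synonym table mapping free-form labels onto canonical tags.
SYNONYMS = {
    "baby lion": "lion cub",
    "cute animals": "lion cub",
}

def normalize_tags(raw_tags):
    """Lowercase, trim, and collapse known synonyms so searches behave consistently."""
    cleaned = set()
    for tag in raw_tags:
        tag = tag.strip().lower()
        cleaned.add(SYNONYMS.get(tag, tag))
    return sorted(cleaned)

print(normalize_tags(["Baby Lion", "lion cub ", "Cute Animals"]))
# ['lion cub']
```

With a step like this in the ingest pipeline, a search for “lion cub” would surface all four of the inconsistently labelled files from the earlier example.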
The good news is that the latest generation of VAM platforms use sophisticated automation to make even huge archives easier to understand. These platforms deploy object recognition and speech detection tools to automatically identify images and sounds during ingestion and generate tags accordingly, which media managers can then approve.
Recognition tools hold the promise of true archive transparency: the ability to retrospectively find archived and in-production material based on flexible search criteria. Current capabilities allow for this to a degree, but with significant room for error. It’s still an evolving technology.
With that said, when used with oversight during ingest, recognition tools create metadata tags detailed enough to make material truly searchable. Finding a particular image, face or line of dialogue becomes as easy as performing a “CTRL+F” search in an MS Word document.
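That “CTRL+F for your archive” idea can be sketched in a few lines. Here, assets are plain dictionaries with hypothetical `tags` and `transcript` fields standing in for the output of the recognition tools:

```python
def search_archive(assets, query):
    """Return assets whose tags or transcript contain the query (case-insensitive)."""
    q = query.lower()
    return [
        a for a in assets
        if q in a.get("transcript", "").lower()
        or any(q in tag.lower() for tag in a.get("tags", []))
    ]

# Toy archive; field names are illustrative, not a real VAM schema.
archive = [
    {"file": "clip_001.mp4", "tags": ["lion cub", "savanna"], "transcript": ""},
    {"file": "clip_002.mp4", "tags": ["interview"], "transcript": "We tracked the cub for days."},
]
print([a["file"] for a in search_archive(archive, "cub")])
# ['clip_001.mp4', 'clip_002.mp4']
```

A real platform would use an indexed search engine rather than a linear scan, but the principle is the same: rich auto-generated metadata is what makes the query possible at all.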
It’s easy to see why so many media managers are excited by the practical applications of object recognition and speech detection when creating and interrogating metadata. And the good news is that these tools are only growing more sophisticated. Being able to find a clip of an actor delivering a certain line, or to group footage of a performer using face recognition, is on the cusp of becoming common industry practice. For media managers working in forward-thinking environments, it already is.
So, if you desperately need to find a few seconds of a lion cub walking through a savanna when you’re up against a testing deadline, VAM platforms have your back!
The cutting-edge detection software in new VAM platforms goes a long way towards accelerating the metadata process through sophisticated automation techniques. Because metadata creation is built into the ingestion process and largely automated, production houses and media managers no longer need to set aside time to create and manage it by hand. What’s more, having this rich and advanced metadata can have a positive knock-on effect upon other aspects of the workflow.
When you have a clearer idea of what you have, it’s much easier to use it. With transparent archives, you can call on those clips for current projects. That might be as simple as re-using an old clip rather than purchasing new stock footage. However, you can also look to your archive for inspiration and create content based solely on what you already have.
Of course, better metadata doesn’t just help you to make better use of your archives post-production. It can also allow faster access to the right footage during production to ensure that directors and their video editors have fast and easy access to all the right shots, making for more compelling video copy. What’s more, VAM platforms can open up new opportunities for remote and collaborative video editing.
Closed captions are an important accessibility requirement and, in an age where more and more people view online video content through their phones, can make your videos easier to enjoy on the go and in public places. Because search engines rely on text to assess relevance, captions also help with SEO.
The trouble is that creating these captions can be a painstaking and time-consuming process. The speech recognition capabilities of VAM platforms allow for fast and accurate automated caption creation.
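Once a speech-recognition pass has produced timed transcript segments, turning them into a standard caption file is mechanical. Here is a sketch that converts hypothetical `(start, end, text)` segments into the widely used SubRip (SRT) format:

```python
def to_srt(segments):
    """Convert (start_sec, end_sec, text) tuples into SubRip (SRT) caption blocks.
    The segments stand in for the output of a speech-recognition pass."""
    def ts(sec):
        # SRT timestamps look like HH:MM:SS,mmm
        ms = int(round(sec * 1000))
        h, rem = divmod(ms, 3_600_000)
        m, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1000)
        return f"{h:02}:{m:02}:{s:02},{ms:03}"

    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{ts(start)} --> {ts(end)}\n{text}")
    return "\n\n".join(blocks)

print(to_srt([(0.0, 2.5, "The cub sets out across the savanna.")]))
```

In practice the hard part is the speech recognition itself; the formatting step above is the easy tail end of the pipeline.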
The integration of object recognition into metadata best practices has a wealth of practical applications. With such unparalleled access to their archive footage, media managers can breeze through tasks that once took hours.
Because this technology is evolving at an exponential rate, media managers are on the cusp of a new age of richer, more immediate and more useful metadata, with a wealth of useful applications and positive outcomes for workflow. Investing in the right tools is the key to turning these trends into outcomes.
Find out how Media Asset Management software can help you get the right metadata in the right place, fast. Book some time with Gabrielle below. 👇🏼👇🏼👇🏼