Digital Innovation & i media cities

The i media cities platform has implemented several innovative tools and technical solutions to improve the way visitors find, watch and interact with the films and images on our platform. The project has been built on three key digital strategies, aimed at maximising impact and helping users: Open Source / Machine Learning / Linked Open Data.


Open Source Software

We believe open source software is good and necessary for the free development and evolution of digital services. This is why, in agreement with CINECA, the developer of the software, we have released our code under the MIT license. The source code can be downloaded from our GitLab page.

Machine learning & A.I.

The platform uses a series of machine learning tools, all provided by Fraunhofer IDMT. These tools enable computers to process the films and images automatically, creating a large amount of data which is then translated into interface solutions.

  • Automatic shot segmentation

This tool automatically segments the films into shots by detecting so-called cuts and transition frames.

As soon as these frames are detected, the data is extracted, and as a result you will find an automatically generated shot list next to every film on our platform.

[Image: automatically generated shot list]
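The exact segmentation method belongs to the Fraunhofer IDMT tools and is not published on this page. As a rough, minimal sketch of the general idea, a hard cut can be found by comparing colour histograms of consecutive frames; the detect_cuts function and threshold below are illustrative assumptions, not the actual implementation.

```python
# Minimal illustrative sketch (not the Fraunhofer IDMT implementation):
# detect hard cuts by comparing colour histograms of consecutive frames.
import cv2

def detect_cuts(video_path, threshold=0.5):
    """Return frame indices where a shot boundary (hard cut) is likely."""
    capture = cv2.VideoCapture(video_path)
    cuts, prev_hist, index = [], None, 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # A low correlation between consecutive histograms suggests a cut.
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < threshold:
                cuts.append(index)
        prev_hist, index = hist, index + 1
    capture.release()
    return cuts

# Example: boundaries = detect_cuts("film.mp4") gives the frames where a new shot starts.
```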
 
  • Concept detection and classification

Computers automatically scan and analyse all images and all frames of every film to determine whether they contain specific urban-related concepts, such as person, tram, train, clock, … The computer can currently detect nearly a hundred concepts, and will be trained to recognise more in the years to come.

As soon as a concept is detected, it is added as a keyword to the corresponding image or frame (indicated by an ‘a’).

At the same time the computer also stores some other useful information, such as the region of the image or shot in which the concept is detected (graphically indicated by coloured borders), and a percentage that indicates how certain the computer is of its detection. Below you can find a side-by-side comparison of the same shot. On the left the computer is 20% sure of the concepts it has detected; for the shot on the right it is 70% sure.

[Image: side-by-side comparison of the same shot at 20% and 70% confidence]

It is not always easy to find the correct balance, so on the platform we have chosen to display all concepts for which the computer is at least 65% sure it is correct. We are constantly monitoring and improving this feature to better serve you, but note that mistakes may still occur.
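As a minimal sketch of what this filtering step could look like: only the 65% cut-off comes from the text above; the record fields and function names below are assumptions, not the platform's actual data model.

```python
# Sketch of the confidence filter described above; ConceptDetection is an
# assumed structure, only the 0.65 threshold mirrors the platform's behaviour.
from dataclasses import dataclass

@dataclass
class ConceptDetection:
    concept: str                          # e.g. "tram", "clock"
    region: tuple[int, int, int, int]     # bounding box (x, y, width, height)
    confidence: float                     # 0.0 - 1.0, certainty of the detector

def visible_keywords(detections, threshold=0.65):
    """Keep only concepts the detector is at least 65% sure about."""
    return [d for d in detections if d.confidence >= threshold]

detections = [
    ConceptDetection("person", (10, 20, 120, 300), 0.92),
    ConceptDetection("tram", (200, 80, 400, 250), 0.70),
    ConceptDetection("clock", (5, 5, 40, 40), 0.20),   # dropped: below 65%
]
print([d.concept for d in visible_keywords(detections)])  # ['person', 'tram']
```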

  • Building and Monument Recognition

Similar to the concept detection, the building recognition tool scans all frames and images for a set of famous landmarks and buildings in every city. This is a brand-new tool, so it may not always be correct.
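One way such a tool can keep its results relevant is by only accepting labels that belong to the landmark set of the film's own city. The sketch below is purely hypothetical: the city names, landmark sets and helper function are illustrative examples, not the actual Fraunhofer IDMT implementation.

```python
# Hypothetical sketch: restrict recognised landmark labels to the landmarks
# known for the city a film belongs to. All names below are example data.
CITY_LANDMARKS = {
    "Brussels": {"Atomium", "Grand Place", "Palais de Justice"},
    "Vienna": {"Stephansdom", "Schönbrunn", "Riesenrad"},
}

def landmarks_for_frame(detected_labels, city):
    """Keep only detections that match a known landmark of the given city."""
    known = CITY_LANDMARKS.get(city, set())
    return [label for label in detected_labels if label in known]

print(landmarks_for_frame(["Atomium", "tram", "person"], "Brussels"))  # ['Atomium']
```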

 

Linked Open Data

 

To keep metadata management up to date, the project has integrated several linked open data sources that help users when they are adding information to the films and images. These implementations connect the content and the corresponding information to the wealth of knowledge that is already present on the internet. When annotating, visitors are able to search for the pages representing keywords, famous persons, streets … in the entire Wikidata repository.
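The platform's own annotation code is not shown here, but the lookup it describes can be illustrated with Wikidata's public search API (the wbsearchentities action); the search_wikidata helper below is a hypothetical sketch.

```python
# Illustrative only: querying Wikidata's public search API (wbsearchentities)
# the way an annotation form could suggest matching pages while a user types.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def search_wikidata(term, language="en", limit=5):
    """Return candidate Wikidata items (id, label, description) for a search term."""
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "limit": limit,
        "format": "json",
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    return [
        (item["id"], item.get("label", ""), item.get("description", ""))
        for item in response.json().get("search", [])
    ]

# e.g. search_wikidata("tram") returns a list of (Q-id, label, description) tuples.
```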

By adding a Wikidata concept, you also make all the information listed on the corresponding Wikidata page available to search through. The linked open data structure of i media cities connects the films and images to the world outside of the archive.
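Again as an illustration rather than the platform's actual implementation, the information on a Wikidata page can be retrieved with the public wbgetentities action and indexed alongside the annotation; the choice of fields below is an assumption.

```python
# Illustrative sketch: fetch the record of one Wikidata item so its label,
# description and aliases can be indexed alongside the annotation.
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def fetch_entity(qid, language="en"):
    """Return label, description and aliases for one Wikidata item."""
    params = {
        "action": "wbgetentities",
        "ids": qid,
        "props": "labels|descriptions|aliases",
        "languages": language,
        "format": "json",
    }
    response = requests.get(WIKIDATA_API, params=params, timeout=10)
    response.raise_for_status()
    entity = response.json()["entities"][qid]
    return {
        "label": entity.get("labels", {}).get(language, {}).get("value", ""),
        "description": entity.get("descriptions", {}).get(language, {}).get("value", ""),
        "aliases": [a["value"] for a in entity.get("aliases", {}).get(language, [])],
    }
```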

 

 

If you want more information or have any further questions, get in touch, or check out our GitLab page.