Kinesense Ltd, Overcast HQ Ltd and Trinity College Dublin (TCD).
Project overview
Video’s integration into mobile phones has transformed the way we communicate, access information, and connect with others. Society as a whole is inundated with video content, generating billions of hours of video per year, supplemented by footage from drones, smartphones and body cameras. It is estimated that over 200,000 petabytes (equivalent to 200 billion gigabytes) of raw video will be recorded in 2024, a figure expected to rise as advancing technology allows the production of higher-resolution video content.
Kinesense Ltd, Overcast HQ and Trinity College Dublin (TCD) have developed an innovative, intelligent, cloud-based digital platform that incorporates artificial intelligence (AI) to exploit video as a valuable resource, providing new insights and new market opportunities.
Watch on YouTube: DTIF Kinesense AI Awards 2022
Project inputs
Over the course of the project, 18 jobs were created. Kinesense Ltd created 2 full-time positions and sustained a third. TCD created 11 and sustained 2, and Overcast HQ hired 5 new developers. It is expected that these numbers will increase as the VISP platform is commercialised through a vigorous sales and marketing campaign.
The VISP project was awarded €1.5 million in funding. Both Kinesense and Overcast HQ were eligible for pre-finance, giving them an immediate injection of funds. This pre-finance was important to the project’s success, as it allowed staff to be recruited and materials to be procured shortly after the award was announced.
“The fund was at all times supportive”
Philippe Brodeur, Overcast HQ, CEO
The problem: managing video data in an age of unprecedented technological advancements
In an era of unprecedented technological advancement, managing video data has become increasingly complex. With the volume of footage generated by recording devices expected to soar, the challenge lies in exploiting this video as a valuable resource for the criminal justice, media and entertainment markets.
Focusing on the criminal justice system, the platform has significant disruptive potential in addressing major incidents and combating organised crime by simplifying video processing with search technologies. There is a clear unmet need for a way to review and process surveillance footage efficiently that is instantly accessible, does not consume additional resources and does not compromise security. The project sought to develop advanced deep learning algorithms capable of extracting accurate observations and attributes from large quantities of surveillance data.
The solution: Video Intelligent Search Platform (VISP)

This project sought to develop the VISP platform, which utilises artificial intelligence video analytics and cloud-based processing technologies. It uses machine learning, supplemented by a rule-based approach, to train a deep neural network capable of identifying certain characteristics in a video or image and returning the most likely matches, rather than requiring a review of thousands of candidates, which is significantly time-consuming.
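The idea of returning the most likely match rather than every candidate can be illustrated with a minimal sketch. The function name, clip identifiers and confidence scores below are illustrative assumptions, not part of the VISP platform; a real system would obtain the scores from its trained model.

```python
# Hypothetical sketch: rank candidate clips by model confidence so an
# analyst reviews only the most likely matches first.

def rank_matches(detections, top_k=3):
    """Return the top_k (clip_id, confidence) pairs, highest first.

    detections: list of (clip_id, confidence) pairs as a hypothetical
    video-analytics model might emit them.
    """
    return sorted(detections, key=lambda d: d[1], reverse=True)[:top_k]

detections = [("clip_042", 0.31), ("clip_007", 0.92), ("clip_118", 0.67)]
print(rank_matches(detections, top_k=2))
# → [('clip_007', 0.92), ('clip_118', 0.67)]
```

The design choice here is simply to sort once by confidence, so review effort scales with the number of matches shown, not the number of clips processed.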
Accomplishing this required each consortium partner to contribute to the development of advanced algorithms, which were then integrated into the application programming interface (API) designed and developed by Overcast HQ.
While traditional machine learning can identify basic characteristics (for example, distinguishing a dog from a cat), deep neural networks can identify characteristics at a significantly finer level of detail. These capabilities extend to person attribute recognition, which can identify features such as gender, clothing style, or accessories such as a hat or glasses.
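Person attribute recognition is typically framed as multi-label classification: the network emits one score per attribute, and each score is thresholded independently. The attribute names and threshold below are assumptions for illustration; in a real deployment the scores would come from a trained deep neural network.

```python
# Illustrative sketch of decoding multi-label attribute scores.
# ATTRIBUTES and the 0.5 threshold are assumed values, not VISP's.

ATTRIBUTES = ["wearing_hat", "wearing_glasses", "carrying_bag"]

def decode_attributes(scores, threshold=0.5):
    """Map per-attribute scores (one per entry in ATTRIBUTES) to the
    list of attributes judged present for a detected person."""
    return [name for name, s in zip(ATTRIBUTES, scores) if s >= threshold]

# Scores as a trained model might emit them for one detected person.
print(decode_attributes([0.91, 0.12, 0.78]))
# → ['wearing_hat', 'carrying_bag']
```

Because each attribute is decided independently, a person can carry any combination of labels, which is what distinguishes this from single-label classification such as dog versus cat.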
The VISP platform aims to help address major incidents and combat organised crime by simplifying video processing with search technologies. By leveraging a cloud-based platform, the transfer of evidence is instantaneous, which accelerates evidence handling and underscores the platform’s operational efficiency. The algorithms require large quantities of video data to train models capable of identifying multiple attributes.
The disruptive potential of this project is apparent, as it can be applied to any sector or scenario that requires analysis of video footage.
Outcomes
- Strong end-user feedback
- Recipient of multiple awards and recognitions for its AI algorithms
- Collaborative research
- Video exploitation using artificial intelligence (AI)
- Novel cloud optimisation techniques
- Resulted in several novel AI technologies
- High potential for the criminal justice market
- 12 papers published
Lessons learned
- Continuous user engagement made software installation easier and improved AI model training
- Client engagements gave the consortium a better understanding of industry needs
- A strong collaboration with regular meetings gave each partner insights to technological improvements not previously considered
- Cloud-based platforms remove the need for physical media