UK Company Brings AI to Worldwide Conservation Efforts

Aug. 14, 2023
Nonprofit detects threats to endangered species in real time using images and software.

Conservation AI (Liverpool, UK) has added AI and deep learning to its arsenal in the fight to save endangered species and preserve the planet’s biodiversity.

The nonprofit, founded in 2020, seeks to help protect endangered species across the world by giving conservationists a means to parse massive amounts of data quickly, track animals, and respond in real time to mitigate potential threats.

Conservation AI’s platform can analyze footage, identify species of interest, and alert the appropriate entities, via email, in seconds. It can also quickly identify, model, and analyze environmental trends, drawing on a massive database of images and data that could otherwise take years to work through.

Inefficient Manual Process

Typically, animal studies are conducted as abundance counts using camera-trap images: multiple cameras are set out in the field, generating thousands of images, notes Carl Chalmers, a founder of Conservation AI. Once the images are captured and gathered, one or more people go through them manually, removing blank frames and performing species-level classification. After identifying and counting the animals being monitored, they analyze the data and try to distribute the results as quickly as possible to people and organizations that can take decisive action.

The problem, however, is that this approach is not only labor intensive but can take significant time, even years, to produce useful results, Chalmers says. Worse, because the process takes so long, by the time the appropriate entities receive the information it is often too late to take meaningful action.

So, for example, if a camera captures images that could indicate poaching activity, but dissemination of that information to the appropriate resources is delayed, even by only a few days, chances are the poachers will be long gone by the time anyone can respond.

Another challenge is that many of these creatures inhabit very remote, difficult-to-reach places that lack modern communication infrastructure.

Conservation AI works with more than 200 partners around the world, who provide real-time and/or historical data in the form of images and video. Conservation AI designs and manages an AI-assisted system used primarily to identify endangered species and to develop useful insights in as close to real time as possible.

How It Works

Chalmers says he and his team came up with the idea of training convolutional neural networks to look through thousands of images and then put the results into a database that is sortable and filterable. They do that by training a number of deep learning models using the quick draw CNN architecture, a convolutional neural network designed to recognize and identify objects from something as simple, obscure, or crude as a line drawing, which helps it remain accurate despite issues such as occlusion and poor image quality.
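The sortable, filterable database Chalmers describes can be sketched with SQLite. This is a minimal illustration only; the table name, columns, and sample rows are assumptions, not Conservation AI’s actual schema:

```python
import sqlite3

# Hypothetical schema: one row per classified camera-trap image.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE detections (
        image_id   TEXT,
        species    TEXT,
        confidence REAL,
        site       TEXT
    )
""")
conn.executemany(
    "INSERT INTO detections VALUES (?, ?, ?, ?)",
    [
        ("img_001.jpg", "lion",     0.94, "safari_park"),
        ("img_002.jpg", "blank",    0.99, "safari_park"),  # empty frame, filtered out below
        ("img_003.jpg", "pangolin", 0.81, "reserve"),
    ],
)

# Sortable, filterable view: non-blank detections above a confidence threshold.
rows = conn.execute(
    "SELECT image_id, species FROM detections "
    "WHERE species != 'blank' AND confidence >= 0.8 "
    "ORDER BY confidence DESC"
).fetchall()
```

A query like this is how removing blank images, which the manual process does by hand, becomes a single filter condition.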

For example, they initially trained models to recognize lions in images taken at a safari park in the U.K. But a photo shot in the U.K. (a tawny animal against a backdrop of short green grass) is far different from an image of the same species shot in its natural habitat in Africa, where the animal may be camouflaged in thick brush or tall grass and further obscured by complex lighting or bad weather.

“So, what we do is train the model multiple times, multiple iterations, to get it to a point where the accuracy is really good, which is really difficult with camera trap images,” Chalmers says.

Because each partner has different needs and varying equipment inventories, the system is designed to be agnostic: it will work with any 3G or 4G camera that supports SMTP.
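Because the cameras deliver images over SMTP, the server side of such a pipeline amounts to parsing a MIME email and pulling out its image attachments. A minimal sketch using Python’s standard `email` library, with an entirely made-up sender address, filename, and two-byte stand-in payload:

```python
from email import message_from_bytes
from email.message import EmailMessage

# Simulate what a 3G/4G camera trap might send: an email with a JPEG attached.
msg = EmailMessage()
msg["From"] = "camera-07@example.org"
msg["Subject"] = "trigger"
msg.set_content("Motion detected")
msg.add_attachment(b"\xff\xd8", maintype="image", subtype="jpeg",
                   filename="trap_0001.jpg")

# Server side: parse the raw bytes and collect every image attachment.
parsed = message_from_bytes(msg.as_bytes())
images = [(part.get_filename(), part.get_payload(decode=True))
          for part in parsed.walk()
          if part.get_content_maintype() == "image"]
```

Because any SMTP-capable camera produces a message of this shape, the ingestion code never needs to know which vendor’s hardware sent it.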

Conservation AI trains and runs inference on its deep learning models with NVIDIA RTX 8000, T4, and A100 Tensor Core GPUs, which are designed especially for AI and deep learning research applications. In addition, the team uses the NVIDIA DeepStream software development kit for vision AI applications.

Chalmers says the system works with still images, although it can extract images from video. Real-time transmission of video tends to use up bandwidth and slow the process.

“A lot of people do transmit video; the difference is it extracts all the frames out of the video, so if you’ve got 30 frames a second, it extracts the 30 jpegs out of it and then classifies them,” he says. “Obviously, if you do every frame at 30 fps, then that takes much longer to process as opposed to doing things like frame skipping. If you’re shooting 30 fps, we don’t need to analyze all 30 frames. So, the users onsite can specify how many frames per second they do want to analyze.”
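The frame skipping Chalmers describes reduces to choosing which frame indices to classify. A minimal sketch, assuming integer frame rates (the function name and signature are illustrative, not from Conservation AI’s software):

```python
def frames_to_analyze(total_frames: int, source_fps: int, target_fps: int) -> list:
    """Select which frame indices to classify when skipping frames.

    For example, 30 fps footage analyzed at 2 fps keeps every 15th frame,
    rather than classifying all 30 JPEGs extracted for each second of video.
    """
    step = max(1, source_fps // target_fps)
    return list(range(0, total_frames, step))

# Two seconds of 30 fps video, analyzed at 2 fps: 4 frames instead of 60.
selected = frames_to_analyze(total_frames=60, source_fps=30, target_fps=2)
```

Letting users pick `target_fps` is the per-site knob the quote mentions: the same footage costs 60 classifications at full rate but only 4 here.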

None of the storage or analysis is cloud-based, as the sheer quantity of space needed is cost prohibitive.

Partners either email their data or log into Conservation AI’s database to upload their information. The process is usually very fast: on average, about 30 seconds elapse from the time an image is sent from the camera until it is analyzed and the results are sent back.
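The outbound half of that loop, the alert email the platform sends when it spots a species of interest, can be sketched with the standard library as well. The subject line, recipient, and body fields below are invented for illustration; the article does not describe the actual notification format:

```python
from email.message import EmailMessage

def build_alert(species: str, site: str, confidence: float) -> EmailMessage:
    """Compose a hypothetical detection alert for the appropriate entities."""
    msg = EmailMessage()
    msg["Subject"] = f"[Conservation AI] {species} detected at {site}"
    msg["To"] = "rangers@example.org"
    msg.set_content(
        f"Species: {species}\nSite: {site}\nConfidence: {confidence:.0%}\n"
    )
    return msg

alert = build_alert("rhinoceros", "limpopo", 0.92)
# A real deployment would hand this to smtplib.SMTP(...).send_message(alert).
```

Keeping the alert as a plain email is what lets the roughly 30-second round trip end in any ranger’s existing inbox, with no special client software.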


The system has been successful, Chalmers says. In fact, in the past 18 months, Conservation AI, working with more than 200 partners worldwide, has deployed more than 70 cameras and analyzed more than 2 million images. The Chester Zoo (Chester, UK) has been using the system to study pangolins, among the most elusive and rarest animals on earth and, sadly, a frequent target of poachers for their meat and scales. A wildlife preserve in Limpopo, South Africa, has been using the system to help protect animals such as black and white rhinoceros from poachers.

“At the moment, when you put a camera trap out and it’s fixed, if you don’t see the animal 2 degrees off the camera, you’ve lost it,” Chalmers says. “We’re working on audio models that can sit on the cameras and work out where noises are coming from. So, it hears a twig snap, and it will rotate the camera to where that is and then start doing the detections.”

About the Author

Jim Tatum | Senior Editor

VSD Senior Editor Jim Tatum has more than 25 years’ experience in print and digital journalism, covering business, industry, and economic development issues, regional and local government and regulatory issues, and more. In 2019, he transitioned from newspapers to business media full time, joining VSD in 2023.
