16 April 2018

Surveillance without "bugs"

Scientists were able to observe laboratory animals without installing tags

Anton Bugaichuk, Naked Science

American and German scientists have jointly developed DeepLabCut, an automatic tracking system for laboratory animals. Researchers can now monitor the movements and actions of their subjects without marking them.

Traditionally, video recording provides a wealth of material for scientific work with animals, but it also takes a great deal of time. Reviewing footage is a lengthy procedure that cannot always be accelerated. It is difficult to tell animals apart, so labels have to be attached to them. While a rat is easy to mark with colored spots, small insects pose a problem. In addition, animals rub their tags off.

The problem of recognizing individual animals with markers was solved long ago; thermal signatures are also used and are well suited for large animals in the wild. If an experiment requires tracking individual limbs or other body parts, however, the task becomes much harder. A scientist easily recognizes such details on video but is forced to review the recordings in real time. For automatic recognition of finger movements, labels would have to be placed very close together, and it is difficult to attach them securely, since the animals gnaw them off.

A team of scientists led by Matthias Bethge abandoned labels altogether, opting for neural networks instead. The method is simple: a set of images is annotated manually with the points to be tracked, and the program then follows those points. The trained neural network estimates, for each pixel of the image, the probability that the corresponding body part appears there, taking the animal's position in space into account. Only a small number of images need to be labeled by hand: the system works reliably from about a hundred, and the researchers recommend labeling 200 frames to be safe.
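The core idea can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: the network outputs a probability map over the image for each body part, and the part's location is read off as the pixel with the highest score.

```python
import numpy as np

def extract_keypoint(scoremap):
    """Given a 2-D probability map for one body part, return the
    (row, col) of the most likely pixel and its score."""
    idx = np.unravel_index(np.argmax(scoremap), scoremap.shape)
    return idx, scoremap[idx]

# Toy score map: a Gaussian bump centered where the "snout" is.
h, w = 32, 32
yy, xx = np.mgrid[0:h, 0:w]
scoremap = np.exp(-((yy - 10) ** 2 + (xx - 20) ** 2) / 8.0)

loc, score = extract_keypoint(scoremap)
print(loc)  # (10, 20)
```

In the real system one such map is predicted per labeled body part, so tracking many points is just a matter of repeating this readout per map and per frame.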

The scientists used a convolutional neural network (CNN), whose structure resembles that of the human visual cortex, which makes this architecture well suited to pattern recognition. They additionally applied deep learning with deconvolutional layers: the parameters and filters produced during CNN training are reused for the initial processing of the signal, which improves object recognition.

Experts conducted two experiments on mice and one on fruit flies.

The first experiment studied a mouse running on a paper spool while following a "painted" odor trail. The video recording was deliberately complicated by interference: uneven lighting, dynamic shadows cast by the animal, and distortion from a wide-angle lens. While running, the mouse frequently crossed the trail and turned.

Seven mice were used in the experiment. Filming was done with two cameras, 640 × 480 and 1700 × 1200 pixels, at 30 Hz. The high-resolution frames were too large to process, so they were cropped to 800 × 800 pixels around the image of the mouse. The scientists took 1,080 random frames from different recordings and placed marks on the snout, the tips of the ears, and the base of the tail.

In the video, green and blue dots show 30 future and past positions of the snout at 33.3-millisecond intervals. Purple diamonds indicate past locations of the body and of the snout and ears. Together, these four points determine the orientation of the mouse's body and head. The odor trail is drawn in gray.

Monitoring mouse movement along a scent trail

The second study tracked the movements of a mouse's forepaw. The animals had previously been trained to pull a special lever for a reward. Using tags in this kind of observation is nearly impossible.

Five mice were used in the experiment. Filming was done with a camera at 2048 × 1088 pixels and 100–320 frames per second. The researchers labeled 159 frames, with four marks on each digit: at the tip, at the interphalangeal and metacarpophalangeal joints, and at the base of the wrist. The image was cropped to the area containing the movement of interest. The video clearly demonstrates the capabilities of the method.

Monitoring the movement of the mouse's front paw when pulling the lever

The third experiment monitored the behavior of fruit flies during egg laying. Here, marking is extremely difficult because of the flies' size. Twelve dots were marked on the frames: four on the head, seven on the body, and one on the ovipositor. The method performed well in this case too.

Each experiment required about half a million neural-network training steps, which took 24 to 36 hours on an NVIDIA GTX 1080 Ti graphics card.

The scientists have made the software freely available.

Portal "Eternal youth" http://vechnayamolodost.ru

