When the cat's away, the mice will train


By Lauren Davis
Wednesday, 13 December, 2017



It’s an almost inevitable part of working in a lab that, at one point or another, you’re going to find yourself training mice — a necessary evil that can take up a substantial amount of time. But what if there was an easier way?

Seeking to move neuroscience research into the fast lane, researchers at Japan’s RIKEN Brain Science Institute have constructed and deployed a high-throughput system to study mouse behaviour and physiology at a much faster rate than that achieved via manual methods. Described in the journal Nature Communications, the system aims to deliver large, standardised datasets, a reduction in the number of experimental animals and time savings through complete automation.

According to the RIKEN scientists, behavioural neuroscience — for example, studying vision or cognition — always entails training animals to do experimental tasks, like pushing a button to indicate a preference or demonstrate a memory. This training can take several months, making it a full-time job for one or more researchers, who must monitor their subjects using head-fixed assays in order to take brain recordings — a tedious and labour-intensive process.

“However, the specificities of these paradigms and their integration with the growing array of state-of-the-art brain physiological recording systems differ greatly among and within laboratories due to the variability introduced by the experimenter’s intervention,” the researchers wrote. “This lack of standardization generates inherent reproducibility issues and eliminates the possibility of large, sharable data sets that could significantly accelerate the pace of scientific discovery and validation.”

These problems have become particularly apparent in mouse studies, which is unfortunate as the mouse contains “the largest methodological toolbox for neural circuit research on behaviour”, according to the researchers. In addition, mice can get stressed from being handled by experimenters, and training and experiments vary from lab to lab.

“It is hard to compare data across labs and even within the same lab, and we waste a lot of person-hours getting comparatively little data,” noted Andrea Benucci, the leader of the RIKEN research group.

So what’s the alternative? According to Benucci and his fellow researchers, the ideal mouse training system would feature the following:

  • Self-head fixation for behavioural training and rapid exploration of several complex behavioural parameters with minimal experimenter intervention.
  • High-throughput automated training.
  • The capability to explore various sources of psychometric data.
  • Flexible integration of multiple physiology recording/stimulation systems.
  • The efficient generation of large, sharable and reproducible datasets to standardise procedures within and across laboratories.

To realise this goal, Benucci collaborated with Japanese laboratory equipment manufacturer O'Hara & Co. The result was an automated experimental platform for mouse behavioural training, featuring full automation, voluntary head fixation and high-throughput capacity.

“The platform is scalable and modular allowing behavioral training based on diverse sensory modalities, and it readily integrates with virtually any physiology setup for neural circuit- and cellular-level analysis,” the study authors wrote. “Moreover, its remote accessibility and web-based design make it ideal for large-scale implementation.”

Automated self-latching. Top-left panel shows a 3D rendering of the latching mechanism. Panels 1–5 show the sequence of steps leading to self-head fixation: (1) The head-plate (black bar on mouse head) is progressively restrained by narrowing rails (grey converging lines). (2) The forward motion of the head-plate mechanically lifts up the first pair of latching pins. (3, 4) The first pair of pins then lowers by gravity, and the continued forward motion of the animal similarly lifts up and down the second pair of latching pins, leading to the final self-head fixation (4). During 3, 4, small tilt and forward movements are allowed that reduce the probability of a 'panic' response due to a sudden head fixation. (5) When the task session ends, a computer-controlled servo motor actuator lifts up both pairs of latching pins and releases the animal. Original technical drawings edited by the study authors with permission from O'Hara & Co and shared under CC BY 4.0.

The researchers demonstrated their platform by training mice in two behavioural tasks: one visual and one auditory. For the visual task, the mice were trained to perform in a forced-choice orientation discrimination task relying on binocular vision. The researchers designed a 2D interactive visual task in which a circular grating placed in the central part of the visual field had a clockwise (c) or counterclockwise (cc) rotation relative to vertical.

For trial initiation, mice had to keep their front paws on a small wheel and refrain from making wheel rotations for 1 s (within ±15°). A stimulus was then shown on a screen for 1 s, during which time possible wheel rotations were ignored by the software (open loop). After this period, mice reported their percept of the stimulus orientation with c/cc rotations of the wheel for corresponding c/cc rotations of the grating stimulus, with the wheel rotation controlling the orientation of the visual stimulus in real time (closed loop). A correct response was a c (or cc) rotation to a cc (or c) rotated stimulus, resulting in a vertically oriented grating, the target orientation. After a correct response, the vertically oriented grating remained on the screen for an additional 1 s to promote the association between the vertical orientation and the reward. Correct responses were rewarded with a small amount of water, while incorrect responses were punished with a 5 s time-out stimulus consisting of a flickering square-wave checkerboard with 100% contrast. If there was no rotation crossing a near-vertical threshold of 10° for 10 s after the onset of the closed loop, the visual stimulus disappeared and the next trial started.
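The closed-loop logic described above can be sketched in a few lines of code. The following is a minimal illustration, not the study's actual control software: the function and parameter names are invented here, and the rule for scoring an incorrect movement (driving the grating further from vertical by the same 10° margin past its starting angle) is an assumption, since the article only states that incorrect responses were punished.

```python
def run_closed_loop(stimulus_deg, wheel_samples, vertical_threshold_deg=10.0):
    """Step through wheel-encoder samples and classify one trial.

    stimulus_deg:  initial grating orientation relative to vertical
                   (+ = clockwise, - = counterclockwise), e.g. +45 or -45.
    wheel_samples: cumulative wheel rotations (deg) during the closed loop;
                   each sample rotates the grating in real time.
    Returns 'correct', 'incorrect' or 'no_response'.
    """
    for rotation in wheel_samples:
        orientation = stimulus_deg + rotation  # wheel drives the grating
        if abs(orientation) <= vertical_threshold_deg:
            # Grating brought to (near) vertical -> water reward
            return 'correct'
        if abs(orientation) >= abs(stimulus_deg) + vertical_threshold_deg:
            # Assumed error rule: moved further from vertical -> 5 s
            # flickering-checkerboard time-out
            return 'incorrect'
    # No threshold crossing within 10 s -> stimulus off, next trial
    return 'no_response'
```

For a +45° (clockwise) stimulus, a counterclockwise wheel movement of −40° or more brings the grating within the near-vertical threshold and is scored as correct.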

Mice performed three 20-minute sessions per day, discriminating orientations as small as 15° from vertical. Eight out of 12 mice that entered the pre-training phase learned the task. Learning the initial 45°/−45° orientation discrimination took ~4 weeks, while it took ~8 weeks to reach 75% accuracy at the smallest discrimination angle (±15°).

The second step saw the researchers train a group of mice in an auditory go-no-go task. They placed a speaker for auditory stimulation in front of the animal and enclosed the set-up in a sound isolation box to reduce ambient noise. Mice had to detect the occurrence of an 80 dB, 10 kHz pure tone played five times, which was presented in 70% of the trials (go stimulus). In the remaining 30% of the trials, the mouse was exposed to an unmodulated ~50 dB background noise (no-go stimulus). Mice had a 2 s window from the end of the go-stimulus to report the tone detection by rotating a small wheel at least 70° in either direction.

In hit trials (go responses to go stimuli), mice were rewarded with water in between trials. In miss trials (no-go responses to go stimuli), mice received neither reward nor punishment. Similarly, in correct rejection trials (no-go responses to no-go stimuli), mice were not rewarded or punished. In false alarm trials (go responses to no-go stimuli), mice were punished with additional waiting time and shown a square-wave checkerboard with 100% contrast. Mice performed two sessions per day and learned the task over 12.5 ± 3.5 days.
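The four trial outcomes and their consequences map onto a small classification function. This is an illustrative sketch only — the function name and return format are invented here — with the 70° response threshold and the consequences taken from the task description above.

```python
def classify_gonogo(stimulus, wheel_rotation_deg, threshold_deg=70.0):
    """Classify one go/no-go trial from the mouse's wheel response.

    stimulus: 'go' (the 80 dB, 10 kHz pure tone, 70% of trials) or
              'nogo' (the ~50 dB unmodulated background noise, 30% of trials).
    wheel_rotation_deg: largest wheel rotation in either direction within the
              2 s response window; >= 70 deg counts as a 'go' response.
    Returns (outcome, consequence).
    """
    responded = abs(wheel_rotation_deg) >= threshold_deg
    if stimulus == 'go':
        return ('hit', 'water reward') if responded else ('miss', 'no consequence')
    if responded:
        return ('false alarm', 'extra waiting time + checkerboard')
    return ('correct rejection', 'no consequence')
```

Note that only two of the four outcomes carry a consequence: hits are rewarded and false alarms punished, while misses and correct rejections pass without feedback.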

Finally, the scientists showed that they could image the mice once they had learned their tasks. Before commencing training, mice had been imaged using standard methods for retinotopic mapping to identify V1 and higher visual areas. Afterwards, a latching unit for physiology was connected to the mice's home cage, with the platform placed under a two-photon microscope. In typical two-photon imaging experiments, the researchers recorded from a volume of 850 × 850 × 3 μm³ of L2/3 neurons in the primary visual cortex. Using a common analysis for cell segmentation, they could identify ~200 neurons per volume. Using vascularisation landmarks, they could image the same cells over days or weeks, and segregated their responses as a function of the animal's choices or stimulus orientations.
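Segregating each neuron's responses by the animal's choice or by stimulus orientation amounts to averaging a trials-by-neurons response table within each condition. Here is a minimal sketch in plain Python — an illustration of the idea, not the study's actual analysis code:

```python
def mean_response_by_condition(responses, labels):
    """Average each neuron's response within each trial condition.

    responses: list of per-trial response vectors (one value per neuron),
               e.g. fluorescence changes from segmented two-photon ROIs.
    labels:    one condition label per trial, e.g. the grating orientation
               ('c'/'cc') or the animal's choice on that trial.
    Returns {condition: mean response per neuron across that condition's trials}.
    """
    out = {}
    for cond in set(labels):
        trials = [r for r, lab in zip(responses, labels) if lab == cond]
        n_neurons = len(trials[0])
        out[cond] = [sum(t[i] for t in trials) / len(trials)
                     for i in range(n_neurons)]
    return out
```

Because the same cells can be re-identified across sessions via the vascular landmarks, the per-condition averages for a given neuron can then be compared over days or weeks of training.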

“As a corollary of this cellular-level resolution, our semi-automated procedure can then be easily combined with a large variety of other imaging, optogenetic, and electrophysiology systems requiring a similar degree of stability of the neural target of interest,” the researchers wrote. “In summary, the training setup combined with the latching unit for physiology is a convenient compromise for the relatively effortless integration of automated behavioral training with a large diversity of physiology systems.”

The study authors thus demonstrated that the mice learned to engage in the behavioural training tasks at will, without any human intervention. A single system was able to operate around the clock, training four or more mice per day. And with multiple set-ups and mouse cages stacked in what resembles a row of server racks, the system has already been used to train 100 mice.

“Previously, training just one mouse took about 15 hours of a researcher’s time,” Benucci said. “Now, with 12 set-ups we are down to less than one-and-a-half hours.”

Crucially, the mice learned to self-stabilise their heads, which is key for collecting high-fidelity physiology data and gives the system a great deal of experimental versatility. Furthermore, because the mice learned to self-direct and become familiar with the modular system, the experimental possibilities are said to extend beyond studying mouse behaviour to real-time brain imaging and physiology.

“Normally we see a decline in mouse performance or other incompatibilities when moving from highly trained behaviours to different types of experiments for brain recordings, but that doesn’t happen with our system,” said Benucci.

With the neuroscience platform having already been patented by RIKEN, Benucci now hopes it will be widely adopted nationally and internationally.

“Standard hardware and training protocols across labs that do not require the experimenter’s intervention can go a long way to addressing data reproducibility in science, and in neuroscience in particular there is a pressing need for large, shareable datasets to validate findings and push the field forward,” he said.

