LabNet hardware control software for the Raspberry Pi
Abstract
Single-board computers such as the Raspberry Pi make it easy to control hardware setups for laboratory experiments. GPIOs and expansion boards (HATs) give access to a whole range of sensor and control hardware. However, controlling such hardware can be challenging when many experimental setups run in parallel and timing is critical. LabNet is an optimized C++ control-layer software that gives access to hardware connected to the Raspberry Pi over a simple network protocol. LabNet was developed to be suitable for time-critical operations and to be simple to expand. It leverages the actor model to simplify multithreaded programming and to increase modularity. The message protocol is implemented in Protobuf and offers high performance, small message size, and support for a large number of programming languages on the client side. LabNet shows good performance compared to locally executed tools like Bpod, pyControl, or Autopilot and reaches sub-millisecond network communication latencies. It can monitor and react simultaneously to up to 14 pairs of digital inputs without increasing latencies. LabNet itself does not provide support for the design of experimental tasks; this is left to the client. LabNet can be used for general automation in experimental laboratories, with the control PC located at some distance. LabNet is open source and under continuing development.
Editor's evaluation
LabNet is an exciting new platform for experimental control using Raspberry Pis. As experiments in neuroscience become more complex, new validated tools are needed that continue to give users flexibility, precision, and speed, and LabNet is such a tool. Through extensive benchmarking and documentation of their tool, the authors demonstrate excellent performance and scalability and provide examples of how their platform can be adopted.
https://doi.org/10.7554/eLife.77973.sa0

Introduction
The combination of open-source software, low-cost microcontroller electronics, and easy access to digital fabrication has led to a plethora of open-source solutions for animal behaviour experimental systems (Open Behaviour [Laubach et al., 2021], Bpod [Sanders, 2021], Autopilot [Saunders and Wehr, 2019], pyControl [Akam et al., 2022], MiniScope [Cai et al., 2016], Bonsai [Lopes et al., 2015], Whisker [Cardinal and Aitken, 2010], OpenEphys GUI [Siegle et al., 2017]). Using our 10-year experience with the Raspberry Pi for animal behaviour experimental control, and after two decades with different self-developed embedded control approaches, we have developed in C++ a new, powerful, and highly versatile platform for hardware control via the Raspberry Pi.
We had two major goals: the platform had to be suitable for time-critical operations and be easy to extend. Furthermore, LabNet had to support a wide variety of hardware components, and we wanted to control multiple operant boxes for animal behaviour experiments simultaneously. When conducting automated behavioural experiments, it is advantageous to test many animals in parallel with identical or, if necessary, individually specific experiments. This is the only way to obtain complete data sets quickly and can only be achieved through automation. Figure 1 shows examples of operant conditioning cages (Skinner boxes; Skinner, 1938) as controlled by LabNet. Our intention was not to create a completely new ecosystem like Autopilot. We wanted to simplify communication with hardware for projects using their coding language of choice on the PC. Also, we wanted to remain general so that LabNet can become a general platform for experimental laboratory automation.

Examples of behavioural setups controlled by LabNet.
A Skinner box (left) contains (a) a feeder magazine that typically has a photo gate for nose-poke detection and a reward pellet dispenser. It also has (b) a row of LEDs and (c) a tone generator. (d) A monitor displays visual stimuli and may have a touch sensor for touchscreen functionality. The T-Maze (right) also has (a) a food magazine and (b) LEDs, and furthermore (e) an optical sensor to detect the return of the mouse to the start position and (f) two motorized doors that can be lowered to restrict access to the arms. Images and diagrams generated with TikZ.
We selected the Raspberry Pi because it is low cost and powerful and has a wide selection of I/O add-ons and software components. To keep signal lines short, we gave each experimental setup its own Raspberry Pi. All systems are connected to the Ethernet network and are manageable via a central instance. This instance can be a normal PC and can be located outside of the laboratory. Thus, the state of the experiments and the condition of the animals can be monitored at any time, even without entering the laboratory.
Autopilot also uses a swarm of Raspberry Pis. However, it implements a hierarchy where each Raspberry Pi can take a different role. This requires an additional configuration step and can complicate troubleshooting. We wanted to avoid this as well. This is the reason why each of our Raspberry Pis runs the same software and overall experimental control is executed by the central instance that is, for example, run on a PC. This separation also determined the network architecture of the entire system. The local instances on the Raspberry Pis are servers, and the central instance is the client. There can be one client to control experiments on all systems or multiple clients, each controlling an experiment on one or more systems. However, at least so far, multiple clients cannot connect to one server.
Results
System overview
We designed LabNet as a distributed system (network) where each LabNet instance presents a node running on a RasPi. We had two important requirements for this system: openness and scalability (see van Steen and Tanenbaum, 2017). Openness means that each node can control an experimental chamber on its own or together with a number of other nodes (for experimental system examples, see Figures 1 and 4). Scalability means that there can be any number of nodes, and thus experimental chambers, in the system. However, a node or a chamber has to be removable from the system without adjustments to the other nodes. To ensure this, each node in our system is controlled by a RasPi, and each RasPi is configured in the same way and controlled by the same software. This comes with the restriction that at most one experimental system can be connected to each RasPi if it is to be removable without electrical adjustments. But it also means a simplification: LabNet only needs to accept a single connection and does not need resource management for multiple connections, because only one experiment runs on one system and the hardware is not shared.
Thus, the network of LabNet nodes represents the distributed system and offers, as servers, the hardware resources in the network. However, hardware control in the context of the experiments is the responsibility of the clients and not a LabNet duty. For example, LabNet does not decide about an output pin state, but LabNet knows how to switch the state and performs the switch at the client’s request. One client could take control over the entire LabNet distributed system, or the nodes could be divided among several clients. It all depends on the situation and requirements: a large number of identical experimental chambers with identical experimental tasks are usually controlled by one client, while different experimental tasks may better be controlled by separate clients, also to start and stop experiments independently. For communication between LabNet and client, a flexible and fast message protocol using Protobuf was developed (section Message protocol). The clients can be implemented in any language with Protobuf support, for example, Python, C#, C++, etc.
The Raspberry Pi is a single-board computer and runs ‘Raspberry Pi OS’, a Debian-based Linux distribution. This allows large freedom in the choice of programming language and software tools. Both interpreted languages, such as Python, and compiled languages are available. LabNet was required to meet two criteria:
Time-critical: all operations should be performed as quickly as possible.
Flexible: new functionality extensions should be as simple as possible.
Unfortunately, interpreted programming languages have a disadvantage in execution speed compared to compiled languages. Nevertheless, many of the tools developed recently, such as Autopilot and pyControl, use Python. Python is a simple language and provides many packages for all purposes. However, because execution speed was of primary importance, we decided to use C++.
Extension with new functionality is generally possible in two ways: (i) software adaptation with recompilation, in the case of compiled languages, and (ii) a plug-in system. In the current LabNet version, we use recompilation, but our road map also includes a future plug-in system. To simplify modifications, the software must have a suitable architecture and a high degree of modularization.
Since version 2, the Raspberry Pi has four cores. In addition, most of its hardware controllers, such as USB or Ethernet, operate asynchronously: because of DMA (Direct Memory Access) they do not require CPU capacity, and they report their work completion via interrupts. LabNet needed an architecture that optimally leverages this already available hardware asynchrony for parallel execution. Handling GPIO lines is fast, but accessing a UART may lead to considerable delays in sequentially executed software. Exploiting the asynchrony therefore presupposes the use of multiple threads. Since programming with many parallel threads is a very error-prone and time-consuming task, we decided to develop actor-based software (see sections Actor model and SObjectizer). This also provides higher flexibility and software modularity.
Example
The following example and the corresponding listings (1–3) show how a client can initialize and control the hardware together with a LabNet server on a RasPi with a simple hardware setup. The client could run on a PC and use any language that has support for Protobuf, such as Python, C++, or C#. Since we use C# in our experiments, the C# notation is also used in the listings. Basically, the example shows the use of some of the LabNet messages, but the communication via TCP/IP is omitted for simplicity. We simply assume that a TCP/IP client exists and handles all operations like send, receive, and serialization.
Let us assume an experimental setup with an LED and audio as stimuli, a valve to release a liquid reward, and a photo gate as a nose-poke sensor to detect animal behaviour. All these components can be connected directly to the GPIO pins via a simple circuit. The headphone jack can be used for audio output. We then need to send five commands to LabNet to initialize all components, plus one to define the tone stimulus; see Listing 1. It would usually be necessary for the client to wait for the responses from LabNet and check the initialization results. Here, we skip this step.
During experiments, animals must usually perform some operant behaviour. This can be as simple as nose poking to trigger a photo gate after a certain stimulus has been perceived. In Listing 2, LabNet activates an LED and produces a sine tone. In the case of the tone, it is instructed to automatically generate a pulsed output. LabNet detects On and Off transitions of the photo gate and transmits each state change to the client. In response to the photo gate state change, a reward can be provided. In Listing 3, a liquid reward valve is opened for 100 ms.
A typical experiment in combination with LabNet comprises several phases (sketched in code after the list):
establishing a TCP/IP connection;
initializing all hardware components;
turning stimuli on or off in a specific order;
waiting for an animal reaction and potentially providing a reward.
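To make these phases concrete, the sketch below strings the messages from Listings 1–3 together into a minimal client loop. It is an illustration only: the LabNetConnection helper, with its address, port, Send, and Receive members, is a hypothetical stand-in for the TCP/IP and Protobuf plumbing that the listings omit.
// Minimal client sketch in C# notation. LabNetConnection is a
// hypothetical helper wrapping the socket and Protobuf (de)serialization;
// the address and port are placeholders.
var labNet = new LabNetConnection("192.168.1.10", 8080); // 1) connect
labNet.Send(new GpioWiringPiInit()); // 2) initialize hardware
labNet.Send(new GpioWiringPiInitDigitalOut { Pin = 5 }); // LED
labNet.Send(new GpioWiringPiInitDigitalOut { Pin = 26 }); // valve
labNet.Send(new GpioWiringPiInitDigitalIn { Pin = 23, ResistorState = PullUp });
labNet.Send(new DigitalOutSet { // 3) turn a stimulus on
    Id = new PinId { Interface = GpioWiringpi, Pin = 5 },
    State = true
});
var running = true;
while (running) // 4) wait for an animal reaction
{
    var msg = labNet.Receive();
    if (msg is DigitalInState poke && poke.State)
    {
        labNet.Send(new DigitalOutPulse { // ... and provide a reward
            Id = new PinId { Interface = GpioWiringpi, Pin = 26 },
            HighDuration = 100, // ms
            Pulses = 1
        });
    }
}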
// start GPIO interface with WiringPi
var initIo = new GpioWiringPiInit();
// init a digital output on pin 5
var led = new GpioWiringPiInitDigitalOut {
Pin = 5,
IsInverted = false
};
// init a digital output on pin 26
var valve = new GpioWiringPiInitDigitalOut {
Pin = 26,
IsInverted = false
};
// init a digital input on pin 23
var poke = new GpioWiringPiInitDigitalIn {
Pin = 23,
IsInverted = false,
ResistorState = PullUp
};
// start sound interface
var initSound = new InitSound();
// create a sine tone
var sine = new DefineSineTone {
Id = 1,
Frequenz = 1000,
Volume = 0.5
};
Listing 1
Each generated object represents an initialization message to be serialized with Protobuf and transmitted to LabNet. The first message initializes the digital I/O interface with WiringPi pin notation. The next three initialize an LED, a valve, and a poke sensor on the WiringPi interface. The fifth creates the sound generator on the headphone jack. The last initializes a sine tone with 1 kHz frequency and 50% volume. Object initialization in C# notation. Serialization and TCP/IP communication not listed.
// change the state of the pin 5 to true
var setLed = new DigitalOutSet {
Id = new PinId { Interface = GpioWiringpi, Pin = 5 },
State = true
};
// turn the sine tone in pulses of 500ms on and off
var pulseSound = new DigitalOutPulse {
Id = new PinId { Interface = Sound, Pin = 1 },
HighDuration = 500, // ms
LowDuration = 500, // ms
Pulses = 10
};
Listing 2
Information for transmission via Protobuf, building on the initialization from Listing 1. The LED is set to the ON state (until an OFF command). A sound, defined as 1 kHz in Listing 1, is emitted as 10 pulses of 500 ms. Object initialization code in C# notation.
DigitalInState pokeState; // new poke state from LabNet
if (pokeState.State) // if new state true -> on
{
// give a reward
var reward = new DigitalOutPulse {
Id = new PinId { Interface = GpioWiringpi, Pin = 26 },
HighDuration = 100, // ms
Pulses = 1
};
}
Listing 3
In this example, LabNet has transmitted via Protobuf a new poke sensor state (pokeState) to the PC. If the new poke state is true, a new message directs LabNet to deliver a reward by opening the valve at pin 26 for 100 ms, as initialized in Listing 1. Code shown in C# notation.
Performance evaluation
Because neurons in the brain work in the millisecond range, response times in behavioural experiments are critical and should match that range.
LabNet was subjected to three tests to determine the latency times when executing different commands. A Raspberry Pi was connected to a PC via a router. For all tests, the client ran on a Linux PC (Ubuntu 20.04, Intel Core i7-6700 3.4 GHz with 16 GB RAM). To allow a comparison, the client was implemented in three languages: Python, C#, and C++. We used Python version 3.8. Python tests ran directly on top of the socket, synchronously and with no additional software layers. In Python, it is also essential to deactivate Nagle’s algorithm. The C# version was implemented under .NET 6 and used Akka.NET, an actor framework. For C++, we used GCC 9.4.0, Boost version 1.75, and SObjectizer 5.7.2. Thus, all tests in C# and C++ were implemented as actors inside an actor framework. The source code of all tests is included in the GitHub repository under ‘examples’.
As always, benchmarks must be interpreted with a certain degree of caution. For example, it is generally not possible to create the same initial situation for the implementations in all languages. With C++, an external library like Boost must be used for communication via TCP/IP. In addition, we used SObjectizer for C++ and Akka.NET for C# for asynchronous message processing. This theoretically gives the implementation in Python a slight advantage, as it runs synchronously and also has no complex calculations. Performance problems usually occur in Python code whenever true parallel execution is required (because of the Global Interpreter Lock) or when the calculations cannot be outsourced to a library implemented in C. Nevertheless, all three implementations provide a reasonable expectation of the latency in real cases. This is especially true for C# and C++, since they run asynchronously and thus, in a first approximation, simulate the execution of several experimental tasks with animals running in parallel.
LabNet was also compiled with different optimizations to investigate their performance effects on the different Raspberry Pi boards. GCC 8.3 was used for all versions. The first version had only the default release optimizations from CMake and could run on all RasPi boards. The optimized versions used the following flags:
-mcpu=cortex-a7 for Pi 2;
-mcpu=cortex-a53 for Pi 3;
-mcpu=cortex-a72 for Pi 4;
-mfpu=neon-vfpv4 -mfloat-abi=hard (floating-point optimizations) for all versions.
Each test was run 10,000 times and was performed on three different Raspberry Pi boards: RasPi 2B, 3B+, and 4B with 1 GB RAM. Statistical measures (mean, STD, median, and percentiles) were calculated from the time measurements.
Set digital out test
In the ‘set out test’, a digital output is alternately set to 0 and 1. After the command for setting the pin has been received and processed, LabNet automatically sends back an acknowledgement. In this test, the time between sending the set command and receiving this confirmation was measured. Because of its simplicity, this test can also be seen as a type of ping measurement.
The set command from the client has 10 bytes. The server response is 22 bytes long, and includes the execution timestamp.
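For illustration, the core of such a measurement on the client side can be reduced to a few lines. This is a sketch only; the hypothetical LabNetConnection helper again hides the socket and Protobuf handling, and the actual test sources are in the repository under ‘examples’.
// Hedged C# sketch of the ‘set out test’ timing loop.
using System.Collections.Generic;
using System.Diagnostics;

var labNet = new LabNetConnection("192.168.1.10", 8080); // hypothetical helper
var latencies = new List<double>();
var state = false;

for (var i = 0; i < 10000; i++)
{
    state = !state; // alternate the output between 0 and 1
    var watch = Stopwatch.StartNew();
    labNet.Send(new DigitalOutSet {
        Id = new PinId { Interface = GpioWiringpi, Pin = 5 },
        State = state
    });
    labNet.Receive(); // block until the acknowledgement arrives
    latencies.Add(watch.Elapsed.TotalMilliseconds); // round-trip time
}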
The results are shown in Figure 2a. The median for C# was between 0.36 ms on RasPi 4 and 1.01 ms on RasPi 2; for Python, 0.32 ms on RasPi 4 and 0.80 ms on RasPi 2; and for C++, 0.26 ms on RasPi 4 and 0.80 ms on RasPi 2.

Results from LabNet performance tests.
(a) Time to set a digital output, as a ping equivalent. (b, d) Latency to set a digital output in response to a change on a digital input. (c) The ‘read and set’ test run for up to 14 IO-pin pairs in parallel; LabNet ran on RasPi 4. Tests were repeated 10,000 times and results are in milliseconds. Tests were performed on three different Raspberry Pi boards: Rv2 is 2B, Rv3 is 3B+, and Rv4 is 4B with 1 GB RAM. ‘optimized’ refers to the LabNet build with additional optimization flags (see main text). ‘1 kHz’ refers to the version without optimizations running on Rv4 with 1 kHz polling, and ‘max’ refers to non-stop polling. Box plots in (a–c) show median, lower and upper quartile, and whiskers the 2.5th and 97.5th percentiles. Data in (d) given as means and STD.
Read and set GPIO
In this test, the reaction time to external events was measured. LabNet first had to detect the interruption of a photo gate by an animal’s nose and send a message to the client; in response, the client had to initiate the change of a digital output state through a message to LabNet. To simulate the nose-poke events, a second RasPi was used. Two pins between the two RasPis were connected: one for the test signal from the second RasPi and one for the response from LabNet. The second RasPi was only responsible for switching the first pin to 1, measuring the time, and waiting until the RasPi with LabNet had also switched the second pin to 1 in response. The time between these two high events is the latency (see Figure 3b). The measurement software on the second RasPi was written in C++ and ran on a RasPi 3B+. We also verified how fast this software can detect the response signal. To do this, we simply connected the test and response pins on the second RasPi together. This way, the response pin goes high immediately when the test pin is set. The latency in this case was only 0.7 ± 0.6 µs. This means that the second RasPi acts like a 1 MHz oscilloscope, which is entirely sufficient in our case.

Latencies comparison and measurement.
(a) Comparison of execution latencies. All tools performed the same ‘read and set’ task to achieve comparability, except Whisker. The Whisker server implements a 1 kHz polling frequency on the PC. LabNet’s digital input detection depends on its internal 4 kHz polling frequency on the RasPi. Only for LabNet do the latencies include the message transfer over the Ethernet wire. Values give means with STD. LabNet was operated with a C# client. LabNet and Autopilot use the RasPi 4. (b) The latency measurement in the ‘read and set GPIO’ test. The measurement RasPi generates the high ‘test’ signal, saves the time, and waits until the ‘response’ signal is also high. The time t between these two high events is the latency. The measurement RasPi repeats this 10,000 times and saves the results in a CSV file. The RasPi acts here as a 1 MHz oscilloscope. All packages (Autopilot, Bpod, pyControl, and LabNet) were tested in the same way. In the stress test, there are multiple ‘test’ and ‘response’ lines.
The digital input state message from LabNet is 22 bytes long, and includes the timestamp. The set command from the client has 10 bytes. LabNet has to send two messages to indicate the input state. The client has also to send two messages to switch the output state. Additionally, LabNet sends two messages to acknowledge the output state switch. Thus, there is a total of six messages per iteration.
The results are summarized in Figure 2b and d. For C#, the median was 0.89 ms on RasPi 4 and 1.23 ms on RasPi 2, and mean values were 0.90 ± 0.19 ms on RasPi 4 and 1.24 ± 0.18 ms on RasPi 2. For Python, the median was 0.89 ms on RasPi 4 and 1.28 ms on RasPi 2, and mean values were 0.88 ± 0.18 ms on RasPi 4 and 1.24 ± 0.16 ms on RasPi 2. For C++, the median was 0.93 ms on RasPi 4 and 1.12 ms on RasPi 2, and mean values were 0.92 ± 0.15 ms on RasPi 4 and 1.12 ± 0.07 ms on RasPi 2.
LabNet uses a polling mechanism to detect changes in digital inputs. By default, LabNet polls at 4 kHz, but we also evaluated latencies for 1 kHz and non-stop polling on the RasPi 4. For C#, mean values were 1.23 ± 0.16 ms at 1 kHz and 0.75 ± 0.13 ms with non-stop polling. At 4 kHz, with 0.90 ± 0.19 ms, the results are slightly worse than with non-stop polling, but on the other hand, only 10% of the capacity of one CPU core is utilized.
As a further result, the compiler optimization flags did not influence any of the tests. This indicates that LabNet has no performance issues on the RasPi.
The 1 GigE update of the Raspberry Pi 4 brings a performance increase over models 2 and 3. Despite the differences in implementation, all clients are relatively close in their performance. The results also show that the language used on the client side is not important, at least for the simple cases considered here.
Stress test
This is an extension of the ‘read and set’ test. But now 14 pairs of pins were connected; all 28 GPIOs on both RasPis were used. The C++ program on the second RasPi ran up to 14 tests in parallel, each in its own thread. The pause between single measurement runs was set to 1 ms. We needed this pause to give all threads a chance to be executed. The C++ measurement program ran on the RasPi 3B+ as before, and LabNet on the RasPi 4.
The results in Figure 2c show that LabNet can monitor and control up to 14 pairs of IO-pins in parallel without any loss in performance. Interestingly, the latencies went down slightly for two test signals, but then remained at this level; the median for C# was 0.89 ms for one signal and 0.70 ms for two. This probably has to do with the polling in LabNet and the parallel test execution on the second RasPi. With two signals, it is more likely that LabNet will notice a pin state change within a shorter delay.
Additionally, we looked at how many latency measurements per second the second RasPi could execute. With a single test signal, this was just over 400 events per second. With the maximum of 14 tests, each single pin switched only 200 times per second, but all pins together switched a total of 2800 times per second. The drop from 400 events per second for a single IO-pin occurs as soon as more than four IO-pin pairs are handled. This is a consequence of the four CPU cores of the Raspberry Pi: as soon as the test signal has been set, the software continuously monitors the state of the response pin, which keeps one CPU core fully busy and prevents the execution of the other test threads. However, this performance evaluation also shows that LabNet has no problems processing several thousand messages per second in each direction.
Comparison
Our comparison of LabNet latency performance with other software tools is summarized in Figure 3a. We implemented an adapted version of the ‘read and set’ test for Autopilot, Bpod, and pyControl to achieve measurement comparability. The latency measurements were performed in exactly the same way as previously with LabNet. Two pins were connected to the measurement RasPi. The same C++ measurement program ran again on a RasPi 3B+, set the test pin to 1, and waited until the response pin was also set to 1. Tests were repeated 10,000 times. Unlike LabNet, all tools ran locally and did not send commands over the network. For source code and data, see the Code availability section.
In Whisker, the communication occurs over the network; however, both Whisker and all task clients usually run on the same PC. Such communication is extremely fast and is reported by the Whisker authors (Cardinal and Aitken, 2010) to require only 0.066 ms. The 1 ms latency comes from Whisker’s internal 1 kHz polling frequency for processing incoming commands. For Whisker, we could not perform the ‘read and set’ test ourselves; therefore, 1 ms is used as the reference value.
Autopilot runs in a Raspberry Pi swarm; however, the tasks ran locally. To perform the ‘read and set’ test, the ‘free water task’ from the Autopilot GitHub repository was adapted. The task waits for a digital input event, activates a digital output for a short time, and repeats. The measured mean latency was 0.93 ± 0.08 ms on the RasPi 4.
The pyControl state machine is also very simple. It has only two states, which simply monitor the digital input and turn the digital output on and off. The mean latency was 0.66 ± 0.11 ms. This is comparable with the reported result of 0.56 ± 0.02 ms from Akam et al., 2022. The MicroPython pyboard version used was 1.1.
Since the Bpod state machine runs at 10 kHz, we expected it to perform best, which was the case. The mean latency was 0.1 ± 0.002 ms. We tested version r2 of the Bpod State Machine.
According to these measurements, LabNet achieves latency times comparable to locally executed applications, even though the client controls LabNet over the network via TCP/IP.
Discussion
With LabNet, we present an optimized C++ control-layer software to control Raspberry Pi-connected hardware over a simple network protocol. LabNet can be used for general automation in experimental laboratories, with the controlling PC located at some distance. The version of LabNet presented here is not our first solution for distributed experimental hardware control. After initially using PC digital IO boards for 760 parallel IO lines (Winter and Stich, 2005), we moved to a custom-developed microcontroller board connected to the PC, initially via UART and later via Ethernet. In 2015, we switched to the Raspberry Pi, avoiding in-house hardware development. We used a prior version of LabNet for 5 years before rebuilding the current highly optimized version from the ground up over the past 2 years, using our prior experience with laboratory experimental control. In the following, we present some of the experimental systems that included LabNet control.
For our experiments with nectar-feeding bats, we controlled a system of up to 76 artificial sugar-water feeders (artificial flowers), each of which included a nose-poke sensor, an LED, a motorized swivel arm to close the flower, and two valves for reward (Winter and Stich, 2005). Later, the flowers were extended with RFID readers for the individual identification of ID-chipped bats and have been used with freely ranging bats both in the rainforest (Nachev et al., 2017) and in the laboratory (Wintergerst et al., 2021). These systems had up to 23 flowers, and each flower was accessible to all bats. While in the earlier systems we used a UART-to-Ethernet converter from Perle Systems to receive RFID data, we now use a custom 32-channel UART HAT for the Raspberry Pi with LabNet. Also, the stepper-motor nectar pumps and the rest of the hardware (nose-poke sensor, LED, valves, etc.) are now connected to the RasPi and controlled via LabNet. As to the network, in the case of individually kept animals, each flower had its own RasPi, while in flower fields, several flowers shared one RasPi, depending on the distance between experimental units.
We also perform behavioural experiments with rodents. In a study on rational choice by mice, ID-chipped mice in a group home cage could choose between four water dispensers built very similarly to our artificial bat flowers, with a nose-poke sensor, a valve, an RFID reader, and a syringe pump to deliver the water. Here, all hardware was connected to a RasPi and controlled with LabNet. We have also used LabNet in connection with commercially available experimental chambers. An example is the touchscreen system for rats from Campden Instruments that we extended using LabNet, which controlled a gating system (ID sorter) to automatically perform experiments with group-housed rats (Marion et al., 2017) (see Figure 4 and more below). The program for the sorting procedure on the PC started the experiment in the touchscreen chamber, via a TCP/IP protocol implemented in collaboration with Campden Instruments, every time a new animal was sorted in. This allowed us to conduct the experiments with multiple animals automatically and unsupervised.

A complete behavioural setup controlled by LabNet.
On the left is the touchscreen chamber. (a) Feeder magazine with a nose-poke sensor and a pellet dispenser, (b) a row of LEDs, (c) a tone generator, (d) a monitor displaying visual stimuli with an IR touch frame sensor. In the middle is the sorting module. (r1–r3) Three RFID readers: r2 and r3 are positioned so that the animals can be read anywhere inside the sorter, and r1 so that the animal is read when it leaves the sorter. (d1, d2) Two doors to catch the animal inside the sorter and guide it to the experimental chamber or the home cage. The animals live inside the home cage and can participate voluntarily and unsupervised in the experiments in the touchscreen chamber via the sorter. Both the touchscreen system and the sorter are equipped with their own RasPi and connected to the PC via Ethernet. Because the sorter is controlled by its own RasPi and is therefore completely independent from the touchscreen system, it can be removed without electrical adjustments; experiments can then be conducted with manually introduced animals.
More recently, we have implemented an experimental touchscreen system for group-housed mice that is fully under RasPi and LabNet control (Figure 4). It consists of a touchscreen system, a sorter, and a home cage. The touchscreen system has a monitor with an IR touch frame, a pellet magazine with a nose-poke sensor, a row of LEDs, and a tone generator. All components are connected to a RasPi under LabNet control. Listings 1–3 show how this hardware can be initialized and controlled from the PC. The sorter has three RFID readers, two motorized doors, and two hall effect sensors, all connected to a RasPi. The readers connect via UART-USB converters, the motors are controlled via UART, and the hall effect sensors for the door states connect to IO ports. The sorting procedure, that is, when which door goes up or down, is realized by the PC, the client. The animals live inside the home cage and participate voluntarily and unsupervised in the experiments in the touchscreen chamber. Figure 4 shows only one system, but we had up to four systems connected and controlled by one PC. This also shows the advantage of a distributed system like LabNet: to control more systems, they were simply connected to the network without further adjustments. With identical systems, the same experiments could run everywhere at the same time.
LabNet is a very versatile distributed system that allows hardware control in laboratory and field experiments. It achieves almost real-time hardware control despite the network communication. Our stress test measurements have shown that LabNet can handle thousands of Ethernet messages per second. Indeed, the bottleneck here is the client and its ability to process and react to LabNet messages. However, none of our systems has reached the number of messages per second of the stress test, and we never had performance issues. The only problem could be very large messages in the network communication, for example, video data. These could significantly worsen the latency of other messages. However, RasPis with cameras can be put into a separate network, with the PC receiving their data via a second network card. LabNet can also execute multiple tasks on one RasPi at the same time. In our experience with the touchscreen system, we observed inputs, generated multiple pulse trains, played sound, and displayed visual stimuli, all at the same time, and never reached performance limits on the RasPi. The LabNet architecture with actors explicitly targets the execution of multiple tasks.
The Raspberry Pi as the hardware platform allows connecting a wide variety of readily available sensors and actuators. LabNet already supports a range of hardware modules, which can thus be addressed via the network: for example, GPIOs, communication via the UARTs, sound output via a headphone jack or HDMI, and some Raspberry Pi HATs developed in-house. This already allows many types of experiments. With hardware adaptors, LabNet can also be used with available modules for operant experiments from open-source systems such as Bpod and pyControl or from commercial systems from Med Associates or Coulbourn Instruments. In addition, LabNet can be integrated into existing systems, as shown above with a Campden Instruments system.
Here, we do not show how LabNet can be extended in C++ with new functionality. This is part of the API documentation, which may undergo changes between versions and will be available online. The next version, in progress, will support the display of visual stimuli, touchscreens, and communication via I2C. This version will also include complete API documentation. Support for a configuration file on the Raspberry Pi is also planned. This would make configuration via the network unnecessary, and LabNet could initialize the hardware correctly already at Raspberry Pi startup.
In the future, we plan to implement a software plug-in system. This will make it possible to support new hardware without recompiling LabNet. It will require a rework of the current LabNet API, which will then support messages that are unknown at LabNet compile time.
Materials and methods
Reagent type (species) or resource | Designation | Source or reference | Identifiers | Additional information |
---|---|---|---|---|
Software, algorithm | pyControl | https://github.com/pyControl/code.git | RRID:SCR_021612 | pyControl source code repository, v1.7.1 |
Software, algorithm | Autopilot | https://github.com/auto-pi-lot/autopilot.git | RRID:SCR_021518 | Autopilot source code repository, v0.4.4 |
Software, algorithm | Bpod | https://github.com/sanworks/Bpod_Gen2.git | RRID:SCR_015943 | MATLAB software for Bpod, Gen2 |
Other | Bpod | https://www.sanworks.io/index.php | RRID:SCR_015943 | r2 Bpod State Machine |
Software, algorithm | LabNet | https://github.com/WinterLab-Berlin/LabNet.git | SHA-1: 333bd58 | LabNet source code repository |
Data of the performance measurements, the source code for Autopilot, Bpod, and pyControl tasks and the source code for the graphs are included in the article’s data and source code repository.
Actor model
Developing a system with multiple threads requires much care and can be challenging. Thread-local state and program-global state have to be protected, so some type of locking mechanism is required. Unfortunately, the locking mechanism itself not only reduces scalability but also increases code complexity and error-proneness due to the required locking order. Locking problems such as race conditions or deadlocks must be avoided, but such time- and execution-order-dependent errors can be difficult to find and fix.
LabNet is a concurrent system. The operations on the GPIOs, sending and receiving data via UARTs, sound output, etc. have to be independent from each other. For this purpose, message-passing approaches have been developed. In these, all inter-thread state sharing is encapsulated within messages sent between threads. All messages must be immutable or be copied for each thread.
Hewitt, Bishop, and Steiger (Hewitt et al., 1973) proposed the actor model in 1973, one of the first message-passing systems. Actors are active objects that communicate only via messages. Each actor has knowledge only about itself and its own functioning (shared-nothing principle); no global state exists in an actor system. Messages also do not block the sender (fire-and-forget principle). This avoids problems such as race conditions or deadlocks.
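The principle can be illustrated in a few lines. The sketch below uses Akka.NET (the actor framework of our C# test clients) rather than LabNet’s own C++ internals; the names are ours and serve only to show the shared-nothing and fire-and-forget ideas.
using Akka.Actor;

// An immutable message: receivers can never mutate the sender's state.
public sealed record SetLed(bool On);

// An actor owns its state exclusively (shared-nothing principle).
public class LedActor : ReceiveActor
{
    private bool _on; // visible to this actor alone, no lock needed

    public LedActor()
    {
        // Messages are processed one at a time from the actor's mailbox.
        Receive<SetLed>(msg => _on = msg.On);
    }
}

// Usage (e.g., inside Main):
var system = ActorSystem.Create("demo");
var led = system.ActorOf(Props.Create(() => new LedActor()), "led");
led.Tell(new SetLed(true)); // fire-and-forget: Tell() returns immediately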
The model has since developed into a higher level of abstraction: from reasoning about shared memory to independent actors that communicate through a well-defined message protocol. In the late 1980s, Ericsson developed Erlang (Armstrong, 1996), an actor-based programming language, and successfully used it in ATM network switches. The Akka (Lightbend, 2021) actor framework was released in 2009 for Java and Scala.
SObjectizer
From the several actor model libraries that are available for C++, such as the C++ Actor Framework (CAF) (Charousset et al., 2013), SObjectizer (Stiffstream, 2021), or Theron (Mason, 2019), we chose SObjectizer.
In SObjectizer a class or struct is sufficient to define a message. Actors are also normal classes derived from an agent_t base class. Thus, actors automatically have a ‘message box’ (Mbox), through which messages can be received, and also methods that are automatically called, for example, before an actor is started or stopped.
An Mbox can receive messages of all possible types. The Mbox of an actor has no name and must be communicated to other actors. However, named Mboxes can also exist; a reference to such an Mbox can be created at any time via its name. This practical feature is used in LabNet to access important actors that always exist.
It is also possible to mix actors with other paradigms. This allows moving some parts of the application into Boost ASIO (Boost, 2021) or into threads. Mboxes can still be used for communication with actors from the outside. For communication with code from outside the actor world, so-called MChains are used. An MChain works like a queue: actors can place messages there, and threads can pull them at a later time. For example, LabNet uses Boost ASIO for TCP/IP communication and threads for some clearly defined tasks such as digital input polling.
One important feature of SObjectizer is the built-in support for hierarchical state machines (HSMs). All actors in SObjectizer are state machines. They can pass through several states in their lives and react to incoming messages depending on their current state. An interface actor in LabNet (see Implementation section) goes through several states: hardware initialization, operation, error, etc.
Dispatchers are another important cornerstone of SObjectizer. Dispatchers provide an actor with the working context. They manage all message queues and execute the actors if there are messages in the queue (Mbox). We have chosen the dispatcher with a thread pool. It provides a good compromise between thread management overhead and parallelism. But it is still possible to use other dispatcher types (e.g. one with one thread per actor) without having to adapt the actors.
Message protocol
Our criteria for choosing the serialization tool were good performance, small message size, and support in as many programming languages as possible. Text-based serialization formats, such as XML, JSON, or ASCII-based plain text, have the advantage of being human-readable. The Whisker server uses an ASCII-based format (Cardinal and Aitken, 2010). The disadvantages are the message size, higher computing requirements, and, at least for the ASCII version, a custom message parser.
For our application, a binary format is a better solution, and we chose Protobuf (Google, 2021). It is very popular and offers support for many programming languages. However, Protobuf has some disadvantages. For example, it is not the most memory- or computationally efficient tool. Libraries such as Flatbuffers, Cap’n Proto, or Simple Binary Encoding (SBE) are more efficient. However, these negative aspects of Protobuf only become critical when sending extremely large messages (some MBytes) or at a very high rate (millions per second). This is typically not the case in experiments that focus on animal behaviour.
Protobuf uses a special meta-language to define messages. With the protoc generator, it is possible to create these message definitions for each supported programming language. Files with the message definitions are part of the Git repository.
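As a rough illustration of this meta-language, the messages used in Listing 2 could be defined along the following lines. This is a sketch only; the field numbers and names are illustrative, and the authoritative definitions are the .proto files in the repository.
syntax = "proto3";

// Sketch of possible definitions; not the repository's actual files.
enum Interfaces {
  INTERFACE_NONE = 0; // proto3 enums require a zero value
  GPIO_WIRINGPI = 1;
  SOUND = 2;
}

message PinId {
  Interfaces interface = 1;
  uint32 pin = 2;
}

message DigitalOutSet {
  PinId id = 1;
  bool state = 2;
}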
One Protobuf disadvantage must be mentioned: a serialized Protobuf message contains neither its byte length nor its message type. Protobuf leaves this information to the transmission medium. We have solved this simply: each message begins with two pieces of information, type and size. This is also the officially recommended approach. Both are encoded as numbers in Protobuf’s varint notation and are easy to parse with the Protobuf API.
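On the client side, this framing amounts to only a few lines of code. The following C# sketch (the helper names are ours) writes the type and size as varints, followed by the serialized payload:
using System.IO;
using Google.Protobuf;

static class Framing
{
    // Write an unsigned integer in Protobuf's varint notation (7 bits
    // per byte, high bit set on all bytes except the last).
    static void WriteVarint(Stream stream, uint value)
    {
        while (value >= 0x80)
        {
            stream.WriteByte((byte)((value & 0x7F) | 0x80));
            value >>= 7;
        }
        stream.WriteByte((byte)value);
    }

    // Frame a message as: [type varint][size varint][Protobuf payload].
    public static void WriteFramed(Stream stream, uint messageType, IMessage msg)
    {
        WriteVarint(stream, messageType);
        WriteVarint(stream, (uint)msg.CalculateSize());
        msg.WriteTo(stream); // Google.Protobuf serialization
    }
}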
Implementation
The current implementation does not contain configuration files for LabNet. The hardware initialization is performed exclusively through client messages. LabNet comprises several loosely coupled actors. The most important are briefly described below (see also Figure 5).

The core of LabNet is the actor environment with thread pool dispatcher.
Within the environment, the main actors are always present. The actors of the individual interfaces are started by the interfaces manager as needed. The interface actors can also outsource their work to other actors and threads. All main and interface actors can communicate with each other. The threads themselves and network communication via Boost ASIO, on the other hand, are hidden behind their actors.
The network communication runs over TCP/IP. The server in LabNet is implemented in Boost ASIO (Boost, 2021). The implementation is hidden behind the server actor. This actor can send and receive the Protobuf messages and also informs the actor world about the connection state. If the connection is lost, the actors can stop their work and automatically continue it later at the same point on a reconnection.
At startup, no interface actors to communicate with the hardware exist. These actors bundle all the capabilities for hardware control: for example, initialization, setting or getting a digital pin state, etc. They are automatically created and started by the interface manager. Currently, several ‘interfaces’ exist:
GpioWiringPi to control input and output pins with WiringPi (Henderson, 2019).
IoBoard is a self-developed PCB top plane with power supply and pin connectors.
UART can send and receive data over the internal RasPi UART and USB to RS232 converters.
UartBoard is a self-developed PCB top plane with up to 32 UART connectors using SPI.
Sound allows a simple sound output in the form of sine tones over HDMI or a headphone jack.
BleUart provides a Nordic UART service over Bluetooth Low Energy (BLE) and allows communication with Bluetooth devices.
Many pins on the Raspberry Pi offer more than one functionality, so clear responsibility for a hardware resource must be ensured. Each ‘interface’ actor must request its resources from the resource manager. This is one of the first steps during the interface initialization state.
The ‘interfaces’ with digital outputs offer only the possibility to switch the output pin state. More complex procedures are implemented by the digital out helper actor. This actor can automatically turn off a pin after a defined time or generate pulses by specifying the on/off durations and the number of pulses. Additionally, a group of pins can be switched on and off together in a loop automatically.
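For example, a client might blink two stimulus LEDs together with a single request along the following lines. Note that the message and field names in this sketch are hypothetical; the exact loop message definitions are part of the repository’s .proto files.
// Hypothetical message - see the repository for the actual definition.
var blink = new DigitalOutLoopStart {
    Pins = {
        new PinId { Interface = GpioWiringpi, Pin = 5 },
        new PinId { Interface = GpioWiringpi, Pin = 6 }
    },
    OnDuration = 250, // ms
    OffDuration = 250 // ms
};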
Related work
Most of the comparable software control tools published for behavioural experiments are more general packages. In addition to hardware control, they offer a more or less powerful tool for creating experiments, a user interface, and a possibility to visualize the data. Although LabNet is only responsible for the hardware, a comparison is still worthwhile.
Whisker server
The development of the Whisker control suite was started in 1999 by Cardinal and Aitken at the Department of Experimental Psychology, University of Cambridge, and is ongoing (Cardinal and Aitken, 2010). Initially, the aim was to use the existing resources of a PC and plugged-in IO cards to control behavioural experiments with visual stimuli and touchscreens in several boxes simultaneously. This was solved by an additional software layer in which Whisker operates as the server and controls the hardware. The clients must connect to the server over TCP/IP, and each one controls an experiment in one of the chambers. The clients themselves can be written in any programming language. Communication occurs through a plain-text protocol.
Because of the outsourcing of the experiments to the clients, Whisker’s approach is similar to ours. Due to the flexibility in implementing the clients, complex experiments can be realized with Whisker. Hardware support includes digital I/O devices (National Instruments, Advantech, etc.), visual stimuli on computer monitors, touchscreens, audio, and more. Whisker is commercially used in ‘ABET II’ by Campden Instruments Ltd.
pyControl
pyControl (Akam et al., 2022) is an open-source hardware and software framework for controlling behavioural experiments. The hardware is based on the MicroPython microcontroller, which typically controls a single experimental box. Several pyControl breakout boards can connect to a PC via USB. Each board has six so-called behaviour ports and four BNC ports. Each port can be connected to a module to drive LEDs, nose-poke sensors, stepper motors, and speakers. Two behaviour ports have I2C internally and can drive a port expander module to increase the number of ports.
Tasks on the MicroPython microcontroller and pyControl on the PC use Python. A task is defined as a finite-state machine. It comprises a collection of states and events that cause the switch between states. In data management, all events and state changes are stored with timestamps.
pyControl provides sufficient I/O ports to realize most tasks on a system. However, for the hardware types, one is limited to the firmware capabilities and the available modules, although free wiring is also possible. The mandatory requirement to define the task as a state machine can be useful but may also become a limitation.
Bpod
Bpod (Sanders, 2021) was originally developed in the Brody lab and is now maintained by Josh Sanders (Sanworks LLC). It has also been expanded to PyBpod, a Python port of the Bpod MATLAB project, by members of the Champalimaud Foundation. Bpod offers only four I/O ports but has additional module ports that each provide an interface to Arduino-powered modules. Thus, Bpod gains additional flexibility: analog I/O, I2C, Ethernet, and more can be accessed via these modules.
A MATLAB package is offered to write experimental tasks; unfortunately, the package documentation is limited. The tasks are also defined as finite-state machines. After starting the task, the state machine is transferred to the Bpod. From there, it communicates with the MATLAB frontend. This design results in the restriction that only a single Bpod can be controlled per MATLAB session. Therefore, Bpod is much more limited regarding software than pyControl or Whisker: multiple systems cannot run simultaneously, and the functionality is limited by the firmware and the state machine.
Autopilot
Autopilot (Saunders and Wehr, 2019) is an open-source framework for behavioural experiments developed in the Wehr Lab at the University of Oregon. It uses Python, and the target platform is the Raspberry Pi.
The focus of Autopilot from the beginning has been the ability to control multiple systems. The basic unit in the software architecture of Autopilot is an agent. Each agent runs on its own Raspberry Pi and can communicate with other agents. Currently, three types of agents exist: terminal, pilot, and child.
Terminal agents are the only user-oriented agents, with a graphical user interface. They are responsible for data logging and visualization. The experimental tasks are also managed here and transferred to the pilots, which are responsible for experimental task execution. The pilots communicate with the external hardware that is connected to the Raspberry Pi and forward the experimental data to the terminals for logging or visualization. Each pilot can also have several child agents. Child agents can take over a part of a task if the task has been configured accordingly. The child agents are invisible to the terminals and communicate only with their parent pilot.
Among all tools discussed here, Autopilot offers the most flexibility. It already supports a whole range of hardware, including digital I/O, audio, cameras, and some sensors such as temperature. Moreover, since it is open source, support for additional hardware can be added, and new behavioural experiments can be implemented. However, in both cases, one is limited to Python.
Code availability
The source code of LabNet is available in the GitHub repository under a GPL-3.0 license.
Data of the performance measurements, the source code for the Autopilot, Bpod, and pyControl tasks, and the source code for the graphs are also accessible via the GitHub repository (copy archived at swh:1:rev:d52e52c51e3f7c5b0e12f95829b8cf4886bb3379; Schatz and Winter, 2022).
There are instructions for two possible compilation paths. The first is on the Raspberry Pi with Visual Studio Code and CMake. The second is with Visual Studio 2019 and Docker. The repository also contains the source code of all tests from Performance evaluation under ‘examples’.
Data availability
Tool source code and performance measurements are available on GitHub (https://github.com/WinterLab-Berlin/LabNet and https://github.com/darki-31/LabNet_manuscript_data respectively).
- GitHub: LabNet_manuscript_data, performance measurement data.
References
- Armstrong J (1996) Erlang - a survey of the language and its industrial applications. In: Proceedings of the Symposium on Industrial Applications of Prolog (INAP96).
- Cardinal RN, Aitken MRF (2010) Whisker: a client-server high-performance multimedia research control system. Behavior Research Methods 42:1059–1071. https://doi.org/10.3758/BRM.42.4.1059
- Charousset D, Schmidt TC, Hiesgen R, Wählisch M (2013) Native actors: a scalable software platform for distributed, heterogeneous environments. In: Proceedings of the 4th ACM SIGPLAN Conference on Systems, Programming, and Applications (SPLASH '13), Workshop AGERE. https://doi.org/10.1145/2541329.2541336
- Hewitt C, Bishop P, Steiger R (1973) A universal modular actor formalism for artificial intelligence. In: Proceedings of the 3rd International Joint Conference on Artificial Intelligence (IJCAI'73).
- Lopes G et al. (2015) Bonsai: an event-based framework for processing and controlling data streams. Frontiers in Neuroinformatics 9:7. https://doi.org/10.3389/fninf.2015.00007
- Schatz A, Winter Y (2022) LabNet manuscript data, version swh:1:rev:d52e52c51e3f7c5b0e12f95829b8cf4886bb3379. Software Heritage.
- Siegle JH et al. (2017) Open Ephys: an open-source, plugin-based platform for multichannel electrophysiology. Journal of Neural Engineering 14:045003. https://doi.org/10.1088/1741-2552/aa5eea
- Skinner BF (1938) The Behavior of Organisms: An Experimental Analysis. New York: Appleton-Century-Crofts.
- van Steen M, Tanenbaum AS (2017) Distributed Systems. Distributed-Systems.Net.
- Winter Y, Stich KP (2005) Foraging in a complex naturalistic environment: capacity of spatial working memory in flower bats. The Journal of Experimental Biology 208:539–548. https://doi.org/10.1242/jeb.01416
Decision letter
-
Mackenzie W MathisReviewing Editor; EPFL, Switzerland
-
Kate M WassumSenior Editor; University of California, Los Angeles, United States
-
Jonny L SaundersReviewer; University of Oregon, United States
-
Gonçalo LopesReviewer; NeuroGEARS Ltd, United Kingdom
Our editorial process produces two outputs: (i) public reviews designed to be posted alongside the preprint for the benefit of readers; (ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.
Decision letter after peer review:
Thank you for submitting your article "LabNet: hardware control software for the Raspberry Pi" for consideration by eLife. Your article has been reviewed by 2 peer reviewers, and the evaluation has been overseen by a Reviewing Editor and Kate Wassum as the Senior Editor. The following individuals involved in the review of your submission have agreed to reveal their identity: Jonny L Saunders (Reviewer #1); Gonçalo Lopes (Reviewer #2).
The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.
Essential revisions:
Alexej Schatz and York Winter wrote "LabNet," a C++ tool to control Raspberry Pi (raspi) GPIO (General Purpose Input-Output) and other hardware using a network messaging protocol based on protobuf. The authors were primarily concerned with performance, specifically low execution latencies, as well as extensibility to a variety of hardware. LabNet's network architecture is asymmetrical and treats one or many raspis as servers that can receive control signals from one or more clients. Servers operate as (approximately) stateless "agents" that execute instructions received in message boxes using a single thread or pool of threads. The authors describe several examples of basic functionality like time to write and read GPIO state to characterize the performance of the system, the code for which is available in a linked GitHub repository.
Overall, the described performance of LabNet is impressive, with near- or sub-millisecond latency across the several tests when conducted over a LAN TCP/IP connection. The demonstrated ability to interact with the server from three programming languages (C++, C#, and Python) also would be quite useful for a tool that intends to be as general-purpose as this one. The design decisions that led to the use of protobuf and SObjectizer seem sound and supportive of the primary performance goal. We thank the authors for taking the initiative to contribute more open-source tools to the community, and for addressing these hard challenges that keep recurring in systems neuroscience experiments. We absolutely need more people working on this in an open way.
We do ask the authors for the following revisions:
Technical:
– The main method for evaluating the performance of LabNet is a series of performance tests in the Raspberry Pi comparing clients written in C++, C# and Python, followed by a series of benchmarks comparing LabNet against other established hardware control platforms. While these are undoubtedly useful, especially the latter, the use of benchmarking methods as described in the paper should be carefully revisited, as there are a number of possible confounding factors.
– For example, in the performance tests comparing clients written in C++, C# and Python, the Python implementation is running synchronously and directly on top of the low-level interface with system sockets, while the C++ and C# versions use complex, concurrent frameworks designed for resilience and scalability. This difference alone could easily skew the Python results in the simplistic benchmarks presented in the paper, which can leave the reader skeptical about all the comparisons with Python in Figure 3. Similarly, the complex nature of available frameworks also raises questions about the comparison between C# and C++. I don't think it is fair to say that Figure 3 is really comparing languages, as much as specific frameworks. In general, comparing the performance of languages themselves for any task, especially compiled languages, is a very difficult topic that I would generally avoid, especially when targeting a more general, non-technical audience.
– The second set of benchmarks comparing LabNet to other established hardware control platforms is much more interesting, but unfortunately, it doesn't currently seem to allow an adequate assessment of the different systems. Specifically, from the authors' description of the benchmarking procedure, it doesn't seem like the same task was used to generate the different latency numbers presented, and the values seem to have been mostly extracted from each of the platform's published results. This reduces the value of the benchmarks in the sense that it is unclear what conditions are really being compared. For example, while the numbers for pyControl and Bpod seem to be reporting the activation of simple digital input and output lines, the latency presented for Autopilot uses as a reference the start of a sound waveform on a stereo headphone jack. Audio DSP requires specialized hardware in the Pi which is likely to intrinsically introduce higher latency versus simply toggling a digital line, so it is not clear whether these scenarios are really comparable. Similarly, the numbers for Whisker and Bpod being presented without any variance make it hard to interpret the results.
Documentation:
– Could the authors provide some example code and minimal working examples such that the average user could easily jump in?
Currently, LabNet has no documentation to speak of, outside a brief description of the build process for a relatively voluminous body of code (~27k lines) with relatively few comments. There is no established norm as to what stage in a scientific software package's development a paper should be written, so I take the lack of documentation at this stage as just a sign that this project is young. The primary barrier across the broader landscape of scientific software is less the availability of technically proficient packages than the ease with which they can be adopted and used by people outside the development team. The ability of downstream researchers to use and extend the library to suit their needs will depend on future documentation. For example, the Python adapter to the client and server is present in the examples folder but relatively un-annotated, so it might at the moment be challenging to adapt to differing needs (https://github.com/WinterLab-Berlin/LabNet/blob/34e71c6827d2feef9b65d037ee5f2e8ca227db39/examples/python/perf_test/LabNetClient_pb2.py and https://github.com/WinterLab-Berlin/LabNet/blob/34e71c6827d2feef9b65d037ee5f2e8ca227db39/examples/python/perf_test/LabNetServer_pb2.py). Documentation for projects like this that aim to serve as the basis from which to build experimental infrastructure can be quite challenging, as it often needs to spread beyond the package itself to more general concerns like how to use Raspberry Pis, how to set them up to be available over a network, and so on, so I look forward to seeing the authors meet that challenge.
Manuscript:
– It also would be worth commenting more on the intended mode of use of LabNet. The authors do this a bit in the introduction, setting the scope of the package as performant GPIO control rather than full experimental control, but they might consider expanding a bit on how they intend it to be used to run experiments. The model of having one computer that runs the experiment and another that executes the hardware control is more similar to Bpod than Autopilot or pyControl, both of which are intended to have the raspi/micropi be autonomous (so the network latency for issuing commands is less relevant because you would put time-critical operations on a single Pi or else hardwire them with a GPIO pin), and this comparison is useful for understanding the broader landscape of experimental control software in my opinion.
– Some of the technical explanations of the choices of software libraries are unclear to me and I had to do a decent amount of additional reading to understand them. I'm still not exactly sure what SObjectizer does. The same is true of the description of the actor model – a bit of history is given, but I think more of a description of why qualities like statelessness are valuable, what the alternatives are, and a figure that clarifies the concurrency/agent model would be very useful. I recognize this is challenging because it's not altogether clear what you can expect from your audience, but the manuscript as written leaves me a bit unclear about the internals, and I had to go read the code.
Please provide a key resource table, if you have not already done so.
Reviewer #1 (Recommendations for the authors):
First, I want to thank the authors for their work! I put most of my positive comments (and most of them were positive!) in the public review section, so my apologies if this section is mostly recommendations rather than praise.
The comparison in Figure 4 compares a reasonably broad range of functionality: the reported LabNet measurements are for the latency from sending an output signal on one pin to reading it on another. pyControl is the closest (latency from reading an input to writing an output), but the Bpod latency is from a software command to the onset of sound output, and the Autopilot latency is from a hardware input to the onset of sound output; both are substantively different tests, because audio output typically needs to be written in frames rather than as scalar values and therefore has intrinsic latency. Relying on reported values is ok, but the interpretation in the text lacks some clarifying context for the comparison in Figure 4, and in my opinion, the authors would be well served by running these tests themselves on the other systems (in the case of Autopilot, this would require purchasing no additional hardware). This leads some of the descriptions in the text to be inaccurate: for example, a comparable test to the "set and read GPIO" test in Autopilot actually takes ~2.3ms rather than the 12ms the authors estimate (see: https://gist.github.com/sneakers-the-rat/41683e42da73712277c355dfa612af96), and setting and checking a digital output locally takes ~0.14ms +/- 0.05 rather than the reported 1.75ms. To be clear, I don't see this as a major problem with the manuscript, since it is not a problem with the software in question, but with reporting of prior results.
It also would be worth commenting more on the intended mode of use of LabNet. The authors do this a bit in the introduction, setting the scope of the package as performant GPIO control rather than full experimental control, but they might consider expanding a bit on how they intend it to be used to run experiments. The model of having one computer that runs the experiment and another that executes the hardware control is more similar to Bpod than Autopilot or pyControl, both of which are intended to have the raspi/micropi be autonomous (so the network latency for issuing commands is less relevant because you would put time-critical operations on a single Pi or else hardwire them with a GPIO pin), and this comparison is useful for understanding the broader landscape of experimental control software in my opinion.
Some of the technical explanations of the choices of software libraries are unclear to me and I had to do a decent amount of additional reading to understand them. I'm still not exactly sure what SObjectizer does. The same is true of the description of the actor model – a bit of history is given, but I think more of a description of why qualities like statelessness are valuable, what the alternatives are, and a figure that clarifies the concurrency/agent model would be very useful. I recognize this is challenging because it's not altogether clear what you can expect from your audience, but the manuscript as written leaves me a bit unclear about the internals, and I had to go read the code.
A point that I raised a bit in the public review is that I think it's important to weigh the different practical considerations of experimental code: LabNet's performance seems great, but how does that trade off against using C++, a language that in my experience far fewer scientists know how to use than Python? This stood out to me especially strongly considering that in several of the performance tests the Python client seemed to be faster than the C# or C++ clients (Figure 3a and c, RV4) – for what it's worth, this is how Autopilot works, controlling a low-level process from Python. Beyond a general discussion of these tradeoffs, that result should be addressed specifically, because the framing of the paper suggests that C++ should always be faster. I am also not sure what 3d adds, and in any case, both set and read gpio results should be presented sequentially (ie. 3b should be set test, 3c should be set and read gpio medians, 3d should be means with stdevs) so that the values can be compared between the two metrics on the same test.
I'm not really sure how the listings relate to the library code, as the examples in the github repository seem considerably more complicated than presented in the text. If these are intended to be illustrative, that should be stated clearly. I think that some inline comments to clarify the finer points of the code (eg. what does `Interface = GpioWiringpi` do? how is that different from `GpioWiringPiInitDigitalIn`?) would also be useful.
As I noted in the public review, I think it's worth discussing the state of the library, in particular, any plans for documentation – without docs, it looks like it would be quite intimidating to adopt, but a note clarifying that they are in progress, etc., might soothe nerves.
I was unable to run the tests myself, and I raised a few issues with compilation and use (https://github.com/WinterLab-Berlin/LabNet/issues/2 and https://github.com/WinterLab-Berlin/LabNet/issues/1) but didn't want to hold up the review any longer trying to get it to work – I have no doubts that the code does what the authors describe and have no reason to doubt their results.
All in all, thanks again for your work, hopefully, these suggestions are useful!
Reviewer #2 (Recommendations for the authors):
As mentioned in the public review comments I fully subscribe to the two stated goals for LabNet mentioned in the paper, and I think most of the improvements for future revisions should focus on discussing them both more thoroughly. I would frame the entire Design discussion around these two topics, rather than strictly around the actor model. Indeed, the actor model should feel more like a means for you to achieve your stated aims, and specifically, I think it would be great if you could clarify how exactly it helps to achieve both time-critical operations and ease of extending the system.
L52-53: The way this last sentence is phrased somehow gave me the misleading impression that a server would support more than one client, which made me even more surprised to read L161.
L56-L58: It is stated that the Implementation section will provide implementation details to compare LabNet with other systems, but it's not clear how any of these details are important for the benchmarks. It might be just an issue with phrasing.
L65: The discussion on how to develop new functionality extensions seems to be entirely missing, and to me, this was one of the big surprises when I finished reading the paper. Could it be you were thinking purely about combinatorial flexibility, i.e. combining existing functionality to create a new experiment? This is not clear, and right now it is really compromising the entire manuscript, especially since at the end of the Conclusions section you seem to imply that LabNet is not ready to easily accept modifications without recompiling everything from scratch.
Figure 1: When seeing the example applications, the first thing that jumps to the forefront is no mention of video anywhere. Also, these applications should really be more detailed in later sections. Are these already working in LabNet? If so, they would make great examples to illustrate how exactly the components are combined to make a real-world experiment. If there is any kind of preliminary data that could be published, this would of course really strengthen the manuscript.
L71-72: This sentence is very confusing, but I guess the point was to emphasize the existence of asynchronous systems in the Pi?
L3: "fast" instead of "rapid".
L76: Reference to actor-based models should be first introduced here.
L80-L85: This whole bit on locking seems unnecessary. It is not that locking is not important anymore, but the way you introduce it currently is confusing as it is not clear how any of this relates to LabNet specifically.
I think the end of the Actor model section talks about too many topics which are never clearly connected back to LabNet, such as CSP. If they are indeed connected, this should be made explicit in the text, and the relevance of the connection to the argument should be presented. Otherwise, they should be removed.
L119: The reference to Boost ASIO should be introduced here, to avoid confusing unfamiliar readers.
L123-125: The relevance of the built-in support for hierarchical state machines is unclear. Elsewhere it is mentioned that it is not intended for LabNet to implement complex state logic in the nodes, so it is not obvious whether this support would be useful for anything. If it is, it should be made explicit.
L126-134: It seems like LabNet currently only uses the default dispatcher, so I would condense this paragraph and move the details to the software documentation pages.
L145-146: How large is "extremely large" (e.g. in bytes) and how fast is "a very high rate" (e.g. in Hz)?
L152: Given there are custom modifications to Protobuf, it might be worth spending some time describing how exactly messages are encoded, and what kind of information is transmitted by device events and commands.
L157: It seems the client must know beforehand what devices are connected to each Raspberry Pi, and on what pins. If so, it would make the presentation clearer to list the available devices and their characteristics.
L162: It is not clear to a reader unfamiliar with the underlying implementation what is the issue with clients accessing "foreign hardware". It would be important to clarify.
L165: What, if any, state is recovered when actors stop and resume their work?
L168: What exactly is an "interface"? This point needs more elaboration to allow a reader to understand the underlying LabNet architecture in detail.
L193: Isn't the sine tone defined in Listing 1 rather than Listing 2 where it seems to be a digital square pulse? The purpose of each example in the list is unclear.
Listings 1 and 2. The switch to using C# would benefit from a bit more clarification and explanation. I understand the platform supports clients written in any language, and this might be an example of that fact, but it would still be beneficial to clarify that these examples are not running on the Pi, but rather in the client software which talks to the Pi.
L277: It is strange to benchmark Python as a stripped client running directly on top of the sockets when both the C++ and C# clients make use of much more complex frameworks handling a variety of concerns. Specifically, I don't think it's fair to call this a comparison between "languages", since neither SObjectizer nor Akka.NET forms part of C++ or C#, not even part of their standard libraries. Also, C# has now moved on to .NET Core for cross-platform server and network implementations, with presumably much-improved performance.
L311: How do you distinguish here whether the increase in latency is due to "C#" as opposed to some implementation detail in Akka.NET?
L344: Did you reproduce this test on LabNet running the audio headphone jack output? Just to make sure the DSP output is exactly the same and exclude all other possible forms of latency?
L361: This claim was not investigated in the manuscript, since no example of an actor implemented in SObjectizer was given. See public review comments.
L370: How are visual stimuli defined in LabNet? General stimulus display frameworks are notorious for developing intricate dependencies and it would be a great exercise to include how you are thinking about composition in that case.
L375: These limitations seem to go against the 2nd goal of LabNet?
[Editors’ note: further revisions were suggested prior to acceptance, as described below.]
Thank you for resubmitting your work entitled "LabNet hardware control software for the Raspberry Pi" for further consideration by eLife. Your revised article has been evaluated by Kate Wassum (Senior Editor) and a Reviewing Editor.
The manuscript has been improved but there are some remaining issues that need to be addressed, as outlined below:
Overall, some of the more critical revisions, we feel, were not adequately addressed. The reviewers and I have consulted, and I agree that the points below need to be addressed for the manuscript to be suitable for publication in eLife. To summarize the required major revisions:
– Remove language about other packages not meeting their claims re: stress test (as they were not tested).
– Clarify wording on stress test results (see R#2 review).
– Describe means of measuring timings on both LabNet and other packages.
– Link to specific versions of code for each of the tests.
– Describe versions (with git hashes or semver) of all software, LabNet and other packages.
– Respond to R#1 questions about example tests for other packages.
– Describe an example experiment done within the lab using LabNet and how it fits into the rest of the experimental setup.
– Label the axes in all figures.
– Clarify the benchmarking protocol well, with a diagram.
– Please include a key resource table.
Reviewer #1 (Recommendations for the authors):
General comments:
– Many of these will read as negative, so I want to start by saying I appreciate the authors' work, apologize for the lateness of my review (life has been hectic pre- and post-dissertation!), and thank them for writing this package! All that I don't comment on here I think is good.
– In general I like to see software timestamp measurements supplemented with hardware measurements (from eg. an oscilloscope), even just to confirm that the software timestamps are close. I don't think it's of huge importance here, but I wanted to make that future recommendation to the authors, especially when taking timestamps from an interpreted language like Python.
– The mismatch between L84-L88 and the results is made more salient with the addition of L143-L147 – L84-88 say Python is intrinsically slow and thus C++ was chosen, but then L143-147 say that Python has an advantage because the C++ implementations are more complex. L84-88 thus read like theoretical concerns that were demonstrated here to not be true because of additional details in the C++ implementation, right?
– I appreciated the expanded discussion of the intended uses for the package, like the discussion of the potential for using multiple pis together, etc. I think that and the brief descriptions of potential tasks help the paper!
– I don't see a discussion of documentation in the main text, I don't think it's worth holding the paper up over, but I again make the recommendation to the authors to at least discuss their plans for documentation and future maintenance, as that is really the critical factor for whether a package like this will be adoptable by other labs. The authors briefly address this in their response, but yes this is important information for prospective users to have!
– Some of the other concerns that I raised in the prior recommendations for the authors were not addressed; perhaps that was my fault in not understanding how the public review vs. recommendations to authors work at eLife.
Figure comments:
Additional comments on new text:
– Listings: The inline comments and in-text descriptions are much appreciated!
– Figure 1: You designed that in TikZ? I am amazed. I would love to see that code. I checked out the TikZ code for the other figures and am very impressed.
– Figure 2b-d: The y-axes are unlabeled.
– L17-18: I don't see stress test comparisons for the other packages, so the "unlike others" doesn't seem to be supported by the text.
– L18: typo, latenies -> latencies.
– L61-63: This seems like an odd definition of openness to me, which typically means that the source is inspectable. I would call the "control an experimental chamber on its own" part independence or modularity, and the "or together with a number of other nodes" interoperability or scalability. I am unsure how one would use multiple LabNet nodes in the same task, as an example doesn't seem to be in the text! This also seems to contradict L66-67 "However this comes with the restriction that at most one experimental system can be connected to each RasPi" – what counts as an experimental system here? Are the authors just referring to a particular set of hardware components which could be combined in a single experimental chamber? That clarification would resolve the conflict to me.
– L71-72: I am not sure what this means, the client is the controlling computer, but not sure what a task is in this context. And I thought that the hardware control happened on the raspi (server?).
– L72-73: from what I recall you also provide clients in these languages? might be worth some clarification describing what you mean by writing clients in multiple languages – eg. that clients written in multiple languages can interact with the underlying C++ library?
– L171: I found the description of the new crossover-based tests a bit hard to follow and it took me a few reads; I think a diagram would be helpful here.
– L210-221: I can't really tell how the code works for the new tests; it would be really helpful to link to the source code for each of the tests in the main text so we know what you're referring to! For example, the stress test looks like it does not send instructions over the network but operates on local logic as well: https://github.com/WinterLab-Berlin/LabNet/blob/3963f3371610d828e44af1e27ba6374cacc79748/examples/cpp/perf_test/main.cpp
I see the data availability statement and found the repo, but it would be nice to have those separated out instead of inside of a zip file so you could link to them.
– L228-229: The inclusion of whisker now reads as odd, since I think assuming latencies based on polling frequency is probably a pretty bad assumption in most cases.
– L218-241: I am not sure how the latencies are measured for the other systems, as it looks like you were taking software timestamps for the LabNet tests, but I don't see timestamps being taken in any of the other comparison tests, so I assume that external timestamps were being taken? It also seems like some external trigger would be needed in some of these tests as well (see below)? It is also important to validate any software timestamps with external hardware timestamps, and they shouldn't be mixed without validation (eg. software timestamps could either exaggerate or underestimate latencies depending on where they are taken). Some additional clarification is needed here.
From my reading it doesn't look like the pycontrol or autopilot tests would work, but I am out of the lab and don't have a Pi to run them on myself.
For the pycontrol test, it seems like it would go
- on_state(event='entry') -> p_off
- off_state(event='p_off') -> goto on_state
- on_state(event='entry') …
and if the 'on' state was triggered manually from an external trigger it would go like
- on_state(event='p_on') -> goto off_state
- off_state(event='entry') -> switch p_out.on()
- on_state(event='p_on') -> goto off_state
- off_state(event='entry') …
But I admit I am not altogether familiar with pycontrol. Some comments in the source would be lovely.
For the Autopilot test, there are two stage methods in a cycle, water() and response(). The water method clears the stage_block, which would cause the running pilot to wait for any set triggers. A trigger is set for the 'sig_in' pin that should set the 'sig_out' pin high. When the sig_in trigger is received, that should cause the response() method to be called, which sets 'sig_out' low after some delay and then returns to the water stage immediately. This would require some external input to trigger the sig_in event, and then the timestamps of the external input and the sig_out pin would be compared. If the sig_out pin were wired to sig_in, the test wouldn't work, as the sig_out pin would never be set high. Having the `prefs.json` file from the pilot would be useful to include here to avoid ambiguity in system configuration, as I am assuming the default polarity configuration settings are used on the digital input and output classes.
A set of tests that are more similar to the tests described for LabNet are available in the plugin accompanying our manuscript: https://github.com/auto-pi-lot/plugin-paper/blob/c6263a4890b7d6101688158d8acb3aaeb9199533/plugin_paper/scripts/test_gpio.py documented here: https://wiki.auto-pi-lot.com/index.php/Plugin:Autopilot_Paper We find the roundtrip (oscilloscope measurement of input to output) latency to be 400us.
I don't doubt the authors ran the tests, and please correct where my read of the code is wrong, but I think some additional detail is needed in the reporting of the results in any case.
Reviewer #2 (Recommendations for the authors):
The manuscript introduces LabNet as a network-based platform for the control of hardware in Neuroscience. The authors recognize and attempt to address two fundamental problems in constructing systems neuroscience experiments: on one hand the importance of precise timing in the measurement and control of behavior; on the other hand, the need for flexibility in experimental design. These two goals are often at great odds with each other. Precise timing is more easily achieved when using fewer, dedicated homogeneous devices, such as embedded microcontrollers. Conversely, flexibility is more easily found in the diversity of devices and programming languages available in personal computers, but this often comes at the cost of a non-real-time operating system, where timing can be much harder to predict accurately. There is also a limitation on the number of devices which can be simultaneously controlled by a single processor, which can be an impediment for high-throughput behavior studies where the ability to run dozens of experiments in parallel is desirable.
LabNet proposes to address this tension by focusing on the design of a hardware control and instrumentation layer for embedded systems implemented on top of the Raspberry Pi family of microprocessors. The idea is to keep coordination of experimental hardware in a central computer, but keep time-critical components at the edge, each node running the same control software on a Raspberry Pi to provide precise timing guarantees. Flexibility is provided by the ability to connect an arbitrary number of nodes to the central computer using a unified message passing protocol by which the computer can receive events and send commands to each node.
The authors propose the use of the C++ programming language and the actor-model as a unifying framework for implementing individual nodes and present a series of benchmarks for evaluating the performance of LabNet in the Raspberry Pi, followed by a series of benchmarks comparing LabNet against other established hardware control platforms.
The first set of benchmarks is presented in Figure 2 and is used to understand how LabNet retains its performance across a variety of different Raspberry Pi hardware, and clients written in different languages (C++, C# and Python). Different tests are used to measure latency over the network (Set digital out test), and full round-trip latency by using a digital input event to directly trigger a digital output through LabNet (Read and set GPIO test).
A new stress test is also introduced in this revised manuscript to evaluate how many pins in parallel can be monitored by LabNet. Unfortunately, the plot in panel C is confusing, since the text mentions a measure in events per second, but the plot seems to have the same range of values as the other panels in the figure, and the axes are not labelled, so it is not clear whether we are watching a drop in processed events per second, or a drop in latency. This should be clarified, and all axes labelled accordingly.
The methodology of the tests is hard to follow from the text description alone. It would be useful to have a schematic diagram of the wiring used in the test, as well as an example interaction diagram with a timeline of how the different events in each component (PC and Raspberry Pis) trigger each other and which time intervals are being measured.
A second set of benchmarks is then presented in Figure 3 comparing LabNet to other established hardware control platforms. These are used to inform how well LabNet running over the network compares to different platforms running on local hardware.
In the revised manuscript, the authors use the same "read and set GPIO" test used for Figure 2. The authors demonstrate that LabNet achieves similar latencies to the local platforms despite hardware control running over the network. The tests are fair, and the results support the authors' conclusion, although it would have been preferable to include an independent measure of electrical signal timing for the Read and set GPIO test through an oscilloscope to exclude possible confounds with different software timestamping strategies of events in the different platforms tested (see for example Akam et al., 2022).
The manuscript concludes with a Discussion section highlighting the possibilities for interfacing with different hardware using existing LabNet adaptors for the Raspberry Pi. Here I remain at odds with the authors as I don't believe they have convincingly demonstrated their stated aim of ease of extensibility either in the client or in the server, which I feel is crucial to introduce LabNet as a platform for the open-source community.
Some examples of operant boxes for rodent experiments controlled by LabNet are presented in Figure 1. A variety of different devices are represented in the example schematics, mostly digital inputs and outputs, but also a visual display monitor and a tone generator, both mentioned in the discussion, the latter also in the code listings. The variety and heterogeneity of hardware components in such tasks is typically challenging to coordinate and synchronize in a full experiment so it would be great to understand how exactly the authors envision LabNet to play a part in the assembly of such experiments.
Unfortunately, none of these complete examples are developed in the body of the text, and it is unclear whether these are actual experiments collecting data in the lab. The authors have argued that listings 1-3 present code examples of a simple "experiment". However, these seem to be mostly restricted to setting up and configuring individual modules and single commands in LabNet. It is not clear how these basic building blocks are expected to be used to coordinate multiple devices in the context of an application running a full experimental protocol, and what are the caveats and limitations in such a case from a developer's perspective.
Specifically, it would have been important to see how well the code listings 1-3 would scale up or integrate the control of a full behaviour task with multiple conditions, or how to synchronize and benchmark the system when integrating with other hardware, such as cameras or physiology recordings. Even if LabNet is just a small component of the final system, it would be important to describe more fully a few examples as this would allow the non-technical reader to make a quick assessment of the versatility of LabNet for different types of experiments and to make it clear where exactly LabNet fits in the design of an experimental rig.
Is the system targeting only rodent tasks or is it reasonable to adapt the system to work for Drosophila or zebrafish? Can nodes easily support the simultaneous generation of pulse trains, visual stimulation, and video capture, and if yes how many tasks should each node be responsible for to optimize control bandwidth? Given the generality of the Raspberry Pi platform, I feel it would be important for readers to find some of these answers in the manuscript, even if just in the form of suggestive examples, to clarify how LabNet might be positioned in the space of open-source hardware in neuroscience, now and in future versions. A table listing exactly which modules are available would also help to make users aware of the possibilities of the platform.
Alternatively, given the experience of the authors in developing embedded animal behaviour platforms for several years, taking a few samples from that pool of experiments redesigned in LabNet would be very valuable to evaluate how this new software can help to address and alleviate common implementation bottlenecks.
Finally, the ease with which the SObjectizer framework allows LabNet to be extended is discussed. The reorganization of the document to move the implementation details into the Materials and methods section has overall made everything easier to read and follow.
I would have preferred, however, to see more examples on how to extend LabNet for a new custom device, compared to existing platforms such as Arduino or pyControl, in addition to the lengthier discussion of actor models. Even though a software plug-in system is not currently supported, it is possible to extend LabNet by modifying its source code. However, all listings in the paper currently target only the PC client side of LabNet. It would be great to see a few examples of the server side, or a schematic discussion on how to integrate new hardware. This would give a more practically grounded introduction to the actor model and help the reader more easily understand how to leverage and modify the LabNet open-source code for their specific purposes.
I fully agree that a robust, high-performance, and flexible hardware layer for combining neuroscience instruments is desperately needed, and so I do expect that a more thorough treatment of the methods developed in LabNet will in the future have a very positive impact in the field.
https://doi.org/10.7554/eLife.77973.sa1
Author response
[Editors’ note: what follows is the authors’ response to the second round of review.]
– Remove language about other packages not meeting their claims re: stress test (as they were not tested).
I have removed "unlike others" in line 17. This did not refer to performance, but to the number of IOs that can be controlled simultaneously. For example, Bpod natively has only 2 inputs and 2 outputs, and more only via module ports. Monitoring and reacting simultaneously to multiple events is also possible, but relatively complicated (not only in Bpod); this has to do with the state machine. That is why the other packages were not subjected to the stress test: the number of IOs is not comparable. Nevertheless, my statement was misleading and has been adjusted.
– Clarify wording on stress test results (see R#2 review).
The description of the “stress” test has been reworked. All y-axes now have labels.
– Describe means of measuring timings on both LabNet and other packages.
A diagram has been added as an illustration. The description of the “read and set GPIO” test has been reworked.
– Link to specific versions of code for each of the tests.
Source code for the Autopilot, Bpod and pyControl tests is inside the article’s data and source code repository. See the “key resources table” and “Code availability” sections. Previously it was inside the zip archive; it is now plain text inside separate folders (autopilot, bpod and pyControl).
– Describe versions (with git hashes or semver) of all software, LabNet and other packages.
See “key resource table”.
– Respond to R#1 questions about example tests for other packages.
All tests for the other tools are runnable. The measurement for all tools (including LabNet) was done in the same way.
– Describe an example experiment done within the lab using LabNet and how it fits into the rest of the experimental setup.
A description of some of the systems built so far is now at the beginning of the Discussion chapter.
– Label the axes in all figures.
Done
– Clarify the benchmarking protocol well, with a diagram.
Done
– Please include a key resource table.
The “key resource table” is now included at the beginning of the “Materials and methods” section. It contains links to the repositories of all tested packages, with SHA hashes and versions where available.
Reviewer #1 (Recommendations for the authors):
General comments:
– Many of these will read as negative, so I want to start by saying I appreciate the authors' work, apologize for the lateness of my review (life has been hectic pre- and post-dissertation!), and thank them for writing this package! All that I don't comment on here I think is good.
– In general I like to see software timestamp measurements supplemented with hardware measurements (from eg. an oscilloscope), even just to confirm that the software timestamps are close. I don't think it's of huge importance here, but I wanted to make that future recommendation to the authors, especially when taking timestamps from an interpreted language like Python.
We can verify the results very easily. We know that the Bpod state machine runs at 10 kHz. My test result for Bpod is 0.1 ms with nearly zero standard deviation, which corresponds exactly to one 10 kHz cycle (1/10,000 s = 0.1 ms). My test result for pyControl is very close to the results reported in the pyControl article in eLife. Additionally, on the RasPi used for the measurements, I connected two pins together to verify how fast the measurement software can detect input events. The result was 1 microsecond (also reported in the paper). And, of course, this measurement software was written in C++ (not Python) and ran on its own RasPi with no other software running on it. Thus, the RasPi functioned as a 1 MHz oscilloscope and we can be sure that the measurements are correct. In addition, all software packages were tested in the same way. If there was a time offset, it was the same for pyControl, Bpod, Autopilot and LabNet.
– The mismatch between L84-L88 and the results is made more salient with the addition of L143-L147 – L84-88 say Python is intrinsically slow and thus C++ was chosen, but then L143-147 say that Python has an advantage because the C++ implementations are more complex. L84-88 thus read like theoretical concerns that were demonstrated here to not be true because of additional details in the C++ implementation, right?
The statement that Python is slower than C++ is of a more general nature. Python is normally executed by CPython, whose critical, performance-sensitive parts are implemented in C. This is also true for many important packages like NumPy. Thus, as long as a Python program does not require multithreading, performs very few calculations directly in Python code (outsourcing them to C, e.g. via NumPy), and does not have a complex program flow, it can be just as performant as C++. Unfortunately, this is only true for simple programs, like our client implementation used for the article.
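As a toy illustration of this point (not from the manuscript): the same reduction over a million numbers is far slower when it runs over boxed Python objects than when it is delegated to NumPy's compiled C implementation.

```python
# Toy illustration: work executed on Python objects vs. work delegated to C.
# Absolute timings are machine-dependent; only the ratio matters here.
import timeit
import numpy as np

data = list(range(1_000_000))
arr = np.array(data)

py_sum = timeit.timeit(lambda: sum(data), number=10)   # iterates over boxed Python ints
np_sum = timeit.timeit(lambda: arr.sum(), number=10)   # tight loop over raw ints in C

print(f"python sum: {py_sum:.3f}s, numpy sum: {np_sum:.3f}s")
```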
– I appreciated the expanded discussion of the intended uses for the package, like the discussion of the potential for using multiple pis together, etc. I think that and the brief descriptions of potential tasks help the paper!
The discussion now contains an overview and description of systems that have already been built using LabNet.
– I don't see a discussion of documentation in the main text, I don't think it's worth holding the paper up over, but I again make the recommendation to the authors to at least discuss their plans for documentation and future maintenance, as that is really the critical factor for whether a package like this will be adoptable by other labs. The authors briefly address this in their response, but yes this is important information for prospective users to have!
Documentation is now addressed at the end of the Discussion. Much more detailed documentation is planned for the next major version of LabNet, which is already in development.
– Some of the other concerns that I raised in the prior recommendations for the authors were not addressed, perhaps that was my fault in not understanding how the public review. vs recommendations to authors work at eLife.
We have been careful to consider all points raised. With that in mind we have again carefully checked the manuscript.
Figure comments:
Additional comments on new text:
– Listings: The inline comments and in-text descriptions are much appreciated!
– Figure 1: You designed that in TikZ? I am amazed. I would love to see that code. I checked out the TikZ code for the other figures and am very impressed.
Source code for all figures is now included.
– Figure 2b-d: The y-axes are unlabeled.
Now all y-axes have labels.
– L17-18: I don't see stress test comparisons for the other packages, so the "unlike others" doesn't seem to be supported by the text.
Removed; see the first response above for more detail.
– L18: typo, latenies -> latencies.
– L61-63: This seems like an odd definition of openness to me, which typically means that the source is inspectable. I would call the "control an experimental chamber on its own" part independence or modularity, and the "or together with a number of other nodes" interoperability or scalability. I am unsure how one would use multiple LabNet nodes in the same task, as an example doesn't seem to be in the text! This also seems to contradict L66-67 "However this comes with the restriction that at most one experimental system can be connected to each RasPi" – what counts as an experimental system here? Are the authors just referring to a particular set of hardware components which could be combined in a single experimental chamber? That clarification would resolve the conflict to me.
The definition of openness and scalability refers to the distributed network and not to the source code; the definition comes from Tanenbaum's book. In any case, I adjusted the text here a little; hopefully this makes it clearer.
Examples of experimental systems are shown in Figure 1, and now also in Figure 4. The Discussion now contains a brief description of the systems built so far and controlled with LabNet. But actually, anything can be an “experimental chamber”. Even Listings 1-3 already describe a system: a poke sensor, a valve, and an LED. Small and simple experiments can already be realised with this hardware.
What is actually described here is that we want to bring any number of experimental systems into a network and then control them simultaneously, with one or multiple clients.
– L71-72: I am not sure what this means, the client is the controlling computer, but not sure what a task is in this context. And I thought that the hardware control happened on the raspi (server?).
Changed “task” to “duty”. Actually, the “task” or “duty” is in the first part of the sentence: “hardware control in the context of the experiments”. I added a new sentence.
– L72-73: from what I recall you also provide clients in these languages? might be worth some clarification describing what you mean by writing clients in multiple languages – eg. that clients written in multiple languages can interact with the underlying C++ library?
The sentence was moved to the end of the paragraph and now explicitly refers to Protobuf.
– L171: I found the description of the new crossover-based tests a bit hard to follow and it took me a few reads; I think a diagram would be helpful here.
The description has been reworked. A diagram has been added as an illustration.
– L210-221: I can't really tell how the code works for the new tests; it would be really helpful to link to the source code for each of the tests in the main text so we know what you're referring to! For example, the stress test looks like it does not send instructions over the network but operates on local logic as well: https://github.com/WinterLab-Berlin/LabNet/blob/3963f3371610d828e44af1e27ba6374cacc79748/examples/cpp/perf_test/main.cpp
The link leads to the measurement software that ran on the second RasPi. It generates the test signal, waits for the reaction from LabNet, Autopilot, etc., and saves the measured latencies locally in a CSV file. It has no LabNet dependencies.
I see the data availability statement and found the repo, but it would be nice to have those separated out instead of inside of a zip file so you could link to them.
Source code for Autopilot, Bpod and pyControl tests is now inside extra folders in the repository (autopilot, bpod and pyControl).
– L228-229: The inclusion of whisker now reads as odd, since I think assuming latencies based on polling frequency is probably a pretty bad assumption in most cases.
Polling frequency normally gives the worst-case times: for example, at a 1 kHz polling rate an event can wait at most 1 ms before it is detected. But of course, there may be other factors which can impact the latencies.
– L218-241: I am not sure how the latencies are measured for the other systems, as it looks like you were taking software timestamps for the LabNet tests, but I don't see timestamps being taken in any of the other comparison tests, so I assume that external timestamps were being taken? It also seems like some external trigger would be needed in some of these tests as well (see below)? It is also important to validate any software timestamps with external hardware timestamps, and they shouldn't be mixed without validation (eg. software timestamps could either exaggerate or underestimate latencies depending on where they are taken). Some additional clarification is needed here.
Latency measurements with Autopilot, Bpod and pyControl were done in exactly the same way as with LabNet. The second RasPi with the same measurement software was used for all tools. The description at the beginning of the “Comparison” chapter has been changed to reflect this. And yes, an external trigger is needed in all tests; this was provided by the second RasPi. The second RasPi is also the time reference. And because it runs completely independently of all tools, the measurements are comparable and just as good as with an oscilloscope. The measured latency data for Autopilot, Bpod and pyControl are also part of the second repository.
From my reading it doesn't look like the pycontrol or autopilot tests would work, but I am out of the lab and don't have a Pi to run them on myself.
All tests are runnable. After the latency tests, the source code was added to the repository without modifications. The pyControl test is very simple, see my short description below, but it needs a bit of pyControl API knowledge. The Autopilot test is a simplified version of the “free water task” from the official Autopilot repository.
For the pycontrol test, it seems like it would go
- on_state(event='entry') -> p_off
- off_state(event='p_off') -> goto on_state
- on_state(event='entry') …
and if the 'on' state was triggered manually from an external trigger it would go like
- on_state(event='p_on') -> goto off_state
- off_state(event='entry') -> switch p_out.on()
- on_state(event='p_on') -> goto off_state
- off_state(event='entry') …
But I admit I am not altogether familiar with pycontrol. Some comments in the source would be lovely.
The pyControl test is very simple. “p_out” is a digital output that acts as the response to the external event. “p_in” is a digital input with “p_on” as its rising-edge and “p_off” as its falling-edge event; “p_in” also carries the external trigger, which comes from the measurement RasPi. Short state machine description (a minimal code sketch follows the list):
– enter “on_state” -> turn “p_out” off
– wait until external trigger “p_on” -> go to the state “off_state”
– enter “off_state” -> turn “p_out” on
– wait until external trigger “p_off” -> go to the state “on_state”
– repeat from beginning
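To make this concrete, a minimal pyControl-style task sketch matching this description is shown below. This is illustrative only: the hardware definition module and its pin assignments are hypothetical placeholders, and the actual test script is in the repository's pyControl folder.

```python
# Hypothetical pyControl task sketch of the state machine described above.
# The hardware definition module and its contents are placeholders; the
# real test script lives in the article's data repository.
from pyControl.utility import *
import hardware_definition as hw  # assumed to define hw.p_in as a Digital_input
                                  # (rising_event='p_on', falling_event='p_off')
                                  # and hw.p_out as a Digital_output

states = ['on_state', 'off_state']
events = ['p_on', 'p_off']
initial_state = 'on_state'

def on_state(event):
    if event == 'entry':
        hw.p_out.off()           # response line low
    elif event == 'p_on':        # rising edge from the measurement RasPi
        goto_state('off_state')

def off_state(event):
    if event == 'entry':
        hw.p_out.on()            # response line high
    elif event == 'p_off':       # falling edge from the measurement RasPi
        goto_state('on_state')
```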
For the Autopilot test, there are two stage methods in a cycle, water() and response(). The water method clears the stage_block, which would cause the running pilot to wait for any set triggers. A trigger is set for the 'sig_in' pin that should set the 'sig_out' pin high. When the sig_in trigger is received, that should cause the response() method to be called, which sets 'sig_out' low after some delay and then returns to the water stage immediately. This would require some external input to trigger the sig_in event, and then the timestamps of the external input and the sig_out pin would be compared. If the sig_out pin were wired to sig_in, the test wouldn't work, as the sig_out pin would never be set high. Having the `prefs.json` file from the pilot would be useful to include here to avoid ambiguity in system configuration, as I am assuming the default polarity configuration settings are used on the digital input and output classes.
The file “prefs.json” is now included and inside the “autopilot” folder. As with other tools Autopilot’s RasPi needs to be connected with 2 pins to the measurement RasPi. Autopilot needs to wait for the external events and reacts to them. The time is only measured on the second RasPi, not on Autopilot’s RasPi.
A set of tests that are more similar to the tests described for LabNet are available in the plugin accompanying our manuscript: https://github.com/auto-pi-lot/plugin-paper/blob/c6263a4890b7d6101688158d8acb3aaeb9199533/plugin_paper/scripts/test_gpio.py documented here: https://wiki.auto-pi-lot.com/index.php/Plugin:Autopilot_Paper We find the roundtrip (oscilloscope measurement of input to output) latency to be 400us.
These tests contain nearly no Autopilot functionality. For example, in the "test_readwrite" test, a function is simply registered as a pigpio callback, and pigpio automatically calls this function when a signal is present; thus, it tests only the speed of pigpio and its Python interface (see the sketch below). I don't think real experiments in Autopilot work that way.
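For contrast, the kind of bare pigpio echo used in that test can be sketched as follows; the pin numbers are placeholders, not those from the plugin. The edge callback drives the output directly, so no task or state-machine logic runs in between.

```python
# Sketch of a bare pigpio round-trip test: the edge callback mirrors the
# input straight to the output, so only pigpio and its Python bindings are
# measured. Pin numbers are placeholders.
import pigpio

SIG_IN, SIG_OUT = 17, 18              # BCM pin numbers (placeholders)

pi = pigpio.pi()                      # connect to the local pigpio daemon
pi.set_mode(SIG_IN, pigpio.INPUT)
pi.set_mode(SIG_OUT, pigpio.OUTPUT)

def echo(gpio, level, tick):
    # Called by pigpio on each edge of SIG_IN; no framework code in between.
    pi.write(SIG_OUT, level)

cb = pi.callback(SIG_IN, pigpio.EITHER_EDGE, echo)
```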
The “read and set GPIO” test in our paper produces much more realistic latencies, simply because all tools, including LabNet, have to use the regular logic they would use to run experiments.
I don't doubt the authors ran the tests, and please correct where my read of the code is wrong, but I think some additional detail is needed in the reporting of the results in any case.
I think that the test description in the previous version of the manuscript was misleading. It has now been rewritten. The former version arose because I wanted to describe the connection between the measuring RasPi and LabNet/Bpod/pyControl/Autopilot very precisely; unfortunately, it became too complicated. Actually, there are only two pins connected: one as an external trigger and the second as a response. The tests as described now should be understandable.
Reviewer #2 (Recommendations for the authors):
The manuscript introduces LabNet as a network-based platform for the control of hardware in Neuroscience. The authors recognize and attempt to address two fundamental problems in constructing systems neuroscience experiments: on one hand the importance of precise timing in the measurement and control of behavior; on the other hand, the need for flexibility in experimental design. These two goals are often at great odds with each other. Precise timing is more easily achieved when using fewer, dedicated homogeneous devices, such as embedded microcontrollers. Conversely, flexibility is more easily found in the diversity of devices and programming languages available in personal computers, but this often comes at the cost of a non-real-time operating system, where timing can be much harder to predict accurately. There is also a limitation on the number of devices which can be simultaneously controlled by a single processor, which can be an impediment for high-throughput behavior studies where the ability to run dozens of experiments in parallel is desirable.
LabNet proposes to address this tension by focusing on the design of a hardware control and instrumentation layer for embedded systems implemented on top of the Raspberry Pi family of microprocessors. The idea is to keep coordination of experimental hardware in a central computer, but keep time-critical components at the edge, each node running the same control software on a Raspberry Pi to provide precise timing guarantees. Flexibility is provided by the ability to connect an arbitrary number of nodes to the central computer using a unified message passing protocol by which the computer can receive events and send commands to each node.
The authors propose the use of the C++ programming language and the actor-model as a unifying framework for implementing individual nodes and present a series of benchmarks for evaluating the performance of LabNet in the Raspberry Pi, followed by a series of benchmarks comparing LabNet against other established hardware control platforms.
The first set of benchmarks is presented in Figure 2 and is used to understand how LabNet retains its performance across a variety of different Raspberry Pi hardware, and clients written in different languages (C++, C# and Python). Different tests are used to measure latency over the network (Set digital out test), and full round-trip latency by using a digital input event to directly trigger a digital output through LabNet (Read and set GPIO test).
A new stress test is also introduced in this revised manuscript to evaluate how many pins in parallel can be monitored by LabNet. Unfortunately, the plot in panel C is confusing, since the text mentions a measure in events per second, but the plot seems to have the same range of values as the other panels in the figure, and the axes are not labelled, so it is not clear whether we are watching a drop in processed events per second, or a drop in latency. This should be clarified, and all axes labelled accordingly.
The text has now been modified: the results in the figure are described briefly first, then the events per second.
All y-axes now have labels.
The methodology of the tests is hard to follow from the text description alone. It would be useful to have a schematic diagram of the wiring used in the test, as well as an example interaction diagram with a timeline of how the different events in each component (PC and Raspberry Pis) trigger each other and which time intervals are being measured.
The test descriptions have been changed and simplified. They should now be easier to understand. A time signal diagram is now included.
A second set of benchmarks is then presented in Figure 3 comparing LabNet to other established hardware control platforms. These are used to inform how well LabNet running over the network compares to different platforms running on local hardware.
In the revised manuscript, the authors use the same "read and set GPIO" test used for Figure 2. The authors demonstrate that LabNet achieves similar latencies to the local platforms despite hardware control running over the network. The tests are fair, and the results support the authors' conclusion, although it would have been preferable to include an independent measure of electrical signal timing for the Read and set GPIO test through an oscilloscope to exclude possible confounds with different software timestamping strategies of events in the different platforms tested (see for example Akam et al., 2022).
See my responses above to the first reviewer.
The manuscript concludes with a Discussion section highlighting the possibilities for interfacing with different hardware using existing LabNet adaptors for the Raspberry Pi. Here I remain at odds with the authors as I don't believe they have convincingly demonstrated their stated aim of ease of extensibility either in the client or in the server, which I feel is crucial to introduce LabNet as a platform for the open-source community.
Some examples of operant boxes for rodent experiments controlled by LabNet are presented in Figure 1. A variety of different devices are represented in the example schematics, mostly digital inputs and outputs, but also a visual display monitor and a tone generator, both mentioned in the discussion, the latter also in the code listings. The variety and heterogeneity of hardware components in such tasks is typically challenging to coordinate and synchronize in a full experiment so it would be great to understand how exactly the authors envision LabNet to play a part in the assembly of such experiments.
Synchronisation follows automatically from the order in which the events (including those from several RasPis) arrive. Coordination is provided by the experiment logic.
Unfortunately, none of these complete examples are developed in the body of the text, and it is unclear whether these are actual experiments collecting data in the lab. The authors have argued that listings 1-3 present code examples of a simple "experiment". However, these seem to be mostly restricted to setting up and configuring individual modules and single commands in LabNet. It is not clear how these basic building blocks are expected to be used to coordinate multiple devices in the context of an application running a full experimental protocol, and what are the caveats and limitations in such a case from a developer's perspective.
Unfortunately, it would be beyond the scope of this article to describe the client side of the experiments. The focus of the current article is clearly the core LabNet functionality, which is independent of specific client implementations. With Listings 1-3 we describe exactly what a developer of client software has to do: open TCP/IP connections to all LabNet/RasPi devices, initialize the hardware, and communicate with it. It does not matter how much hardware is connected or how many RasPis are involved; the logic is always the same.
Specifically, it would have been important to see how well the code listings 1-3 would scale up or integrate the control of a full behaviour task with multiple conditions, or how to synchronize and benchmark the system when integrating with other hardware, such as cameras or physiology recordings. Even if LabNet is just a small component of the final system, it would be important to describe more fully a few examples as this would allow the non-technical reader to make a quick assessment of the versatility of LabNet for different types of experiments and to make it clear where exactly LabNet fits in the design of an experimental rig.
A description of some of the systems built so far is now at the beginning of the Discussion section.
Is the system targeting only rodent tasks, or is it reasonable to adapt the system to work for Drosophila or zebrafish? Can nodes easily support the simultaneous generation of pulse trains, visual stimulation, and video capture, and if so, how many tasks should each node be responsible for to optimize control bandwidth? Given the generality of the Raspberry Pi platform, I feel it would be important for readers to find some of these answers in the manuscript, even if just in the form of suggestive examples, to clarify how LabNet might be positioned in the space of open-source hardware in neuroscience, now and in future versions. A table listing exactly which modules are available would also help to make users aware of the possibilities of the platform.
We think that the great advantage of LabNet is that it can be used universally for laboratory experimental automation, both within and beyond neuroscience. It is agnostic to the species under study; we have used it with animals from bats to flies. A description of some of the systems built so far is now at the beginning of the Discussion section.
The LabNet architecture with actors explicitly targets the execution of multiple tasks. The Discussion section now contains a short discussion of performance and bandwidth. So far, we have never had performance or bandwidth issues with LabNet/RasPi, although this excludes applications with video acquisition. A list of the already available interfaces is in the "Implementation" subsection.
Alternatively, given the authors' years of experience in developing embedded animal behaviour platforms, redesigning a few examples from that pool of experiments in LabNet would be very valuable for evaluating how this new software can help address and alleviate common implementation bottlenecks.
We wanted to address two points with LabNet: performance and the ability to control a large number of experimental systems. We believe we succeeded in both, and both points are now discussed in more detail in the paper.
Finally, the ease with which the SObjectizer framework allows LabNet to be extended is discussed. The reorganization of the document to move the implementation details into the Materials and methods section has overall made everything easier to read and follow.
I would have preferred, however, to see more examples of how to extend LabNet for a new custom device, compared to existing platforms such as Arduino or pyControl, in addition to the lengthier discussion of actor models. Even though a software plug-in system is not currently supported, it is possible to extend LabNet by modifying its source code. However, all listings in the paper currently target only the PC client side of LabNet. It would be great to see a few examples of the server side, or a schematic discussion of how to integrate new hardware. This would give a more practically grounded introduction to the actor model and help the reader more easily understand how to leverage and modify the LabNet open-source code for their specific purposes.
Unfortunately, this would have become very technical. Readers would need to know C++, and treating the topic comprehensively would require about 5-10 more pages. In the case of actors, it would also largely repeat the many books already written on the subject. In the case of the extensibility of LabNet, we could only describe the current status, which may already differ in the next version; that will certainly be the case once the plug-in system has been implemented. We fully agree that this information should be available. However, our decision has been to place it in the online documentation. This is the main reason that we have remained more general in the Methods section.
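A rough flavour of the server-side pattern can nevertheless be sketched without the full treatment. The agent and message type below are invented for illustration and do not appear in the LabNet source; they only show the general SObjectizer structure a hardware extension would follow: one actor per hardware interface, reacting to subscribed messages.

// Hypothetical sketch of a server-side hardware extension as a
// SObjectizer agent. The message type and agent are illustrative
// placeholders; the real LabNet source defines its own messages
// and hardware wrappers.
#include <so_5/all.hpp>
#include <iostream>

// Message asking the actor to drive a (hypothetical) output pin.
struct set_output final
{
    int pin;
    bool state;
    set_output(int p, bool s) : pin(p), state(s) {}
};

// One agent per hardware interface: it owns the device state and
// handles its messages sequentially, so no explicit locking is needed.
class output_agent final : public so_5::agent_t
{
public:
    using so_5::agent_t::agent_t;

    void so_define_agent() override
    {
        so_subscribe_self().event(&output_agent::on_set_output);
    }

private:
    void on_set_output(const set_output& msg)
    {
        // A real implementation would call into a GPIO library here.
        std::cout << "pin " << msg.pin << " -> " << msg.state << '\n';
        so_environment().stop();  // end the demo after one message
    }
};

int main()
{
    so_5::launch([](so_5::environment_t& env) {
        so_5::mbox_t out;
        env.introduce_coop([&](so_5::coop_t& coop) {
            out = coop.make_agent<output_agent>()->so_direct_mbox();
        });
        // Elsewhere in the server, a network actor would translate an
        // incoming Protobuf command into exactly this kind of send.
        so_5::send<set_output>(out, 17, true);
    });
    return 0;
}

Because each agent processes its message queue sequentially, a new hardware module gets thread safety for free; the extension work reduces to defining messages and handlers of this kind.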
https://doi.org/10.7554/eLife.77973.sa2
Article and author information
Funding
Deutsche Forschungsgemeinschaft (SFB 1315 project-ID 327654276)
- Alexej Schatz
Deutsche Forschungsgemeinschaft (EXC 257: NeuroCure project-ID 39052203)
- Alexej Schatz
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank R Cardinal, T Akam, J Sanders, and J Saunders for comments on an earlier version of the manuscript.
Support for this work was received through the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), SFB 1315, project-ID 327654276, and EXC 257: NeuroCure, project-ID 39052203.
Senior Editor
- Kate M Wassum, University of California, Los Angeles, United States
Reviewing Editor
- Mackenzie W Mathis, EPFL, Switzerland
Reviewers
- Jonny L Saunders, University of Oregon, United States
- Gonçalo Lopes, NeuroGEARS Ltd, United Kingdom
Version history
- Received: February 18, 2022
- Preprint posted: March 1, 2022 (view preprint)
- Accepted: December 12, 2022
- Version of Record published: December 30, 2022 (version 1)
Copyright
© 2022, Schatz and Winter
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.