Figures and data

Software Overview.
An end-to-end GUI pipeline for studying interactions between target and effector cell pairs (from left to right). After loading an experiment project that mimics a multiwell plate structure, the user can apply preprocessing steps to the 2D time lapse microscopy images before segmentation. Target and effector cells are then segmented, tracked, and measured independently. Events are detected from the resulting time series, and the co-culture images are distilled into tables of single-cell measurements. The neighborhood module links cells in spatial proximity, and the cell-pair signal analysis framework facilitates the investigation of interactions between cell pairs. Eye and brush icons indicate steps where visual control and corrections are possible, with an appropriate viewer.
Fig. S6. Main GUI windows.
Fig. S7. Processing modules to extract single cells.
Fig. S8. Background correction methods.
Tab. S2. Generalist deep learning segmentation models.

Comparative table of software functionalities with a selection of available solutions.
A ✓ is assigned if the task can be carried out without coding. The use of an integrated solution or plugin is indicated in parentheses.

Functional response of immune cells in a spreading assay.
A) Schematic (top) and snapshot (bottom) of a spreading assay imaged by time-lapse RICM. Primary NK cells sediment on an HER2-coated surface in the presence of bispecific anti-CD16 × HER2 antibodies. Cells touch the surface in a desynchronized manner and may spread after a stochastic hovering duration. Cell contours are shown in orange for cells classified as spread on the surface and in blue for hovering cells (the same colors are used in the schematic). B) A traditional segmentation pipeline is used to generate a first instance segmentation of the cells imaged in RICM. Cell masks can be manually curated before training a new Cellpose [36] model, either from scratch or from an existing generalist model. The bar plot compares segmentation scores obtained with three different methods that can be called within the software. With the improved results yielded by the new model, the user can proceed to tracking and measurements. C) Intensity time series for a cell performing a contact-and-spreading sequence (see text for details). D) Single-cell intensity time series color-coded by class: 1) the cell spreads during the movie ('event', orange), 2) the cell is not observed to spread during the movie ('no-event', green), and 3) the cell is already spread at the beginning of the movie ('left-censored', purple). E) Benchmarking of automatic cell-class determination, reported as a confusion matrix, and comparison of automatically determined spreading times. F) Individual cell time series (tonal, morphological, etc.) are synchronized with respect to a characteristic time to measure the average population response at the event time. G,H) Functional response of the cell population as a function of bsAb concentration: hovering survival, or the distribution of cell decision times before spreading (G), and spreading rate (H).
Fig. S9. Designing a traditional segmentation pipeline.
Fig. S10. Segmentation correction and annotation.
Fig. S11. Overview of segmentation strategies.
Video: ricm events.mp4
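The event synchronization of panel F amounts to aligning each single-cell trace on its detected event frame before averaging across cells. A minimal numpy sketch of this idea (illustrative only, not Celldetective's implementation; function and parameter names are hypothetical):

```python
import numpy as np

def synchronize_series(series_list, event_frames, window=5):
    """Align single-cell time series on their event frame and average.

    series_list : list of 1D arrays (one intensity trace per cell)
    event_frames: event frame index for each cell
    window      : number of frames kept on each side of the event
    Frames falling outside a trace are padded with NaN and ignored
    in the population mean.
    """
    aligned = np.full((len(series_list), 2 * window + 1), np.nan)
    for i, (trace, t0) in enumerate(zip(series_list, event_frames)):
        for j, t in enumerate(range(t0 - window, t0 + window + 1)):
            if 0 <= t < len(trace):
                aligned[i, j] = trace[t]
    return np.nanmean(aligned, axis=0)
```

The NaN padding lets cells whose event occurs near the start or end of the movie still contribute the frames they do have.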

High throughput cytotoxic response of cancer cells in co-culture with immune cells.
A. Schematic side view of the target/NK cell co-culture assay for bispecific ADCC (top) and representative multimodal composite images, obtained at two different time points, with target nuclei labelled in blue, dying cells in red and NK cells in green (bottom). The same colors are used in the schematic. B. Decomposition of partly overlapping fluorescence channels and benchmark of DL segmentation models. Brightfield and fluorescence images at three different time points. The Hoechst channel (target nucleus) is taken as input to the existing StarDist versatile fluo or Cellpose nuclei models (available directly in Celldetective). Two new models trained in Celldetective were benchmarked: a transfer of the StarDist versatile fluo model to our MCF-7 nuclei dataset, with Hoechst as input, and a StarDist multimodal model trained from scratch using 4 channels as its input. C. Time series of nuclear fluorescence intensities and nucleus apparent area for a set of non-dying target cells (left) and of dying cells (right); the reference time t0 is the death time for the dying cells. D. Benchmark of three methods for event classification and regression (threshold method on PI, DL classifier on PI, DL classifier on PI and nuclear area). Top row: confusion matrices with 3 classes (fraction of predictions displayed). Bottom row: correlation plots. E. Survival curves of target cells without NK cells or in the presence of NK cells for different bsAb concentrations. F. Schematic of neighbor counting and histogram of target neighbor counts. G. Survival curves for two subpopulations of targets at 100 pM bsAb, split as a function of the local target cell density.
Fig. S12. Event detection model architectures.
Fig. S13. Neighbor counting methods.
Tab. S3. MCF7 nuclei segmentation models.
Video: adcc rgb.mp4
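The survival curves of panels E and G are standard right-censored survival estimates: cells that never die (or leave the field of view) within the movie are censored rather than discarded. A minimal Kaplan-Meier sketch in plain numpy (illustrative only; the software's own estimator may differ in details such as tie handling):

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate.

    durations: time of death (event) or of last observation (censored)
    observed : 1 if the death was observed, 0 if right-censored
    Returns the event times and the survival probability S(t) at each.
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    times = np.unique(durations[observed == 1])
    surv, s = [], 1.0
    for t in times:
        at_risk = np.sum(durations >= t)          # still alive just before t
        deaths = np.sum((durations == t) & (observed == 1))
        s *= 1.0 - deaths / at_risk               # product-limit update
        surv.append(s)
    return times, np.array(surv)
```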

Single time point analysis of effector-target interactions.
In the absence of effector tracking, fluorescent bsAbs were used to study the distribution of Ab on effectors and targets. A. bsAb intensity measured on effector cells in contact with targets with high or low antigen expression. bsAb CE4-21 binds to both HER2 and CD16, while the CE4-X control antibody binds only to the HER2 antigen on target cells. B. Representative snapshot of the fluorescence channel for each condition of A. C. Empirical cumulative distribution functions (ECDFs) for the simulated (gray, sum of CE4-21/no contact + CE4-X/contact) and observed (orange, CE4-21/contact) distributions. The dotted black line represents the Kolmogorov-Smirnov statistic, i.e., the maximum vertical separation between the two ECDFs.
Tab. S4. Primary NK segmentation models.
Fig. S14. Effect size at each time point.
Video: single-timepoint-contact.mp4
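The Kolmogorov-Smirnov statistic of panel C is simply the largest vertical gap between two empirical CDFs, with the "simulated" sample formed by pooling the two control conditions. A short numpy sketch (illustrative, not the published analysis code):

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    separation between the two empirical CDFs."""
    a = np.sort(np.asarray(sample_a, dtype=float))
    b = np.sort(np.asarray(sample_b, dtype=float))
    grid = np.concatenate([a, b])                 # evaluate at all jumps
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))
```

Here the simulated sample would be something like `np.concatenate([ce4_21_no_contact, ce4_x_contact])` compared against the observed CE4-21/contact intensities.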

Effector-target dynamic interactions.
A. Fraction of LAMP1-positive effector cells for different bsAb conditions, decomposed by contact with the targets (over all time points). B. Average effector velocity for different bsAb conditions, decomposed by target contact (up to two estimates per effector). C. Examples of killer-victim identification with one (left) or two (right) killers. One target cell (center) is connected to effector neighbors via colored segments. Dynamic effector-target interaction monitoring includes the PI signal of the target cell, the relative distance and relative velocity between target and effector, as well as the LAMP1 signal in effector neighbors. D, E. Characteristic parameters for killer/victim and non-killer/victim pairs. Effector cells are manually annotated as potential killers or not. Each pair of points represents one victim target, with the parameter averaged over all effector neighbors and decomposed by killer class. D. LAMP1 intensity. E. Relative effector-target velocity.
Video: cell interactions.mp4

Generalist deep learning segmentation models.
This table lists the different generalist models (Cellpose or StarDist) which can be called natively in Celldetective. The sample images are cropped to (200 × 200) px and rescaled homogeneously to fit in the table.

MCF7 nuclei segmentation models in the presence of primary NK cells.
Each model was trained on the same dataset of ADCC images, picking only the relevant channels.

Primary NK segmentation models.
The models have been trained on a dataset of annotated primary NKs in ADCC images (primary NKs w MCF7).

Event detection models.
We trained the following 1D DL models to classify and regress events of interest. The mean event response, centered at the event time, is shown for each channel in the pattern column.

Main GUI windows.
A) Experiment project selection window. B) GUI to generate a new experiment, where the user provides the metadata. C) Main interface after loading a project. The process block for the effector population is expanded, showing the 4 main steps detailed in Fig. 1.

Processing modules to extract single cells.
Processing modules are shown on the left, with brief illustrations of the main options, while the output and visualization modules are represented on the right side, following Celldetective’s graphical structure. A typical pipeline features successively (from top to bottom): A) an instance segmentation (either with traditional thresholding method or a deep learning model, stored in a model zoo, using StarDist [35] or Cellpose [37]). The output data are masks, which can be visualized and annotated using napari [56], for correction or model retraining. B) Bayesian tracking associated with feature extraction (btrack [41]), the output data being trajectories, which can also be visualized with napari. C) Single cell measurements of intensity, morphology, texture (based on cell position or on mask), the output being tables of trajectories enriched with features.

Background correction methods.
Screenshots of the background correction tabs in the software and illustration of the associated pipeline. Colored bounding boxes match graphical parameters to their role in the pipeline. The thick black arrows indicate the starting point of each pipeline. A) The steps for a fit-based correction involve selecting a channel of interest and setting a threshold on the standard-deviation-filtered image to exclude the cells (done graphically with a viewer). A model (either a plane or a paraboloid) is then fit to the background pixels of each image. The fitted background is either subtracted or divided out, with or without clipping. B) With the model-free approach, a single background is reconstructed per well, following the same threshold approach but leveraging the multi-positional information to construct a median background. Instead of being applied directly, the background is scaled to minimize the difference with the current image's background. As in A), the background is then either subtracted or divided out, with or without clipping.
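For the fit-based correction of A), the plane case reduces to a linear least-squares fit on the pixels flagged as background. A minimal numpy sketch, assuming the cell-exclusion mask has already been computed from the std-filter threshold (names are illustrative, not the software's internals):

```python
import numpy as np

def fit_plane_background(image, background_mask):
    """Fit a plane z = a*x + b*y + c to the pixels flagged as background,
    then return the background-subtracted image.

    image          : 2D array (one channel, one frame)
    background_mask: boolean 2D array, True where no cell was detected
    """
    yy, xx = np.indices(image.shape)
    x = xx[background_mask].ravel()
    y = yy[background_mask].ravel()
    z = image[background_mask].ravel()
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    plane = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    return image - plane
```

The paraboloid variant would simply extend the design matrix with quadratic terms (x², y², xy); division instead of subtraction swaps the last line for `image / plane`.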

Designing a traditional segmentation pipeline.
Screenshots of the traditional segmentation pipeline configuration interface applied to a normalized RICM image of spreading primary NK cells. Top: User-selected mathematical operations (subtraction, absolute value) and preprocessing filters (Gaussian blur) are applied to the image, with results displayed in real time on the right. A binary mask, set using an interactive thresholding slider, is overlaid on the transformed image in purple. Spot detection is then performed on the Euclidean distance transform of the binary mask, using parameters for footprint size and minimum object distance; detected spots are shown as red scatter points. Bottom: After applying the watershed algorithm, the instance segmentation of cells is overlaid on the original normalized RICM image. The user can set filters on object features to exclude false positives; rejected cells are marked in red in both the feature scatter plot and the image.
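The sequence above (blur, threshold, distance transform, seed detection, watershed) can be sketched with scipy.ndimage alone. This is a toy illustration of the general technique under assumed defaults, not the software's code:

```python
import numpy as np
from scipy import ndimage as ndi

def segment_cells(image, sigma=2.0, threshold=0.5, min_distance=5):
    """Toy thresholding pipeline: Gaussian blur, binary threshold,
    Euclidean distance transform, local-maximum seeding, then a
    watershed on the inverted distance map."""
    blurred = ndi.gaussian_filter(image.astype(float), sigma)
    binary = blurred > threshold
    edt = ndi.distance_transform_edt(binary)
    # Seeds: local maxima of the distance map within a square footprint
    footprint = np.ones((min_distance, min_distance))
    maxima = (edt == ndi.maximum_filter(edt, footprint=footprint)) & binary
    markers, _ = ndi.label(maxima)
    # watershed_ift floods an 8-bit cost image outward from the markers
    cost = (255 * (1 - edt / edt.max())).astype(np.uint8)
    labels = ndi.watershed_ift(cost, markers)
    labels[~binary] = 0
    return labels
```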

Segmentation correction and annotation.
To view segmentation results, the user can click the “eye” button in the segmentation module, which opens images and labels in napari. Shown is a screenshot of a segmentation outcome for a normalized RICM image of spreading primary NK cells, with labels overlaid on the image. The user can correct the masks directly on the image, for example, to separate cells indicated by white arrows. New labels can be assigned and saved, and the full set of corrected masks can either be saved in place (top-right plugin) or exported as a training sample for a segmentation model (bottom-right plugin).

Overview of segmentation strategies.
The principle is illustrated for a mixture of two cell populations, effectors and targets. Celldetective provides several entry points (black arrows) to perform segmentation, with the intent of specifically segmenting one cell population (left: effectors, right: targets). The generalist DL models are listed in Tab. S2. Specific DL models are listed in Tab. S3 for targets and Tab. S4 for effectors. The traditional pipeline refers to a thresholding method accessible through the GUI (see Fig. S9), which can serve as a starting point to prepare a dataset for training a new DL model. The masks output by each segmentation technique can be visualized and manually corrected in napari. These corrections can be exported as a dataset of paired images and masks, to be used either to fine-tune a generalist model (transfer learning) or to train a new model from scratch.

Event detection model architectures.
The event detection models consist of two CNN-based models called back to back during inference to 1) classify single-cell time series and 2) detect the event times from the same time series. The models share a similar architecture of a first 1D convolution, followed by ResBlocks encoding the time series into either three neurons with a softmax activation for the classification problem (three classes) or one neuron with a linear activation for the regression problem. As illustrated here, only the time series that were classified as “event” are sent to the regressor model.
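To make the data flow concrete, here is a pure-numpy forward pass through this kind of architecture (initial 1D convolution, one residual block, global average pooling, softmax head). It is a structural sketch of an untrained classifier, not the actual trained model:

```python
import numpy as np

def conv1d(x, w, b):
    """'Same' 1D convolution: x (T, C_in), w (K, C_in, C_out), b (C_out)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.stack([np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
                    for t in range(x.shape[0])])
    return out + b

def res_block(x, w1, b1, w2, b2):
    """Two convolutions with a ReLU in between, plus the identity shortcut."""
    h = np.maximum(conv1d(x, w1, b1), 0.0)
    return x + conv1d(h, w2, b2)

def classify(x, weights):
    """Encode a (T, C) single-cell time series into 3 class probabilities
    via global average pooling and a softmax head."""
    (w0, b0), blocks, (wd, bd) = weights
    h = conv1d(x, w0, b0)
    for w1, b1, w2, b2 in blocks:
        h = res_block(h, w1, b1, w2, b2)
    pooled = h.mean(axis=0)            # global average pooling over time
    logits = pooled @ wd + bd
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```

The regressor described in the caption would share the same trunk but end in a single linearly activated neuron; only series classified as "event" would be passed to it.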

Neighbor counting methods.
A) All neighboring cells in contact or within the isotropic neighborhood are linked to the reference cell (inclusive method). B) Each neighboring cell is linked exclusively to the closest reference cell (exclusive method). C) Neighboring cells in contact or within the isotropic neighborhood are linked to the reference cell, with weights inversely proportional to their number of neighbors.
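The inclusive and exclusive schemes of A) and B) can be sketched with a simple distance matrix; the weighted variant of C) would additionally divide each link by the neighbor's own neighbor count. Names and defaults are illustrative, not the software's API:

```python
import numpy as np

def count_neighbors(reference_xy, neighbor_xy, radius, mode="inclusive"):
    """Count neighbor cells within `radius` of each reference cell.

    'inclusive': every neighbor within the radius is linked to every
                 reference cell whose disk contains it.
    'exclusive': each neighbor is linked only to its closest reference
                 cell (and only if it lies within that cell's radius).
    Returns an integer neighbor count per reference cell.
    """
    ref = np.asarray(reference_xy, float)
    nei = np.asarray(neighbor_xy, float)
    d = np.linalg.norm(ref[:, None, :] - nei[None, :, :], axis=-1)
    within = d <= radius
    if mode == "exclusive":
        closest = d.argmin(axis=0)          # closest reference per neighbor
        keep = np.zeros_like(within)
        keep[closest, np.arange(nei.shape[0])] = True
        within &= keep
    return within.sum(axis=1)
```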

Effect size at each time point.
Cliff’s Delta between the simulated sum of two distributions (CE4-21/off-contact/HER2+ + CE4-X/in-contact/HER2+) and the CE4-21/in-contact/HER2+ distribution, computed independently at each time point.
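Cliff's Delta is the probability that a value drawn from one sample exceeds one drawn from the other, minus the reverse. A minimal sketch, with the simulated distribution formed by pooling the two control samples (illustrative only):

```python
import numpy as np

def cliffs_delta(a, b):
    """Cliff's Delta effect size: P(a > b) - P(a < b) over all cross
    pairs; ranges from -1 (all a below b) to +1 (all a above b)."""
    a = np.asarray(a, float)[:, None]
    b = np.asarray(b, float)[None, :]
    return float((a > b).mean() - (a < b).mean())
```

Applied to each time point separately (e.g. `cliffs_delta(observed_t, np.concatenate([control1_t, control2_t]))`), this yields a curve of effect size over time.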