guitchounts/moseq2-ephys-sync

Tools to sync ephys data with video data using IR LEDs, built for Open Ephys recordings and Azure Kinect video (.mkv files). The sync signal comes from IR LEDs in view of the camera, whose trigger signals are also routed as TTL inputs to the ephys acquisition system. The LED on/off states are converted to bit codes, and the resulting sequences of bit codes in the video frames and ephys data are matched. The matches are used to build linear models (piecewise linear regression) that translate video time into ephys time and vice versa.
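As a rough sketch of the idea (illustrative only, not this package's actual API): LED on/off states are packed into integer bit codes, and the event times matched via those codes then anchor a regression between the two timebases. The package fits these piecewise; a single global fit is shown here for brevity.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Pack the on/off states of the IR LEDs into one integer bit code per
    # sync event (function name and LED count are hypothetical).
    def led_states_to_code(states):
        # e.g. [1, 0, 1, 1] -> 0b1101 == 13
        return sum(int(bit) << i for i, bit in enumerate(states))

    # Hypothetical matched event times (in seconds) recovered via the bit codes.
    video_times = np.array([0.0, 10.0, 20.0, 30.0])
    ephys_times = np.array([5.02, 15.01, 25.03, 35.02])

    # Fit video time -> ephys time on this segment.
    video_to_ephys = LinearRegression().fit(video_times.reshape(-1, 1), ephys_times)
    print(video_to_ephys.predict(np.array([[12.5]])))  # ~17.5 s in ephys time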

To get started, try the following in a terminal (these instructions assume you're on a compute node on a cluster):

  1. conda create -n sync_test python=3.7
  2. conda activate sync_test
  3. cd ~/code/ (or wherever you want the repo to live)
  4. git clone https://github.com/guitchounts/moseq2-ephys-sync.git
  5. cd ./moseq2-ephys-sync/
  6. python setup.py install
  7. pip install git+ssh://git@github.com/dattalab/moseq2-extract.git@autosetting-params (alternatively, try: pip install git+https://github.com/dattalab/moseq2-extract.git@autosetting-params)
  8. conda install scikit-learn=0.24 (moseq2-extract pins scikit-learn to an earlier version, so you need to update it to 0.24)
  9. module load ffmpeg
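To sanity-check the environment afterwards, a quick import test (module names assumed from the package names) should run without errors:

    # Quick environment check; 'moseq2_extract' is the assumed import name
    # for the moseq2-extract package installed in step 7.
    import sklearn
    import moseq2_extract
    print(sklearn.__version__)  # should report 0.24.x after step 8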

The script assumes your input folder structure looks like this:

/input_directory/
│
└───ephys_folder/ (this might be e.g. Record Node 10X/ or experiment1/, depending on the version of the Open Ephys GUI you're using)
│   │
│   └───recording1/
│       │
│       └───events/
│           │
│           └───Rhythm_FPGA-10X.0/
│               │
│               └───TTL_1/
│                     channel_states.npy
│                     channels.npy
│                     full_words.npy
│                     timestamps.npy
│
└───depth.mkv (can be named anything, with an .mkv extension)
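The TTL event files above are plain NumPy arrays, so you can inspect them directly if needed (paths here mirror the example layout; in the Open Ephys binary format, channel states are signed channel numbers and timestamps are in samples):

    import numpy as np
    from pathlib import Path

    # Peek at the Open Ephys TTL events; adjust the Record Node / processor
    # folder names to match your recording.
    ttl_dir = Path('/input_directory/ephys_folder/recording1/events/Rhythm_FPGA-10X.0/TTL_1')
    channel_states = np.load(ttl_dir / 'channel_states.npy')  # sign encodes rising vs. falling edges
    timestamps = np.load(ttl_dir / 'timestamps.npy')          # sample numbers, not seconds
    print(channel_states[:10], timestamps[:10])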

To run an extraction: python main.py -path /input_directory/

This will extract the IR LED data from the video and ephys files, find matches in the resulting bit codes, plot the results in /input_directory/sync/, and save two models that can be used for translating between the two timebases: video_timebase.p, which takes video times (in seconds) and translates them into ephys times; and ephys_timebase.p, which conversely takes ephys times (in seconds) and translates them into video times.

To use the resulting models, try:

  1. import joblib
  2. ephys_model = joblib.load('/input_directory/sync/ephys_timebase.p')
  3. video_times = ephys_model.predict(ephys_times.reshape(-1,1)) (assuming times are 1D arrays)
  4. video_model = joblib.load('/input_directory/sync/video_timebase.p')
  5. ephys_times = video_model.predict(video_times.reshape(-1,1))
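A quick round-trip sanity check (illustrative): translating ephys times to video times and back should approximately recover the originals.

    import numpy as np

    # Continuing from the snippet above: ephys -> video -> ephys should be
    # close to the identity if the sync worked.
    ephys_times = np.array([100.0, 200.0, 300.0])
    video_times = ephys_model.predict(ephys_times.reshape(-1, 1))
    recovered = video_model.predict(np.asarray(video_times).reshape(-1, 1))
    print(np.max(np.abs(recovered - ephys_times)))  # should be small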
