Facemap: a framework for modeling neural activity based on orofacial tracking
This dataset contains neural activity, face camera data, and behavioral tracking from 16 large-scale recordings in mouse visual cortex, along with the train/test data for the Facemap keypoint tracker. Please see the paper and code repository for more details about the data acquisition and analysis.
- neural_activity_*.zip: contains npz files with deconvolved neural traces "spks" (neurons by times), neuron positions "xpos" and "ypos", camera timestamps "tcam", neural timestamps "tneural", and running speed "run"
- filtered_keypoints.zip: contains npy files, each holding a dictionary with filtered keypoint traces "xy" and keypoint labels "keypoint_labels", which can be used to predict neural activity
- svds.zip: contains npy files with dictionaries in Facemap output format for the motion and movie SVDs of the camera recordings
- cam.zip: contains camera recordings in mp4 and avi format, and the raw keypoint tracking h5 files
- pose_estimation.zip: dataset used for training and testing the Facemap keypoint tracking network. The zipped file contains train and test files: mouse face images in PNG format and h5 annotations with "x" coordinates and "y" coordinates for the 15 labeled keypoints. Each subfolder in train and test is formatted as either "mousename_date_cameraview" or "cameraview_mousename_date".
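The npz and npy files above can be loaded with numpy. The sketch below is illustrative, not part of the dataset: it synthesizes tiny stand-in files that mimic the described key structure (the file names, array sizes, and the "kp_0" label are placeholders, not actual dataset contents), then shows the loading pattern, including the `allow_pickle=True` / `.item()` idiom needed to recover a dictionary saved as an npy file.

```python
import numpy as np

# --- Stand-in for a neural_activity_*.zip npz file (synthetic data) ---
n_neurons, n_bins, n_frames = 5, 100, 200
np.savez(
    "neural_activity_example.npz",                 # placeholder filename
    spks=np.random.rand(n_neurons, n_bins),        # deconvolved traces, neurons x times
    xpos=np.random.rand(n_neurons),                # neuron x positions
    ypos=np.random.rand(n_neurons),                # neuron y positions
    tcam=np.linspace(0, 10, n_frames),             # camera timestamps
    tneural=np.linspace(0, 10, n_bins),            # neural timestamps
    run=np.random.rand(n_bins),                    # running speed
)
dat = np.load("neural_activity_example.npz")
spks = dat["spks"]                                 # (neurons, times)

# --- Stand-in for a filtered_keypoints.zip npy file (synthetic data) ---
# npy files holding dictionaries must be loaded with allow_pickle=True,
# and .item() unwraps the 0-d object array back into a dict.
kp = {
    "xy": np.random.rand(n_frames, 15, 2),         # filtered keypoint traces
    "keypoint_labels": ["kp_%d" % i for i in range(15)],  # placeholder labels
}
np.save("keypoints_example.npy", kp)
kp_loaded = np.load("keypoints_example.npy", allow_pickle=True).item()
```

Note that the camera ("tcam") and neural ("tneural") timestamps are on different clocks/rates, so keypoint traces must be interpolated to the neural time base (or vice versa) before being used to predict neural activity.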
If you use this data please cite this data repository and the original paper:
Syeda, A., Zhong, L., Tung, R., Long, W., Pachitariu, M., & Stringer, C. (2023). Facemap: a framework for modeling neural activity based on orofacial tracking. Nature Neuroscience.