Computer Vision Senior Research Scientist
I am a Senior Research Scientist in Machine Learning and Computer Vision. I work on Social Signal Processing, Affective Computing and Creative AI.
Disclaimer. This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All people copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
The Mouse Action Recognition System (MARS): a software pipeline for automated analysis of social behaviors in mice C. Segalin, J. Williams, T. Karigo, M. Hui, M. Zelikowsky, J.J. Sun, P. Perona, D.J. Anderson, and A. Kennedy. bioRxiv, 2020 [PDF]
Smile Intensity Detection in Multiparty Interaction using Deep Learning P. Witzig, J. Kennedy and C. Segalin Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW), 2019 [PDF]
Automated Content Evaluation Using a Predictive Model S. D. Lombardo, C. Segalin, L. Chen, R. D. Navarathna, and S. M. Mandt 18-DIS-326-MEDIA-US-UTL, 2018 [PDF]
Mouse Academy: high-throughput automated training and trial-by-trial behavioral analysis during learning M. Qiao, T. Zhang, C. Segalin, S. Sam, P. Perona, M. Meister bioRxiv, 2018 [PDF]
What your Facebook Profile Picture Reveals about your Personality C. Segalin, F. Celli, L. Polonio, D. Stillwell, M. Kosinski, N. Sebe, M. Cristani and B. Lepri Proceedings of the 25th ACM International Conference on Multimedia, 2017 [PDF]
Reading between the turns: Statistical modeling for identity recognition and verification in chats G. Roffo, C. Segalin, A. Vinciarelli, V. Murino and M. Cristani IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), 2013 [PDF]
Statistical Analysis of Visual Attentional Patterns for Video Surveillance G. Roffo, M. Cristani, F. Pollick, C. Segalin and V. Murino Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, 2013 [PDF]
The expressivity of turn-taking: Understanding children's pragmatics by hybrid classifiers C. Segalin, A. Pesarin, A. Vinciarelli, M. Tait and M. Cristani International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS), 2013 [PDF]
Generative modelling of dyadic conversations: characterization of pragmatic skills during development age A. Pesarin, M. Tait, A. Vinciarelli, C. Segalin, G. Bilancia and M. Cristani Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction, 2013 [PDF]
Design and implementation of a vibrotactile depth sensor The main idea was to create a tool that lets the user sense spatial depth through tactile input. In particular, we built a table with four pressure sensors, each producing a different sound and haptic feedback [PDF]
Analysis of eye motions using AAM The general purpose of this project, in collaboration with Dr. Manganotti, was to analyze video sequences of epileptic patients. In particular, we created a classifier able to discriminate different stages of an epileptic seizure. To address this problem we used techniques based on Active Appearance Models (AAM). [PDF]
Extension of the mElite game An extension of the mElite game, set in space. Players can trade objects, fuel, and planets, and earn money and extra lives for each asteroid they destroy.
Gesture interaction with markers The aim was to use ARToolKit for designing 3D models. The program is able to recognize drawn and printed patterns (such as a shape). We studied a mechanism that lets the user interact with the program not only through a simple marker, but also through the keyboard or finger motion. [Slides] [Video]
This dataset is used for studying personal aesthetics, a recent soft biometrics application where the goal is to recognize people from the images they like. It is composed of 200 users and 40K images, where each user is represented by a set of preferred images. [ACCV14] [ICMI14] [IEEEForensics14] [ICIP14] [Dataset]
This dataset is used to infer both self-assessed and attributed personality traits (Big-Five Traits) of Flickr users from their galleries of favorite pictures. The dataset is composed of 60,000 pictures tagged as favorite by 300 users. [CVIU16] [IEEEAC16] [ACMBNI13] [Dataset] [Code Features]
Software to analyze mouse social interactions in videos: a deep-learning-based mouse detector, pose estimator, tracker, and behavior classifier implemented in TensorFlow. The system is optimized and integrated into a GUI. [In preparation] [Code (coming soon)]
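At a high level, the pipeline chains four stages per video frame: detection, pose estimation, tracking, and behavior classification. The sketch below only illustrates that stage ordering and data flow; every name in it (BehaviorPipeline, the stage callables, the toy labels) is hypothetical and is not the actual MARS API.

```python
# Illustrative sketch of a detect -> pose -> track -> classify pipeline.
# All names here are hypothetical, not the real MARS interfaces.

class BehaviorPipeline:
    """Chains detection, pose estimation, tracking, and classification."""

    def __init__(self, detector, pose_estimator, tracker, classifier):
        self.detector = detector
        self.pose_estimator = pose_estimator
        self.tracker = tracker
        self.classifier = classifier

    def process_frame(self, frame):
        boxes = self.detector(frame)                    # one box per mouse
        poses = [self.pose_estimator(frame, b) for b in boxes]  # keypoints
        tracks = self.tracker(poses)                    # identities over time
        return self.classifier(tracks)                  # behavior label

# Toy stand-ins for the learned models, just to show the data flow.
pipeline = BehaviorPipeline(
    detector=lambda frame: [(0, 0, 10, 10)],
    pose_estimator=lambda frame, box: {"nose": (box[0], box[1])},
    tracker=lambda poses: {"mouse_0": poses},
    classifier=lambda tracks: "sniffing" if tracks else "idle",
)
label = pipeline.process_frame(frame=None)  # toy frame; models ignore it
```

In a real system each callable would be a trained TensorFlow model, but the interface boundaries between stages would look much the same.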
I received a bachelor's degree in Multimedia Information Technology at the University of Verona, where I was mainly interested in Computer Vision, Augmented Reality, Human Computer Interaction, Computer Graphics and Perception. My bachelor's thesis proposed a face recognition system, which was installed at the door of the VIPS laboratory of the University of Verona. I completed a master's degree in Engineering and Computer Science with a thesis in Social Signal Processing (SSP), the domain aimed at the modeling, analysis and synthesis of nonverbal communication in human-human and human-machine interactions. The goal of the thesis was person re-identification through the way people chat with other subjects. During that period, I was lucky enough to also work on other research projects, such as recognizing the age of children by the way they talk with each other. SSP, together with Social Media Analysis, Personality Computing, Machine Learning and Computer Vision, became the main topics of my PhD at the Dept. of Computer Science in Verona (Italy). During the first year of my PhD I investigated the interplay between aesthetic preferences and individual differences, under the supervision of Marco Cristani. I had the great opportunity to move to Glasgow for some months and collaborate with Alessandro Vinciarelli on this project. I collected a dataset of 60K images favorited by Flickr users, extracted features from the field of Computational Aesthetics (CA), and used them to predict the personality of a user. Continuing in the CA perspective, we also proposed a soft biometrics application whose goal is to recognize people by considering the images they like as a new biometric trait. At the end of the second year of my PhD I moved for some months to Birmingham to collaborate with Mirco Musolesi, investigating the role of textual, visual and social cues in information propagation on Twitter.
My last contribution during the PhD was in the field of Deep Learning and Representation Learning, trying to generalize the particular cues that characterize each personality trait. While waiting to defend my PhD thesis, I worked as a research associate at Disney Research in Pittsburgh (PA). After my graduation, I moved to Pasadena (CA) to work as a postdoctoral scholar at the California Institute of Technology (Caltech), in the Computational Vision Lab, under the supervision of Pietro Perona and in collaboration with David Anderson. There I worked on animal behavior, in particular Computational Ethology, which brings together biologists and computer scientists with the common goal of understanding, analyzing, measuring and describing animal behavior using Machine Learning and Machine Vision algorithms and tools. My role was to develop a novel system able to detect, track and recognize mouse actions in videos. Between 2018 and 2020 I was a research scientist at Disney Research LA, working on Machine Learning, Computer Vision, Creative AI, Emotional AI, and Affective Computing and Perception, with the goal of creating new magical experiences in the theme parks, resorts, hotels and cruise ships. I built integrated systems that can sense human (social and non-verbal) behavior in order to deliver more seamless experiences. In particular, I developed applications for understanding, modeling and synthesizing social interactions to provide computers with similar abilities. I also explored the potential of AI systems to enable new forms of and processes for human creativity, using them as non-human collaborators to empower creative expression. In 2020 I joined Netflix as a Computer Vision senior research scientist, where I am part of the AE-Evidence team, which is responsible for the algorithmic creation and personalization of Netflix media assets.
I develop Computer Vision and Machine Learning algorithms to analyze and transform raw media sources in order to generate and recommend media assets, such as artwork and video trailers, that introduce Netflix content to our 190+ million members. Cross-functional work includes research, design, implementation, A/B testing, and deployment of algorithms into production. Other areas of research include algorithmically assisting the post-production of our original content.