I received my B.Tech. and Ph.D. degrees from the , India. My Ph.D. thesis, titled “”, focused on analyzing music audio to identify and transcribe different instruments/voices playing simultaneously. During my postdoc at the University of Oxford (UK), I developed systems based on linguistic principles, with applications in automatic language teaching and speech recognition for low-resource languages. At Amazon in Boston (USA), I worked on audio classification for the Alexa system, with research focusing on classification with imbalanced data.
Research interests: machine learning, signal processing, machine learning for Physics, time series analysis
- Opening for a research scientist/engineer in music processing: apply here
- I am looking for students/post-docs with a computational music or Physics background to join my group.
- UG students with music knowledge and experience in signal processing and coding are welcome to apply for SURGE or UG projects.
I am focusing mainly on the following areas:
- Machine learning for audio analysis (speech, music and acoustic events)
- Machine learning for Physics
- Time series analysis on sensor data
|22 Jan 2021:
Positions secured in the International BCI Competition organized by IEEE Brain:
- 1st position in the few-shot learning track
- 1st position in microsleep detection
- 3rd position in EEG based ERP detection
- 4th position in upper limb movements decoding
- 7th position in imagined speech classification
Team members: Dr. Tharun Reddy, Madhurdeep Jain, Jinang Shah, Palashdeep Singh, Vartika Gupta, Kushangi Mittal, Archit Bansal, Chittoor Murari
|3 Dec 2020:
Paper accepted in IEEE CICT 2020: Shivangi Ranjan and Vipul Arora, “A Bioinformatic Method Of Semi-Global Alignment For Query-By-Humming”, in IEEE Conference on Information and Communication Technology (CICT), 2020, pp. 1–5
|24 July 2020:
Clinical BCI Challenge 2020: Our team (named iBCI) secured overall 2nd position in Clinical BCI Challenge at IEEE WCCI 2020
|9 July 2020:
Grant approved by MPCB: “Technical Assessment of Low-Cost Sensor based PM2.5 and PM10 Monitoring Network in Maharashtra”
|25 Apr 2020:
SP Cup 2020: Our team secured 6th position in IEEE Signal Processing Cup competition 2020
|21 Mar 2020:
Paper accepted in FUZZ-IEEE 2020: Tharun Kumar Reddy, Vipul Arora, Laxmidhar Behera, Yukai Wang and Chin Teng Lin, “Fuzzy Divergence Based Analysis For EEG Drowsiness Detection Brain Computer Interfaces”, IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2020.
|24 Jan 2020:
Paper accepted in ICASSP 2020: Satyam Kumar, Tharun Kumar Reddy, Vipul Arora, and Laxmidhar Behera, “Formulating Divergence Framework For Multiclass Motor Imagery EEG Brain Computer Interface”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.
|11 Nov 2019:
ASEM DUO 2020 Professor Fellowship Award: for research exchange with University of Surrey, UK
|28 June 2019:
IMPRINT-2C grant approved: “Smart music tutor for Indian classical music”
|15 May 2019:
Paper presented in ICASSP 2019: Vipul Arora, Ming Sun and Chao Wang, “Deep Embeddings for Rare Audio Event Detection With Imbalanced Data”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019. [poster]
|13 May 2019:
General Linguistics Seminar Talk at University of Oxford: titled “Modern speech technologies and applications of phonology”.
|1 May 2019:
Initiation grant approved: “Machine Learning for Physics”
|6 Feb 2019:
SPARC grant approved: Collaboration project between IITK and MIT, titled “Machine Learning for Lattice Quantum Chromodynamics”
|1 Feb 2019:
Paper accepted in ICASSP 2019: Vipul Arora, Ming Sun and Chao Wang, “Deep Embeddings for Rare Audio Event Detection With Imbalanced Data”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019. [pdf] [blog]
Dr. Vipul Arora
Department of Electrical Engineering
Office: 305D, ACES building