Yaqub Jonmohamadi

Summary



I am a data scientist with a passion for LLMs, computer vision, machine learning, functional neuroimaging, and signal and image analysis. I am an active learner with a unique blend of commercial and academic experience.

Academic Background:
I hold a Bachelor's degree in Electronics, a Master's degree in Signal Processing, and a PhD in Medical Imaging. I have completed two postdoctoral positions: one at the University of Auckland (2015-2017) focusing on multimodal neuroimaging, and another at Queensland University of Technology (2018-2021) specializing in medical image analysis. I have authored over 25 peer-reviewed publications, with 13 as the first author.

Industry Experience:
I have been working in industry since January 2021 as a Data Scientist and ML Engineer, focusing on generative AI, agentic workflows, 3D computer vision, and machine learning, further bridging the gap between research and practical application.

Contact

Email: y.jonmo@gmail.com, LinkedIn, GitHub, Google Scholar, Location: Sydney, Australia







Sample work


AI Agent: A Chatbot Created Using Mistral LLM and Three Tools

Agent Diagram Sample Run GIF

An example of building an AI agent that uses the Mistral generative model together with three tools: Wikipedia search, a calculator, and a Convolutional Neural Network (CNN) for image classification.

The link for this work is available at: AI Agent
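As a minimal sketch of how such a tool-using agent can be wired up (the Mistral call is mocked here, and the function and tool names are illustrative, not those of the actual project):

```python
# Minimal sketch of a tool-using agent loop. The LLM call is mocked;
# in the real project a Mistral model decides which tool to invoke.

def calculator(expression: str) -> str:
    # Safely evaluate a simple arithmetic expression via the AST.
    import ast, operator
    ops = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}
    def ev(node):
        if isinstance(node, ast.BinOp):
            return ops[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return str(ev(ast.parse(expression, mode="eval").body))

def mock_llm(prompt: str) -> dict:
    # Stand-in for the generative model: route numeric queries to a tool.
    if any(ch.isdigit() for ch in prompt):
        return {"tool": "calculator", "input": prompt}
    return {"tool": None, "answer": "I can answer that directly."}

TOOLS = {"calculator": calculator}  # wiki search and CNN tools would register here too

def run_agent(query: str) -> str:
    decision = mock_llm(query)
    if decision["tool"] in TOOLS:
        result = TOOLS[decision["tool"]](decision["input"])
        return f"Tool {decision['tool']} returned: {result}"
    return decision["answer"]

print(run_agent("2 + 3 * 4"))  # Tool calculator returned: 14
```

The same dispatch pattern extends to the Wikipedia-search and CNN image-classification tools: each registers a callable in `TOOLS`, and the model's structured output selects among them.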

Broadcast Virtual: video analytics and segmentation

At Broadcast Virtual, we specialize in various forms of virtual advertising. In the example above, we first remove physical LEDs from the footage using semantic segmentation. Next, we employ camera tracking to seamlessly overlay virtual advertisements onto the footage.

More product demos are available on the Broadcast Virtual website.
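The overlay step described above can be sketched as a simple masked composite, assuming the segmentation network has already produced a binary mask of the LED board (array shapes and names below are illustrative):

```python
import numpy as np

# Sketch of the overlay step: given a binary mask marking the physical LED
# board (produced by a segmentation network, not shown here), replace those
# pixels with the virtual advertisement.

def overlay_ad(frame: np.ndarray, ad: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Composite `ad` into `frame` wherever `mask` is 1 (soft edges allowed)."""
    m = mask[..., None].astype(frame.dtype)  # broadcast mask over RGB channels
    return frame * (1 - m) + ad * m

frame = np.zeros((4, 4, 3))           # dummy footage frame
ad = np.ones((4, 4, 3))               # dummy virtual ad (already warped to fit)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1                    # region the segmentation flagged as LED
out = overlay_ad(frame, ad, mask)
```

In production the ad is first warped with the tracked camera parameters so it stays locked to the board; a soft (fractional) mask at the boundary avoids hard seams.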

Broadcast Virtual: 3D SLAM



This video shows a sample of visual odometry for a lens camera. In other words, it demonstrates the use of computer vision and machine learning techniques for frame-to-frame tracking of camera parameters, including pan, tilt, and zoom.
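As a toy illustration of frame-to-frame parameter tracking, the sketch below fits a zoom (scale) and in-plane translation between two frames from matched feature points via least squares; a production pipeline would estimate the full pan/tilt/zoom model from real feature matches (e.g. with OpenCV), so everything here is illustrative:

```python
import numpy as np

# Estimate zoom (scale s) and translation t between two frames such that
# pts_b ≈ s * pts_a + t, given matched feature points, via least squares.

def fit_scale_translation(pts_a: np.ndarray, pts_b: np.ndarray):
    ca, cb = pts_a.mean(0), pts_b.mean(0)      # centroids
    da, db = pts_a - ca, pts_b - cb            # centred points (t cancels)
    s = (da * db).sum() / (da * da).sum()      # closed-form scale
    t = cb - s * ca                            # recover translation
    return s, t

rng = np.random.default_rng(0)
pts_a = rng.random((20, 2))                    # features in frame k
pts_b = 1.5 * pts_a + np.array([0.1, -0.2])    # synthetic zoom + translation
s, t = fit_scale_translation(pts_a, pts_b)     # recovers s=1.5, t=(0.1, -0.2)
```

Chaining such per-frame estimates (with outlier rejection, e.g. RANSAC) yields the smooth pan/tilt/zoom trajectories shown in the video.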


Bodymapp Ltd

At Bodymapp Ltd, the 3D body scanning app provides body avatars and anatomical measurements for iPhone users.





Sample publications


3D semantic mapping from arthroscopy using out-of-distribution pose and depth and in-distribution segmentation training

Yaqub Jonmohamadi, Shahnewaz Ali, Fengbei Liu, Jonathan Roberts, Ross Crawford, Gustavo Carneiro, Ajay K Pandey

MICCAI 2021

Abstract

Minimally invasive surgery (MIS) has many documented advantages, but the surgeon’s limited visual contact with the scene can be problematic. Hence, systems that can help surgeons navigate, such as a method that can produce a 3D semantic map, can compensate for the limitation above. In theory, we can borrow 3D semantic mapping techniques developed for robotics, but this requires finding solutions to the following challenges in MIS: 1) semantic segmentation, 2) depth estimation, and 3) pose estimation. In this paper, we propose the first 3D semantic mapping system from knee arthroscopy that solves the three challenges above. Using out-of-distribution non-human datasets, where pose could be labeled, we jointly train depth+pose estimators using self-supervised and supervised losses. Using an in-distribution human knee dataset, we train a fully-supervised semantic segmentation system to label arthroscopic image pixels into femur, ACL, and meniscus. Taking testing images from human knees, we combine the results from these two systems to automatically create 3D semantic maps of the human knee. The result of this work opens the pathway to the generation of intra-operative 3D semantic mapping, registration with pre-operative data, and robotic-assisted arthroscopy. Source code: https://github.com/YJonmo/EndoMapNet.

Pipeline Overview

1- semantic segmentation of the scene using augmented multi-spectral input,
2- simultaneous depth and pose estimation in arthroscopy (self-supervised + supervised pose),
3- creation of the 3D semantic map of the arthroscopic scenes.
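The final fusion step, combining depth, pose, and per-pixel labels into a 3D semantic point cloud, can be sketched as a pinhole back-projection (the intrinsics, image size, and label values below are illustrative, not those of the actual system):

```python
import numpy as np

# Back-project a depth map into a 3D point cloud with pinhole intrinsics K,
# transform it by the estimated camera pose, and attach semantic labels.

def depth_to_semantic_points(depth, labels, K, pose):
    """depth: (H,W) metres; labels: (H,W) ints; K: 3x3; pose: 4x4 cam-to-world."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T  # homogeneous pixels
    rays = np.linalg.inv(K) @ pix                                 # camera-frame rays
    pts_cam = rays * depth.reshape(-1)                            # scale rays by depth
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    pts_world = (pose @ pts_h)[:3].T                              # apply camera pose
    return pts_world, labels.reshape(-1)

K = np.array([[100., 0, 16], [0, 100., 12], [0, 0, 1]])  # toy intrinsics
depth = np.full((24, 32), 2.0)         # flat surface 2 m from the camera
labels = np.zeros((24, 32), dtype=int) # e.g. 0 = femur, 1 = ACL, 2 = meniscus
pose = np.eye(4)                       # identity pose for the first frame
pts, lab = depth_to_semantic_points(depth, labels, K, pose)
```

Accumulating such labelled points across frames, each transformed by its estimated pose, produces the 3D semantic map shown in the demo.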





Demo on partial knee mapping



Sample image sequence (left) and reconstructed 3D semantic map (right)




Extraction of common task features in EEG-fMRI data using coupled tensor-tensor decomposition

Yaqub Jonmohamadi, Suresh Muthukumaraswamy, Joseph Chen, Jonathan Roberts, Ross Crawford, Ajay Pandey

Brain Topography 2020

Abstract

The fusion of simultaneously recorded EEG and fMRI data is of great value to neuroscience research due to the complementary properties of the individual modalities. Traditionally, techniques such as PCA and ICA, which rely on strong nonphysiological assumptions such as orthogonality and statistical independence, have been used for this purpose. Recently, tensor decomposition techniques such as parallel factor analysis have gained more popularity in neuroimaging applications as they are able to inherently retain the multidimensionality of neuroimaging data and achieve uniqueness in decomposition without imposing strong assumptions. Previously, the coupled matrix-tensor decomposition (CMTD) has been applied for the fusion of EEG and fMRI. Only recently has the coupled tensor-tensor decomposition (CTTD) been proposed. Here, for the first time, we propose the use of CTTD of a 4th order EEG tensor (space, time, frequency, and participant) and 3rd order fMRI tensor (space, time, participant), coupled partially in the time and participant domains, for the extraction of task-related features in both modalities. We used both the sensor-level and source-level EEG for the coupling. The phase-shifted paradigm signals were incorporated as the temporal initializers of the CTTD to extract the task-related features. The validation of the approach is demonstrated on simultaneous EEG-fMRI recordings from six participants performing an N-Back memory task. The EEG and fMRI tensors were coupled in 9 components, of which 7 had a high correlation (more than 0.85) with the task. The result of the fusion recapitulates the well-known attention network as positively, and the default mode network as negatively, time-locked to the memory task.

Pipeline Overview

The following block diagram illustrates the spatial, temporal, and spectral operations required to create the 4th order EEG and 3rd order fMRI tensors. The EEG and fMRI tensors can be coupled in the temporal and participant domains, and the paradigm signal can be used as a temporal constraint for the coupled tensor-tensor decomposition.
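The coupled CP (PARAFAC) model itself can be illustrated by constructing rank-1 EEG and fMRI tensors that share their temporal and participant factors (the dimensions below are illustrative; this shows the model structure, not the decomposition algorithm):

```python
import numpy as np

# Rank-1 coupled CP model: the 4th-order EEG tensor
# (space x time x frequency x participant) and the 3rd-order fMRI tensor
# (space x time x participant) share their time and participant factors.

rng = np.random.default_rng(0)
T, P = 50, 6                              # time samples; six participants
time_f = np.sin(np.linspace(0, 6, T))     # shared temporal factor (task-locked)
subj_f = rng.random(P)                    # shared participant factor
eeg_space = rng.random(64)                # EEG channel loadings
eeg_freq = rng.random(30)                 # EEG frequency loadings
fmri_space = rng.random(1000)             # fMRI voxel loadings (toy count)

eeg = np.einsum('s,t,f,p->stfp', eeg_space, time_f, eeg_freq, subj_f)
fmri = np.einsum('s,t,p->stp', fmri_space, time_f, subj_f)
print(eeg.shape, fmri.shape)              # (64, 50, 30, 6) (1000, 50, 6)
```

A full CTTD fits several such components jointly by alternating least squares, with the coupled modes constrained to be equal across the two tensors; summing the fMRI tensor over its uncoupled modes recovers a time course proportional to the shared temporal factor.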





Sample extracted common task signals