Alexander Wong

COVID-Net Clinical ICU: Enhanced Prediction of ICU Admission for COVID-19 Patients via Explainability and Trust Quantification

Sep 14, 2021
Audrey Chung, Mahmoud Famouri, Andrew Hryniowski, Alexander Wong

COVID-Net MLSys: Designing COVID-Net for the Clinical Workflow

Sep 14, 2021
Audrey G. Chung, Maya Pavlova, Hayden Gunraj, Naomi Terhljan, Alexander MacLean, Hossein Aboutalebi, Siddharth Surana, Andy Zhao, Saad Abbasi, Alexander Wong

Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning

Sep 08, 2021
Maziar Gomrokchi, Susan Amin, Hossein Aboutalebi, Alexander Wong, Doina Precup

COVID-Net US: A Tailored, Highly Efficient, Self-Attention Deep Convolutional Neural Network Design for Detection of COVID-19 Patient Cases from Point-of-care Ultrasound Imaging

Aug 05, 2021
Alexander MacLean, Saad Abbasi, Ashkan Ebadi, Andy Zhao, Maya Pavlova, Hayden Gunraj, Pengcheng Xi, Sonny Kohli, Alexander Wong

LexSubCon: Integrating Knowledge from Lexical Resources into Contextual Embeddings for Lexical Substitution

Jul 11, 2021
George Michalopoulos, Ian McKillop, Alexander Wong, Helen Chen

Does Form Follow Function? An Empirical Exploration of the Impact of Deep Neural Network Architecture Design on Hardware-Specific Acceleration

Jul 08, 2021
Saad Abbasi, Mohammad Javad Shafiee, Ellick Chan, Alexander Wong

SALT: Sea lice Adaptive Lattice Tracking -- An Unsupervised Approach to Generate an Improved Ocean Model

Jun 24, 2021
Ju An Park, Vikram Voleti, Kathryn E. Thomas, Alexander Wong, Jason L. Deglint

Residual Error: a New Performance Measure for Adversarial Robustness

Jun 18, 2021
Hossein Aboutalebi, Mohammad Javad Shafiee, Michelle Karg, Christian Scharfenberger, Alexander Wong

Insights into Data through Model Behaviour: An Explainability-driven Strategy for Data Auditing for Responsible Computer Vision Applications

Jun 16, 2021
Alexander Wong, Adam Dorfman, Paul McInnis, Hayden Gunraj

DeepDarts: Modeling Keypoints as Objects for Automatic Scorekeeping in Darts using a Single Camera

May 20, 2021
William McNally, Pascale Walters, Kanav Vats, Alexander Wong, John McPhee
