Lexing Xie

Australian National University

Learning k-Determinantal Point Processes for Personalized Ranking
Jun 23, 2024

Sampled Transformer for Point Sets
Feb 28, 2023

Determinantal Point Process Likelihoods for Sequential Recommendation
Apr 25, 2022

Fair Wrapping for Black-box Predictions
Feb 16, 2022

Factorized Fourier Neural Operators
Nov 30, 2021

Interval-censored Hawkes processes
Apr 16, 2021

Radflow: A Recurrent, Aggregated, and Decomposable Model for Networks of Time Series
Feb 15, 2021

AttentionFlow: Visualising Influence in Networks of Time Series
Feb 03, 2021

SupMMD: A Sentence Importance Model for Extractive Summarization using Maximum Mean Discrepancy
Oct 06, 2020

Universal Approximation with Neural Intensity Point Processes
Jul 28, 2020