"Information": models, code, and papers

Natural Language Understanding for Argumentative Dialogue Systems in the Opinion Building Domain

Mar 03, 2021
Waheed Ahmed Abro, Annalena Aicher, Niklas Rach, Stefan Ultes, Wolfgang Minker, Guilin Qi

Joint User Association and Power Allocation in Heterogeneous Ultra Dense Network via Semi-Supervised Representation Learning

Mar 29, 2021
Xiangyu Zhang, Zhengming Zhang, Luxi Yang

DRL-FAS: A Novel Framework Based on Deep Reinforcement Learning for Face Anti-Spoofing

Sep 16, 2020
Rizhao Cai, Haoliang Li, Shiqi Wang, Changsheng Chen, Alex Chichung Kot

Learning Defense Transformers for Counterattacking Adversarial Examples

Mar 13, 2021
Jincheng Li, Jiezhang Cao, Yifan Zhang, Jian Chen, Mingkui Tan

3D coherent x-ray imaging via deep convolutional neural networks

Feb 26, 2021
Longlong Wu, Shinjae Yoo, Ana F. Suzana, Tadesse A. Assefa, Jiecheng Diao, Ross J. Harder, Wonsuk Cha, Ian K. Robinson

PURSUhInT: In Search of Informative Hint Points Based on Layer Clustering for Knowledge Distillation

Feb 26, 2021
Reyhan Kevser Keser, Aydin Ayanzadeh, Omid Abdollahi Aghdam, Caglar Kilcioglu, Behcet Ugur Toreyin, Nazim Kemal Ure

Heterogeneous Federated Learning

Aug 15, 2020
Fuxun Yu, Weishan Zhang, Zhuwei Qin, Zirui Xu, Di Wang, Chenchen Liu, Zhi Tian, Xiang Chen

Word2vec Skip-gram Dimensionality Selection via Sequential Normalized Maximum Likelihood

Aug 18, 2020
Pham Thuc Hung, Kenji Yamanishi

Ptychography Intensity Interferometry Imaging for Dynamic Distant Object

Feb 10, 2021
Yuchen He, Yuan Yuan, Hui Chen, Huaibin Zheng, Jianbin Liu, Zhuo Xu

Distributed Learning and Democratic Embeddings: Polynomial-Time Source Coding Schemes Can Achieve Minimax Lower Bounds for Distributed Gradient Descent under Communication Constraints

Mar 13, 2021
Rajarshi Saha, Mert Pilanci, Andrea J. Goldsmith
