
Fundamental Limitations of Alignment in Large Language Models


Apr 19, 2023
Yotam Wolf, Noam Wies, Yoav Levine, Amnon Shashua


The Learnability of In-Context Learning


Mar 14, 2023
Noam Wies, Yoav Levine, Amnon Shashua


In-Context Retrieval-Augmented Language Models


Jan 31, 2023
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham


Parallel Context Windows Improve In-Context Learning of Large Language Models


Dec 21, 2022
Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham


MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning


May 01, 2022
Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen, Erez Schwartz, Gal Shachaf, Shai Shalev-Shwartz, Amnon Shashua, Moshe Tenenholtz


Standing on the Shoulders of Giant Frozen Language Models


Apr 21, 2022
Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham


Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks


Apr 06, 2022
Noam Wies, Yoav Levine, Amnon Shashua


The Inductive Bias of In-Context Learning: Rethinking Pretraining Example Design


Oct 25, 2021
Yoav Levine, Noam Wies, Daniel Jannai, Dan Navon, Yedid Hoshen, Amnon Shashua


Which transformer architecture fits my data? A vocabulary bottleneck in self-attention


May 09, 2021
Noam Wies, Yoav Levine, Daniel Jannai, Amnon Shashua

* ICML 2021 
