We analyze the sample complexity of learning from multiple experiments where the experimenter has a total budget for obtaining samples. In this setting, the learner must choose a hypothesis that performs well across multiple experiments and their associated data distributions. Each collected sample incurs a cost that depends on the particular experiment. In our setup, a learner performs m experiments while incurring a total cost C. Using a Rademacher complexity approach, we show that the gap between the training and generalization error is O(C^{-1/2}). We illustrate our results with examples for linear prediction, two-layer neural networks, and kernel methods.