Tiny episodic memory is often motivated by cost requirements and privacy considerations. I argue in this talk that neither concern is actually resolved by a tiny memory, and that we should move beyond this vacuous assumption: algorithms should either store all the data or none at all. Instead, I make the case that we should focus on computationally budgeted continual learning. In this setup, at each time step, i.e. each task, algorithms are permitted a fixed amount of computation to update model parameters. Moreover, for a fair comparison, all algorithms must be evaluated under the same normalized computational budget.
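The protocol above can be sketched in a few lines. This is a minimal illustrative sketch, not the talk's actual method: it assumes compute is measured simply as the number of gradient updates allowed per task, and the function names and toy one-parameter least-squares model are my own inventions for illustration.

```python
# Illustrative sketch of compute-budgeted continual learning.
# Assumption: "computation" is counted as gradient updates per task,
# and every compared algorithm would receive the same budget.
import random

def sgd_step(w, batch, lr=0.1):
    """One SGD step on a toy 1-D least-squares model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * grad

def budgeted_continual_learning(task_streams, budget_per_task, batch_size=4):
    """Train on a sequence of tasks, capped at `budget_per_task`
    gradient updates per task, regardless of how much data arrives."""
    w = 0.0
    for stream in task_streams:        # each task arrives as a list of (x, y)
        for _ in range(budget_per_task):
            batch = random.sample(stream, min(batch_size, len(stream)))
            w = sgd_step(w, batch)
        # Note: nothing is stored across tasks (no episodic memory at all).
    return w

random.seed(0)
# Two toy tasks, both drawn from the target y = 2x.
tasks = [[(i / 10, 2 * i / 10) for i in range(1, 11)] for _ in range(2)]
w = budgeted_continual_learning(tasks, budget_per_task=20)
```

The key design point is that the cap is on computation, not on memory: each task may contain arbitrarily much data, but the learner only ever spends `budget_per_task` updates on it, which is the quantity that would be normalized across all compared algorithms.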