Economics Transformer: Scaling Laws
Researching Scaling Laws for Time Series Models
This project is a sub-project of a larger partnership with Humanity Unleashed to build an AI Legislator. The partnership aims to develop a comprehensive AI-driven framework for understanding, predicting, and proposing policies using multivariate time-series and legislative data. It consists of several teams working on different aspects: curating large datasets from US law and time-series sources, developing value-elicitation algorithms for understanding user values and proposing policies, building user-friendly frontends for public-policy and legislation analysis, pretraining large foundation models for time-series prediction, designing specialized model architectures, and establishing scaling laws for time-series models. The project seeks to leverage AI and deep learning to bring transparency, efficiency, and effectiveness to policymaking and public-policy analysis, ultimately creating an ecosystem where both citizens and legislators can interact with data-driven insights and AI-generated recommendations.
The original scaling-laws paper for language models is one of the foundational works of the current deep learning paradigm: it laid out a concrete research agenda by establishing how evaluation loss decreases as a function of compute, dataset size, and parameter count. Could we produce a similar work for a time-series foundation model, in order to understand optimal model size and compute usage?
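As a rough illustration of the kind of analysis such a study would involve, the sketch below fits a saturating power law in parameter count, L(N) = (N_c / N)^alpha + L_inf, to a handful of (parameter count, evaluation loss) pairs. The functional form mirrors the one used in the language-model scaling-law literature; the data points, initial guesses, and variable names are placeholder assumptions for illustration only, not results from any time-series model.

```python
# Sketch: fitting a power-law scaling curve L(N) = (N_c / N)**alpha + L_inf
# to (parameter count, eval loss) pairs from a hypothetical model sweep.
# All numbers below are made-up placeholders, not measured results.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, n_c, alpha, l_inf):
    """Saturating power law in parameter count."""
    return (n_c / n_params) ** alpha + l_inf

# Hypothetical sweep: model sizes and their final evaluation losses.
n_params = np.array([1e6, 3e6, 1e7, 3e7, 1e8, 3e8])
eval_loss = np.array([3.10, 2.85, 2.62, 2.44, 2.30, 2.21])

# Fit the three free parameters (N_c, alpha, irreducible loss L_inf).
popt, _ = curve_fit(
    scaling_law, n_params, eval_loss,
    p0=(1e7, 0.1, 1.5), maxfev=10_000,
)
n_c, alpha, l_inf = popt
print(f"N_c ~ {n_c:.3g}, alpha ~ {alpha:.3f}, irreducible loss ~ {l_inf:.3f}")

# Extrapolate the fitted curve to a larger model size.
print(f"Predicted loss at 1B params: {scaling_law(1e9, *popt):.3f}")
```

The same fitting procedure extends to compute and dataset size as the independent variable, which is how one would locate the compute-optimal trade-off between model size and training tokens for a time-series foundation model.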