Data engineering pipelines are essential, albeit costly, components of predictive analytics frameworks, requiring significant engineering time and domain expertise for tasks such as data ingestion, preprocessing, feature extraction, and feature engineering. In this paper, we propose ADEPT, an automated data engineering pipeline via text embeddings. At the core of the ADEPT framework is a simple yet powerful idea: the entropy of embeddings of textually dense, raw-format representations of time series can be viewed as equivalent (and in many cases superior) to that of the numerically dense vector representations produced by conventional data engineering pipelines. Consequently, ADEPT uses a two-step approach that (i) leverages text embeddings to represent diverse data sources, and (ii) constructs a variational information bottleneck criterion to mitigate entropy variance in the text embeddings of time series data. ADEPT provides an end-to-end automated implementation of predictive models that offers superior predictive performance despite issues such as missing data, ill-formed records, improper or corrupted data formats, and irregular timestamps. Through exhaustive experiments, we show that ADEPT outperforms the best existing benchmarks on a diverse set of datasets from large-scale applications across healthcare, finance, science, and the industrial internet of things. Our results show that ADEPT can potentially leapfrog many conventional data pipeline steps, thereby paving the way for efficient and scalable automation pathways for diverse data science applications.
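To make the variational information bottleneck criterion mentioned above concrete, the following is a minimal sketch, not the authors' implementation: a generic VIB-style regularizer applied to pre-computed text embeddings of raw records. The encoder dimensions, the `beta` weight, the regression head, and the placeholder embeddings are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBHead(nn.Module):
    """Illustrative variational information bottleneck over text embeddings.

    Maps a text embedding x to a stochastic code z ~ N(mu, sigma^2) and
    predicts a target from z; the KL term discourages the code from carrying
    excess (high-variance) entropy inherited from the raw-text representation.
    """
    def __init__(self, embed_dim=768, bottleneck_dim=64, out_dim=1):
        super().__init__()
        self.mu = nn.Linear(embed_dim, bottleneck_dim)
        self.logvar = nn.Linear(embed_dim, bottleneck_dim)
        self.head = nn.Linear(bottleneck_dim, out_dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.head(z), mu, logvar

def vib_loss(pred, target, mu, logvar, beta=1e-3):
    # Task loss plus beta-weighted KL(q(z|x) || N(0, I)).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()
    return F.mse_loss(pred, target) + beta * kl

# Usage with placeholder embeddings standing in for encoder outputs of raw records.
emb = torch.randn(32, 768)
target = torch.randn(32, 1)
model = VIBHead()
pred, mu, logvar = model(emb)
loss = vib_loss(pred, target, mu, logvar)
loss.backward()
```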