In a linear regression model with random design, we consider a family of James--Stein-type shrinkage estimators from which we want to select a `good' estimator for out-of-sample prediction and with which we want to construct `valid' prediction intervals. We focus on the challenging situation where the number of explanatory variables can be of the same order as the sample size and where the number of candidate estimators can be much larger than the sample size. We develop an estimator of out-of-sample predictive performance that differs from Stein's well-known unbiased estimator of risk, even asymptotically. Using our performance estimator, we show that the empirically best estimator is asymptotically as good as the truly best (oracle) estimator. Using the empirically best estimator, we construct a prediction interval that is approximately valid and short with high probability; that is, we show that its actual coverage probability is close to the nominal one and that its length is close to the length of the shortest, but infeasible, prediction interval. These findings extend results of Leeb (2009, Ann. Stat. 37:2838--2876), where the underlying estimators are least-squares estimators.
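For concreteness, a James--Stein-type shrinkage estimator of the kind considered here can be sketched as follows; the notation below is ours and is not taken from the abstract. Given the least-squares estimator $\hat\beta$ computed from $n$ observations on $p$ explanatory variables, a noise-variance estimator $\hat\sigma^2$, and the empirical Gram matrix $\widehat{\Sigma} = X^{\top}X/n$, a positive-part James--Stein-type estimator shrinks $\hat\beta$ toward the origin:

```latex
% Hedged sketch (our notation): positive-part James--Stein-type shrinkage
% of the least-squares estimator \hat\beta toward zero.
\[
  \hat\beta_{\mathrm{JS}}
  \;=\;
  \Bigl(1 \;-\; \frac{c\,\hat\sigma^2}{\hat\beta^{\top}\,\widehat{\Sigma}\,\hat\beta}\Bigr)_{\!+}\,
  \hat\beta,
  \qquad (x)_+ = \max\{x,0\},
\]
% where c > 0 is a shrinkage constant. A family of candidate estimators is
% obtained by varying c; in the classical orthonormal-design case the choice
% c = (p-2)/n recovers the usual positive-part James--Stein estimator.
```

Selecting among such candidates by their estimated out-of-sample predictive performance, rather than by Stein's unbiased risk estimate, is the program described in the abstract.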