We propose a new algorithm for solving large-scale non-convex machine learning problems. The algorithm is based on the classical trust-region framework and incorporates subsampled gradient and Hessian information of the objective function. The subsample size is adaptively increased as the iterations proceed. We present a theoretical convergence result and demonstrate on several test problems that our algorithm is competitive with state-of-the-art methods.
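To make the ingredients concrete, the following is a minimal sketch of a subsampled trust-region iteration, not the authors' implementation: the least-squares objective, the Cauchy-point subproblem solver, and all constants (acceptance threshold, radius update factors, subsample growth rate) are illustrative choices for this example only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares data as a stand-in objective (illustrative only)
n, d = 500, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

def f(w, I):                       # objective on subsample I
    r = X[I] @ w - y[I]
    return 0.5 * np.mean(r * r)

def grad(w, I):                    # subsampled gradient
    return X[I].T @ (X[I] @ w - y[I]) / len(I)

def hess(w, I):                    # subsampled Hessian
    return X[I].T @ X[I] / len(I)

w = np.zeros(d)
radius, sample = 1.0, 50
for k in range(50):
    I = rng.choice(n, size=min(sample, n), replace=False)
    g, H = grad(w, I), hess(w, I)
    gn = np.linalg.norm(g)
    if gn < 1e-8:
        break
    # Approximate trust-region subproblem solution via the Cauchy point:
    # minimize g^T s + 0.5 s^T H s over s = -t g with ||s|| <= radius
    gHg = g @ H @ g
    t = min(gn**2 / gHg, radius / gn) if gHg > 0 else radius / gn
    s = -t * g
    pred = -(g @ s + 0.5 * s @ H @ s)   # model-predicted decrease (> 0 here)
    rho = (f(w, I) - f(w + s, I)) / pred  # actual vs. predicted reduction
    if rho > 0.1:                       # accept step, cautiously expand region
        w = w + s
        radius = min(2.0 * radius, 10.0)
    else:                               # reject step, shrink region
        radius *= 0.5
    sample = min(int(1.2 * sample) + 1, n)  # adaptively grow the subsample
```

On this synthetic problem the iterate recovers `w_true` to within the noise level; a full method would replace the Cauchy point with a better subproblem solver (e.g. Steihaug-CG) and tie the subsample growth to the observed gradient accuracy.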