The methods and models of machine learning (ML) are rapidly becoming de facto tools for the analysis and interpretation of large data sets. Complex tasks such as speech and image recognition, automatic translation, and decision making, which were out of reach a decade ago, are now routinely performed by computers with a high degree of reliability using (deep) neural networks. These successes suggest that neural networks may be able to represent high-dimensional functions with controllably small errors, potentially outperforming standard approximation methods based on Galerkin truncation or finite elements: these have been the workhorses of scientific computing but suffer from the curse of dimensionality. By beating this curse, ML techniques could change the way we perform calculations in quantum physics, molecular dynamics simulations, the numerical solution of PDEs, and more. In support of this prospect, in this talk I will present results on the representation error and trainability of neural networks, obtained by mapping the parameters of the network onto a system of interacting particles. I will also discuss what these results imply for applications in scientific computing.
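The particle mapping mentioned above can be sketched in a few lines: in the mean-field scaling of a two-layer network, each hidden unit's parameters form one "particle," and the network output is an average over the particle ensemble. This is a minimal illustration under my own assumptions (the tanh nonlinearity, the sizes, and all variable names are illustrative choices, not taken from the talk):

```python
import numpy as np

# Mean-field two-layer network: f(x) = (1/n) * sum_i c_i * tanh(a_i . x + b_i).
# Each hidden unit i is a "particle" with coordinates (c_i, a_i, b_i); training
# by gradient descent then moves the particles under interactions induced by
# the loss, which is the viewpoint behind the particle mapping.

def two_layer_mean_field(x, C, A, B):
    """Evaluate f(x) = (1/n) sum_i C[i] * tanh(A[i] @ x + B[i])."""
    return np.mean(C * np.tanh(A @ x + B))

rng = np.random.default_rng(0)

n, d = 1000, 5                # n particles, d input dimensions (illustrative)
C = rng.normal(size=n)        # output weights: particle coordinate c_i
A = rng.normal(size=(n, d))   # input weights:  particle coordinate a_i
B = rng.normal(size=n)        # biases:         particle coordinate b_i

x = rng.normal(size=d)
print(two_layer_mean_field(x, C, A, B))
```

Because the output is an empirical average over particles, it stays well defined as the number of units n grows, which is what makes the interacting-particle (mean-field) analysis of representation error and training dynamics possible.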