This link goes to Michael Nielsen's "book", a very good intro to machine learning. You might recognise his name from the book "Quantum Computation and Quantum Information" — same dude. It's a very nice read, and for anyone familiar with multivariable calculus, linear algebra and some Python practice, it should be a breeze. Highly recommended, even if you only read the first 2 chapters, let alone have a go at implementing it yourself.
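If you do have a go at implementing it yourself, a good sanity check is to verify your backpropagation gradients against numerical differentiation — this is in the spirit of what the book's early chapters cover. Below is a minimal NumPy sketch under assumed layer sizes (2 inputs, 3 hidden sigmoid units, 1 output) and made-up weights; it is an illustration, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# tiny network: 2 inputs -> 3 hidden sigmoid units -> 1 sigmoid output
# (shapes and data are arbitrary, chosen just for this demo)
W1 = rng.normal(size=(3, 2)); b1 = rng.normal(size=(3, 1))
W2 = rng.normal(size=(1, 3)); b2 = rng.normal(size=(1, 1))
x = np.array([[0.5], [-0.2]]); y = np.array([[1.0]])

def loss(W1_):
    a1 = sigmoid(W1_ @ x + b1)
    a2 = sigmoid(W2 @ a1 + b2)
    return 0.5 * np.sum((a2 - y) ** 2)

# backprop: analytic gradient of the quadratic loss w.r.t. W1
a1 = sigmoid(W1 @ x + b1)
a2 = sigmoid(W2 @ a1 + b2)
delta2 = (a2 - y) * a2 * (1 - a2)         # output-layer error
delta1 = (W2.T @ delta2) * a1 * (1 - a1)  # error propagated back one layer
grad_W1 = delta1 @ x.T

# numerical gradient via central differences, entry by entry
eps = 1e-6
num = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp = W1.copy(); Wp[i, j] += eps
        Wm = W1.copy(); Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

assert np.allclose(grad_W1, num, atol=1e-8)  # backprop matches
```

If the two gradients disagree, the bug is almost always in the delta terms or a missing derivative factor.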
A link to the official PyTorch tutorial, by the PyTorch team themselves. PyTorch is a library for all your machine learning needs in Python. Its syntax almost completely follows how you write code with numpy. This would be a good read if you really want to start creating your own machine learning models.
https://pytorch.org/tutorials/beginner/basics/intro.html
This is a very cool theorem which, once you have some basic understanding of neural networks, you should be able to understand. In essence it says that neural networks are universal approximators, i.e. you can use them to approximate any (reasonable) function arbitrarily well. The Wikipedia page linked below is pretty good at giving you some intuition, and also gives the theorem statement.
https://en.wikipedia.org/wiki/Universal_approximation_theorem
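To see the intuition concretely, here is a hedged NumPy sketch of the classic "bump" construction: pairs of very steep sigmoid units act like step functions, and summing scaled bumps approximates a target function. The target f(x) = x², the bump count, and the steepness k are all arbitrary choices for this demo.

```python
import numpy as np

def sigmoid(z):
    # clip to avoid overflow in exp for the very steep units
    return 1.0 / (1.0 + np.exp(-np.clip(z, -500, 500)))

def bump_network(x, f, n_bumps=50, k=1000.0):
    """One hidden layer of 2*n_bumps steep sigmoid units.

    Each pair of units forms a "bump": a step up at the left edge of a
    sub-interval of [0, 1] and a step down at the right edge, scaled by
    f(midpoint). Summing the bumps yields a near-piecewise-constant
    approximation of f — the picture behind the theorem.
    """
    edges = np.linspace(0.0, 1.0, n_bumps + 1)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        height = f((lo + hi) / 2.0)  # output-layer weight for this pair
        out += height * (sigmoid(k * (x - lo)) - sigmoid(k * (x - hi)))
    return out

f = lambda x: x ** 2
x = np.linspace(0.05, 0.95, 200)
err = np.max(np.abs(bump_network(x, f) - f(x)))
assert err < 0.05  # 50 bumps already pin f down to a couple of percent
```

More bumps (and steeper sigmoids) drive the error down further, which is the "arbitrarily well" part of the theorem — though note that real networks learn their weights rather than having them hand-built like this.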