202405220058
Status: #idea
Tags: Machine Learning

Neural Networks

Fundamentally, a neural network is an infinitely flexible class of functions that can be anything and everything you want, from the straightest of lines to the squiggliest of squiggles. As Josh Starmer (from StatQuest) puts it, it's not much more than a Big Fancy Squiggle Fitting Machine.

Neural networks are made of a few components, each better treated in its own note, because I foresee that anything beyond surface level will evolve into something... complex.

The 4 Pillars of Neural Networks

The fundamental components:

These are the components of neural networks, and I hope you now understand why I decided to keep each one in its own note.

Why Neural Networks?

Fundamentally, because they are universal approximators (the most important point) and are comparatively straightforward to train.

First, the universal approximation point. Yes, it is exactly as cool as it sounds.
It is mathematically provable that a neural network, given enough layers and neurons, can approximate any... ANY mathematical function you could think of. See Universal Approximation Theorem.
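To make the "squiggle fitting" concrete, here is a minimal sketch (my own toy demo, not part of the theorem's proof) of a single-hidden-layer tanh network trained with plain gradient descent to trace a squiggly target. Every hyperparameter here (32 hidden units, learning rate 0.1, 5000 steps) is made up for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(2 * x)  # the squiggle we want to approximate

# One hidden layer of tanh units, randomly initialized.
hidden = 32
W1 = rng.normal(0, 1, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, 1))
b2 = np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return h, h @ W2 + b2      # network output

_, pred0 = forward(x)
mse0 = float(((pred0 - y) ** 2).mean())  # error before training

for step in range(5000):
    h, pred = forward(x)
    err = pred - y
    # Backpropagation through both layers (mean squared error loss).
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

_, pred = forward(x)
mse = float(((pred - y) ** 2).mean())
print(f"MSE before: {mse0:.3f}, after: {mse:.3f}")
```

The point of the demo is only that the very same machinery, with more units, could fit a completely different squiggle; nothing about the network is specific to sin.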

The thing is, more than a few classes of functions can do that; the polynomials we all know and love can do it as well. So why neural networks?

Fundamentally, because they are dead simple and can be used in pretty much any context. The full list of their benefits would be too long to cover here, but the headline is this:

No other method boasts the same level of flexibility, generality, effectiveness, and power. There is a very real issue with the fact that, as things stand, a lot of machine learning models, especially on the deep learning side of things, are inscrutable black boxes. Still, their other advantages make that significant downside palatable. This is the difference between Inference and Prediction: if what we care about is how specific variables relate to each other, neural networks are terrible; but for almost anything else they are a game-changer and will very likely get the job done.

Caution: do not let your hype over Artificial Intelligence and neural networks take you over. While machine learning and neural networks are extremely powerful, high-quality mithril hammers, not all the problems in your life require hammering nails into a drago... Pause, I swear as I wrote that I wasn't thinking of anything...

(I am thinking of the dragon innuendo game, I am not that weird.)

Simple and interpretable models like Decision Trees and Random Forests are still relevant and can often offer similar, if not better, results on the same data as the neural-network alternative. Sometimes the data at hand simply does not have enough "hidden features" to make the overhead of a neural network worth it.

Furthermore, even if we are adamant about using a neural network, first creating models with simpler methods like Linear Regression, Tree-Based Methods, and whatnot has value, even if only as a baseline. You do not want to spend weeks optimizing a model, trying different learning rates and learning schedules, tweaking the number of epochs and the whole shebang, just to realize your model does about the same as (or potentially worse than) a Multiple Linear Regression model you fitted in 5 seconds in R or Python.
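The baseline-first workflow above really is a few lines. Here is a sketch (on synthetic data I made up for the demo) of fitting a multiple linear regression by ordinary least squares and recording its error as the bar any later neural network has to clear.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))  # three synthetic predictors
# Synthetic target: a known linear signal plus a little noise.
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 500)

# Ordinary least squares: prepend an intercept column, solve with lstsq.
# This is the "fitted in 5 seconds" baseline from the note.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
baseline_mse = float(((X1 @ coef - y) ** 2).mean())
print(f"baseline MSE: {baseline_mse:.4f}")
```

Only if a neural network beats `baseline_mse` by a margin worth its training cost does the extra complexity pay for itself.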

References

| File | Folder | Last Modified |
| --- | --- | --- |
| StatQuest ~ Neural Networks ! Deep Learning | 2. White Holes/References | 12:33 PM - December 06, 2025 |
| Practical Deep Learning for Coders | 2. White Holes/References | 12:33 PM - December 06, 2025 |
| Supervised Learning | 1. Cosmos | 12:33 PM - December 06, 2025 |
| Bayes Classifier | 1. Cosmos | 12:33 PM - December 06, 2025 |