The Mathematics of Neural Networks (Explained Visually)

I think I heard my brain crack when he introduced 3D perception spaces and then “hyperspaces” (around the 3-minute mark). BOOM! Mind blown. This guy had better have more videos.

But then I had some ideas I thought (well, 50% high me) were mind-blowing at the time.


Below here be dragons…

— posted to Boing Boing in this thread:

I’m way out of my league here, but I had a couple of thoughts that might be of interest, or at least amusingly naive.

Is it really “AI” when I could write a simple BASIC program to do the pressure/temperature/humidity controller? Probably nothing more than a bunch of nested IF/THEN statements. Maybe even a .bat file, if all my favorite utils are on the %PATH%. Or Fanuc PMC “ladder”, with or without Fanuc Macro B. Even a box full of NOR gates.
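
To make that concrete, here’s a minimal sketch of the nested IF/THEN idea, in Python rather than BASIC. All the setpoints and thresholds are made up for illustration, not taken from any real controller:

```python
# A minimal sketch of the "bunch of nested IF/THEN statements" idea.
# All setpoints and thresholds here are invented for illustration.

def controller(temp_c, pressure_kpa, humidity_pct):
    """Return (heater_on, vent_open, humidifier_on) from three readings."""
    heater_on = temp_c < 20.0          # too cold -> heat
    vent_open = False
    humidifier_on = False

    if pressure_kpa > 103.0:           # over-pressure -> vent
        vent_open = True
    elif temp_c > 24.0:                # too warm -> vent instead of cool
        vent_open = True

    if humidity_pct < 35.0 and not vent_open:
        humidifier_on = True           # dry air, but don't fight the vent

    return heater_on, vent_open, humidifier_on

print(controller(18.5, 101.3, 30.0))   # -> (True, False, True)
```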

But this won’t scale well: the number of rules grows combinatorially with the number of inputs (exponentially, certainly not logarithmically). Better languages, better algorithms, and clever hacks will allow larger and larger nets, but the problem grows faster than the solutions.
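
A back-of-envelope count shows why. If you enumerate every combination of n binary sensor inputs as its own IF/THEN rule, the rule table doubles with each added input, while a small dense net’s parameter count grows only polynomially with its width. The sizes below are illustrative, not anything from the video:

```python
# Back-of-envelope: one IF/THEN rule per binary input pattern grows
# exponentially; a toy dense net (n inputs -> n hidden -> 1 output)
# grows roughly quadratically with n. Sizes are illustrative only.

for n in (8, 16, 32, 64):
    rules = 2 ** n                     # one rule per input combination
    params = n * n + n + n + 1         # W1 + b1 + W2 + b2
    print(f"{n:>2} inputs: {rules:>26,} rules vs {params:>5,} net params")
```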

But in principle, there’s nothing magic in “machine learning”, at least not in these self-learning neural nets.
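
To back that up, here’s a tiny two-layer net learning XOR with plain gradient descent: nothing but matrix multiplies, a squashing function, and the chain rule. The learning rate, layer sizes, and iteration count are arbitrary choices of mine, not anything from the video:

```python
# "Nothing magic": a two-layer net learning XOR by gradient descent.
# Hyperparameters are arbitrary; this is a sketch, not a recipe.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backprop: output layer
    d_h = (d_out @ W2.T) * h * (1 - h)            # backprop: hidden layer
    W2 -= h.T @ d_out                             # gradient descent updates
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())   # should approach [0, 1, 1, 0]
```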

Also, I find it interesting that (supposedly) you can’t fold a piece of paper more than 7 times, and a neural network of 8 layers or more is required for anything useful. He didn’t say 8, but it fits with my thought, so I’m going with it.

Wonderful video.  I hope he has more.  I smashed the Like button and subscribed 90 seconds in.

— first draft below, as originally typed…

Is the AI in software really any different from what could be done with a “normal” program? A procedural program could be made, but the vast number of connections is what makes it human-capable (the # of IF/THEN statements grows too large, likely exponentially or logarithmically??). In principle, at least.

Also, odd that the folding thing is limited to the same # of folds as real paper? You can’t do (more than?) 7 folds in real paper, and neural nets of depth 6 aren’t powerful enough.
