My Discalculic Pony: Math is Hard
equestrian.sen
Group Admin

I've been working a bit with neural networks recently. There are various ways to think about neural networks, though I think the most common interpretation is this:

A neural network is a modular function approximator.

"Modular" here meaning that it is composed of components that can be understood in isolation. It has nothing to do with modular forms.

There exist algorithms for approximating functions given arbitrary amounts of data, so neural networks tend to be employed when data is scarce, which means they end up being used more as statistical tools than anything else. Any form of regression, for example, can be thought of as fitting the parameters of a neural network. The transformation from a regression to a network is trivial, and usually useless when doing only a single multinomial regression over the data, but it makes complex stacked regressions effortless, and performing the entire stacked regression at once can actually produce more accurate results than performing each regression individually.
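To make the regression-to-network transformation concrete, here is a minimal sketch of a multinomial regression written as a one-layer network. All names, shapes, and hyperparameters are mine, invented for illustration; nothing here is taken from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy data: 100 samples, 3 features, 4 classes, with labels generated by a
# hidden linear rule so there is something to learn.
X = rng.normal(size=(100, 3))
W_true = rng.normal(size=(3, 4))
y = (X @ W_true).argmax(axis=1)
Y = np.eye(4)[y]  # one-hot targets

# The "network": logits = X @ W + b, probabilities = softmax(logits).
# This is exactly a multinomial (softmax) regression.
W = np.zeros((3, 4))
b = np.zeros(4)

# Gradient descent on the cross-entropy loss -- the same update a
# neural-network framework would compute for this one-layer network.
for _ in range(500):
    P = softmax(X @ W + b)
    W -= 0.5 * X.T @ (P - Y) / len(X)
    b -= 0.5 * (P - Y).mean(axis=0)

accuracy = (P.argmax(axis=1) == y).mean()
```

The point of the exercise is that nothing changes in the fitting procedure when you view the regression as a network; stacking a second stage on top is then just adding another layer.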

So here's the question for this thread:

Performing an intermediate regression incorrectly can improve the accuracy of the overall model. Why does this happen?
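To pin down the setup being asked about, here is a minimal numpy sketch of a two-stage stacked regression (all names, shapes, and noise levels are mine, invented for illustration). Fitting the stack end-to-end yields an implied intermediate regression that no longer matches the intermediate targets, i.e. is "incorrect" by the stage-wise criterion, yet the final training fit can only improve. The sketch shows only that trivial direction; the interesting question in this thread is why the same thing can help out of sample.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 200, 5, 3  # samples, input features, intermediate features

A = rng.normal(size=(d, k))   # true first-stage map
B = rng.normal(size=(k, 1))   # true second-stage map
X = rng.normal(size=(n, d))
Z_obs = X @ A + rng.normal(scale=0.5, size=(n, k))   # noisy intermediate targets
y = X @ A @ B + rng.normal(scale=0.1, size=(n, 1))   # final targets

# Stage-wise: fit the intermediate regression "correctly" against Z_obs,
# then regress y on its predictions.
A_hat, *_ = np.linalg.lstsq(X, Z_obs, rcond=None)
B_hat, *_ = np.linalg.lstsq(X @ A_hat, y, rcond=None)
mse_stage = np.mean((X @ A_hat @ B_hat - y) ** 2)

# End-to-end: fit the whole stack against y at once. The implied
# intermediate map no longer matches Z_obs -- it is "incorrect" --
# yet the final training error cannot be worse, since the stage-wise
# solution is just one candidate in the set the joint fit searches.
C_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
mse_joint = np.mean((X @ C_hat - y) ** 2)
```

A linear-linear stack is used here only because both fits reduce to least squares; the same comparison can be set up with any differentiable stages.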

I have some ideas of my own, which I hint at in the title. Dumping all my thoughts on this topic would be impractical, since I've spent an inordinate amount of time on this and related problems, and I work much faster with a physical pen and paper, so translating my thoughts through a keyboard would take a significant amount of time. If anyone's interested, though, I can post a primer, and if anyone wants to help solve the problem, I can post future thoughts here as well.
