Hello.
Let's say I have a set of input vectors $I = \{\mathbf{x_1}, \dots, \mathbf{x_k}\} \subset \mathcal{R}^m$ and a set of output vectors $O = \{\mathbf{y_1}, \dots, \mathbf{y_k}\} \subset \mathcal{R}^n$, and I want to obtain a mapping $f : \mathcal{R}^m \to \mathcal{R}^n$ such that
$$ f(\mathbf{x_i}) = \mathbf{y_i} + \epsilon_i, \quad \forall i \in \{1, \dots, k\}$$
where each $\epsilon_i$ is small, and this mapping should be continuous at least around the input/output pairs.
There are many ways of doing so.
If we suppose that the input/output pairs won't change, what are the advantages of using an Artificial Neural Network over other methods to approximate functions?
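To make "other methods" concrete, here is a sketch of one non-neural option: Gaussian radial-basis-function (RBF) interpolation, which produces a continuous mapping that hits every pair exactly (so every $\epsilon_i = 0$). The data and the kernel width `sigma` below are made-up stand-ins, not anything from the question.

```python
import numpy as np

# Made-up problem sizes and random input/output pairs for illustration.
rng = np.random.default_rng(0)
k, m, n = 5, 3, 2           # number of pairs, input dim, output dim
X = rng.random((k, m))      # inputs x_1..x_k in R^m
Y = rng.random((k, n))      # outputs y_1..y_k in R^n
sigma = 1.0                 # assumed kernel width

def kernel(A, B):
    # Gaussian kernel matrix: K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Solve for weights W so that the interpolant passes through every pair.
W = np.linalg.solve(kernel(X, X), Y)

def f(x):
    # Continuous mapping R^m -> R^n that interpolates the (x_i, y_i) pairs.
    return kernel(np.atleast_2d(x), X) @ W

print(np.allclose(f(X), Y))  # prints True: f(x_i) = y_i for all i
```

The point is only that such a method already gives continuity and exact (or near-exact) fit on fixed pairs, which is exactly what the question is probing.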
EDIT: by advantages, I mean the practical advantages of using neural networks over other function-approximation methods, in any domain where neural networks are used.
For example, consider the canonical example of handwriting recognition that mailing services might use to read zip codes. The neural network is nothing more than a function that maps, say, $[0, 1]^{35}$ (a 5x7 grid in which each value is the "intensity" or "amount" of ink in that cell) to $[0, 1]^{10}$ (one value per digit from 0 to 9, each value being the probability of that digit). So, if we write software to do this recognition and the patterns will never change, we could just as well have used another technique to map the input to the output. Any method that produces a continuous mapping would ensure that small variations in the handwriting don't affect the output of the function too much.
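As a hypothetical sketch of that zip-code setup without a neural network, one could fit a plain least-squares linear map from the flattened $5 \times 7$ grid to the ten digit scores. The "training" grids below are random stand-ins, not real digit data:

```python
import numpy as np

# Random stand-in data: 100 fake 5x7 ink-intensity grids with made-up labels.
rng = np.random.default_rng(1)
k = 100
X = rng.random((k, 35))                   # flattened grids in [0, 1]^35
Y = np.eye(10)[rng.integers(0, 10, k)]    # one-hot digit labels in [0, 1]^10

# Fit a 35x10 matrix W minimizing ||X W - Y|| in the least-squares sense.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def classify(grid):
    # Continuous linear mapping [0, 1]^35 -> R^10; argmax picks the digit.
    return int((grid @ W).argmax())

digit = classify(X[0])   # some digit in 0..9
```

This is only an illustration that a continuous non-neural mapping fits the same input/output shape the question describes; it says nothing about which method generalizes better.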
Thanks.