Feedforward Neural Networks

These neural networks are the simplest and most widely recognized, and they form the foundation for many advanced neural network architectures. They are simple in the sense that data enters at the input layer and flows through the hidden layers to the output in one direction; there are no feedback loops or recurrent connections. Because of this simplicity, they work well for straightforward classification, regression, and basic pattern recognition. They are less suited to complex tasks such as processing text or images, because they have no memory and no shared weights.
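The one-directional flow described above can be sketched with a minimal forward pass in NumPy. The layer sizes, weight scales, and single hidden layer here are illustrative choices, not part of any standard definition:

```python
import numpy as np

def relu(x):
    # Common non-linearity applied at the hidden layer
    return np.maximum(0.0, x)

def forward(x, params):
    # Data flows in one direction: input -> hidden -> output
    h = relu(x @ params["W1"] + params["b1"])   # hidden layer
    return h @ params["W2"] + params["b2"]      # output layer (e.g. regression)

rng = np.random.default_rng(0)
params = {
    "W1": rng.normal(scale=0.1, size=(4, 8)),  # 4 input features -> 8 hidden units
    "b1": np.zeros(8),
    "W2": rng.normal(scale=0.1, size=(8, 1)),  # 8 hidden units -> 1 output
    "b2": np.zeros(1),
}

x = rng.normal(size=(3, 4))   # a batch of 3 samples, 4 features each
y = forward(x, params)
print(y.shape)  # (3, 1): one prediction per sample
```

Note that nothing in `forward` feeds a layer's output back into an earlier layer, which is exactly what makes the network "feedforward".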

Despite these limitations, their simplicity makes them a common starting point for learning how neural networks work. They are still widely used today where the data is tabular and does not depend on sequential or spatial relationships. Feedforward neural networks are the backbone of neural network theory and serve to help us understand more advanced architectures.

Back to Index
Previous: Introduction
Next: Convolutional Neural Networks
