Introduction
This section introduces four of the most widely used neural network architectures in machine learning: Feedforward Neural Networks (FNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers. Each architecture is designed for different kinds of data and tasks, ranging from computer vision and natural language processing to translation and generative AI. Together, these four architectures form the core of deep learning and power many of today's state-of-the-art AI systems. Below is a summary of what each is primarily used for.
- Feedforward Neural Networks – the simplest architecture and a building block for the others; used for classification, regression, prediction, and basic pattern recognition
- Convolutional Neural Networks – image classification, object detection, and medical imaging
- Recurrent Neural Networks – natural language processing, forecasting, and music generation
- Transformers – large language models, translation, computer vision, and chatbots
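As a quick preview of the simplest of these architectures, a feedforward network is just a stack of matrix multiplications followed by nonlinearities. The sketch below shows a single forward pass in NumPy; the layer sizes and random weights are purely illustrative, not from any trained model:

```python
import numpy as np

def relu(x):
    # Nonlinearity applied element-wise between layers
    return np.maximum(0.0, x)

# Illustrative sizes: 4 input features -> 8 hidden units -> 3 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = relu(x @ W1 + b1)  # hidden layer: affine transform + ReLU
    return h @ W2 + b2     # output layer: raw scores (logits)

x = rng.normal(size=4)     # one example with 4 features
print(forward(x).shape)    # one score per output class: (3,)
```

Later architectures (CNNs, RNNs, Transformers) build on this same pattern of stacked learned transformations, adding structure suited to images, sequences, and attention over tokens.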