Author | SANYA4
Translator | VK
Source | Analytics Vidhya
Introduction
Neural networks are everywhere today. Major companies are spending heavily on hardware and talent to ensure they can build the most complex neural networks and deliver the best deep learning solutions.
Although deep learning is a rather old subset of machine learning, it did not receive the recognition it deserved until the early 2010s. Today, it has taken the world by storm, capturing public attention.
In this article, I would like to take a slightly different approach to neural networks and understand how they came to be.

The Origins of Neural Networks
The earliest reports in the field of neural networks date back to the 1940s when Warren McCulloch and Walter Pitts attempted to create a simple neural network using circuits.
The image below shows an MCP neuron. If you studied high school physics, you’ll find this looks a lot like a simple NOR gate.
The paper demonstrated the basic idea of using signals and showed how decisions can be made by transforming the provided inputs.
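The idea can be sketched as code. Below is a minimal illustration of a McCulloch-Pitts (MCP) neuron, assuming the classic formulation: binary inputs, fixed weights, and a hard threshold. The NOR behavior mentioned above falls out of one choice of weights and threshold.

```python
def mcp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def nor_gate(a, b):
    """A NOR gate as an MCP neuron: with weights of -1 and threshold 0,
    any active input inhibits the neuron, so it fires only on (0, 0)."""
    return mcp_neuron([a, b], weights=[-1, -1], threshold=0)
```

With these illustrative parameters, `nor_gate(0, 0)` fires and every other input combination does not, matching the truth table of a NOR gate.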

❝
The paper by McCulloch and Pitts provided a way to describe brain functions in abstract terms and showed that simple elements connected in a neural network can have immense computational power.
❞
Despite its groundbreaking significance, the paper received little attention until about six years later when Donald Hebb published a paper emphasizing that neural pathways are strengthened each time they are used.
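Hebb's observation that pathways strengthen with use can be written as a one-line update rule; the sketch below is a hypothetical illustration (the learning rate `eta` and the specific values are assumptions, not from Hebb's paper).

```python
def hebbian_update(w, pre, post, eta=0.1):
    """Strengthen a connection when pre- and post-synaptic
    activity coincide: w <- w + eta * pre * post."""
    return w + eta * pre * post

w = 0.0
for _ in range(5):  # five co-activations of two active neurons
    w = hebbian_update(w, pre=1.0, post=1.0)
# the connection weight grows with each repeated use
```

If either neuron is inactive (`pre` or `post` is 0), the weight is unchanged, capturing the "use it or lose nothing" character of the rule.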

Remember, computers were still in their infancy at that time; IBM launched its first PC (the IBM 5150) only in 1981.

Fast forward through the decades, and numerous studies on artificial neural networks had been published. Rosenblatt invented the perceptron in the 1950s, and in 1989 Yann LeCun successfully implemented the backpropagation algorithm at Bell Labs. By the 1990s, the USPS was already capable of reading postal codes on envelopes.
The LSTM we know today was invented in 1997.
❝
If so much groundwork was laid in the 1990s, why did it take until 2012 to leverage neural networks for deep learning tasks?
❞
The Rise of Hardware and the Internet
A major challenge facing deep learning research was the lack of reproducible studies. Until then, advances had remained largely theoretical, owing to the scarcity of reliable data and the limits of available hardware.
In the past two decades, hardware and the internet have advanced dramatically. The original IBM PC shipped with as little as 16 KB of RAM; by 2010, the average PC had around 4 GB!
Now, we can train a small model on our computers, which was unimaginable in the 1990s.
The gaming market also played a crucial role in this revolution, with companies like NVIDIA and AMD investing heavily in graphics processing hardware to deliver high-end gaming experiences.
With the development of the internet, creating and distributing datasets for machine learning tasks became much easier.
Collecting and labeling images from sources such as Wikipedia became simpler.
2010: The Era of Deep Learning
ImageNet: In 2009, the modern era of deep learning began when Fei-Fei Li from Stanford University created ImageNet, a large visual dataset hailed as a project that sparked the AI revolution worldwide.
As early as 2006, Li was a new professor at the University of Illinois at Urbana-Champaign. Her colleagues would constantly discuss new algorithms to make better decisions. However, she saw flaws in their plans.
Even the best algorithms would not perform well if the data they were trained on did not reflect the real world. ImageNet consists of over 14 million images across more than 20,000 categories and remains a cornerstone of object recognition technology to this day.
Open competitions: In 2006, Netflix launched a public competition, the Netflix Prize, to predict user ratings for movies. On September 21, 2009, the team "BellKor's Pragmatic Chaos" beat Netflix's own algorithm by 10.06% and won the $1 million prize.
Kaggle was established in 2010 as a platform for machine learning competitions open to everyone globally. It allows researchers, engineers, and programmers everywhere to push their limits in solving complex data tasks.
Before the AI boom, investments in AI were around $20 million. By 2014, this investment had grown 20-fold, with market leaders like Google, Facebook, and Amazon allocating funds for further research into future AI products. This new wave of investment increased the number of hires in deep learning from hundreds to tens of thousands.
Conclusion
Despite its slow start, deep learning has become an inevitable part of our lives. From Netflix and YouTube recommendations to language translation engines, from facial recognition and medical diagnosis to self-driving cars, there is no field untouched by deep learning.
These advancements broaden the future scope and applications of neural networks in improving our quality of life.
Artificial intelligence is not our future; it is our present, and it has only just begun!
Original link: https://www.analyticsvidhya.com/blog/2020/10/how-does-the-gradient-descent-algorithm-work-in-machine-learning/