If the man-machine battle between AlphaGo and Lee Sedol in March 2016
made its impact mainly within scientific and Go circles, then AlphaGo's
match against Ke Jie, the world's top-ranked Go player, in May 2017
brought artificial intelligence technology fully into the public eye.
AlphaGo, the first artificial intelligence program to beat a human
professional Go player and the first to defeat a world champion, was
developed by a team led by Demis Hassabis at Google's DeepMind and works
primarily on the principle of "deep learning."
In fact, as early as 2012, deep learning technology was already being
widely discussed in the academic community. That year, at the ImageNet
Large Scale Visual Recognition Challenge (ILSVRC), AlexNet, a neural
network with five convolutional layers and three fully connected layers,
achieved a record-best top-5 error rate of 15.3%, compared with 26.2%
for the second-place entry. Since then, deeper and more complex neural
network architectures have appeared, such as ResNet, GoogLeNet, VGGNet,
and Mask R-CNN, as well as the generative adversarial network (GAN),
which became popular last year.
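To make the "five convolutional layers and three fully connected layers" description concrete, here is a minimal PyTorch sketch of an AlexNet-style network. The layer sizes follow the original AlexNet paper; the framework choice and the `AlexNetSketch` name are mine, not the article's.

```python
import torch
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Five conv layers followed by three fully connected layers, as in AlexNet."""
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Dropout(),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),  # 1000-way output for ImageNet
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)   # 227x227x3 input -> 6x6x256 feature maps
        x = torch.flatten(x, 1)
        return self.classifier(x)

logits = AlexNetSketch()(torch.randn(1, 3, 227, 227))  # shape: (1, 1000)
```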
Whether it is AlexNet, which won the visual recognition
challenge, or AlphaGo, which defeated Go champion Ke Jie, neither could
have been realized without the core of modern information technology:
the processor, be it a traditional CPU, a GPU, or an emerging dedicated
accelerator such as the NNPU (Neural Network Processing Unit). At the
Architecture 2030 workshop held at ISCA 2016, Professor Yuan Xie of
UCSB, an ISCA Hall of Fame member, surveyed the papers published at ISCA
since 1991: papers on dedicated accelerators first appeared in 2008 and
peaked in 2016, surpassing the three traditional areas of processors,
memory, and interconnect architecture. That same year, the paper
"Cambricon: An Instruction Set Architecture for Neural Networks,"
submitted by the research group of Chen Yunji and Chen Tianshi at the
Institute of Computing Technology of the Chinese Academy of Sciences,
received the highest review scores at ISCA 2016.
Before
introducing the state of AI chip development at home and abroad, some
readers may have a question: aren't artificial intelligence, neural
networks, and deep learning all the same thing? I think it is worth
spelling out the relationship between artificial intelligence and neural
networks, especially because the development goals described in the
"Three-Year Action Plan for Promoting the Development of a New
Generation Artificial Intelligence Industry (2018-2020)," released by
the Ministry of Industry and Information Technology in 2017, can easily
leave the impression that artificial intelligence simply means neural
networks, and that AI chips are neural network chips:
The
overall core basic capabilities of artificial intelligence will be
significantly enhanced; smart sensor technologies and products will
achieve breakthroughs; design, foundry, and packaging-and-testing
technologies will reach international levels; neural network chips will
achieve mass production and large-scale application in key areas; and
open-source development platforms will initially gain the ability to
support the rapid development of the industry.
Not quite. Artificial intelligence is a
very old concept, and neural networks are only a subset of it. As early
as 1956, John McCarthy, the Turing Award winner known as the "father of
artificial intelligence," defined artificial intelligence as "the
science and engineering of making intelligent machines." In 1959, Arthur
Samuel defined machine learning, a subfield of artificial intelligence,
as the field of study that gives computers the ability to learn without
being explicitly programmed, which is still recognized as the earliest
and most fitting definition of machine learning. The neural networks and
deep learning we hear about every day belong to the category of machine
learning, and both draw inspiration from mechanisms of the brain.
Another important research direction is the spiking neural network,
represented in China by the Center for Brain-Inspired Computing Research
at Tsinghua University and Shanghai Xijing Technology.
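Samuel's distinction between learning and explicit programming is easy to show in a few lines. The sketch below is purely illustrative: scikit-learn and the toy data are my choices, not anything from the article.

```python
# Instead of hand-coding a classification rule, let the program infer one from examples.
from sklearn.linear_model import Perceptron

# Toy training examples: two features per point, with a 0/1 label
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 2], [3, 1]]
y = [0, 0, 0, 1, 1, 1]

clf = Perceptron().fit(X, y)   # the decision rule is learned from data
print(clf.predict([[2, 1]]))   # -> [1]: the model generalizes to an unseen point
```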
Well, now we can finally
introduce the state of AI chip development at home and abroad. Of
course, these are my personal observations and necessarily limited
views, offered in the hope of drawing out better insights.
Thanks to their distinctive
technology and application advantages, Nvidia and Google together
account for almost 80% of the market share in AI processing, and this
share is expected to expand further in 2018 following Google's
announcement of its Cloud TPU open service and Nvidia's launch of the
autonomous-driving processor Xavier. Other manufacturers, such as Intel,
Tesla, ARM, IBM, and Cadence, also have a presence in the AI processor
space.
Of
course, these companies focus on different areas. Nvidia concentrates
mainly on GPUs and driverless vehicles, Google mainly targets the cloud
market, Intel focuses mainly on computer vision, and Cadence mainly
provides IP for accelerating neural network computation. While these
companies lean toward hardware work such as processor design, ARM leans
more toward software, focusing on providing efficient algorithms for
machine learning and artificial intelligence.
Note: The table above reflects the latest publicly available data for each organization as of 2017.
No. 1 -- Nvidia
In
the field of artificial intelligence, Nvidia is arguably the most
deeply engaged company with the largest market share, and its product
lines span self-driving cars, high-performance computing, robotics,
healthcare, cloud computing, gaming, video, and many other areas. Its
new AI supercomputer for self-driving cars, Xavier, is, in the words of
NVIDIA CEO Jen-Hsun Huang, "the greatest SoC endeavor I have ever known,
and we have been building chips for a very long time."
Xavier is a
complete system-on-chip (SoC) that integrates a new GPU architecture
known as Volta, a custom 8-core CPU architecture, and a new computer
vision accelerator. The processor delivers 20 TOPS (trillion operations
per second) of performance while consuming only 20 watts.
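Taken together, those two figures imply an energy efficiency of about 1 TOPS per watt; the quick check below is my own arithmetic, not a number from the article.

```python
# Energy efficiency implied by the quoted Xavier figures (my arithmetic, not NVIDIA's)
peak_tops = 20   # peak throughput, trillion operations per second
power_w = 20     # quoted power envelope in watts
print(peak_tops / power_w, "TOPS per watt")  # -> 1.0 TOPS per watt
```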