Although the spread of the COVID-19 pandemic in 2020 and the escalating U.S.–China cold war in semiconductors have weighed on the global economy and the semiconductor industry, technological progress in the field has not stopped, and some technologies have even accelerated their path to commercialization. ASPENCORE’s global analyst team has carefully selected 10 technology trends that will emerge or come to prominence in the global semiconductor industry in 2021. Compared with the top 10 technology trends of 2020, what will change in 2021?
1. Arm architecture processor: comprehensive penetration of high, medium and low performance computing fields
Arm has released the Cortex-A78C, a CPU designed specifically for next-generation “always-on” laptops, supporting up to eight “big cores” and an L3 cache increased to 8MB. Cortex-A78C-based chips will be strong competitors to x86 CPUs in the high-performance PC market. Apple’s full adoption of Arm-based CPUs in its Mac computers will draw more Arm-based chip designers into the PC market, including Qualcomm, Huawei and Samsung. Even AMD, in the x86 camp, is reported to be developing Arm-based processor chips, while Amazon Web Services is driving the growth of Arm-based CPUs in the server market. In high-performance computing (HPC), the Arm-based supercomputer “Fugaku” took first place in the TOP500 ranking of the world’s supercomputers.
The Arm Cortex-A78 series includes the Cortex-A78 for mobile computing, which balances performance and energy efficiency; the Cortex-A78AE for the automotive market, which emphasizes safety; and the Cortex-A78C for high-performance computing. Beyond these three large application markets, Arm-based processors have also broadly penetrated the Internet of Things, edge computing, AI and 5G, becoming the most widely used microprocessor instruction set architecture (ISA) in computing history. As of the end of 2019, a cumulative 130 billion Arm processor chips had been shipped worldwide, and 70% of the world’s population uses electronic devices powered by Arm processors.
Figure 1: The Arm Cortex-A78 family of CPUs and their intended applications. (Source: Arm)
After 30 years of development, Arm, founded in the United Kingdom by a team of 12 engineers, has come to dominate the mobile device market with its unique IP licensing business model and low-power processor designs. Arm is now a $40 billion IP company with more than 6,500 employees, leading more than 1,000 partners into embedded systems, IoT, mobile, PC and automotive applications. If Arm’s transfer from Japan’s SoftBank to NVIDIA is completed, it could become the dominant processing architecture in the emerging data center and server, autonomous driving, and artificial intelligence markets. Neither x86, which has dominated the PC market for many years, nor the rising RISC-V can match it in combined performance and shipment volume.
2. 3nm process node: TSMC’s and Samsung’s roadmaps diverge further
Since the 7nm node, TSMC and Samsung Foundry have diverged considerably in their roadmaps. For example, Samsung adopted EUV (extreme ultraviolet lithography) earlier at 7nm (7LPP) and treats 5nm and 4nm as half-generation processes, while TSMC evolved 7nm through its own steppings (N7/N7P/N7+) and has made 5nm its next major process iteration.
Samsung has adopted the more aggressive GAAFET (Gate-All-Around FET) transistor structure for its first major iteration after 7nm. In mid-2019, Samsung Foundry announced that its 3nm PDK had entered the alpha stage (3GAE). For the specific channel structure, Samsung has chosen nanosheets, which it calls MBCFETs (Multi-Bridge Channel FETs), though nanowire-based GAAFETs remain a possibility. Samsung’s data shows that, compared with its 7nm process, 3nm delivers a 35% performance increase, a 50% power reduction, and a 45% area reduction. As of mid-2020, Samsung’s 3nm trial production had been postponed to Q1 2021, with mass production pushed to 2022.
Figure 2: Process node evolution for Samsung foundry. (Source: Samsung)
In April 2020, TSMC disclosed details of its 3nm process (N3) for the first time. N3 is the next full node after N5 and is expected to raise transistor density 1.7-fold (cell-level density of about 290 MTr/mm²), improve performance, and reduce power consumption by up to 30% relative to N5. Risk production of N3 is planned for 2021, with mass production starting in the second half of 2022. Citing maturity, power and cost considerations, TSMC said N3 will retain the traditional FinFET structure, though later steppings of the 3nm family may still adopt GAAFET technology.
In fact, the world’s two most advanced wafer foundries have diverged significantly in technological evolution since the 5nm process. Samsung’s roadmap is more aggressive in its overall direction, but TSMC still holds considerable advantages in transistor density and actual performance and power consumption.
3. High Performance Computing: The Progress of Dedicated Acceleration for Data Centers
The A64FX, launched by Fujitsu in March 2020, is a chip dedicated to HPC (high-performance computing) workloads, and its design represents an important trend in the HPC and data center markets. On paper it leads the supercomputing field in both compute throughput and efficiency, well ahead of typical Intel Xeon + NVIDIA Tesla + memory combinations, behaving much like a CPU, GPU and high-speed memory combined on a single chip. Its overall architecture is monolithically integrated, which eliminates chip-to-chip communication between the CPU and the accelerator and brings the memory system closer to the compute cores, partly resembling a domain-specific design. The A64FX contains 48 compute cores, each with a 512-bit-wide SIMD pipeline, and each chip carries HBM2 memory in 8 GiB stacks.
NVIDIA’s CUDA programming model has made its GPUs ubiquitous in HPC, and NVIDIA is likewise charting its own path in the field. In October 2020, NVIDIA launched the BlueField-2 family of DPUs (data processing units) and the DOCA software development kit; NVIDIA bills the DPU as a “data center on a chip.” Simply put, a DPU is a chip that accelerates specific data center workloads.
Figure 3: NVIDIA BlueField-2X card includes a DPU and an Ampere GPU. (Source: NVIDIA)
Beyond the Ampere GPU for compute (AI acceleration) on the BlueField-2X, the BlueField chips combine programmable Arm cores with Mellanox network adapters (SmartNICs) to handle networking, storage and security, covering “software-defined security,” “software-defined storage,” “software-defined networking” and infrastructure management. Mellanox is already under NVIDIA’s umbrella, and the much-discussed proposed acquisition of Arm becomes easy to understand in this context.
In the field of domain-specific accelerators, NVIDIA has long recognized that specialized processors would gradually erode the CPU-dominated data center market, focusing in particular on efficiency and performance in data center security, networking and storage. A strategy like this is enough by itself to sidestep Arm’s inherent weaknesses in the high-performance market. The same logic may explain AMD’s acquisition of Xilinx: data center acceleration had already become the focus of Xilinx’s development years earlier.
Taken together, these 2020 market moves and technology directions show that the era of dedicated computing in the data center is steadily arriving.
4. Sensor fusion: The combination of hardware and algorithms drives autonomous systems such as autonomous driving, drones, and industrial robots
In complex application scenarios such as autonomous driving and drones, multi-sensor fusion (MSF) uses high-performance processors and software algorithms to combine information from multiple sensors of various types, automatically analyzing and synthesizing the data according to defined rules to support decision-making and execution. Cameras are the most widely used image sensors, but their performance degrades significantly in low-light environments. Sensors based on the time-of-flight (ToF) principle, such as ultrasonic, radar and lidar (LiDAR), are valuable complements to cameras.
Figure 4: The design of autonomous driving applications requires many sensor fusion techniques. (Source: Synopsys)
Lidar, which emits up to 1 million laser pulses per second, can capture high-resolution 3D point-cloud data that not only detects but also classifies objects. According to market research firm Yole, LiDAR for the ADAS/autonomous driving market will grow from $19 million in 2019 to $1.7 billion in 2025, a compound annual growth rate of 114%. However, because of lidar’s complex design and high cost, large-scale adoption still faces many challenges. Luminar released a LiDAR solution priced under $1,000 in 2019, while Velodyne, which debuted real-time 3D LiDAR in 2005, announced a gradual price-reduction plan, cutting the average selling price from $17,900 in 2017 to $600 by 2024. Chinese LiDAR manufacturers have started producing sub-$1,000 products and are gaining market share. Although Tesla’s Elon Musk dismisses it, lidar will remain a key technology for achieving higher levels of autonomous driving.
Complex environmental and weather conditions require data from sensor sources such as cameras, ultrasonic sensors, radar and lidar to be cross-checked and jointly computed, which calls for AI chips and deep-learning algorithms with real-time processing performance. Only by fusing sensors, chips and AI algorithms at the system level can autonomous systems operate precisely and safely in real-world scenarios. Beyond ADAS/autonomous driving, sensor fusion technology will also spread to fields such as industrial robots and drones.
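As a minimal illustration of the fusion step described above, the sketch below combines a camera range estimate with a radar range estimate by inverse-variance weighting, the optimal linear fusion for independent Gaussian errors (and the core of a Kalman filter update). The sensor noise figures are assumed values for illustration only, not real sensor specifications.

```python
def fuse(z1, var1, z2, var2):
    """Fuse two independent estimates of the same quantity
    by inverse-variance weighting."""
    w1 = var2 / (var1 + var2)  # weight grows as the *other* sensor gets noisier
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)  # always <= min(var1, var2)
    return fused, fused_var

# Assumed example: camera reports 52.0 m with variance 4.0 (degraded in
# low light); radar reports 50.0 m with variance 1.0 (robust to lighting).
d, v = fuse(52.0, 4.0, 50.0, 1.0)
print(f"fused range = {d:.1f} m, variance = {v:.2f}")
```

Note that the fused variance is lower than either sensor’s alone, which is the mathematical reason fusing a degraded camera with radar still beats the radar by itself.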
5. Chiplet: opening a new model for chip design IP
Moore’s Law has governed the rapid growth of the semiconductor industry since 1965. As manufacturing nodes advance from 7nm and 5nm toward 3nm and gradually approach physical limits, chip design and manufacturing costs keep rising, and the pace of the whole industry has slowed markedly. Leading semiconductor manufacturers are turning to chiplets in hopes of finding new approaches to semiconductor design and integration and returning the industry to its two-year doubling cadence.
Figure 5: Future computer systems may contain a CPU die and multiple GPU and memory die, all packaged and integrated on a single chip. (Image credit: AMD)
The chiplet approach replaces a single monolithic die with multiple small dies packaged together, which can accommodate more transistors in the same area and significantly improve production yields. Chiplets are to hardware design what objects are to object-oriented programming: a similar paradigm shift built around composable units. Making this work requires interfaces between chiplets, not just electrical interfaces but interfaces that simplify design, manufacturing and collaboration. The Open Compute Project (OCP), a global industry group, is working to define and develop a unified chiplet architecture through new interfaces, link layers and early proofs of concept.
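The object-oriented analogy above can be sketched in code. The classes and the `link()` interface below are invented for illustration; real chiplet interfaces (the die-to-die links that efforts like OCP’s target) are hardware specifications, not software classes. The point is the composition model: any die implementing the common interface can be dropped into a package.

```python
from typing import Protocol

class Chiplet(Protocol):
    """Hypothetical common interface every die must expose."""
    name: str
    def link(self) -> str: ...  # stands in for a standardized die-to-die link

class CpuDie:
    name = "cpu"
    def link(self) -> str:
        return "cpu:d2d-ready"

class HbmDie:
    name = "hbm"
    def link(self) -> str:
        return "hbm:d2d-ready"

class Package:
    """A SiP that composes any dies speaking the common interface."""
    def __init__(self, dies: list[Chiplet]) -> None:
        self.dies = dies
    def bring_up(self) -> list[str]:
        return [d.link() for d in self.dies]

pkg = Package([CpuDie(), HbmDie()])
print(pkg.bring_up())
```

As in object-oriented software, the package never needs to know each die’s internals, only that it honors the shared interface, which is exactly the interoperability problem the standardization work aims to solve in hardware.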
According to the latest report from market research firm Omdia, microprocessor chips built with chiplets will grow rapidly over the next five years. Marvell, AMD, Intel, TSMC and other semiconductor companies have already released chiplet-based products. Chiplets will bring new opportunities to the semiconductor industry: lowering the barrier to large-scale chip design; letting IP vendors upgrade to chiplet suppliers, raising the value of their IP while cutting customers’ design costs; growing the multi-chip module (MCM) business, since chiplets iterate much faster than full ASICs and can improve utilization at fabs and packaging houses; and establishing an ecosystem of interoperable components, interconnects, protocols and software.
Dr. Weimin Dai of VeriSilicon has proposed the concept of “IP as a Chiplet,” which aims to make function-specific IP “plug and play” as chiplets, balancing performance and cost at 7nm, 5nm and below process nodes, reducing the design time and risk of large-scale chips, and evolving from IP in SoCs to IP delivered as chiplets in SiPs. The global semiconductor IP market is expected to grow from $5.0 billion in 2019 to $10.1 billion in 2027. The fabless model spawned the chip design services industry, and the development of semiconductor IP licensing and chiplets will create still more opportunities.
6. System-in-Package (SiP): the integrator of advanced packaging platforms
Chip packaging technology has developed through roughly four stages: the first was socketed components (DIP/PGA); the second, surface mount (SMT); the third, area-array packaging (BGA/CSP); and the fourth, high-density system-in-package (SiP). Mainstream semiconductor packaging worldwide has now entered the fourth stage. Major packaging technologies such as SiP, PoP and hybrid packaging are in large-scale use, and some high-end packaging is developing toward chiplets. SiP packaging is shifting from single-sided to double-sided: double-sided SiP is expected to become mainstream in 2021, with multi-layer 3D SiP products appearing in 2022.
Figure 6: Octavo’s SiP device integrates an MCU, memory, PMIC, MEMS oscillator, and some passives in a standard BGA package. (Source: Octavo Systems)
Flip-chip and wire-bond have been widely used in SiP packaging for both high-end and low-end chips and in 2D/2.5D/3D heterogeneous SiPs, and remain the main SiP packaging forms. According to Yole’s SiP market analysis, flip-chip and wire-bond SiP products reached a market size of $12.2 billion in 2019 (more than 90% of the total SiP packaging market) and are expected to reach $17.1 billion by 2025, a compound annual growth rate of 6% from 2019 to 2025. Fan-out (FO) packaging, led by TSMC, has also become one of the main SiP packaging forms, with a 2019 market size of $1.148 billion, rising to $1.364 billion by 2025. Embedded-die SiP packaging is transitioning from single-die to multi-die embedding; while this segment’s market size is small, it is growing strongly (a growth rate of up to 27%) and is expected to exceed $315 million in 2025.
Mobile and consumer electronics are the main application markets for SiP packaging, especially RF devices in mobile phones. With the full rollout of 5G networks, telecom equipment such as 5G handsets and base stations will create new opportunities for SiP packaging. Wearable devices, led by the Apple Watch and AirPods, rely heavily on SiP packages because of their strict volume and size constraints, making wearables the main growth driver for SiP in consumer electronics. Another driving force comes from MEMS and sensors, including pressure sensors, inertial measurement units, optical MEMS, microbolometers, oscillators and environmental sensors; fast-growing application areas include automotive ADAS/autonomous driving, robotics and the Internet of Things.
7. Wide-bandgap semiconductors: replacing silicon-based devices in key areas
Third-generation semiconductors, also known as wide-bandgap semiconductors, are materials with a bandgap greater than 2.2eV, chiefly represented by silicon carbide (SiC) and gallium nitride (GaN). Compared with first- and second-generation semiconductors, they offer a wider bandgap, higher breakdown voltage, lower on-resistance, almost no switching loss, and excellent electrical and thermal conductivity; at the same power level, device volume can be greatly reduced. They are expected to replace the previous two generations of semiconductor materials in high-temperature, high-voltage, high-power and high-frequency applications.
Figure 7: Wide-bandgap semiconductors represented by silicon carbide (SiC) and gallium nitride (GaN) have great potential for development in new energy vehicles, 5G communications, and rail transit. (Source: EETimes)
Previously, the biggest obstacle to the adoption of third-generation semiconductor technology was the high cost of SiC and GaN substrates, with device costs 5 to 10 times those of traditional silicon-based products. As substrate technology matures and processes improve, manufacturing costs are approaching those of silicon devices. 2021 will be a critical year for third-generation semiconductor devices: electric vehicles, industrial charging, 5G high-frequency devices, and power applications in renewable energy and energy storage all stand to benefit, and conventional Si IGBTs and Si MOSFETs in particular will be largely displaced in high-frequency, high-voltage applications.
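The loss advantage behind this displacement can be made concrete with a back-of-the-envelope calculation. All device parameters below are assumed round numbers for illustration, not datasheet values; the point is that the much faster switching edges of a wide-bandgap device shrink the switching-loss term, which dominates at high frequency.

```python
def conduction_loss(i_rms, r_on):
    """I^2 * R conduction loss for a MOSFET-like device (watts)."""
    return i_rms ** 2 * r_on

def switching_loss(v_bus, i_load, t_sw, f_sw):
    """Approximate hard-switching loss: 0.5 * V * I * (t_rise + t_fall) * f."""
    return 0.5 * v_bus * i_load * t_sw * f_sw

V_BUS, I_LOAD, F_SW = 800.0, 20.0, 50e3  # 800 V bus, 20 A, 50 kHz switching

# Assumed figures: a silicon device with 80 mOhm on-resistance and 200 ns
# combined edges vs a SiC device with 40 mOhm and 40 ns edges.
si_loss = conduction_loss(I_LOAD, 0.080) + switching_loss(V_BUS, I_LOAD, 200e-9, F_SW)
sic_loss = conduction_loss(I_LOAD, 0.040) + switching_loss(V_BUS, I_LOAD, 40e-9, F_SW)

print(f"Si  total loss ~ {si_loss:.1f} W")   # 32 W conduction + 80 W switching
print(f"SiC total loss ~ {sic_loss:.1f} W")  # 16 W conduction + 16 W switching
```

Under these assumed numbers the silicon device dissipates roughly 3.5 times the power, and raising the switching frequency widens the gap further, which is why the high-frequency, high-voltage niches are expected to convert first.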
In addition, because third-generation semiconductor products mainly use mature process nodes, they are seen as a potential breakthrough for China’s semiconductor industry amid tightening U.S. technology restrictions. Accordingly, China has designated third-generation semiconductors as a key development direction in its 2030 plan and the “14th Five-Year Plan” national R&D program.
8. The concept of “Domain Architecture” will lead the development of future automobiles
Today the automotive industry generally uses a flat, point-to-point “distributed electronic architecture”: the vehicle’s electronic and electrical functions are implemented by hundreds of electronic control units (ECUs), with related ECUs connected over the appropriate automotive buses. But as vehicles rapidly become automated, connected, electrified and service-oriented, this traditional hardware-centric distributed architecture has hit bottlenecks in system scalability, hardware/software compatibility, security and ease of upgrade, and is increasingly unable to keep up with the industry’s rapid iteration. In the future, the underlying electronic architecture of vehicles will move toward a high-performance “domain architecture” with stronger networking capability, secure OTA wireless updates and high development efficiency: an upgradeable, scalable, future-proof platform.
Figure 8: Automotive electronics will eventually move towards a high degree of integration. (Source: MPS)
Accordingly, the Domain Controller (DCU) that accompanies the “domain architecture” will focus more than the ECU on integration, security and centralized computing. Typical applications of this trend include an autonomous-driving or sensor-fusion DCU for automation; an intelligent-cockpit DCU for vehicle-to-vehicle and vehicle-to-everything communication and over-the-air software updates; and a powertrain DCU for electrification, from plug-in hybrids to full battery electric vehicles.
Another reason the DCU is compelling is that it lets automotive suppliers focus R&D dollars on a single subsystem rather than a dozen or so separate sub-units. To build complex, powerful DCUs, suppliers can rely less on generic off-the-shelf chips and instead favor carefully designed, highly integrated devices.
9. FPGA: AI Accelerator for Data Center and Edge Computing
Since Altera and Xilinx pioneered the FPGA class of programmable logic devices in the 1980s, FPGAs have gone through several waves of dramatic change. Beyond their inherent programmable flexibility, their network connectivity and data-exchange capabilities make FPGAs an indispensable mass data processing unit for cloud computing and data centers; demand is especially strong in machine learning/AI, network acceleration and computational storage applications such as SmartNICs, search-engine accelerators and AI inference engines. Emerging edge computing will set off a new wave of FPGA demand, spanning 5G base stations and telecom infrastructure, edge gateways and routers, and IoT smart terminals. Autonomous driving, smart factories, smart cities and transportation will drive further growth and expansion of FPGA applications.
According to Semico Research, the global data center accelerator market (including CPUs, GPUs, FPGAs and ASICs) will grow from $2.84 billion in 2018 to $21.19 billion in 2023, a compound annual growth rate (CAGR) of about 50%. FPGA accelerators are the fastest-growing segment, from only $1 billion in 2018 to more than $5 billion by 2023, driven mainly by enterprise data-workload acceleration. Intel and Xilinx, the industry’s two largest FPGA vendors, have released a series of FPGA accelerator cards, such as Intel’s FPGA PAC D5005 and N3000 and its programmable accelerator cards based on the Arria 10 GX FPGA, and Xilinx’s Alveo U50/U200/U250/U280 data center accelerator cards. Achronix has also introduced accelerator cards based on its Speedster7t FPGA to capture data center demand for high-bandwidth workload optimization.
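As a quick sanity check on the figures above, the compound annual growth rate implied by $2.84 billion in 2018 growing to $21.19 billion in 2023 (five years) does work out to roughly 50%:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

# Semico's data center accelerator market figures, in billions of dollars.
growth = cagr(2.84, 21.19, 5)
print(f"CAGR ~ {growth:.1%}")  # prints CAGR ~ 49.5%
```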
Figure 9: Achronix’s VectorPath accelerator cards can support a range of high-speed data and storage interfaces. (Source: Achronix)
Altera was acquired by Intel, and Xilinx now looks very likely to be folded into AMD, which suggests that FPGAs remain a niche market: compared with general-purpose chips such as CPUs and GPUs, it is hard for them to sustain a strong enough independent market by size alone. Even so, FPGAs have unique properties that make them an attractive AI inference accelerator for cloud and edge computing, and we will see FPGA accelerator cards appear in more computing systems in the coming years.
10. AFE technology for vital-sign monitoring: bringing VSM “health monitoring” into smart wearable devices
According to the latest Q3 2020 data from market research firm IDC, total global shipments of wearable devices reached 125 million units, up 35% year over year. Hearables, represented by Apple AirPods, shipped about 70 million units; smart watches, represented by the Apple Watch, shipped over 30 million units; and smart bands (wristbands), represented by Xiaomi’s bands, shipped about 20 million units.
The global spread of COVID-19 has greatly stimulated sales of smart wearables with “health monitoring” capabilities. The Apple Watch, popular with users worldwide, provides a wealth of health and medical management functions, notably heart-rate detection, and the Apple Watch Series 6 lets users measure blood oxygen saturation to better understand their overall health. The new generation of smart wearable devices uses high-precision analog front-end technology to monitor vital-sign signals, offering consumers more “health monitoring” functions, and will see significant market growth over the next few years.
Smart wearables with vital sign monitoring (VSM) capabilities are one of the fastest-growing segments. Previously, VSM equipment was used mainly in professional settings such as hospitals, ambulances and rescue helicopters, for example bedside monitors and intensive-care monitors. These high-end systems support multi-lead ECG measurement, oxygen saturation, body temperature, carbon dioxide and other parameters. Now, wearable VSM systems are gradually entering daily life, enabling doctors to monitor patients remotely and older adults to live independently for longer. VSM applications in sports and exercise will also become a trend, helping people not only monitor vital-sign parameters but also get feedback on whether their exercise is effective.
Figure 10: ADI’s wearable VSM system solution and platform tools. (Source: ADI)
Wearable devices can often measure multiple parameters such as heart rate, activity, skin impedance, oxygen saturation and body temperature. Analog Devices has developed a multi-mode analog front-end (AFE) chip that measures cardiac signals directly through biopotential electrodes and measures galvanic skin response to track stress or mental state. Based on this single-chip AFE, a versatile, compact and highly energy-efficient wearable VSM system can be built.