Intel co-founder Gordon Moore put forward Moore's Law half a century ago: the number of transistors on a chip doubles with each new generation of process technology. Looking back at the history of chip innovation, the industry has followed this rule and named each new process node at roughly 0.7 times the previous generation. Shrinking linear dimensions by 0.7 means transistor density doubles. Thus 90 nm, 65 nm, 45 nm, and 32 nm emerged in turn, with each generation able to fit twice as many transistors as the previous one in a given area.
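A minimal sketch of that naming convention (rounding aside), assuming the only thing that changes between generations is a 0.7x shrink in both linear dimensions:

    # Sketch of the traditional node-naming rule: each new node is named at about
    # 0.7x the previous one, because a 0.7x shrink in both dimensions roughly
    # halves the area per transistor and therefore doubles density.
    node = 90.0  # nm, one of the nodes listed above
    for _ in range(3):
        new_node = node * 0.7
        density_gain = (node / new_node) ** 2  # (1/0.7)^2 is about 2.04
        print(f"{node:.0f} nm -> {new_node:.0f} nm (density x{density_gain:.2f})")
        node = new_node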
Before getting into the main topic, a quick primer: what exactly is "process technology"?
According to Baidu Encyclopedia, the semiconductor manufacturing process refers to the process used to make a CPU or GPU, characterized by the size of the transistor gate and measured in nanometers (nm). The more advanced the process, the more transistors can be integrated into a CPU or GPU chip, giving the processor more features and higher performance. A more advanced process also lowers the processor's heat output (TDP), removing an obstacle to raising clock frequencies. Finally, a more advanced process shrinks the processor's die area, which means more CPU and GPU dies can be made from a wafer of the same size, directly reducing product cost.
At present, mainstream CPU processes have reached 14-32 nm (Intel's fifth-generation Core i7 processors and Samsung's Exynos 7420 both use the latest 14 nm process), and processes still in R&D have reached 7 nm or beyond.
However, behind these numbers, familiar as they are to ordinary consumers, lies a fierce "standards dispute."
If you are going to change the score, can you at least play fair?
The war of words is always fierce. First, a joke to lighten the mood:
Students I, S, and T sat in the same class. After the final exam, the teacher announced at the parents' meeting that S and T had the highest scores in the class and deserved praise. Student I was aggrieved. He went home and told his parents that his own score was 98.5, and that was true, whereas the other two had only scored 79 and, so as not to be outdone, had quietly changed their scores to 99.
Now, back to the serious discussion...
Less than a year after Intel introduced its 14 nm process in 2014, Samsung and TSMC successively introduced their own 14 nm and 16 nm processes, which Apple used to manufacture the A9 processor in the iPhone 6s. At the end of 2016, Samsung and TSMC also launched their own 10 nm processes, seemingly about ten months ahead of Intel's 10 nm process.
However, Mark Bohr, Intel Senior Fellow and director of process architecture and integration in the Technology and Manufacturing Group, criticized some of his competitors' practices. He pointed out that, perhaps because further shrinking the process has become increasingly difficult, some companies have deviated from Moore's Law: even when transistor density increases little or not at all, they continue to assign new names to process nodes. As a result, these new node names simply do not reflect the correct position on the Moore's Law curve.
"The industry urgently needs a standardized transistor density index in order to give customers a right choice. Customers should be able to compare the different manufacturing processes of chip makers at any time, as well as the 'same generation' products of various chip manufacturers. But semiconductor process and various The increasingly complex design makes the standardization more challenging,†said Mark Bohr.
In his view, "What the industry really needs is the absolute number of transistors in a given area (per square millimeter)." That is to say, each chip manufacturer should disclose the calculation using this simple formula when referring to the process node. The density of logic transistors in MTr/mm2 (number of transistors per square millimeter (in millions)). Only in this way can the industry clarify the chaotic situation of process node nomenclature and devote itself to promoting Moore’s Law.
According to data released by Intel, the minimum gate pitch of Intel's 10 nm process has been reduced from 70 nm to 54 nm, and the minimum metal pitch from 52 nm to 36 nm. This allows a logic transistor density of up to 100.8 million transistors per square millimeter, 2.7 times that of Intel's previous 14 nm process and roughly twice that of the industry's other "10 nm" processes. At the same time, die area has shrunk more than ever before: each process generation before 22 nm scaled die area by about 0.62x, while the 14 nm and 10 nm generations brought scalings of about 0.46x and 0.43x respectively.
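A hedged back-of-envelope check, assuming the quoted pitch reductions translate directly into cell dimensions, shows why pitch scaling alone does not explain the full 2.7x figure:

    # How far do the pitch reductions quoted above get us on their own?
    gate_pitch_ratio = 54 / 70    # about 0.77
    metal_pitch_ratio = 36 / 52   # about 0.69

    area_scale = gate_pitch_ratio * metal_pitch_ratio  # about 0.53x cell area
    density_from_pitch = 1 / area_scale                # about 1.9x density

    print(f"area scale from pitches alone: {area_scale:.2f}x")
    print(f"density gain from pitches alone: {density_from_pitch:.1f}x")
    # Intel reports 2.7x overall; the gap between ~1.9x and 2.7x is what the
    # design-level "ultra-micro scaling" described below accounts for.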
"Ultra-micro scaling" (hyper scaling) is Intel's term for the 2.7x increase in transistor density from the 14 nm process to the 10 nm process. Ultra-micro scaling gives Intel's 14 nm and 10 nm processes better-than-conventional transistor density and extends the life cycle of each process. So although the development time between process nodes has stretched beyond two years, with ultra-micro scaling the result still fully conforms to Moore's Law.
Compared vertically with its own previous 14 nm process, Intel's 10 nm process improves performance by up to 25% and reduces power consumption by 45%. The enhanced versions of the 10 nm process, 10 nm+ and 10 nm++, can each further increase performance by 15% or further reduce power consumption.
"If we compare horizontally with other industry competitors' 16/14nm processes, we will find that the transistor density of Intel's 14nm process is 1.3 times that of their counterparts. Other competing vendors' 10nm process transistor density is comparable to Intel's 14nm process." , but later than Intel's 14nm process for three years," said Stacy Smith, Intel's executive vice president and president of the Manufacturing, Operations and Sales Group.
Has Moore's Law failed?
In the second half of 2011, Intel released its 22 nm process; two and a half years later, in the first half of 2014, it released its 14 nm process; three years after that, in 2017, Intel officially announced a new generation of 10 nm process. Moreover, during the move from 14 nm to 10 nm, Intel's former Tick-Tock strategy (upgrade the process one year, upgrade the microarchitecture the next) was rarely mentioned again.
"Even the top chip makers like Intel have spent 3 years or so to complete the evolution between the two generations of technology. Isn't that still invalid?" People can't help but ask?
However, if we dig in carefully, we find that Intel's 14 nm was not named at 0.7 times the previous 22 nm. In other words, under the 0.7x naming rule, the node after 22 nm should have been called 16 nm (22 x 0.7 is roughly 15.4), whereas 14 nm is actually about 0.64 times 22 nm.
As the two charts above show, the transistor density of Intel's 14 nm process is 37.5 MTr/mm² (million transistors per square millimeter), 2.45 times the transistor density of Intel's 22 nm process. If Moore's Law doubles density every two years, density should grow by roughly 2.4x over a two-and-a-half-year cycle, so Intel's 14 nm process is basically in line with Moore's Law.
Moreover, up to the 22 nm generation, Intel improved transistor density (the average number of transistors per unit area) by more than 2x roughly every two years; 32 nm density, for example, is 2.27 times that of 45 nm. Although the upgrades from 22 nm to 14 nm and from 14 nm to 10 nm each took more than two years, the corresponding transistor densities also increased by about 2.5x and 2.7x respectively.
The transistor density of Intel's newly released 10 nm process reaches 100.8 MTr/mm², about 2.7 times that of the previous-generation 14 nm process; in other words, Intel achieved a 2.7x increase in transistor density over a period of about three years. Although this is slightly below the roughly 2.8x that a strict two-year doubling would imply over three years, taken together with the better-than-required gains of earlier generations, Intel's 10 nm process still keeps transistor density on the curve Moore's Law demands.
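A small sketch, using only the densities and intervals quoted above, compares the reported gains with a strict two-year doubling:

    # Check reported density jumps against "double every two years".
    def moore_multiplier(years, doubling_period=2.0):
        return 2 ** (years / doubling_period)

    transitions = [
        ("22nm -> 14nm", 2.5, 2.45),  # ~2.5 years, reported 2.45x
        ("14nm -> 10nm", 3.0, 2.70),  # ~3 years, reported 2.7x
    ]
    for name, years, reported in transitions:
        print(f"{name}: expected ~{moore_multiplier(years):.2f}x, reported {reported:.2f}x")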
What is the point of ultra-micro scaling?
In my view, ultra-micro scaling is above all an attempt to keep Moore's Law going as long as possible, holding up chip performance while holding down cost, because the smaller the process node, the more complex the process and the harder it is to keep to Moore's Law. First, the smaller the die area, the more dies can be produced from a 300 mm wafer; for Intel and other manufacturers, this at least removes any urgency to move to 450 mm wafers.
Second, stretching out the time between nodes is also a matter of manufacturing cost. The more advanced the process, the higher the cost: Intel has said that even before anything else is done, simply moving equipment into a fab costs around US$7 billion, a sum that even Intel must weigh carefully. Manufacturing difficulty also rises sharply and many technologies still need further study, so what used to be revolutionary leaps will become gradual progress. This is why Intel proposed the concepts of 10nm+ and 10nm++; put simply, it means taking small steps, but taking them quickly.
Third, not all applications require such advanced manufacturing technology. At present, chips such as CPUs, GPUs, and FPGAs chase new processes to improve performance, but many more applications may be served perfectly well by 28 nm or even 90 nm processes. Besides, even if a company can design a 5 nm chip, how much volume could it actually produce, and how much profit would it make? Without a clear outlook, refining relatively mature processes may be the more reliable path.
Fourth, to shrink area and raise transistor density, Intel has also introduced many new technologies, such as FinFETs, 3D stacking, through-silicon vias, and extreme ultraviolet (EUV) lithography. On the other hand, simply piling up more transistors is not enough, because other factors such as cache, drive voltage, and current density must also be considered, so it is indeed a very complicated undertaking.
As processes advance, the time between nodes stretches out, costs keep rising, and fewer and fewer companies can afford to keep pushing Moore's Law forward. This is the problem facing the entire industry. With its advantages in scale, process technology, and integration density, Intel intends to keep driving Moore's Law forward in the years ahead.
Disrupting future computing
The force that keeps Moore's Law moving forward no longer comes only from the evolution of the manufacturing process.
As the need to collect, analyze, and make decisions on highly dynamic, unstructured natural data grows, computing requirements are moving beyond classic CPU and GPU architectures. "It is not acceptable to ignore how data is generated, what kind of data it is, or what processing power it needs. This is different from the general-purpose data processing of the past, and simply emphasizing the computing power of one particular processor is rather one-sided." Yang Xu, Intel global vice president and president of Intel China, believes that equating artificial intelligence with GPUs is a misconception. Artificial intelligence will need at least another decade or two to develop, and no vendor today dares to claim that all the computing power needed for its future development is ready, or that it can handle every future application, from simple to complex AI.
In its transition to a data company, Intel has defined itself as an end-to-end solution provider, meaning its product line covers the cloud, the network, and the device. At its core is large-scale data processing in the cloud, and this end-to-end layout lets Intel grasp "what the data is, what kind of data it is, and how it needs to be handled."
To strengthen its ability to process new kinds of data, keep pace with technological development, and push computing beyond PCs and servers, Intel has spent the past six years researching specialized architectures that can accelerate classic computing platforms, and has invested heavily in acquisitions. In March 2017, Intel spent US$15.3 billion to acquire Mobileye, the Israeli supplier of autonomous-driving technology, and Mobileye now leads Intel's autonomous-driving division. In 2016, Intel acquired the AI startup Nervana Systems and the vision-processing chip company Movidius. In June 2015, Intel spent US$16.7 billion to acquire Altera, a maker of programmable FPGA chips, and set up its Programmable Solutions Group.
In addition, Intel has increased its investment in and research on quantum computing, neuromorphic computing, and artificial intelligence (AI). This is seen as Intel staking out its position in future computing early, with the aim of disrupting the global computing landscape of the future.
● Quantum computing
Compared with traditional computing, the biggest advantage of quantum computing is that it processes data in parallel: at around 50 qubits, the state space it can represent already exceeds what traditional computers can handle, allowing it to tackle problems that traditional computers cannot solve within practical limits of time and memory.
For an accessible way to understand the difference between quantum computing and traditional computing, there is a widely cited example: toss a coin, and when it lands it shows either heads or tails; there are only two possible answers, and that is binary, traditional computing. Now set the coin spinning on its edge: while it spins it is, in a sense, both 1 and 0 at the same time. That is quantum computing.
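To make the "parallel" claim above concrete, here is an illustrative sketch of how the number of basis states an n-qubit register can hold in superposition grows (17 and 49 match the chips mentioned below):

    # 2**n basis states for n qubits; around 50 qubits the count becomes
    # impractical for classical machines to simulate exhaustively.
    for n in (1, 2, 10, 17, 49):
        print(f"{n:>2} qubits -> {2 ** n:,} basis states")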
For a data company like Intel, the purpose of betting on quantum computing is obvious. Data in every field, from autonomous driving to artificial intelligence, is growing explosively, yet the world's combined computing capability remains very limited. Once quantum computing can be put to use, it promises a disruptive leap.
In 2015, Intel began accelerating the development of quantum computing together with its academic partner QuTech. In October 2017, the two successfully tested a 17-qubit superconducting computing chip. At CES 2018, Intel officially delivered the first 49-qubit quantum computing test chip to QuTech.
From 17 qubits to 49 qubits in three months: the pace of iteration is obvious. For Intel, this means that although quantum computing has been in development for nearly 40 years and the long march to real-world use has barely taken its first step, there is no doubt that Intel's race toward quantum supremacy has begun.
● Neuromorphic prototype chip Loihi
The inspiration for neuromorphic computing comes from our current understanding of the brain's structure and how it computes. Intel's neuromorphic prototype chip Loihi includes digital circuitry that mimics the basic mechanisms of the brain, making machine learning faster and more efficient while requiring less computing power. Compared with the general-purpose chips used to train artificial intelligence systems, Loihi is up to 1,000 times more energy efficient.
Intel scientists used a Rosalind Franklin bobblehead doll as a training object, rotating it 360 degrees so that Loihi could memorize it from every angle. After a single training pass, Loihi could pick the bobblehead out from a small set of pictures that also included a rubber duck and a toy elephant in about four seconds. Even when shown only the back of the doll, Loihi could still identify it quickly. Although the experiment used less than 1% of Loihi's chip resources, it demonstrates the validity of the architecture.
Experts predict that robots will be the killer application of neuromorphic computing. In the smart home, a burglar entering a room might be recognized by a Loihi chip inside an intelligent surveillance camera, which would then raise an alarm. In automotive applications, Loihi could play the role of a "traffic officer," easing traffic pressure or recognizing the movements of cars and bicycles. In industry, Loihi could become a meticulous "supervisor," monitoring everything from ball bearings to the materials used to build roads and bridges. "Neuromorphic chips will help people offload some of their heavy, time-consuming work."
● Artificial intelligence
According to Song Jiqiang, director of the Intel China Research Institute, if the ultimate intelligence of AI is 100%, then today's AI is at about 10%. In other words, AI is still in its infancy, with huge room for growth.
Driven by innovation in computing and algorithms, the transformative power of artificial intelligence is expected to have a major impact on society. Intel is now using its strengths in advancing Moore's Law and its leading market position to bring a range of products, including Intel Xeon processors, Intel Nervana technology, Intel Movidius technology, and Intel FPGAs, to everything from the network edge to data centers and cloud computing platforms, meeting the distinctive demands of artificial intelligence workloads.
As mentioned earlier, Intel's distinctive strength lies in offering a diversified portfolio of products suited to different workloads and power budgets, rather than just one or two of them. Users can integrate their applications with general-purpose software and run them on Xeon processors, and when a job has more specific requirements they can switch from general-purpose processors to more specialized ones: an FPGA for low latency, Movidius for low power, or Xeon for high performance in scenarios such as the data center.
In Song Jiqiang's view, "claiming that a single chip or a single architectural approach can handle every problem is an exaggeration." Many AI applications have yet to take shape, and the foreseeable trend is that the dividends from algorithms alone will shrink and the marginal gains will gradually diminish. As the saying goes, "whether it is a mule or a horse, take it out for a walk and see": when AI applications truly need to land, everyone has to think in terms of products rather than academics. Different amounts of computing power suit different applications, and different hardware has to be tested by the market case by case.