Hi, dear user!
Let's take a break from everyday life and dive headlong into the world of microprocessors.
The difficulties of processor production!
Technically, a modern microprocessor takes the form of a single large integrated circuit consisting of several billion elements, making it one of the most complex structures ever created by man. The key elements of any microprocessor are discrete switches: transistors. By blocking and passing electric current (switching off and on), they let computer logic circuits operate in two states, that is, in the binary system.
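To make the "transistor as a switch" idea concrete, here is a minimal toy model in Python. The function names (`nmos`, `pmos`, `nand`) are illustrative, not any real EDA API; the point is just that on/off switches compose into binary logic.

```python
# A field-effect transistor acts as a voltage-controlled switch.
# Toy model: function names are hypothetical, chosen for illustration only.

def nmos(gate: int) -> bool:
    """NMOS-style switch: conducts when the gate is 1."""
    return gate == 1

def pmos(gate: int) -> bool:
    """PMOS-style switch: conducts when the gate is 0."""
    return gate == 0

def cmos_inverter(a: int) -> int:
    """CMOS NOT gate: the PMOS path pulls the output to 1, the NMOS path to 0."""
    return 1 if pmos(a) else 0

def nand(a: int, b: int) -> int:
    """NAND: output is 0 only when both series NMOS switches conduct."""
    return 0 if (nmos(a) and nmos(b)) else 1

# Any binary logic function can be built from NAND gates alone:
print([nand(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [1, 1, 1, 0]
```

NAND is singled out because it is functionally complete: inverters, AND, OR and everything else can be wired up from it.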
Transistor sizes are measured in nanometers. Most of the work in processor manufacturing is done not by people at all but by robotic mechanisms, which carry the silicon wafers from stage to stage.
One nanometer is one billionth of a meter, and more than 2,000 transistor gates built on a 45-nanometer process can fit across the width of a single human hair.
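The hair comparison is easy to sanity-check. The calculation below assumes a hair diameter of about 90 micrometres (a typical value; real hairs range roughly 50 to 120 micrometres).

```python
# How many 45 nm gates fit across a human hair?
# Assumption: hair diameter ~90 micrometres (typical, not exact).
hair_diameter_nm = 90_000      # 90 um expressed in nanometres
gate_length_nm = 45            # 45 nm process node
gates_across_hair = hair_diameter_nm // gate_length_nm
print(gates_across_hair)       # 2000
```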
The production cycle of each wafer can take two to three months. Wafers are made from sand: silicon is the second most abundant element in the Earth's crust after oxygen. Through chemical reactions, silicon dioxide (SiO2) is purified from raw material into high-purity silicon. Microelectronics requires monocrystalline silicon, which is grown from a melt. It all starts with a small seed crystal dipped into the molten silicon, which gradually grows into a single-crystal boule the size of a person. The main defects are then removed, and the boule is sliced into discs with special wires coated in diamond powder.
Each disc is carefully polished to a completely flat and smooth surface, down to the atomic level. Each wafer is about one millimeter thick, solely so that it does not break or bend, that is, so that it can be handled comfortably. The diameter of each wafer is exactly 300 millimeters, and a little later hundreds or even thousands of processors will grow on this area.
The chip itself consists of silicon topped with up to nine metallization layers made of copper. So many levels are needed because the transistors located on the surface of the silicon must be interconnected according to a particular scheme, and it is simply impossible to route all of those connections on a single level.
In essence, these layers play the role of connecting wires, only on a much smaller scale.
So that the wires do not short each other, they are separated by an oxide layer with a low dielectric constant. The basic unit cell of the processor is the field-effect transistor.
The first semiconductor devices, including the first transistor, were made of germanium. But as soon as manufacturers began making transistors with a special insulating gate layer (a thin dielectric film that controls switching on and off), germanium quickly died out, giving way to silicon.
For the last four decades, silicon dioxide (SiO2) has been used as the main gate dielectric material, thanks to its manufacturability and the ability to systematically improve transistor characteristics as their size decreases. The scaling rule is simple: as the transistor shrinks, the thickness of the dielectric must decrease proportionally.
For example, in chips built on the 65-nanometer process, the SiO2 gate dielectric layer was about 1.2 nanometers thick, which is equivalent to five atomic layers.
This is in fact the physical limit for the material: with further shrinking of the transistor, and hence of the silicon dioxide layer, the leakage current through the gate dielectric rises sharply, leading to significant power losses and excess heat.
At that point, the silicon dioxide layer ceases to be a barrier to quantum tunneling of electrons, and reliable control of the transistor's state is lost.
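The "five atomic layers" figure can be checked with one division. The per-layer spacing of roughly 0.24 nm used below is an approximate, commonly quoted value for SiO2, assumed here for illustration.

```python
# Rough check of "1.2 nm is about 5 atomic layers" of SiO2.
oxide_thickness_nm = 1.2       # gate oxide at the 65 nm node (from the text)
layer_spacing_nm = 0.24        # assumed thickness of one SiO2 molecular layer
layers = oxide_thickness_nm / layer_spacing_nm
print(round(layers))           # 5
```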
Accordingly, even with perfect manufacturing of every transistor, of which a modern processor contains several billion, the incorrect operation of just one of them can mean incorrect operation of the entire processor logic, which can easily lead to catastrophe.
This matters because microprocessors control almost all digital devices, from modern cell phones to automotive fuel systems. Yet the miniaturization of transistors did not run up against the laws of physics, and computer progress, as we can see, did not stop. That means the dielectric problem was somehow solved: on moving to 45 nanometers, Intel began using a new material, a so-called high-k dielectric, which replaced the dead-end thin layer of silicon dioxide.
It is a layer based on an oxide of the metal hafnium, with a high dielectric constant of about 20 versus 4 for SiO2.
The high-k dielectric is physically thicker, yet it reduced the leakage current by more than a factor of ten while preserving correct and stable control of the transistor.
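Why a thicker layer can still do the same job follows from the parallel-plate relation: capacitance per unit area is proportional to k divided by thickness, so at equal capacitance the allowed thickness scales with k. A quick sketch using the constants from the text:

```python
# Equal gate capacitance: C per area = k * eps0 / t, so t scales with k.
k_sio2, k_highk = 4.0, 20.0    # dielectric constants quoted in the text
t_sio2_nm = 1.2                # old SiO2 gate oxide thickness
t_highk_nm = t_sio2_nm * (k_highk / k_sio2)
print(t_highk_nm)              # 6.0 nm: five times thicker, hence far less tunnelling
```

A five-times-thicker barrier is what suppresses the quantum tunnelling leakage described above.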
The new dielectric proved to be poorly compatible with polysilicon gates, but this did not become an obstacle: to increase switching speed, the gate in the new transistors was made of metal.
In 1965, Gordon Moore, one of the future founders of Intel, recorded an empirical observation that later became the famous law bearing his name. Plotting the growth in the capacity of memory chips, he found a curious pattern: new models of microcircuits appeared at roughly equal intervals of 18 to 24 months after their predecessors, and the capacity of the chips approximately doubled each time. Moore later generalized the pattern, predicting that the number of transistors in a microprocessor would double every two years.
By constantly creating innovative technologies, Intel has kept Moore's law on track: for over forty years the number of transistors has continued to grow, while the physical size of the processor has remained roughly unchanged.
There is no secret here; it becomes clear if you look at the following relationship: once every two years, topological dimensions shrink by a factor of 0.7.
Smaller transistors switch faster, cost less, and consume less power. The new process with a metal gate yielded a 22 percent increase in transistor performance compared with 45 nanometers, along with the highest density of elements, which in turn demanded the highest current density.
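The 0.7x figure is exactly what makes the doubling work: shrinking both dimensions by 0.7 roughly halves the area of each transistor. A short check, with an illustrative (assumed, not Intel-specific) starting transistor count for the projection:

```python
# A 0.7x linear shrink per node halves transistor area, doubling density.
linear_shrink = 0.7
area_shrink = linear_shrink ** 2          # each transistor takes 0.49x the area
density_gain = 1 / area_shrink            # ~2.04x more transistors per die
print(round(area_shrink, 2), round(density_gain, 2))   # 0.49 2.04

# Moore's-law projection, doubling every 2 years (starting count is assumed).
transistors = 2_000_000_000
for year in range(0, 9, 2):
    print(f"year {year}: {transistors:,} transistors")
    transistors *= 2
```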
Production
Intel produces processors in three countries. As of 2010, the company had four factories for the mass production of processors using 32-nanometer technology.
Both the design of these plants and the way they operate hold many interesting details.
Such a plant costs about $5 billion, and if several are built at once, the investment can safely be multiplied.
Considering that the process technology changes roughly once every two years, the plant has about four years to recoup the $5 billion invested in it and turn a profit. The obvious conclusion: economics strongly dictates the pace of technical progress. Yet despite all these huge numbers, the cost of producing a single transistor continues to fall, and it is now less than a billionth of a dollar.
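The "billionth of a dollar" claim can be made plausible with a back-of-the-envelope calculation. Every volume figure below is an assumption chosen for illustration, not Intel data.

```python
# Back-of-the-envelope cost per transistor over a plant's 4-year payback window.
# All volume figures are assumptions for illustration only.
plant_cost_usd = 5e9               # plant cost from the text
years = 4                          # payback window from the text
wafers_per_year = 1_000_000        # assumed fab output
chips_per_wafer = 500              # assumed good dies per 300 mm wafer
transistors_per_chip = 2e9         # assumed transistor count per chip
total = years * wafers_per_year * chips_per_wafer * transistors_per_chip
cost_per_transistor = plant_cost_usd / total
print(cost_per_transistor)         # ~1.25e-9 USD, i.e. about a billionth of a dollar
```

Even with these rough numbers, the per-transistor cost lands on the order of a nanodollar, matching the claim above.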
Do not assume that once a few factories move to 14 nanometers, everything will suddenly be manufactured on that process. Chipsets and other peripheral circuits simply do not need it in most cases, so they continue to be made at 22 nanometers.
As the gate area of a transistor shrinks, the capacitance under the gate dielectric must be maintained for the transistor to keep working. That meant thinning the dielectric, and when further thinning became impossible, finding a material with a higher dielectric constant. When will the silicon era end? The exact date is still unknown, but it is certainly not far off. Silicon will definitely soldier on at 14 nanometers and most likely remain at 7 nanometers, but after that the most interesting part begins.
The periodic table is large, and there is plenty to choose from, but the limits are most likely not only chemical. Processor efficiency can be raised either by shrinking topological dimensions, as is done now, or by using other compounds with higher carrier mobility: perhaps gallium arsenide, perhaps the much-discussed and promising graphene, whose carrier mobility, incidentally, is hundreds of times higher than that of silicon.
But here too there are problems. Current technology is designed around wafers 300 millimeters in diameter, the amount of gallium arsenide needed for such a wafer simply does not exist in nature, and graphene of that size is still extremely difficult to manufacture.
It can be made, but with many defects and with problems of reproducibility, doping, and so on. Most likely, the next step will be depositing monocrystalline gallium arsenide onto silicon, and only later graphene. Microelectronics may also advance not only by improving technology but along the path of fundamentally new logic; that, too, cannot be ruled out.
In general, the struggle now is over technology and high mobility, but one thing is clear: there is no reason for progress to stop!
Tick-Tock
The process of making processors has two large parts: first, you need the manufacturing technology itself, and second, you need an understanding of what to make and how, that is, the architecture, the way the transistors are interconnected. If a new architecture and a new technology are introduced at the same time, then in case of failure it is hard to find the guilty party: some will say the architects are to blame, others the technologists. Following such a strategy is very short-sighted. At Intel, the introduction of new technology and new architecture are separated in time: in one year, a new process technology is introduced while chips are produced on the proven architecture. If something goes wrong, the technologists are to blame. Once the new technology is worked out, the architects build a new architecture on it, and if something then fails on the proven technology, the architects are at fault.
This strategy was named Tick-Tock, a long-term microprocessor development strategy announced by Intel at the Intel Developer Forum in September 2006.
The development cycle is divided into two stages, tick and tock!
TICK means shrinking the process technology, with relatively small microarchitecture improvements.
TOCK means releasing processors with a new microarchitecture on the existing process technology.
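The alternation is easy to picture as a loop over process nodes. The node sequence below (65, 45, 32, 22 nm) follows Intel's actual roadmap from the text; the loop itself is just an illustration of the cadence.

```python
# Tick-Tock in miniature: alternate a process shrink (tick) with a new
# microarchitecture on the now-proven process (tock).
nodes_nm = [65, 45, 32, 22]                 # node sequence mentioned in the text
phases = []
for prev, node in zip(nodes_nm, nodes_nm[1:]):
    phases.append(f"tick: shrink {prev} nm -> {node} nm, same architecture")
    phases.append(f"tock: new microarchitecture on the proven {node} nm process")
print("\n".join(phases))
```

Note how each step changes only one variable (process or architecture), which is exactly what makes failures attributable.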
According to Intel's plans, each half of the cycle should take about a year. The current pace of technological development requires fantastic investment in research: Intel pours $4-5 billion into it annually. Part of the work happens inside the company, but a great deal comes from outside.
It is almost impossible today to simply maintain an in-house laboratory like Bell Labs, the forge of Nobel laureates. As a rule, ideas are born in universities, and so that universities know exactly what is worth working on and which technologies will be in demand, the semiconductor companies formed a consortium that publishes a kind of roadmap describing the problems the industry will face over the next three to seven years.
In theory, any company involved in a project can literally go to a university and use one or another innovative development, but the rights to it usually remain with the university developer; this approach is called open innovation!
After surviving selection at the engineering level and testing in real conditions, an idea has every chance of becoming a new technology. Rising productivity drives up the cost of factories, and that in turn leads to natural selection: for example, to pay for itself in four years, each Intel factory must continuously turn out at least a hundred wafers, with thousands of chips on each. If you run the numbers, it becomes clear why Intel holds about 80 percent of the global processor market: with a smaller share, the company simply could not recoup the costs. The conclusion: owning both your own design and your own production is very expensive these days; at minimum, you need a huge market. As we can see, fewer and fewer companies with their own design and production can keep up with technical progress.
What else has changed in recent years?
Until about 2004, the statement "the higher the processor frequency, the better" was quite fair, but starting in 2004-2005 processor frequencies almost stopped growing, as they ran into a kind of physical limit. Performance is now raised through multiple cores and the parallel execution of tasks. As a result, from that point on the role of software has grown dramatically, and the programmer's importance will only keep gaining momentum in the near future.
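What "parallel execution of tasks" means for a programmer can be sketched in a few lines with Python's standard library: split the work into independent chunks and hand them to a pool of workers. This is only an illustration of the pattern; for genuinely CPU-bound work in Python you would typically use `ProcessPoolExecutor` rather than threads.

```python
# With clock speeds flat, throughput comes from running independent chunks
# of work in parallel across cores.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk: range) -> int:
    """Work done by one worker: sum of squares over its chunk."""
    return sum(i * i for i in chunk)

# Split 0..999_999 into four independent chunks, one per "core".
chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))
print(total)   # same result as the serial loop, but the chunks ran concurrently
```

The result is identical to a serial loop; the gain is that independent chunks can proceed simultaneously, which is exactly the shift multicore processors forced onto software.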
In general, summarizing the above:
Moore's law continues to operate, while the cost of developing new technologies and materials, as well as the cost of maintaining factories, keeps rising.
Performance also continues to grow, increasingly through parallelism and the development of software.
Thanks to Administrator!
Related threads:
[How it Works #4] Type of Touchscreen and How it Work!
[How it Works #5] Type of Display and How it Work!
[How it Works #6] Camera in smartphone, and how it works!
[How it Works #7] What is the battery in your phone or smartphone?
[How it Works #8] What is a processor in a mobile phone?
[How it Works #9] What is a RAM in smartphone!