An integrated circuit, package open, revealing the etched silicon in which transistors can be found
Despite their many advantages, developing and producing integrated circuits required, and still requires, significant initial investment. Their cost advantage only materialises at scale, and the first integrated circuits were costly and far from the optimised versions we have now. But with the Cold War raging in the late 1950s and the USSR's space programme successfully sending Sputnik and Yuri Gagarin into space, the United States invested in the nascent technology, first through NASA and then through the military, allowing it to develop further.
For a deep dive into the history of microchips, have a look at “Chip War” by Chris Miller.
Following this initial spurt of funding and trust in the 1960s, other countries and sectors started adopting, researching and developing microchips, eventually making them a core component of many industries, from Texas Instruments’ first laser-guided bomb to IBM’s first personal computer.
The success of integrated circuits stems in part from their near-limitless range of potential applications. Chips can store information and perform binary operations very quickly and at large scale, abilities that can be leveraged to handle complex tasks. With integrated circuits, one can calculate weather forecasts in real time, display and animate three-dimensional objects on a screen, or process and interpret data from a sensor: tasks that require processing a lot of information very fast. Application-specific integrated circuits (ASICs) have been developed that are optimised for particular tasks, while other chips remain general-purpose and can be used for many different computational tasks and roles.
Consequently, by the 1990s, integrated circuits could be found in many everyday objects. The growth of the internet greatly increased demand for the powerful chips that act as the brains of personal computers and servers. Thanks to this development and similarly important waves of demand in other markets over time, microchips have roughly followed Moore’s law and doubled their number of transistors every two years, making increasingly powerful chips available and promising even more power with each generation.
“Moore’s law” is based on a prediction by Gordon Moore that industry and research would be able to double the transistor count of microchips every two years. It is an economic rule of thumb that has proved reliable, in part because of competition and in part because it became a self-fulfilling prophecy.
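The doubling rule is simple enough to express directly. As an illustrative sketch (not a model of actual industry output), the snippet below projects transistor counts under a strict two-year doubling, anchored on the Intel 4004 of 1971 and its roughly 2,300 transistors; real chips only approximately track this curve.

```python
# Moore's law as a rule of thumb: transistor count doubles roughly every
# two years. Anchor point: the Intel 4004 (1971), ~2,300 transistors.
BASE_YEAR = 1971
BASE_COUNT = 2_300

def projected_transistors(year: int) -> int:
    """Project a transistor count for `year` under a strict 2-year doubling."""
    doublings = (year - BASE_YEAR) / 2
    return round(BASE_COUNT * 2 ** doublings)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{projected_transistors(year):,}")
```

Twenty years means ten doublings, i.e. a factor of 1,024, which is why the curve runs away so quickly: the same rule that gives a few thousand transistors in the early 1970s projects tens of billions by the 2020s.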
In 2025, chips are virtually everywhere: it’s almost impossible to buy a household appliance without a microchip in it, and they are incredibly diverse, with many being ASICs, be it in your washing machine or your television. Up to 3,000 chips can be found in a car, and they are in batteries for power management, in cameras, medical devices, audio effect pedals… Some are small, simple, cheap and easily available, designed for very basic purposes such as amplifying a signal. Others are incredibly complex and expensive and fill the racks of data centres to run resource-intensive processes, like the highly in-demand Nvidia H100 used by many generative AI companies.
With this picture in mind, we can see how important and ubiquitous chips have become. But what integrated circuits are used for is only the first part of the story. How they are developed, produced, licensed and traded is where the real challenges and concerns for our privacy and security come in.
The global microchip supply chain
There are a number of particularities of the modern supply chain behind integrated circuits that give rise to challenges for privacy and security.
First is their globalised nature. As with many modern industries, the complex process of producing a microchip is broken down into different steps, such as the extraction of silicon, the production of silicon wafers (on which modern chips are built), the design of microchips, the design of the machinery used to produce microchips, the licensing of instruction set architectures, and so on. Different regions and countries have specialised in these different steps, meaning that the supply chain is truly global.
Directly related to this globalised supply chain is the high concentration of expertise. For example, China is by far the main producer of silicon. ASML, a Dutch company, is the only company able to produce the USD 380M photolithography machines used to produce cutting-edge chips (etching transistors that are smaller than 10nm!). The Taiwan Semiconductor Manufacturing Company (TSMC) is the undisputed leader in manufacturing these cutting-edge chips using ASML machines (among other things), and does so for all the biggest chip companies. Meanwhile, the design of the chips themselves is not something TSMC has expertise in: that is up to its clients, including Nvidia, Apple, Intel, and AMD.
This concentration means there is a very high barrier to entry, particularly with regard to the production of complex cutting-edge chips. Costs depend on the need for R&D and the scale of production: you need to produce a massive quantity of powerful, high-quality chips to repay the costs of research and development, acquisition of machines, raw materials, labour and so on.
Some chips that rely on open standards can be mass produced using old manufacturing techniques (e.g. the 555 timer chip). These can be extremely cheap to produce and are therefore adopted by a larger number of players, but they don’t provide anywhere near the complex computing power of higher-end chips.