Last Updated: 2024-01-20

The physical parts of a computer.

What's the latest?

  • DDR5 (up to 4800 MT/s), LPDDR5 (up to 6400 MT/s)
  • USB4: spec finalized; adoption began around 2021. Thunderbolt 3 is incorporated into the USB4 standard. Up to 40 Gbps.
  • PCIe
    • 5.0: spec released in 2019, mass production in 2020.
    • 6.0: spec released in 2022.
  • Wi-Fi 7
  • Bluetooth LE 5.2
  • HDMI 2.1
  • DisplayPort 2.0


Interface replacements:

  • M.2 replaces mSATA
  • SATA replaces PATA
  • DisplayPort and HDMI replace DVI and VGA

Semiconductor Industry

Semiconductor company types:

  • fabless: design only, e.g. AMD
  • foundries: manufacture only, e.g. TSMC
  • IDM: both design and manufacture, e.g. Intel

Wafer sizes:

  • 200mm = 8 inch; ~6.6 million wafers per month expected by 2024. Market share in 2021: China 18%, Japan 16%, Taiwan 16%
  • 300mm = 12 inch
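A rough illustration of why the industry moved to larger wafers: usable area scales with the square of the diameter. This is a quick sketch that ignores edge exclusion and real yield effects:

```python
import math

def wafer_area_mm2(diameter_mm: float) -> float:
    """Area of a circular wafer, ignoring edge exclusion."""
    return math.pi * (diameter_mm / 2) ** 2

# A 300 mm (12 in) wafer vs a 200 mm (8 in) wafer
ratio = wafer_area_mm2(300) / wafer_area_mm2(200)
print(f"A 300 mm wafer has {ratio:.2f}x the area of a 200 mm wafer")  # 2.25x
```

So one wafer-size step yields more than twice the area per wafer, which is the main economic driver behind the migration.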


FPGA: Field-programmable gate arrays.

  • Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications.
  • For prototypes, smaller designs, or lower production volumes, FPGAs may be more cost-effective than an ASIC, even in production.
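The FPGA-vs-ASIC cost trade-off can be sketched as a simple break-even calculation. All figures below are made-up illustrative numbers, not real pricing:

```python
# Illustrative (made-up) numbers: an ASIC carries a large one-time NRE cost
# (design, verification, mask sets) but a low per-unit cost; an FPGA has
# no NRE but a higher per-unit cost.
asic_nre = 2_000_000   # hypothetical one-time ASIC cost, in dollars
asic_unit = 10         # hypothetical per-chip cost
fpga_unit = 150        # hypothetical per-FPGA cost

# Volume at which total ASIC cost drops below total FPGA cost
break_even_units = asic_nre / (fpga_unit - asic_unit)
print(f"ASIC becomes cheaper above ~{break_even_units:,.0f} units")
```

Below that volume the FPGA wins despite its higher per-unit price, which is why FPGAs are common in prototypes and low-volume production.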

VLSI: Very Large-Scale Integration

Very large-scale integration (VLSI) is the process of integrating or embedding hundreds of thousands of transistors on a single silicon semiconductor microchip.

VLSI is a successor to large-scale integration (LSI), medium-scale integration (MSI) and small-scale integration (SSI) technologies.

The microprocessor and memory chips are VLSI devices.

Before the introduction of VLSI technology, most ICs had a limited set of functions they could perform. An electronic circuit might consist of a CPU, ROM, RAM and other glue logic. VLSI enables IC designers to add all of these into one chip.


Mainframes

IBM has the largest market share.

Open Mainframe Project: managed by the Linux Foundation to encourage the use of Linux-based operating systems and open source software on mainframe computers.

Supercomputers vs Mainframes

  • Supercomputers: used for scientific and engineering problems (high-performance computing) which crunch numbers and data. Measured in floating point operations per second (FLOPS) or in traversed edges per second (TEPS).
  • Mainframes: focus on transaction processing. Measured in millions of instructions per second (MIPS).

TOP500: https://www.top500.org/

Quantum Computing



ASIC: Application-Specific Integrated Circuit. Customized for a particular use. Notable examples:

  • TPU: Tensor processing unit, developed by Google to accelerate computing in Neural Networks.
  • AWS Nitro System: ASIC designed by Annapurna Labs is used to offload network, storage and management work from the main CPU.
  • High-efficiency Bitcoin miners.


Desk Setup

  • Standing desk
  • Monitor arm
  • Chair
  • Camera cover (for privacy)
  • Coffee machine

Mini Computers

Beyond Moore’s law

Moore’s law is slowing down. Ways to keep improving performance:

  • domain-specific accelerators: e.g. TPUs or other AI chips; special chips to process videos
  • more efficiency in the software stack: e.g. improved profile-guided compiler tuning, or the “software-defined” server to better match workloads to hardware features.

Data Center

Data Center = software + servers + racks + power supplies + cooling


Power equipment:

  • power supply units (PSUs)
  • uninterruptible power supply (UPS) units

The Open Compute Project (OCP) is an organization that shares designs of data center products and best practices among companies.

Example Devices

  • Servers:
    • HPE DL380 Gen10 Plus servers
  • Firewalls:
    • PANW firewall
  • Switches:
    • Cisco 93180 switch
  • Cisco Optics
  • Power
    • Cisco - Unified Computing System (UCS)
  • HSM (Hardware Security Modules)
    • Thales HSM
  • NTP servers: NTP (Network Time Protocol) is the standard for time synchronization. E.g. SyncServer S650.

Data Center Metric

PUE: power usage effectiveness

PUE = Total Facility Power / IT Equipment Power

For example:

  • PUE=1.0: all power is used by servers
  • PUE=2.0: half of power is used by the building, half by servers
  • "The average PUE for all Google Data Centers is 1.10, although we could boast a PUE as low as 1.06 when using narrower boundaries." https://www.google.com/about/datacenters/efficiency/


Colocation providers:

  • Equinix


HPE Edgeline EL8000: 5U chassis, 17 in deep.


CXL

CXL (Compute Express Link) has emerged as the clear winner of the CPU interconnect wars.

The CXL spec: an open industry standard that provides a cache-coherent interconnect between CPUs and:

  • memories
  • accelerators: like GPUs
  • smart I/O devices: like DPUs
  • other peripherals.

Example usage:

  • adding memory via a PCIe slot (e.g. putting a CXL memory module into an empty PCIe 5.0 slot)
  • attaching a modest amount of DDR5 directly to the CPU and using a slower, albeit cheaper, DDR4 CXL memory-expansion module as part of a tiered-memory hierarchy.
  • deploying a standalone memory appliance packed with terabytes of inexpensive DDR4 that can be accessed by multiple systems simultaneously (like a shared storage array). Memory can be allocated to any machine in the rack, no longer tied to a single CPU.


Spec versions:

  • 3.0: adds support for the PCIe 6.0 interface, memory pooling, and more complex switching and fabric capabilities. It provides a means for direct peer-to-peer communication over a switch, or even across the fabric, so peripherals (say, two GPUs, or a GPU and a memory-expansion module) could theoretically talk to one another without the host CPU's involvement, eliminating the CPU as a potential chokepoint.
  • 2.0: allowed only a single accelerator to be attached to any given CXL switch.



USB Port Colors

  • White = USB 1.x
  • Black = USB 2.x
  • Blue = USB 3.0
  • Teal, purple, and violet = USB 3.1
  • Red, yellow, or orange: do not indicate a USB standard, but an additional function: the port stays active even when the computer is asleep, e.g. for charging smartphones.