Computer Architecture


Updated: 2022-08-06

The physical parts of a computer.

What's the latest?

  • DDR5 (up to 4800 MT/s), LPDDR5 (up to 6400 MT/s)
  • USB4: spec ready; adoption expected to start from 2021; incorporates the Thunderbolt 3 protocol; up to 40 Gbps
  • PCIe
    • 5.0: introduced in 2019, mass production in 2020.
    • 6.0: announced in 2022.
  • Wi-Fi 6E
  • Bluetooth LE 5.2
  • HDMI 2.1
  • DisplayPort 2.0


Interface replacements:

  • M.2 replaces mSATA
  • SATA replaces PATA
  • DisplayPort and HDMI replace DVI and VGA
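As a back-of-the-envelope check on these link speeds, a sketch (the 0.9 efficiency factor is an assumed overhead discount, not a spec value):

```python
def transfer_time_seconds(size_gb: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Rough time to move size_gb gigabytes over a link rated link_gbps
    gigabits per second, discounting protocol overhead via efficiency
    (0.9 here is an assumption, not a number from any spec)."""
    bits = size_gb * 8  # gigabytes -> gigabits
    return bits / (link_gbps * efficiency)

# A 100 GB file over USB4's 40 Gbps link:
print(round(transfer_time_seconds(100, 40), 1))  # 22.2 (seconds)
```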

Semiconductor Industry

Semiconductor company types:

  • fabless: design only, e.g. AMD
  • foundries: manufacture only, e.g. TSMC
  • IDM: both design and manufacture, e.g. Intel

Wafer sizes:

  • 200 mm = 8 inch; ~6.6 million wafers per month projected by 2024. Market share in 2021: China 18%, Japan 16%, Taiwan 16%
  • 300 mm = 12 inch
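A rough feel for why larger wafers matter: the classic dies-per-wafer approximation (a sketch that ignores edge exclusion, scribe lines, and yield):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order approximation: wafer area / die area, minus a
    correction term for partial dies lost around the circular edge."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

print(dies_per_wafer(300, 100))  # 300 mm wafer, 100 mm^2 die -> 640
print(dies_per_wafer(200, 100))  # same die on a 200 mm wafer  -> 269
```

More than 2x the dies per wafer for 2.25x the area, since proportionally fewer dies are wasted at the edge.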


FPGA: Field-programmable gate arrays.

  • Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications.
  • For prototypes, smaller designs or lower production volumes, FPGAs may be more cost effective than an ASIC design, even in production.


Mainframe

IBM has the largest market share in mainframes.

Open Mainframe Project: managed by the Linux Foundation to encourage the use of Linux-based operating systems and open source software on mainframe computers.

Supercomputers vs Mainframes

  • Supercomputers: used for scientific and engineering problems (high-performance computing) which crunch numbers and data. Measured in floating point operations per second (FLOPS) or in traversed edges per second (TEPS).
  • Mainframes: focus on transaction processing. Measured in millions of instructions per second (MIPS).
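Peak FLOPS can be estimated from first principles; a sketch using a hypothetical CPU (the core count, clock, and FLOPs/cycle below are illustrative, not a real part):

```python
def peak_flops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak FLOPS = cores * clock * FLOPs issued per cycle.
    flops_per_cycle depends on SIMD width and FMA support; e.g. AVX-512
    with two FMA units gives 32 double-precision FLOPs per cycle."""
    return cores * clock_ghz * 1e9 * flops_per_cycle

# A hypothetical 64-core, 2.0 GHz CPU with 32 FLOPs/cycle:
print(peak_flops(64, 2.0, 32) / 1e12)  # 4.096 (TFLOPS)
```

Real sustained performance (e.g. LINPACK scores) is always some fraction of this theoretical peak.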

Quantum Computing



ASIC: Application-Specific Integrated Circuit. Customized for a particular use. Notable examples:

  • TPU: Tensor processing unit, developed by Google to accelerate computing in Neural Networks.
  • AWS Nitro System: ASIC designed by Annapurna Labs is used to offload network, storage and management work from the main CPU.
  • High efficiency Bitcoin miner.


Desk setup:

  • Standup desk
  • Monitor arm
  • Chair
  • Camera cover (for privacy)
  • Coffee machine

Mini Computers

Beyond Moore’s law

Moore’s law is slowing down. Ways to keep improving performance:

  • domain-specific accelerators: e.g. TPUs or other AI chips; special chips to process videos
  • more efficiency in the software stack: e.g. improve profile-guided compiler tuning, or the “software-defined” server to better match workloads to hardware features.

Data Center

Data Center = software + servers + racks + power supplies + cooling

Server racks:

  • 1 rack unit (1U) = 1.75 inches (44.45 mm) high; the standard rack width is 19 inches (482.6 mm).
  • The most common rack height: 42U.
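Rack planning arithmetic, as a sketch (reserving 2U for a top-of-rack switch is an illustrative assumption, not a standard):

```python
def rack_fill(rack_u: int, server_u: int, reserved_u: int = 0) -> tuple[int, int]:
    """How many servers of height server_u fit in a rack_u rack after
    reserving reserved_u for switches/PDUs. Returns (count, spare U)."""
    usable = rack_u - reserved_u
    return usable // server_u, usable % server_u

# 2U servers in a 42U rack, with 2U reserved for a top-of-rack switch:
print(rack_fill(42, 2, reserved_u=2))  # (20, 0)
```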

Blade Server: a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy.


Power equipment:

  • power supply units (PSUs)
  • uninterruptible power supply (UPS) units

The Open Compute Project (OCP) is an organization that shares designs of data center products and best practices among companies.

Example Devices

  • Servers:
    • HPE DL380 Gen10 Plus servers
  • Firewalls:
    • PANW firewall
  • Switches:
    • Cisco 93180 switch
  • Cisco Optics
  • Power
    • Cisco - Unified Computing System (UCS)
  • HSM (Hardware Security Modules)
    • Thales HSM
  • NTP: the standard protocol for time synchronization
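NTP represents time as seconds since 1900-01-01, so converting to Unix time is a fixed-offset shift; a minimal sketch of the timestamp conversion (not a full NTP client):

```python
import struct

NTP_UNIX_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def ntp_to_unix(raw8: bytes) -> float:
    """Convert an 8-byte NTP timestamp (32-bit seconds since 1900 plus a
    32-bit binary fraction, both big-endian) to a Unix timestamp."""
    secs, frac = struct.unpack("!II", raw8)
    return secs - NTP_UNIX_DELTA + frac / 2**32

# NTP seconds 3913056000, zero fraction -> Unix 1704067200 (2024-01-01 UTC)
ts = struct.pack("!II", 3913056000, 0)
print(int(ntp_to_unix(ts)))  # 1704067200
```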

Data Center Metric

PUE: Power Usage Effectiveness

PUE = Total Facility Power / IT Equipment Power

For example:

  • PUE=1.0: all power is used by servers
  • PUE=2.0: half of power is used by the building, half by servers
  • "The average PUE for all Google Data Centers is 1.10, although we could boast a PUE as low as 1.06 when using narrower boundaries." https://www.google.com/about/datacenters/efficiency/
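The PUE formula is simple enough to check by hand; a minimal sketch:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (all power reaches the IT gear)."""
    return total_facility_kw / it_equipment_kw

print(pue(1100, 1000))  # 1.1 -> roughly Google's reported fleet-wide average
print(pue(2000, 1000))  # 2.0 -> half the power goes to cooling and overhead
```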


Edge computing example: HPE Edgeline EL8000.

Baseboard Management Controller (BMC)

A specialized processor for remote monitoring and management of a host system (a computer, network server or other hardware device).

Usually an ARM-based SoC (System on Chip) with graphics and control logic built in.

Accessed remotely via a dedicated or shared network connection, the BMC has multiple connections to the host system, giving it the ability to monitor hardware via sensors, flash BIOS/UEFI firmware, provide console access via serial or physical/virtual KVM, power-cycle the host, and log events.

Allows a system administrator to perform many different monitoring and management tasks remotely without having to be physically located next to and connected to the system. All modern servers and other devices used in a data center (such as switches, storage devices, power supply devices etc.) now have a BMC.


HPE iLO (Integrated Lights-Out): a proprietary embedded server management technology by HPE.

  • provides out-of-band management facilities
  • the physical connection is an Ethernet port found on most ProLiant servers
  • makes it possible to perform activities on an HP server from a remote location
  • the iLO card has a separate network connection (and its own IP address) to which one can connect via HTTPS
  • supports power up, reset, and mounting of virtual media
  • the iLO firmware emulates the BMC functionality


NTP time server example: SyncServer S650.