
Computer Architecture

Research on the design of computer systems, using emerging hardware to provide the required functionality.

The performance of computer systems has, to a large extent, driven the computer revolution. Faster computers allow us to solve more complex problems, predict the weather better, or play more advanced (and visually appealing) games. Historically, performance increases in computer systems have been driven primarily by two observations: Moore's law (transistors become smaller) and Dennard scaling (transistor power density stays constant). Unfortunately, Dennard scaling ended around 2005, Moore's law is approaching its end, and it is unclear how to continue this amazing performance scaling in a future "post-Moore" world.

Our research group focuses on designing and exploring computer systems that are different from the classical computer systems that surround us today. These "post-Moore" systems can be faster or more energy-efficient for certain applications than traditional (von Neumann-based) systems that contain a central processing unit (CPU) or graphics processing unit (GPU). More specifically, we research:

  • Neuromorphic ("Brain-inspired") Architectures: The human brain is a fantastic computing device consisting of over 80 billion parallel compute units (called neurons), each with thousands of connections (synapses), while consuming as little as 20 Watts of power. Our group is researching how to build such "digital brains" on hardware and how to leverage their computational power to solve real-world applications (see the sketch after this list).
  • Field-Programmable Gate Arrays (FPGAs): FPGAs are reconfigurable devices, developed in the 1980s, that were originally created to emulate digital circuits prior to tape-out. Today, they are far more than simple digital-circuit emulators: they can provide users with tens of TFLOP/s of performance while offering a reprogrammable way of freely moving data across the silicon. We use FPGAs as the vehicle for creating new architectures and accelerators, where we can empirically quantify and compare our designs without the need to tape them out. Furthermore, we use FPGAs in some of our courses, such as the future IS1500 course, which will use RISC-V cores on FPGAs.
  • Coarse-Grained Reconfigurable Architectures (CGRAs): CGRAs can offer computing performance above that of CPUs and GPUs while having a programmable interconnect. Architecturally, they lie in between FPGAs and CPUs/GPUs. Our group actively investigates different forms of CGRAs -- both general-purpose and domain-specific.
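
To give a flavour of the neuromorphic modelling mentioned in the first bullet, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron, one of the simplest spiking-neuron models used when building "digital brains". It is a minimal, illustrative Python sketch; the function name and all parameter values are assumptions chosen for the example and do not correspond to any particular neuromorphic chip or code from the group.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron.
    # All parameters are illustrative defaults, not tuned to any hardware.
    def simulate_lif(input_current, dt=1e-3, tau_m=20e-3,
                     v_rest=0.0, v_threshold=1.0, v_reset=0.0):
        """Return the membrane-potential trace and spike times of one neuron."""
        v = v_rest
        trace, spike_times = [], []
        for step, i_in in enumerate(input_current):
            # Leaky integration: the potential decays toward v_rest and is
            # driven upward by the input current.
            v += (-(v - v_rest) + i_in) * (dt / tau_m)
            if v >= v_threshold:            # threshold crossing: emit a spike
                spike_times.append(step * dt)
                v = v_reset                 # reset the membrane potential
            trace.append(v)
        return np.array(trace), spike_times

    # A constant supra-threshold input produces a regular spike train.
    trace, spikes = simulate_lif(np.full(200, 1.5))
    print(f"{len(spikes)} spikes over {len(trace)} simulated steps")

In a real neuromorphic system, many such neurons run in parallel and communicate through weighted synapses; the sketch above only captures the dynamics of a single unit.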

More information about the group can be found on Artur Podobas's profile page.