Note

https://docs.google.com/document/d/15wdUJsimiYVPG6_HuLU_KqDxSKMSqarWvJ0dOlyf_ug/edit?tab=t.0

Why Concurrency

CPU utilization

High Throughput and Low Latency

Little’s Law
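(For reference, and assuming the usual statement of the law for a stable system: Little's Law says L = λW, i.e. the average number of requests in the system, L, equals the average arrival rate, λ (throughput), times the average time each request spends in the system, W (latency).)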

Moore’s Law

In 1965, Gordon Moore wrote a three-page paper that described both the consolidation of the electronics market toward integrated circuits, and the doubling of the number of components in an integrated circuit every year for at least a decade. In 1975, he revised this prediction to state that the number of components on an integrated circuit would double every two years. This prediction more or less held true until just recently, around 2012.

Several companies foresaw this slowdown in the rate Moore’s law predicted and began to investigate alternative ways to increase computing power. As the saying goes, necessity is the mother of innovation, and so it was in this way that multicore processors were born.

Amdahl’s law

Amdahl’s law describes a way in which to model the potential performance gains from implementing the solution to a problem in a parallel manner. Simply put, it states that the gains are bounded by how much of the program must be written in a sequential manner.

Amdahl’s law helps us decide whether parallelization is the right way to address performance concerns in our system.
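A minimal sketch of that bound in Go (the function name and the sample numbers are only for illustration): if p is the fraction of the program that can run in parallel and n is the number of processors, the maximum speedup is 1 / ((1 - p) + p/n).

package main

import "fmt"

// amdahlSpeedup returns the theoretical maximum speedup when a fraction p
// of the work can be parallelized across n processors.
func amdahlSpeedup(p, n float64) float64 {
	return 1 / ((1 - p) + p/n)
}

func main() {
	// If 90% of the program is parallelizable, 32 cores give at most ~7.8x.
	fmt.Printf("%.2f\n", amdahlSpeedup(0.9, 32))
}

Note how the sequential fraction dominates: even with unlimited cores, a program that is 90% parallelizable can never exceed a 10x speedup.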

For problems that are embarrassingly parallel, it is recommended that you write your application so that it can scale horizontally.

Hardware and OS

The central processing unit (CPU) manages the fundamental computational work of the computer.

When a program is started, it is loaded into memory in binary form and run as a process. The computer’s CPU understands only binary instructions, so that’s the form the program needs to be in when it runs.

CPUs with more than one core are called multi-core processors. Multiple cores allow the CPU to execute multiple instruction streams simultaneously (parallel execution).

On a single core, though, it is not possible to have processes or threads truly executing at the same time; the operating system rapidly switches between them, which only gives the appearance of parallelism.
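A minimal Go sketch of the difference (assuming Go is the language these notes target; the goroutine count is arbitrary): runtime.GOMAXPROCS(1) limits execution to a single core, so the goroutines can only be interleaved (concurrency), whereas the default setting of runtime.NumCPU() lets them run in parallel on a multi-core machine.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// How many cores the machine exposes to the Go runtime.
	fmt.Println("cores:", runtime.NumCPU())

	// Limit Go to one core: the goroutines below are interleaved on a
	// single core (concurrency) instead of running in parallel.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Println("goroutine", id)
		}(i)
	}
	wg.Wait()
}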