1.5 Applications of Compiler Technology
Implementation of high-level programming languages
A high-level programming language defines a programming abstraction: the programmer
expresses an algorithm using the language, and the compiler must translate that program to
the target language. Generally, higher-level programming languages are easier to program in,
but are less efficient, that is, the target programs run more slowly. Programmers using a lowlevel language have more control over a computation and can, in principle, produce more
efficient code. Unfortunately, lower-level programs are harder to write and — worse still —
less portable, more prone to errors, and harder to maintain. Optimizing compilers include
techniques to improve the performance of generated code, thus offsetting the inefficiency
introduced by high-level abstractions.
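One such technique is loop-invariant code motion. The sketch below (in Python, purely for illustration; a real compiler performs this on intermediate code, not source) contrasts a loop as the programmer writes it with the form an optimizing compiler effectively produces:

```python
def naive(values, scale, offset):
    # As written: the invariant expression scale * offset is
    # recomputed on every iteration of the loop.
    result = []
    for v in values:
        result.append(v + scale * offset)
    return result

def optimized(values, scale, offset):
    # After loop-invariant code motion: the invariant expression is
    # hoisted out of the loop and computed exactly once.
    hoisted = scale * offset
    result = []
    for v in values:
        result.append(v + hoisted)
    return result
```

The programmer keeps the clearer first form; the compiler delivers the speed of the second, which is how optimization offsets the cost of abstraction.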
Optimizations for computer architectures
The rapid evolution of computer architectures has also led to an insatiable demand for new
compiler technology. Almost all high-performance systems take advantage of the same two
basic techniques: parallelism and memory hierarchies. Parallelism can be found at several
levels: at the instruction level, where multiple operations are executed simultaneously, and at
the processor level, where different threads of the same application are run on different
processors. Memory hierarchies are a response to the basic limitation that we can build very
fast storage or very large storage, but not storage that is both fast and large.
Parallelism: All modern microprocessors exploit instruction-level parallelism. However,
this parallelism can be hidden from the programmer. Programs are written as if all
instructions were executed in sequence; the hardware dynamically checks for
dependencies in the sequential instruction stream and issues them in parallel when
possible. In some cases, the machine includes a hardware scheduler that can change the
instruction ordering to increase the parallelism in the program. Whether the hardware
reorders the instructions or not, compilers can rearrange the instructions to make
instruction-level parallelism more effective.
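The core of such rearrangement is dependence checking. The following sketch uses a hypothetical three-address instruction format, `(dest, operands)`, and a simplified greedy scheduler; it is an illustration of the idea, not any particular compiler's algorithm:

```python
def depends(a, b):
    """True if instruction b must wait for instruction a:
    read-after-write, write-after-read, or write-after-write hazard."""
    a_dest, a_srcs = a
    b_dest, b_srcs = b
    return (a_dest in b_srcs      # b reads what a writes
            or b_dest in a_srcs   # b overwrites what a reads
            or b_dest == a_dest)  # b overwrites what a writes

def schedule(instrs):
    """Assign each instruction the earliest issue slot consistent with
    its dependences; instructions sharing a slot can issue in parallel."""
    slots = []    # slot index -> instructions issued together
    slot_of = []  # instruction index -> its slot
    for i, ins in enumerate(instrs):
        earliest = 0
        for j in range(i):
            if depends(instrs[j], ins):
                earliest = max(earliest, slot_of[j] + 1)
        while len(slots) <= earliest:
            slots.append([])
        slots[earliest].append(ins)
        slot_of.append(earliest)
    return slots
```

For instance, `r1 = r2 + r3` and `r4 = r5 + r6` are independent and land in the same slot, while `r7 = r1 + r4` reads both results and must wait for the next slot.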
Memory Hierarchies: A memory hierarchy consists of several levels of storage with
different speeds and sizes, with the level closest to the processor being the fastest but
smallest. The average memory-access time of a program is reduced if most of its accesses
are satisfied by the faster levels of the hierarchy. Both parallelism and the existence of a
memory hierarchy improve the potential performance of a machine, but they must be
harnessed effectively by the compiler to deliver real performance on an application.
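A classic transformation the compiler applies here is loop interchange. The sketch below uses Python only to illustrate the access order; in a language like C, where a matrix is laid out contiguously in row-major order, the second traversal touches consecutive addresses, so most accesses hit in the fast, small level of the hierarchy:

```python
def column_major_sum(matrix):
    # Strides down each column: in a row-major layout, consecutive
    # accesses are far apart in memory, causing frequent cache misses.
    rows, cols = len(matrix), len(matrix[0])
    total = 0
    for j in range(cols):
        for i in range(rows):
            total += matrix[i][j]
    return total

def row_major_sum(matrix):
    # After loop interchange: consecutive accesses are adjacent in
    # memory, so the cache satisfies most of them.
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total
```

Both loops compute the same result; only the order of memory accesses, and therefore the hit rate in the hierarchy, differs.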
Design of new computer architectures
In the early days of computer architecture design, compilers were developed after the
machines were built. That has changed. Since programming in high-level languages is the
norm, the performance of a computer system is determined not just by its raw speed but also by
how well compilers can exploit its features. Thus, in modern computer architecture
development, compilers are developed in the processor-design stage, and compiled code,
running on simulators, is used to evaluate the proposed architectural features.
While we normally think of compiling as a translation from a high-level language to the
machine level, the same technology can be applied to translate between different kinds of