Efficiency of an Algorithm
Programmers often have a tough time writing efficient code. But what exactly do
we mean by efficiency? What difference does it make if we write ordinary code
instead of efficient code? Let's find out with a simple example: the
traditional sorting problem. Consider two algorithms for sorting n numbers.
The first, known as insertion sort, takes time roughly equal to c1·n² to sort
n items, where c1 is a constant that does not depend on n. That is, it takes
time roughly proportional to n². The second, merge sort, takes time roughly
equal to c2·n lg n, where lg n stands for log₂ n and c2 is another constant
that also does not depend on n. Insertion sort usually has a smaller constant
factor than merge sort, so that c1 < c2. We shall see that the constant
factors can be far less significant in the running time than the dependence on
the input size n.
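To ground the comparison, here is a minimal Python sketch of both algorithms in their straightforward textbook form (the function names and structure are illustrative, not taken from any particular source):

```python
def insertion_sort(a):
    """Sort list a in place; does roughly n**2 comparisons in the worst case."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right to make room for key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    """Return a sorted copy of a; does roughly n * lg(n) work."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```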
Where merge sort has a factor of lg n in its
running time, insertion sort has a factor of n, which is much larger. Although
insertion sort is usually faster than merge sort for small input sizes, once
the input size n becomes large enough, merge sort’s advantage of lg n vs. n
will more than compensate for the difference in constant factors. No matter how
much smaller c1 is than c2, there will always be a crossover point beyond which
merge sort is faster.
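We can locate that crossover point numerically. The sketch below is a hypothetical helper (not from the original text) that scans for the first n at which merge sort's instruction count drops below insertion sort's, given the two constant factors; with the constants c1 = 2 and c2 = 50 used in the machine example that follows, it lands near n = 190:

```python
import math

def crossover(c1, c2, n_max=10**6):
    """Return the smallest n at which c2 * n * lg(n) drops below c1 * n**2."""
    for n in range(2, n_max):
        if c2 * n * math.log2(n) < c1 * n * n:
            return n
    return None

print(crossover(2, 50))  # -> 190 with the constants from the example below
```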
For a
concrete example, let us pit a faster computer (computer A) running insertion
sort against a slower computer (computer B) running merge sort. They each must
sort an array of one million numbers. Suppose that computer A executes one
billion instructions per second and computer B executes only ten million
instructions per second, so that computer A is 100 times faster than computer B
in raw computing power. To make the difference even more dramatic, suppose that
the world’s craftiest programmer codes insertion sort in machine language for
computer A, and the resulting code requires 2n² instructions to sort n numbers. (Here, c1 = 2.) Merge sort, on the other hand, is programmed for computer B by
an average programmer using a high-level language with an inefficient compiler,
with the resulting code taking 50n lg n instructions (so that c2 = 50).
To sort one million numbers, computer A takes

    2 · (10⁶)² instructions / (10⁹ instructions/second) = 2000 seconds,

while computer B takes

    (50 · 10⁶ · lg 10⁶) instructions / (10⁷ instructions/second) ≈ 100 seconds.
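The same arithmetic is easy to check in a few lines of Python (the variable names are mine; the instruction counts and machine speeds are the ones assumed above):

```python
import math

n = 10**6
t_a = (2 * n**2) / 10**9               # insertion sort on computer A
t_b = (50 * n * math.log2(n)) / 10**7  # merge sort on computer B
print(t_a)  # 2000.0 seconds
print(t_b)  # ~99.66 seconds, i.e. about 100
```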
By using an algorithm
whose running time grows more slowly, even with a poor compiler, computer B
runs 20 times faster than computer A! The advantage of merge sort is even more
pronounced when we sort ten million numbers: where insertion sort takes
approximately 2.3 days, merge sort takes under 20 minutes. In general, as the
problem size increases, so does the relative advantage of merge sort.
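A short loop makes the trend visible. Using the same assumed instruction counts and machine speeds, the ratio of computer A's time to computer B's time keeps growing with n:

```python
import math

# How much faster merge sort on B is than insertion sort on A.
for n in (10**5, 10**6, 10**7, 10**8):
    t_a = 2 * n**2 / 10**9
    t_b = 50 * n * math.log2(n) / 10**7
    print(f"n = {n:>9}: A/B time ratio ≈ {t_a / t_b:.0f}")
# Prints ratios of roughly 2, 20, 172, and 1505.
```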