Friday, April 14, 2017

Fibonacci sequence and the worst AVL trees

Fibonacci numbers: We all know about the Fibonacci sequence. It is the famous sequence that starts with 1 and 1 as the first two elements. From there on, every element is the sum of the previous two elements. So, it goes like 1,1,2,3,5,8,13,21,... We call the $n^{th}$ element of this series $F_n$. So, we have $F_1=1$, $F_2=1$ and $F_n = F_{n-1} + F_{n-2}$. Fibonacci numbers are very closely related to AVL trees, a kind of self-balancing binary search tree. We will explore this relation in this blog article and then establish the worst-case search complexity of an AVL tree.

A formula for Fibonacci numbers: Our problem is to compute an arbitrary element $F_n$ given $n$. Sure, one can just start from the beginning and keep computing each element until $F_n$ is reached, but that takes $n$ steps and is probably not the most efficient method. It would be nice to have a closed-form formula for $F_n$. This is what we are going to find.
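The element-by-element method mentioned above can be sketched in a few lines of Python (a sketch of the naive approach, not code from the article itself):

```python
def fib(n):
    """Return F_n, with F_1 = F_2 = 1, by iterating the recurrence."""
    a, b = 1, 1  # a = F_1, b = F_2
    for _ in range(n - 1):
        a, b = b, a + b  # slide the window one step along the sequence
    return a

print([fib(n) for n in range(1, 9)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```

Each call performs $n-1$ additions, which is exactly the linear-time cost that a closed-form formula would avoid.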

Thursday, July 28, 2011

Floating Point Numbers

What is a Floating Point Number: There are two ways a fractional number can be represented in binary form - fixed point and floating point. In a fixed point number, the binary point sits at a fixed, predefined position. For example, in a 32 bit fixed point number, the point may be assumed to be after the 16th bit. So any value represented by the 32 bits needs to be divided by $2^{16} = 65536$ to get the value of the number represented by the fixed point pattern.
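As an illustration of the divide-by-$2^{16}$ rule above, here is a minimal sketch of that 32-bit fixed point layout (commonly called Q16.16; the function names are my own, and negative values are ignored for simplicity):

```python
def to_fixed(x):
    """Encode a nonnegative real number as a 32-bit Q16.16 pattern."""
    return int(round(x * 65536)) & 0xFFFFFFFF

def from_fixed(bits):
    """Decode a 32-bit Q16.16 pattern: divide by 2**16 as described above."""
    return bits / 65536

# 1.5 is stored as 0x00018000: integer part 1, fractional part 0.5
print(hex(to_fixed(1.5)), from_fixed(0x00018000))
```

Note how the absolute precision is constant (steps of $1/65536$) regardless of the magnitude of the number, which is precisely what floating point gives up in exchange for range.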

A floating point number, on the other hand, has a completely different representation. A floating point number compromises on absolute precision to achieve the following.
  1. A much broader range of representable numbers
  2. A fixed number of significant digits, irrespective of the magnitude of the value
  3. The ability to represent special values like infinity and not-a-number (NaN)
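The special values in point 3 can be observed directly in Python, whose floats are IEEE 754 doubles (a small demonstration, not part of the original post):

```python
inf = float('inf')
nan = float('nan')

print(inf > 1e308)   # True: infinity exceeds every finite float
print(inf + 1 == inf)  # True: arithmetic with infinity saturates
print(nan == nan)    # False: NaN compares unequal to everything, even itself
```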
Bit Patterns of a Floating Point Number: As per IEEE 754, floating point numbers come in two standard formats - 32 bit single precision and 64 bit double precision.
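The 32-bit single precision pattern can be inspected with the standard `struct` module; the sketch below splits it into the IEEE 754 sign, exponent, and fraction fields (the helper name is my own):

```python
import struct

def float_bits(x):
    """Return the 32-bit IEEE 754 single precision bit pattern of x."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

bits = float_bits(1.0)           # 0x3F800000
# Layout: sign (1 bit) | exponent (8 bits, biased by 127) | fraction (23 bits)
sign     = bits >> 31            # 0
exponent = (bits >> 23) & 0xFF   # 127, i.e. an unbiased exponent of 0
fraction = bits & 0x7FFFFF       # 0, i.e. an implicit significand of 1.0
print(hex(bits), sign, exponent, fraction)
```

Packing the value as a float and unpacking the same bytes as an unsigned integer is a convenient way to read the raw bit pattern without any pointer tricks.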