
2 Oct 2017

Dynamic Programming

What is Dynamic Programming? 


- Dynamic programming (also known as dynamic optimization) is a method for solving a complex problem by breaking it down into a collection of simpler sub-problems, solving each of those sub-problems just once, and storing their solutions. The next time the same sub-problem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the sub-problem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to sub-problems instead of recomputing them is called "memoization" (not memorization).


(“Programming” in this context refers to a tabular method, not to writing computer code.)

The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Instead, the results of the smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
Dynamic programming is used for problems that can be divided into similar sub-problems whose results can be reused. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic programming algorithm first examines the results of previously solved sub-problems. The solutions of the sub-problems are then combined to achieve the best overall solution.

So we can say that −
  • The problem should be divisible into smaller, overlapping sub-problems.
  • An optimum solution can be achieved by using the optimum solutions of smaller sub-problems.
  • Dynamic algorithms use memoization (not memorization).

When developing a dynamic-programming algorithm, we follow a sequence of
four steps:

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution, typically in a bottom-up fashion.
4. Construct an optimal solution from computed information.

Steps 1–3 form the basis of a dynamic-programming solution to a problem. If we need only the value of an optimal solution, and not the solution itself, then we can omit step 4.


FIBONACCI SERIES EXAMPLE:

Here is a naïve implementation of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition:


   function fib(n)
       if n <= 1 return n
       return fib(n − 1) + fib(n − 2)


Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:


  1.     fib(5)
  2.     fib(4) + fib(3)
  3.     (fib(3) + fib(2)) + (fib(2) + fib(1))
  4.     ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
  5.     (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
Above, fib(2) is calculated three times; this repeated recomputation makes the running time exponential as the problem grows larger.
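The redundancy in the call tree can be made visible by instrumenting the naïve function with a counter (the counter is our addition, for illustration only):

```python
# Naive recursive Fibonacci, instrumented to count how many times
# each argument value is (re)computed.
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1              # record every invocation of fib(n)
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

fib(5)
print(calls[2])  # 3 — fib(2) is evaluated three times, as in the tree above
print(calls[1])  # 5 — fib(1) is evaluated five times
```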

The resulting function requires only O(n) time instead of exponential time (but requires O(n) space):

var m := map(0 → 0, 1 → 1)
   function fib(n)
       if key n is not in map m
           m[n] := fib(n − 1) + fib(n − 2)
       return m[n]

This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values.
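Rendered in Python, the memoized pseudocode above becomes (a direct translation; the dictionary plays the role of the map):

```python
# Top-down (memoized) Fibonacci: each value is computed at most once.
m = {0: 0, 1: 1}  # base cases, analogous to map(0 -> 0, 1 -> 1)

def fib(n):
    if n not in m:                       # only compute on a cache miss
        m[n] = fib(n - 1) + fib(n - 2)
    return m[n]

print(fib(50))  # 12586269025, reached in O(n) time rather than exponential
```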


In the bottom-up approach, we calculate the smaller values of fib first, then build larger values from them. This method also uses O(n) time since it contains a loop that repeats n − 1 times, but it only takes constant (O(1)) space, in contrast to the top-down approach which requires O(n) space to store the map.

   function fib(n)
       if n = 0
           return 0
       else
           var previousFib := 0, currentFib := 1
           repeat n − 1 times // loop is skipped if n = 1
               var newFib := previousFib + currentFib
               previousFib := currentFib
               currentFib  := newFib
           return currentFib
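The same bottom-up scheme in Python (a direct translation of the pseudocode, keeping only the two most recent values):

```python
# Bottom-up Fibonacci: O(n) time, O(1) extra space, no recursion.
def fib(n):
    if n == 0:
        return 0
    previous_fib, current_fib = 0, 1
    for _ in range(n - 1):               # loop is skipped when n = 1
        previous_fib, current_fib = current_fib, previous_fib + current_fib
    return current_fib

print(fib(10))  # 55
```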


TOWER OF HANOI EXAMPLE:


A model set of the Towers of Hanoi (with 8 disks)(Source:Wikipedia)


Animated Tower of Hanoi (Source:Wikipedia)
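The puzzle pictured above moves n disks between three pegs without ever placing a larger disk on a smaller one. A minimal recursive sketch (our illustration, not from the source text); the minimum number of moves satisfies T(n) = 2·T(n−1) + 1 = 2^n − 1:

```python
# Recursive Tower of Hanoi: move n disks from `source` to `target`
# using `spare`, returning the list of (from, to) moves performed.
def hanoi(n, source="A", target="C", spare="B", moves=None):
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear n-1 disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # restack the n-1 disks on top
    return moves

print(len(hanoi(8)))  # 255 moves (2**8 - 1) for the 8-disk set pictured above
```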




In contrast to greedy algorithms, which address local optimization, dynamic algorithms aim for overall optimization of the problem.
In contrast to divide and conquer algorithms, in which sub-solutions are combined to produce an overall solution, dynamic algorithms use the output of a smaller sub-problem and then try to optimize a bigger sub-problem. Dynamic algorithms use memoization to remember the outputs of already-solved sub-problems.

The following computer problems can be solved using the dynamic programming approach −
  • Fibonacci number series
  • Knapsack problem
  • Tower of Hanoi
  • All pair shortest path by Floyd-Warshall
  • Shortest path by Dijkstra
  • Project scheduling
Dynamic programming can be used in both top-down and bottom-up manner.



Source: TutorialsPoint & Wikipedia

1 Oct 2017

Greedy Algorithm


What is Greedy Algorithm?

- An algorithm that makes the locally optimal choice at each step, with the hope of finding a globally optimal solution.


A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. The algorithm makes the locally optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.


Greedy algorithms do not always yield optimal solutions, but for many problems
they do.


For Example, Counting Coins:

This problem is to count up to a desired value using the fewest possible coins; the greedy approach forces the algorithm to always pick the largest coin that fits.
If we are provided coins of ₹1, 2, 5 and 10 and we are asked to count ₹18 then the greedy procedure will be −
  • 1 − Select one ₹ 10 coin, the remaining count is 8
  • 2 − Then select one ₹ 5 coin, the remaining count is 3
  • 3 − Then select one ₹ 2 coin, the remaining count is 1
  • 4 − And finally, selecting one ₹ 1 coin solves the problem
This seems to work fine: for this amount we need to pick only 4 coins. But if we slightly change the problem, the same approach may not produce an optimum result.
For a currency system with coins of value 1, 7 and 10, counting coins for the value 18 is still optimum (10 + 7 + 1), but for an amount like 15 the greedy approach uses more coins than necessary: it picks 10 + 1 + 1 + 1 + 1 + 1, a total of 6 coins, whereas the same amount can be made with only 3 coins (7 + 7 + 1).
Hence, we may conclude that the greedy approach picks an immediately attractive choice and may fail where global optimization is the main concern.
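Both outcomes described above can be checked with a short greedy coin counter (an illustrative sketch; the function name is our own):

```python
# Greedy coin counting: repeatedly take the largest coin that still fits.
def greedy_coins(amount, coins):
    picked = []
    for coin in sorted(coins, reverse=True):   # largest denominations first
        while amount >= coin:
            amount -= coin
            picked.append(coin)
    return picked

print(greedy_coins(18, [1, 2, 5, 10]))  # [10, 5, 2, 1] — optimal, 4 coins
print(greedy_coins(15, [1, 7, 10]))     # [10, 1, 1, 1, 1, 1] — 6 coins, not 3
```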

Reaching the largest-sum (Source:Wikipedia)



Examples

Most networking algorithms use the greedy approach. Here is a list of a few of them −
  • Travelling Salesman Problem
  • Prim's Minimal Spanning Tree Algorithm
  • Kruskal's Minimal Spanning Tree Algorithm
  • Dijkstra's Shortest Path Algorithm
  • Graph - Map Coloring
  • Graph - Vertex Cover
  • Knapsack Problem
  • Job Scheduling Problem


Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work will have two properties:

Greedy choice property
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on choices made so far, but not on future choices or all the solutions to the subproblem. It iteratively makes one greedy choice after another, reducing each given problem into a smaller one. In other words, a greedy algorithm never reconsiders its choices. This is the main difference from dynamic programming, which is exhaustive and is guaranteed to find the solution.
After every stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider the previous stage's algorithmic path to solution.

Optimal substructure 
"A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to the sub-problems."


Components of Greedy Algorithm

Greedy algorithms have the following five components −
  • A candidate set − A solution is created from this set.
  • A selection function − Used to choose the best candidate to be added to the solution.
  • A feasibility function − Used to determine whether a candidate can be used to contribute to the solution.
  • An objective function − Used to assign a value to a solution or a partial solution.
  • A solution function − Used to indicate whether a complete solution has been reached.
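The five components can be sketched as one generic loop (the function names here are our own illustration, not a standard API), shown instantiated for the coin-counting problem, where summing the chosen coins plays the role of the objective function:

```python
# Generic greedy skeleton built from the five components listed above.
def greedy(candidates, selection, feasible, is_solution):
    candidates = list(candidates)                # candidate set (consumed as we go)
    solution = []
    while candidates and not is_solution(solution):
        best = selection(candidates, solution)   # selection function
        candidates.remove(best)                  # never reconsidered
        if feasible(solution, best):             # feasibility function
            solution.append(best)
    return solution

# Instantiation: count to 18 from a finite supply of coins.
target = 18
coins = [10, 10, 5, 5, 2, 2, 2, 1, 1]
selection = lambda cands, sol: max(cands)           # pick the largest coin
feasible = lambda sol, c: sum(sol) + c <= target    # don't overshoot the target
is_solution = lambda sol: sum(sol) == target        # objective met?

print(greedy(coins, selection, feasible, is_solution))  # [10, 5, 2, 1]
```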


Areas of Application

The greedy approach is used to solve many problems, such as −
  • Finding the shortest path between two vertices using Dijkstra's algorithm.
  • Finding the minimal spanning tree in a graph using Prim's/Kruskal's algorithm, etc.

Where Greedy Approach Fails

In many problems, the greedy algorithm fails to find an optimal solution; it may even produce a very poor one. Problems like Travelling Salesman and Knapsack cannot be solved optimally using this approach.



Source: TutorialsPoint & Wikipedia

HEVC or H.265



What is HEVC ?

-High Efficiency Video Coding or H.265 Media Encoding



High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard, one of several potential successors to the widely used AVC (H.264 or MPEG-4 Part 10). In comparison to AVC, HEVC offers about double the data compression ratio at the same level of video quality, or substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K UHD.

High Efficiency Video Coding (HEVC) is a new video compression standard, developed by the Joint Collaborative Team on Video Coding (JCT-VC). The JCT-VC brings together image and video encoding experts from around the world, producing a single standard that is approved by two standards bodies:
  • ITU-T Study Group 16 – Video Coding Experts Group (VCEG) – publishes the H.265 standard as ITU-T H.265, and
  • ISO/IEC JTC 1/SC 29/WG 11 Motion Picture Experts Group (MPEG) – publishes the HEVC standard as ISO/IEC 23008-2.
The initial version of the H.265/HEVC standard was ratified in January 2013.
HEVC was developed with the goal of providing twice the compression efficiency of the previous standard, H.264 / AVC.  Although compression efficiency results vary depending on the type of content and the encoder settings, at typical consumer video distribution bit rates HEVC is typically able to compress video twice as efficiently as AVC.  End-users can take advantage of improved compression efficiency in one of two ways (or some combination of both):
  • At an identical level of visual quality, HEVC enables video to be compressed to a file that is about half the size (or half the bit rate) of AVC, or
  • When compressed to the same file size or bit rate as AVC, HEVC delivers significantly better visual quality.
Block Diagram of HEVC(Source: WIKIPEDIA)




How can HEVC encode video files twice as efficiently as previous video coding standards?

  • Most of the power of video compression standards comes from a technique known as motion compensated prediction. Blocks of pixels are encoded by making reference to another area in the same frame (intra-prediction), or in another frame (inter-prediction). Where H.264/AVC defines macroblocks up to 16×16 pixels, HEVC can describe a much larger range of block sizes, up to 64×64 pixels.
  • HEVC allows predicted blocks to be coded in different block sizes than the residual error.  Each top level coding unit (or CTU) is first coded as a prediction quad-tree, where at each depth the encoder decides whether to encode with merge/skip, inter, or intra coding. The residual from those predictions is then coded with a second quad-tree which can optionally have greater depth than the prediction quad-tree.  For instance, this allows the residual error from a 32×32 inter coded coding unit (CU) to be represented by a mixture of 16×16, 8×8, and 4×4 transforms.
  • HEVC can encode motion vectors with much greater precision, giving a better predicted block with less residual error. There are 35 intra-picture prediction modes, compared with only 9 in H.264/AVC.
  • HEVC includes Adaptive Motion Vector Prediction, a new method to improve inter-prediction.
  • An improved deblocking filter
  • Sample Adaptive Offset – an additional filter that reduces artifacts at block edges


Two of the key areas where HEVC improves on H.264/MPEG-4 AVC are support for higher-resolution video and improved parallel processing methods.

HEVC is targeted at next-generation HDTV displays and content capture systems featuring progressive scanned frame rates and display resolutions from QVGA (320×240) to 4320p (7680×4320), as well as improved picture quality in terms of noise level, color spaces, and dynamic range.


UHDTV – digital television formats with resolutions of 4K / 2160p (3840×2160) and 8K / 4320p (7680×4320)

  • Rec. 2020 – ITU-R Recommendation for UHDTV with standard dynamic range
  • Rec. 2100 – ITU-R Recommendation for HDTV and UHDTV with high dynamic range


Image file formats based on HEVC

  • Better Portable Graphics – a file format for images based on HEVC
  • High Efficiency Image File Format – a file format for images and image sequences based on HEVC


List of open-source codecs

  • x265 – an open-source software implementation of HEVC
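As a usage sketch, x265 is most commonly driven through FFmpeg when FFmpeg is built with libx265 support (the file names below are placeholders):

```shell
# Re-encode a video to HEVC with x265, copying the audio stream unchanged.
# CRF 28 in x265 is commonly treated as roughly comparable in quality to
# CRF 23 in x264, at around half the bit rate.
ffmpeg -i input.mp4 -c:v libx265 -crf 28 -c:a copy output.mp4
```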


List of multimedia (audio/video) codecs

  • H.264/MPEG-4 AVC – the video standard predecessor of HEVC
  • VP9 – an open format developed by Google as a competitor to HEVC
  • AV1 – an open format that is being developed by the Alliance for Open Media as a successor to VP9 and a competitor to HEVC
  • Daala – an open format that is being developed by Mozilla Foundation and Xiph.Org Foundation as a competitor to HEVC
  • Dirac (video compression format) – an open format that is being developed by the BBC Research & Development as a competitor to HEVC
  • Thor (video codec) – an open format that is being developed by Cisco as a competitor to HEVC






Source: Wikipedia & x265.org
