State pros and cons of dynamic programming
Dynamic programming is an effective method for solving problems that can be broken down into overlapping subproblems. Its main advantage is that each subproblem is solved only once and the result is reused, which can turn an exponential-time algorithm into a polynomial-time one. Its main drawbacks are that storing subproblem results can use a great deal of memory, and that it only applies to problems with optimal substructure and overlapping subproblems, so it is not a good fit for every problem.
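A minimal sketch of this trade-off, using the classic Fibonacci example: the naive recursion recomputes the same subproblems exponentially many times, while the memoised version (dynamic programming) solves each subproblem once at the cost of extra memory for the cache.

```python
from functools import lru_cache

def fib_naive(n):
    # Recomputes fib(k) for the same k over and over: exponential time.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    # Each subproblem is computed once and cached: linear time, linear memory.
    if n < 2:
        return n
    return fib_dp(n - 1) + fib_dp(n - 2)

print(fib_dp(40))  # 102334155, returned almost instantly; fib_naive(40) would take seconds
```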
Why do you think greedy algorithms are so named?
Greedy algorithms are so named because the choices they make are “greedy”: at each step they pick whatever maximises some value immediately, without considering the steps that come after. This can sometimes lead to suboptimal solutions, yet in many situations greedy algorithms find very good, or even optimal, solutions.
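A small illustrative sketch of this behaviour is greedy coin change: at every step it takes the largest coin that fits, which is optimal for some coin systems but not for others.

```python
def greedy_change(amount, coins):
    """At each step, greedily take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

# Optimal for standard coins: 63 -> [25, 25, 10, 1, 1, 1]
print(greedy_change(63, [1, 5, 10, 25]))

# Suboptimal for coins [1, 20, 25]: greedy pays 40 as [25] plus fifteen 1s
# (16 coins), while [20, 20] would need only 2 coins.
print(greedy_change(40, [1, 20, 25]))
```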
Can you explain what divide and conquer methods are?
A divide and conquer algorithm solves a problem by breaking it into smaller subproblems, solving each subproblem (often recursively), and then combining the results into a solution to the original problem. This approach can solve many problems more quickly and simply than attacking the whole problem at once.
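Merge sort is a standard example of the divide, solve, and combine pattern described above; a minimal sketch:

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(items) <= 1:
        return items
    # Divide: split the problem into two smaller subproblems.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```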
Can you explain how quick sort works?
Quick sort works by partitioning an array around a pivot element into two smaller arrays: the first holds all the elements less than the pivot, and the second holds all the elements greater than the pivot. Quick sort is then applied recursively to each of the two arrays.
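The partition-and-recurse idea can be sketched as follows (using the first element as the pivot for simplicity; real implementations usually partition in place and choose the pivot more carefully):

```python
def quick_sort(items):
    if len(items) <= 1:
        return items
    pivot = items[0]  # simple pivot choice; poor on already-sorted input
    smaller = [x for x in items[1:] if x < pivot]   # elements less than the pivot
    larger = [x for x in items[1:] if x >= pivot]   # elements greater than (or equal to) the pivot
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 6, 2, 8, 2]))  # [2, 2, 3, 6, 8]
```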
What are some common sorting algorithms?
Quicksort, heapsort, and mergesort are just a few of the many algorithms that can be used to sort data. Each has its own trade-offs, such as average speed, worst-case behaviour, memory use, and stability, so it is important to choose the right algorithm for the task at hand.
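To round out the three algorithms named above, here is a short heapsort sketch built on Python's standard `heapq` module: build a min-heap in O(n), then pop the minimum n times for O(n log n) overall.

```python
import heapq

def heap_sort(items):
    """Heapsort via heapq: heapify, then repeatedly pop the minimum."""
    heap = list(items)
    heapq.heapify(heap)  # O(n) heap construction
    return [heapq.heappop(heap) for _ in range(len(heap))]  # n pops, O(log n) each

print(heap_sort([4, 1, 7, 3]))  # [1, 3, 4, 7]
```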
What are the different types of recursion?
Recursion takes several forms, the most common being tail recursion, head recursion, and general recursion. In tail recursion, the recursive call is the last operation the function performs. In head recursion, the recursive call is the first operation the function performs. In general recursion, the recursive call can occur at any point in the function body.
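The difference between tail and head recursion can be sketched with two hypothetical counting functions (note that CPython does not optimise tail calls, so the distinction matters mainly for languages that do):

```python
def count_down_tail(n, acc=None):
    """Tail recursion: the recursive call is the function's final action."""
    acc = [] if acc is None else acc
    if n == 0:
        return acc
    acc.append(n)            # all work happens before the call
    return count_down_tail(n - 1, acc)

def count_up_head(n):
    """Head recursion: the recursive call happens before any other work."""
    if n == 0:
        return []
    return count_up_head(n - 1) + [n]  # work happens after the call returns

print(count_down_tail(3))  # [3, 2, 1]
print(count_up_head(3))    # [1, 2, 3]
```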
What is the difference between parallel and serial searching?
Serial searching is an algorithm that runs on a single processor, examining the data one step at a time, whereas parallel searching runs on multiple processors simultaneously, with each processor examining a portion of the data. Parallel searching is typically faster than serial searching, but it is more complex and can be harder to implement correctly.
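A minimal sketch of the two approaches, splitting the data into chunks and using Python's `concurrent.futures` thread pool for the parallel version. (In CPython the GIL limits speedup for CPU-bound work; a process pool would be used for real gains, but the structure is the same.)

```python
from concurrent.futures import ThreadPoolExecutor

def serial_search(chunks, target):
    """Scan every chunk one after another on a single worker."""
    return any(target in chunk for chunk in chunks)

def parallel_search(chunks, target):
    """Scan the chunks concurrently, one task per chunk."""
    with ThreadPoolExecutor() as pool:
        return any(pool.map(lambda chunk: target in chunk, chunks))

chunks = [list(range(0, 100)), list(range(100, 200))]
print(parallel_search(chunks, 150))  # True
print(serial_search(chunks, 999))    # False
```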
Can you explain what iteration is and why we need it?
Iteration is the process of repeatedly executing a set of instructions until a goal is reached. We need it because it lets us solve problems by breaking them into smaller, more manageable steps, and because working through a problem one step at a time can reveal patterns and relationships we would not see by looking at the problem as a whole.
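A minimal illustration: computing a factorial by repeating one instruction (a multiplication) once per step until the goal is reached.

```python
def factorial_iterative(n):
    """Repeat the same instruction (multiply) once per step until done."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(factorial_iterative(5))  # 120
```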