The modern understanding of recursion: the definition of some functionality together with access to it both from outside and from within that functionality itself. Recursion is commonly believed to have been born by mathematicians: factorial calculation, infinite series, fractals, continued fractions… Yet recursion can be found everywhere. Objective natural laws "treat" recursion as their main algorithm and form of expression, not so much of the objects of the material world as of movement in general.
Specialists in many fields of science and technology use the recursive algorithm f(x), where "x ~/= f(x)". A function that calls itself is a powerful solution, but forming and understanding such a solution is, in most cases, a very difficult task.
In ancient times, recursion was used to enlarge palace space. A system of mirrors directed at each other can create stunning three-dimensional spatial effects. But is it so easy to understand how to adjust these mirrors? It is even harder to determine where a point in space lies once it has been reflected through several mirrors.
Recursion, recursive algorithms: meaning and syntax
A problem that is formulated as a repeated sequence of operations can be solved recursively. A simple algorithm (solving a quadratic equation, a script that populates a web page with information, reading a file, sending a message…) does not require recursion.
The main distinguishing features of an algorithm that allows a recursive solution:
- there is an algorithm that needs to be executed several times;
- the algorithm needs data that change every time;
- the algorithm doesn't have to change every time;
- there is a final condition: a recursive algorithm is not infinite.
Strictly speaking, it cannot be argued that one-time execution rules out recursion, nor can a final condition be made mandatory: infinite recursions have their own field of application.
An algorithm is recursive when a sequence of operations is performed repeatedly on data that change each time and yield a new result each time.
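As an illustration, here is a minimal Python sketch that meets all of these conditions; the digit-summing task and the name digit_sum are assumptions chosen only for brevity:

```python
def digit_sum(n: int) -> int:
    """Recursively sums the decimal digits of a non-negative integer."""
    if n < 10:              # final condition: one digit left, the recursion stops
        return n
    # the algorithm itself never changes, only the data (n) changes on each call
    return n % 10 + digit_sum(n // 10)

print(digit_sum(1984))      # 1 + 9 + 8 + 4 = 22
```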
Recursion formula
The mathematical understanding of recursion and its counterpart in programming are different. Mathematics shows some signs of programming, but programming is mathematics of a much higher order.
A well-written algorithm is like a mirror of its author's intellect. The general recursion formula in programming, "f(x)" where "x ~/= f(x)", has at least two interpretations. Here "~" stands for similarity or the absence of a result, and "=" for the presence of the function's result.
First option: data dynamics.
- function "f(x)" has a recursive and immutable algorithm;
- "x" and the result "f(x)" have new values each time, the result "f(x)" is the new "x" parameter of this function.
Second option: code dynamics.
- function "f(x)" has several algorithms that refine (analyze) the data;
- data analysis is one part of the code, and the recursive algorithms that perform the desired action are the other part;
- the function "f(x)" has no result.
Having no result is normal. Programming is not mathematics; here the result does not have to be explicitly present. A recursive function can simply parse sites and populate a database, or instantiate objects according to the incoming input.
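A small sketch of both interpretations; the names halve_until_one and store_items, and the list used as a "database", are illustrative assumptions rather than anything prescribed by the formula:

```python
# Data dynamics: the result of f(x) becomes the new x on the next call.
def halve_until_one(x: int) -> int:
    if x <= 1:
        return x
    return halve_until_one(x // 2)   # the previous result is the new parameter

# Code dynamics: the function analyzes data and acts, returning nothing.
def store_items(items, database):
    if not items:
        return                       # no explicit result, and that is normal
    database.append(items[0])        # the "desired action" on the current item
    store_items(items[1:], database)

db = []
store_items(["a", "b", "c"], db)
print(halve_until_one(40), db)       # 1 ['a', 'b', 'c']
```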
Data and recursion
Programming recursive algorithms is not just calculating a factorial, where the function receives on each call a value that differs by one, counting up or down; which way to count depends on the developer's preference.
It does not matter how the factorial "8!" is calculated; what matters is an algorithm that strictly follows this formula.
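For illustration, a short Python sketch of both ways of counting; the function names are assumptions made for this example:

```python
def factorial_down(n: int) -> int:
    """8! counted downwards: 8 * 7!"""
    return 1 if n <= 1 else n * factorial_down(n - 1)

def factorial_up(n: int, k: int = 1) -> int:
    """8! counted upwards: 1 * 2 * ... * 8"""
    return 1 if k > n else k * factorial_up(n, k + 1)

print(factorial_down(8), factorial_up(8))   # 40320 40320
```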
Processing information is "mathematics" of a completely different order. Recursive functions and algorithms here operate on letters, words, phrases, sentences and paragraphs. Each next level uses the previous one.
The input data stream is analyzed over a wide range of conditions, but the analysis process itself is essentially recursive. It makes no sense to write a unique algorithm for every variant of the input stream; there should be one piece of functionality. Recursive algorithms here show how to form an output stream that is adequate to the input. That stream is not the recursion's raw output, but it is the desired and necessary solution.
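One possible sketch of such a single piece of functionality: one recursive function that descends from sentences to words to letters. The separator table and the function name are simplifying assumptions; real text analysis is far richer:

```python
# Each level is split by its own separator; the same function then
# descends to the next level until only letters remain.
LEVELS = ["\n\n", ". ", " ", ""]      # paragraphs, sentences, words, letters

def analyze(text: str, level: int = 0):
    if level == len(LEVELS) - 1:
        return list(text)                         # bottom level: letters
    parts = text.split(LEVELS[level])
    return [analyze(part, level + 1) for part in parts]

tree = analyze("First sentence. Second one")
print(tree)
```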
Abstraction, recursion and OOP
Object-oriented programming (OOP) and recursion are fundamentally different entities, but they complement each other perfectly. Abstraction has nothing to do with recursion, but through the lens of OOP it creates the possibility of implementing contextual recursion.
For example, information is being parsed and letters, words, phrases, sentences and paragraphs are highlighted separately. Obviously, the developer will provide for the creation of instances of objects of these five types and offer a solution of recursive algorithms at each level.
Meanwhile, if at the level of letters “there is no point in looking for meaning”, then semantics appears at the level of words. You can divide words into verbs, nouns, adverbs, prepositions… You can go further and define cases.
At the phrase level, semantics is supplemented by punctuation marks and the logic of word combinations. At the level of sentences a more complete level of semantics emerges, and a paragraph can be considered a finished thought.
Object-oriented development predetermines the inheritance of properties and methods and proposes to start the hierarchy of objects with a completely abstract ancestor. At the same time, the analysis of each descendant will no doubt be recursive and, at the technical level, will not differ too much across many positions (letters, words, phrases and sentences). Paragraphs, as complete thoughts, may stand apart from this list, but that does not change the essence.
It is important that the bulk of the algorithm can be formulated at the level of the abstract ancestor and refined at the level of each descendant with data and methods called from the abstract level. In this context, abstraction opens up new horizons for recursion.
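A minimal Python sketch of this idea, assuming hypothetical Fragment, Sentence and Word classes: the recursive walk is written once in the abstract ancestor, and each descendant refines only its own step:

```python
from abc import ABC, abstractmethod

class Fragment(ABC):
    """Abstract ancestor: the recursive walk lives here, once."""

    def __init__(self, text: str):
        self.text = text

    def analyze(self):
        self.refine()                  # descendant-specific refinement
        for child in self.split():     # recursion over child fragments
            child.analyze()

    @abstractmethod
    def refine(self): ...

    def split(self):                   # leaves simply return no children
        return []

class Sentence(Fragment):
    def refine(self):
        print("sentence:", self.text)

    def split(self):
        return [Word(w) for w in self.text.split()]

class Word(Fragment):
    def refine(self):
        print("  word:", self.text)

Sentence("recursion meets abstraction").analyze()
```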
Historical features of OOP
OOP has come to the software world twice, although some experts may single out the advent of cloud computing and modern ideas about objects and classes as a new round in the development of IT technologies.
The terms "object" and "objective" in the modern context of OOP are usually attributed to the 50s and 60s of the last century, but they are associated with 1965 and the emergence of Simula, Lisp, Algol, Smalltalk.
In those days, programming was not particularly developed and could not respond adequately to revolutionary concepts. The struggle of ideas and programming styles (mostly C/C++ versus Pascal) was still far away, and databases were still only taking conceptual shape.
In the late 80s and early 90s, objects appeared in Pascal and everyone remembered classes in C/C++; this marked a new round of interest in OOP, and it was then that tools, primarily programming languages, began not only to support object-oriented ideas but to evolve accordingly.
Of course, if earlier recursive algorithms were just functions used in the general code of the program, now recursion could become part of the properties of an object (class), which provided interesting opportunities in the context of inheritance.
Feature of modern OOP
The development of OOP initially declared objects (classes) as collections of data and properties (methods). In fact, it was about data that has syntax and meaning. But then it was not possible to present OOP as a tool for managing real objects.
OOP has become a tool for managing "computer nature" objects. A script, a button, a menu item, a menu bar, a tag in a browser window is an object. But not a machine, a food product, a word, or a sentence. Real objects have remained outside of object-oriented programming, and computer tools have taken on a new incarnation.
Due to the differences in popular programming languages, many dialects of OOP have emerged. In terms of semantics, they are practically equivalent, and their focus on the instrumental sphere, and not on the applied one, makes it possible to take the description of real objects beyond algorithms and ensure their cross-platform and cross-language "existence".
Stacks and function call mechanisms
Mechanisms for calling functions (procedures, algorithms) require passing data (parameters), returning the result, and storing the address of the statement that must receive control after the function (procedure) ends.
Usually, the stack is used for this purpose, although programming languages or the developer himself can provide a variety of options for transferring control. Modern programming admits that the name of a function can be not only a parameter: it can be formed during the execution of the algorithm. An algorithm can also be created while executing another algorithm.
The concept of recursive algorithms whose names and bodies can be determined at the moment the task is formed (choosing the desired algorithm) extends recursiveness not only to how something is done but also to who exactly should do it. Choosing an algorithm by its "meaningful" name is promising, but creates difficulties.
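One possible sketch of choosing a recursive algorithm by a "meaningful" name at run time; the registry and the countdown function are assumptions made purely for illustration:

```python
# A registry that maps "meaningful" names to recursive algorithms,
# so the algorithm to run can be chosen while the program executes.
ALGORITHMS = {}

def register(name):
    def wrap(func):
        ALGORITHMS[name] = func
        return func
    return wrap

@register("countdown")
def countdown(n):
    if n >= 0:
        print(n)
        ALGORITHMS["countdown"](n - 1)   # the self-call is resolved by name

chosen = "countdown"                      # could just as well come from input data
ALGORITHMS[chosen](3)
```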
Recursiveness on a set of functions
It cannot be said that an algorithm is recursive only when it calls itself and nothing more. Programming is not dogma, and recursiveness is not an exclusive requirement that a function call itself from the body of its own algorithm.
Practical applications do not always give a clean solution. Often the initial data must be prepared, and the result of the recursive call must be analyzed in the context of the problem (the algorithm) as a whole.
In fact, another routine can or should be called not only before a recursive function is invoked but also after it completes. The call itself poses no special problem: the recursive function A() calls the function B(), which does something and then calls A(). But a problem with returning control appears immediately. Having completed its recursive call, A() must receive control in order to call B() again, which will in turn call A() again. Returning control strictly in stack order, back to B(), is the wrong solution.
The programmer is not limited in the choice of parameters and can supplement them with function names. In other words, the ideal solution is to pass the name of B() to A() and let A() call B() itself. In that case there are no problems with returning control, and the implementation of the recursive algorithm becomes more transparent.
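A sketch of this approach, with A() and B() named as in the text and everything else assumed for illustration:

```python
def A(data, worker=None):
    """Recursive function; the helper is passed in, so A calls it itself."""
    if not data:
        return                       # final condition
    if worker is not None:
        worker(data[0])              # A calls B directly: no tangled return path
    A(data[1:], worker)

def B(item):
    print("processed:", item)

A([1, 2, 3], worker=B)
```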
Understanding and level of recursion
The problem with developing recursive algorithms is that you need to understand the dynamics of the process. When using recursion in object methods, especially at the level of an abstract ancestor, there is a problem of understanding your own algorithm in the context of its execution time.
Today's call mechanisms impose practically no restrictions on the nesting level of functions or on stack capacity, but a problem of understanding remains: at which moment, from which data level or which place in the overall algorithm the recursive function was called, and how deep in its own chain of self-calls it currently is.
Existing debugging tools are often powerless to tell the programmer the right solution.
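One possible workaround is to instrument the recursion yourself. The sketch below, a hypothetical tracing decorator rather than a standard tool, reports the current depth of self-calls on every invocation:

```python
import functools

def trace_depth(func):
    """Wraps a recursive function and prints its current call depth."""
    depth = 0

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal depth
        depth += 1
        print(f"{'  ' * depth}{func.__name__} depth={depth} args={args}")
        try:
            return func(*args, **kwargs)
        finally:
            depth -= 1                 # restore depth on the way back up

    return wrapper

@trace_depth
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(4)
```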
Loops and recursion
Cyclic execution is often considered equivalent to recursion. Indeed, in some cases a recursive algorithm can be reimplemented with conditional and loop constructs.
However, if there is a clear understanding that a particular function must be implemented through a recursive algorithm, any external use of a loop or conditional statements should be abandoned.
The point is that a recursive solution in the form of a function that uses itself is a complete, functionally self-sufficient algorithm. Creating such an algorithm will require effort from the programmer and an understanding of its dynamics, but it will be a final solution that needs no external control.
Any combination of external conditional and cyclic operators will not allow us to represent the recursive algorithm as a complete function.
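For comparison, here is the same deliberately trivial task written with a loop and as a self-contained recursive function; the names are assumptions for this illustration only:

```python
# Loop version: the control structure lives outside the computation.
def sum_loop(values):
    total = 0
    for v in values:
        total += v
    return total

# Recursive version: a complete function that needs no external loop.
def sum_recursive(values):
    if not values:
        return 0                       # final condition
    return values[0] + sum_recursive(values[1:])

print(sum_loop([1, 2, 3]), sum_recursive([1, 2, 3]))   # 6 6
```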
Recursion Consensus and OOP
In almost all variants of developing a recursive algorithm, a plan arises to develop two algorithms. The first algorithm generates a list of future objects (instances), and the second algorithm is actually a recursive function.
The best solution would be to arrange recursion as a single property (method) that actually contains the recursive algorithm, and put all the preparatory work into the object constructor.
A recursive algorithm will only be the right solution when it works entirely on its own, without external control or management. An external algorithm can only give the signal to start. The result of that work should be the expected solution, without external support.
Recursion should always be a complete stand-alone solution.
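A minimal Python sketch of this consensus, using a hypothetical Report class whose name and input format are assumptions: all preparatory work sits in the constructor, and the recursion itself is a single method that runs without external control:

```python
class Report:
    """Preparation in the constructor, recursion in a single method."""

    def __init__(self, raw: str):
        # all preparatory work: cleaning and splitting the input
        self.items = [s.strip() for s in raw.split(";") if s.strip()]
        self.lines = []

    def build(self, index: int = 0):
        # the recursive algorithm works on its own; the caller only starts it
        if index == len(self.items):
            return self.lines
        self.lines.append(f"{index + 1}. {self.items[index]}")
        return self.build(index + 1)

print(Report("alpha; beta; gamma").build())
```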
Intuitive understanding and functional completeness
When object-oriented programming became the de facto standard, it became obvious that in order to code effectively, you need to change your own thinking. The programmer must move from the syntax and semantics of the language to the dynamics of the semantics during the execution of the algorithm.
A characteristic feature of recursion is that it can be applied to almost everything:
- web scraping;
- search operations;
- parsing text information;
- reading or creating MS Word documents;
- sampling or analyzing tags…
A characteristic feature of OOP is that it makes it possible to describe a recursive algorithm at the level of an abstract ancestor while letting it refer to unique descendants, each with its own palette of data and properties.
Recursion is ideal because it requires the functional completeness of its algorithm. OOP improves the performance of a recursive algorithm by giving it access to all unique children.