I am by no means a math guru, just a layman, but perhaps a layman's explanation will be useful to someone. Again, I'm not a guru, so shout if you notice errors. Anyway, let's just plug the values into the "loop"...
Given the number below the Sigma (the lower bound), we are told that this sum starts at i = 1, therefore:
First iteration
Sum = 0 + 1
Second iteration
1 + 2
Third iteration
3 + 3
Fourth iteration
6 + 4
...until the limit, where you will have
SumBeforeLast + N
What is the limit? Well, in C you would define it as for (i = 1; i <= N; i++). In English: if N is 100, then the limit is "where i = 100". If N = 5, then the limit is "where i = 5", and so on.
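A minimal sketch of that loop in C (assuming the sum in question is simply the running example, i summed from 1 to N; the names N and sum are just placeholders):

```c
#include <stdio.h>

int main(void)
{
    int N = 5;    /* the upper limit ("where i = 5") */
    int sum = 0;  /* the running total from the trace above */

    /* The Sigma, spelled out as a loop: start at i = 1, stop at i = N. */
    for (int i = 1; i <= N; i++) {
        sum = sum + i;  /* SumBeforeLast + i */
        printf("iteration %d: sum = %d\n", i, sum);
    }
    /* For N = 5 this prints 1, 3, 6, 10, 15. */
    return 0;
}
```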
If you want to be pedantic (and you should be), you can also say that the limit has two parts: i has an initial value when we "start" the "loop" (the lambda calculus does not think in terms of running time or machines, but you can always drop that mental model once you've grasped the concept), and that is the "lower limit"; the final value of i is the "upper limit".
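Written out in notation (again assuming the running example is a plain sum over i), the lower limit sits below the Sigma and the upper limit above it:

```latex
% lower limit (initial value of the counter): i = 1
% upper limit (final value of the counter):   N
\sum_{i=1}^{N} i = 1 + 2 + 3 + \dots + N
```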
Do you want to know the "magic" of derivatives? It is difficult to explain, but basically, sometimes, and only sometimes, when N is infinity or some other limit, and again only SOME of the time, the algebra is arranged so that parts of it cancel themselves out, and we are left with a new, simplified expression that no longer takes "infinite" iterations to reach the desired result. The search for these "derivatives" is a huge part of both mathematics and algorithm development.
In other words, the "magic" of Calculus is nothing more than a Sigma operation that produces a pattern like (x / x) * someEquation, combined with the fact that something divided by itself (like x / x) equals 1, and the fact that 1 multiplied by someEquation does not change someEquation, which means that large parts of the expression are simply not needed to compute the Sigma operation at a given iteration.
Keep in mind that if we can discard x / x, or rather just x, then all of its complexity, its discreteness, or even its infinity, is discarded along with it. In that case, entirely new calculations can be not only optimized, but made possible at all on our limited hardware.
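To make that concrete with the running example: the sum of i from 1 to N happens to have a well-known simplified form, N * (N + 1) / 2, that needs no iterations at all. That formula is not literally a derivative, but it is the same kind of collapse described above: the algebra simplifies, and the loop disappears. A rough sketch:

```c
#include <stdio.h>

/* O(N): the Sigma computed literally, one iteration per term. */
long sum_by_loop(long N)
{
    long sum = 0;
    for (long i = 1; i <= N; i++)
        sum += i;
    return sum;
}

/* O(1): the simplified ("collapsed") form, no iterations needed. */
long sum_by_formula(long N)
{
    return N * (N + 1) / 2;
}

int main(void)
{
    long N = 100;
    printf("loop:    %ld\n", sum_by_loop(N));     /* 5050 */
    printf("formula: %ld\n", sum_by_formula(N));  /* 5050 */
    return 0;
}
```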
I have little experience with this, but as I understand it, derivation algorithms are used to optimize other algorithms because they strip complexity out of recursive series. Sometimes these optimizations reduce problems of infinite complexity to finite complexity, which makes them solvable by computers, which ultimately have finite storage and resources and are therefore limited to finite, or discrete, mathematics.
The integral, on the other hand, while it increases the complexity of the integrated function, is fundamental to discovering many new algorithms, especially those whose complexity goes beyond our ability to discover them using only intuition and raw mathematical reasoning.