Sri Lankan here. I used to compete in the IOI (International Olympiad in Informatics), back in the '90s.
I aced the competition in Sri Lanka, but I failed to win a medal the first time I competed internationally. I solved several problems with recursion, but these failed to complete within the strict time limits allowed.
One of the contestants from another country told me: "You needed dynamic programming to solve problems 1 and 5." I then spent the next year trying to figure out what dynamic programming was. The folks over here weren't familiar with the term. In fact, I was often asked "did you mean dynamic memory allocation?"
After a while, I managed to find a book that talked about dynamic programming, and I was like "Oh, it's recursion + storing results" and "Ah, you can also do this iteratively in certain circumstances".
Bottom line, armed with this new knowledge, I ended up winning a gold medal at IOI 2001. Fun times.
Has there been an "arms race" of sorts with programming contests, as knowledge percolates and harder categories of problems need to be chosen? These days dynamic programming is basically table stakes for ICPC-type problems.
I had the same experience: none of my mentors knew what "dynamic programming" was, and our country-level competition (which set problems inspired by the IOI) required dynamic programming for 2 out of 5 problems. And of course I failed the first time (2004). Then I learned what it was about and aced it the next time (2005).
I was part of the hosting team at IOI 1997 in Cape Town, had conversations with the participants similar to the ones you described, and also learned about Dynamic Programming there. Your description of it as "recursion + storing results, which you can sometimes do iteratively" is the best summary. A good example is the iterative Fibonacci algorithm.
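To make that summary concrete, here's a minimal Python sketch of both forms (my own toy example, not contest code):

    from functools import lru_cache

    # "Recursion + storing results": top-down memoization.
    @lru_cache(maxsize=None)
    def fib_memo(n: int) -> int:
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    # "...which you can sometimes do iteratively": bottom-up, O(1) extra space.
    def fib_iter(n: int) -> int:
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert fib_memo(30) == fib_iter(30) == 832040

Both run in linear time; the naive recursion without the cache would be exponential.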
Another fun anecdote from that time: we had to take special precautions with the scoring program, which connected to the participants' machines over the serial port. The previous year a participant had hacked the serial port interface to manipulate the results. :-)
The "programming" terminology is in the same vein as "Linear Programming..."
As per Wikipedia, that origin story is somewhat disputed:
"According to Russell and Norvig, the above story "cannot be strictly true, because his first paper using the term (Bellman, 1952) appeared before Wilson became Secretary of Defense in 1953." Also, Harold J. Kushner stated in a speech that, "On the other hand, when I asked [Bellman] the same question, he replied that he was trying to upstage Dantzig's linear programming by adding dynamic. Perhaps both motivations were true."
It seems dynamic programming (some predetermined sequence of steps) as envisioned by Bellman is strictly less dynamic than ordinary programming today, hence the confusion?
Elevating "memoization+recursion" to "dynamic programming" seems like a development that Bellman didn't anticipate
https://cs.stackexchange.com/questions/99513/dynamic-program...
There is another mid-century term that is oddly specific for something oddly general (or is it the other way around?): "Operations Research".
Guess Bellman should have called it "Reactive Optimization", then we would have the totally appropriate "Recursive Memoization" or "Memoized Recursion"
You should think of the “programming” in dynamic programming the same way you think of it in linear programming, integer programming, and constraint programming.
Indeed, even in layman’s terms, thinking of it as in television programming is more accurate than thinking it is related to computer programming (as is mentioned in TFA).
I always find it funny how many meanings "programming" has.
> linear programming, integer programming, and constraint programming
Can't think of a universe in which you'd learn about these things before learning "computer programming".
There was a universe before digital computers were popular or a thing at all.
Computing predates computers.
The sequence of steps is the result of dynamic programming. Not every sequence of steps is a dynamic programme. And (what I would argue are) the core results are fairly mathematical: memoization and recursion don't feature, but partial differential equations do. Check out the Hamilton-Jacobi-Bellman equation to get a flavour.
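For a flavour of that mathematical side, here is one standard finite-horizon form of the Hamilton-Jacobi-Bellman equation in textbook notation (not taken from the comment above: V is the value function, f the dynamics, r the running reward, g the terminal reward):

    \frac{\partial V}{\partial t}(x, t)
      + \max_{u} \Big\{ \nabla_x V(x, t) \cdot f(x, u) + r(x, u) \Big\} = 0,
    \qquad V(x, T) = g(x)

The discrete DP recurrences familiar from contests are, loosely speaking, the discretised counterpart of this PDE.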
Possibly the worst named thing in computing? Not only is "programming" not referring to programming, but "dynamic" is meaningless and just there to try to make it sound fancy.
I prefer "cached recursion".
I actually think overly focusing on the memoized recursion undermines the core conception of dynamic programming. Remember, as noted in the post, the term programming refers to a tabular method of planning, i.e. specifically, the bottom-up calculation used to solve a problem. The emphasis is on a predetermined order in which to tabulate partial results on the way to the desired result, which is the solution to the broader problem.
The memoized recursive function is an implementation methodology on a digital computer that specifically side-steps the question of the order in which to do the tabulation. I would argue that recursive memoization is a possible solution technique for dynamic programming problems (defined as the set of problems that admit DP solutions), but strictly speaking is not, in and of itself, dynamic programming.
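To make the order-of-tabulation point concrete, here is a small bottom-up sketch in Python (longest common subsequence, a standard textbook example, nothing from the post itself): the loop order is fixed up front, so every cell's dependencies already exist when the cell is computed, and no recursion is needed to discover that order.

    def lcs_length(a: str, b: str) -> int:
        # dp[i][j] = length of the LCS of a[:i] and b[:j].
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        # Predetermined tabulation order: increasing i, then increasing j.
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    assert lcs_length("ABCBDAB", "BDCABA") == 4  # classic CLRS example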
I think "cached recursion" is too broad. I tend to go with "Minimal Memory Memoization".
1) the recursion solution is often too slow, but uses little memory.
2) the memoization solution can make the algorithms from 1 a lot faster, but blows up memory use.
3) the dynamic programming solution only keeps in memory those previous partial solutions that will be needed for future partial solutions. Therefore it is the "Minimal Memory Memoized" solution (see the sketch after this comment). It often requires a smart ordering of partial solutions that allows for the earliest cache eviction.
Your "cached recursion" sounds like number 2 to me, and the crux about dynamic programming is to figure out when to remove an entry from your cache.
Well, cached recursion is a bit too generic. Almost anything can be done with both caches and recursion.
We had a different characterisation in one of our mathematical optimisation classes: dynamic programming is basically every problem that is isomorphic to finding the longest path in a graph, with suitable 'overloading' of 'maximum' (for picking among multiple alternative paths) and 'addition' (for combining multiple sub-paths).
First, to illustrate what I mean by this overloading: matrix multiplication usually has multiplication (×) and addition (+). However, in the min-plus algebra you overload (+) with minimum and (×) with plus, and then multiplying matrices becomes equivalent to hopping two paths in the adjacency matrix of a graph. (Sorry, if this is a bit confusing.) Specifically, taking an adjacency matrix A and calculating A* := 1 + A + AA + AAA + ... is equivalent to taking the transitive closure of edge hopping in your graph, i.e. the shortest paths between all pairs of vertices.
If you overload (+) with maximum and (×) with plus instead, you can find longest paths. For that to make sense, you should not have cycles in your graph.
Now let's go back to dynamic programming: the shared optimal sub-structure property of dynamic programming is exactly the same as what you have in finding longest paths.
The structure might sound overly restrictive, but you'll find that it works for all (most?) examples of dynamic programming.
P.S. My recollection was a bit vague, so I asked ChatGPT for some help, and apparently I wasn't completely wrong: https://chatgpt.com/share/687de24c-c674-8009-9984-0fda56d1c1...
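A small Python sketch of the min-plus trick above (the toy graph and numbers are mine): overload (+) with min and (×) with +, and repeated matrix squaring computes all-pairs shortest paths.

    import math

    INF = math.inf

    def min_plus_multiply(A, B):
        # Matrix "product" in the min-plus algebra:
        # (+) becomes min, (x) becomes +.
        n = len(A)
        return [[min(A[i][k] + B[k][j] for k in range(n))
                 for j in range(n)]
                for i in range(n)]

    # Adjacency matrix: A[i][j] is the weight of edge i -> j,
    # 0 on the diagonal, INF where there is no edge.
    A = [[0,   3,   INF],
         [INF, 0,   1],
         [7,   INF, 0]]

    # Each squaring doubles the maximum path length covered, so
    # a logarithmic number of squarings suffices.
    D = A
    for _ in range(2):
        D = min_plus_multiply(D, D)

    assert D[0][2] == 4  # path 0 -> 1 -> 2 costs 3 + 1; no direct edge exists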
Also, the term "dynamic programming" has slightly different meanings in the context of leetcode problems and in the context of reinforcement learning.
In the first it's for optimizing a relatively small deterministic system, in the latter it is for optimizing a stochastic process.
Pretty confusing.
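To illustrate the reinforcement-learning sense (a toy MDP with made-up numbers): dynamic programming there means sweeping the Bellman optimality backup over all states of a stochastic process until the value function converges.

    # P[s][a] = list of (probability, next_state, reward) outcomes.
    P = {
        0: {0: [(1.0, 0, 0.0)],
            1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
        1: {0: [(1.0, 1, 2.0)],
            1: [(1.0, 0, 0.0)]},
    }
    gamma = 0.9  # discount factor

    # Value iteration: repeatedly apply the Bellman optimality backup.
    V = {s: 0.0 for s in P}
    for _ in range(1000):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
             for s in P}

    # State 1 can earn reward 2 forever: V[1] converges to 2 / (1 - 0.9) = 20.
    assert abs(V[1] - 20.0) < 1e-6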
I first learned about Dynamic Programming in the algorithms and data structures class in my first semester at uni. But the lecturer very quickly moved on to only ever refer to it as "DP". This makes me think he may have done that so as not to confuse students.
Even though I knew it wasn't referring to programming I did wonder why it was called that, interesting!
> Try thinking of some combination that will possibly give [the word, dynamic] a pejorative meaning. It’s impossible*.
> *This Haskell fan’s contender is “dynamic typing” :P
Nothing is impossible!
There's also dynamic scoping; as opposed to lexical scoping a.k.a. static scoping.
You can find defenders of dynamic typing, but dynamic scope is now widely seen as a mistake. Or at least dynamic scope by default; it has specialised uses. In Haskell, the Reader monad is basically isomorphic to dynamic scoping and no one complains about it.
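A tiny Python illustration of the contrast (Python itself is lexically scoped, so the dynamic behaviour can only be described in a comment):

    x = "global"

    def show():
        # Lexical (static) scoping: x resolves to the binding visible
        # where show() was *defined*, i.e. the module-level x.
        print(x)

    def caller():
        x = "caller's local"  # never seen by show() under lexical scoping
        show()

    caller()  # prints "global"
    # Under dynamic scoping, show() would search the *call* stack instead
    # and print "caller's local".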
> dynamic scoping
Right you are. Even more horrible. The tcl language still has it!
I think Emacs Lisp also does dynamic scoping. Or at least it did about ten years ago. Not sure if they've fixed it?
It’s certainly possible to use it as a euphemism, if not an outright pejorative. “Dynamic truthfulness”, “dynamic sense of morality”, etc.
Dynamic optimality is a way more interesting concept to me. Data structures like the splay tree can effectively write the "ideal program" for accessing data based on the current use patterns by maintaining the data in a particular way over time. Instructions and data are mostly the same thing when you really think about it.
The article would probably be more complete with references to the similar terms "linear program" and "integer program", unless those carry a slightly different meaning?
I love that origin story.
Same for "linear programming".
Reminds me of "Extreme Programming" [0], which is referring to project programming rather than computer programming.
[0] https://en.m.wikipedia.org/wiki/Extreme_programming
Compare with "Extreme Ironing": https://www.ebaumsworld.com/pictures/23-insane-examples-of-e...
If it weren't for the fact that the page is 10 years old, I would've thought those were AI-generated images at first glance.