Imagine a simple (made-up) language where functions look like this:
function f(a, b) = c + 42 where c = a * b
(Let's say this is sugar for a subset of Lisp that includes "defun" and "let".)
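For concreteness, here is a minimal sketch of how that definition might desugar (Common Lisp, names purely illustrative):

(defun f (a b)
  (let ((c (* a b)))  ; "where c = a * b" becomes a LET binding
    (+ c 42)))        ; the body "c + 42"

;; (f 3 4) => 54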
Also imagine that it includes immutable objects that look like this:
struct s(a, b, c = a * b)
Again, to continue the analogy with Lisp (this time a superset of it), let's say that such a struct definition generates these functions:
make-s(a, b) sa(s) sb(s) sc(s)
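As it happens, Common Lisp's "defstruct" can express something close to this via a so-called BOA constructor, where an &AUX parameter computes the derived slot at instantiation time (a sketch; the generated accessors would actually be named s-a, s-b, s-c rather than sa, sb, sc):

;; The &AUX parameter C is computed from A and B when MAKE-S is called.
(defstruct (s (:constructor make-s (a b &aux (c (* a b)))))
  a b c)

;; (s-c (make-s 3 4)) => 12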
Now, given this simple setup, it seems clear that there are strong similarities between what happens behind the scenes when you invoke f and when you invoke make-s. Once "a" and "b" are passed in at invocation/instantiation time, enough information is available to compute "c".
You could think of creating a struct instance as making a function call and then saving the resulting binding environment for later use by the generated accessor functions. Or you could think of evaluating a function as creating a hidden struct and then using it as the binding environment in which to evaluate the final result expression.
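The first view can be made literal in any language with closures. A minimal Common Lisp sketch, with illustrative names: "instantiation" is an ordinary function call, and the "instance" is just the closure's captured environment:

;; MAKE-S-CLOSURE is a plain function; the closure it returns keeps
;; A, B, and the computed C alive as its captured environment.
(defun make-s-closure (a b)
  (let ((c (* a b)))
    (lambda (slot)
      (ecase slot
        (a a)
        (b b)
        (c c)))))

;; The "accessors" just re-enter that saved environment:
(defun sa (s) (funcall s 'a))
(defun sb (s) (funcall s 'b))
(defun sc (s) (funcall s 'c))

;; (sc (make-s-closure 3 4)) => 12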
Is my toy model too simplistic to be useful, or is it a genuinely helpful way to think about how real languages work? Are there any real languages/implementations that someone without a CS background but with an interest in programming languages (i.e. me) should study to learn more about this concept?
Thanks.
EDIT: Thanks for the answers so far. To sharpen the question a bit: I'm wondering whether there are any real languages where people fluent in the language would say "you should think of objects as essentially closures." Or whether there are real language implementations where object instantiation and function invocation actually share some substantial (non-trivial, i.e. not just library calls) code or data structures.
In other words: is the analogy I'm drawing recognized, in any real setting, as something deeper than just an analogy?