Assuming our brains have no access to a metaphysical cloud server, meaning is represented as a configuration of neural connections, hormone levels, electrical activity (possibly even quantum fluctuations) and the interactions between all of this, the outside world, and other brains. So this is the good news: at least we know there is at least one answer to your question (meaning is represented somewhere, somehow). The bad news is that most of us have no idea how this works, and those who believe they understand have not managed to convince the others, or each other. Being one of the ignorant, I cannot answer your question, but I can list some answers I have come across to smaller, degenerate versions of the big problem.
If you want to represent the meaning of lexical items (e.g. concepts, actions), you can use distributional models, such as vector space models. In these models, meaning usually has a geometric component. Each concept is represented as a vector, and you place concepts in the space so that similar concepts lie closer to each other. A very simple way to build such a space is to choose a set of frequently used words (basis words) as the dimensions of the space, and simply count how many times the target concept is observed together with each basis word in speech/text. Similar concepts are used in similar contexts, so their vectors will point in similar directions. On top of this, you can apply a variety of weighting, normalization, dimensionality-reduction, and recombination techniques (e.g. tf-idf, http://en.wikipedia.org/wiki/Pointwise_mutual_information , SVD). A somewhat related, but probabilistic rather than geometric, approach is latent Dirichlet allocation and the other generative/Bayesian models already mentioned in another answer.
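As a minimal sketch of the counting step, here is what building such co-occurrence vectors might look like. The corpus, the basis words, and the window size are all invented toy choices; a real system would use a large corpus and select basis words by frequency:

```python
from collections import Counter

# Tiny invented corpus; in practice you would use millions of sentences.
corpus = [
    "the cat chased the mouse across the floor",
    "the dog chased the cat around the yard",
    "the cat ate the fish near the bowl",
    "the dog ate the bone in the yard",
    "the car drove down the road at night",
    "the truck drove along the road slowly",
]

# Basis words picked by hand here; normally chosen as frequent content words.
basis = ["chased", "ate", "drove", "road", "yard"]

def cooccurrence_vector(target, sentences, basis, window=4):
    """Count how often `target` occurs within `window` words of each basis word."""
    counts = Counter()
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            if w != target:
                continue
            context = words[max(0, i - window): i + window + 1]
            for b in basis:
                counts[b] += context.count(b)
    return [counts[b] for b in basis]

for target in ["cat", "dog", "car"]:
    print(target, cooccurrence_vector(target, corpus, basis))
```

Note that "cat" and "dog" end up with weight on the same dimensions ("chased", "ate"), while "car" accumulates counts on "drove" and "road", so their vectors already point in different directions before any weighting is applied.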
The vector space approach is good for discriminative tasks. You can decide whether two phrases are semantically related or not (e.g. matching queries to documents, or finding similar pairs of search queries to help the user expand their query). But it is not easy to incorporate syntax into these models, and it is not at all obvious how to represent the meaning of a full sentence as a vector.
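The discriminative decision usually comes down to comparing vector directions, typically with cosine similarity. A small sketch, using made-up co-occurrence vectors over a hypothetical basis:

```python
import math

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Hypothetical count vectors over the basis ["chased", "ate", "drove", "road"].
vectors = {
    "cat": [2, 1, 0, 0],
    "dog": [2, 2, 0, 0],
    "car": [0, 0, 1, 1],
}

print(cosine(vectors["cat"], vectors["dog"]))  # high: semantically related
print(cosine(vectors["cat"], vectors["car"]))  # zero: unrelated
```

A threshold on this score gives you a related/unrelated decision; what it does not give you is any account of word order or syntactic structure.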
Grammar formalisms could help incorporate syntax and bring structure to meaning and to the relationships between concepts (e.g. head-driven phrase structure grammar). If you build two agents that share a vocabulary and a grammar, and get them to communicate (i.e. transfer information to one another) by means of these mechanisms, you could say that they convey meaning. It is rather a philosophical question where and how the meaning is represented when a robot tells another to pick the “red circle above the black box” via a built-in or emergent grammar and vocabulary, and the other successfully picks the intended object (see this very interesting experiment on grounding vocabulary: Talking Heads).
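To make the "structure to meaning" point concrete, here is a toy recursive-descent parser for a tiny invented phrase-structure grammar, just enough to analyze "red circle above the black box". This is only an illustration of the idea of structured representations, not a fragment of any real formalism like HPSG:

```python
# Invented toy lexicon mapping words to parts of speech.
LEXICON = {
    "red": "Adj", "black": "Adj",
    "circle": "N", "box": "N",
    "above": "P", "below": "P",
    "the": "Det",
}

def parse_np(tokens, i):
    """NP -> (Det) Adj* N (PP). Returns (tree, next_index) or None."""
    node = ["NP"]
    if i < len(tokens) and LEXICON.get(tokens[i]) == "Det":
        node.append(("Det", tokens[i])); i += 1
    while i < len(tokens) and LEXICON.get(tokens[i]) == "Adj":
        node.append(("Adj", tokens[i])); i += 1
    if i < len(tokens) and LEXICON.get(tokens[i]) == "N":
        node.append(("N", tokens[i])); i += 1
    else:
        return None
    pp = parse_pp(tokens, i)
    if pp:
        subtree, i = pp
        node.append(subtree)
    return node, i

def parse_pp(tokens, i):
    """PP -> P NP."""
    if i < len(tokens) and LEXICON.get(tokens[i]) == "P":
        np = parse_np(tokens, i + 1)
        if np:
            subtree, j = np
            return ["PP", ("P", tokens[i]), subtree], j
    return None

tree, end = parse_np("red circle above the black box".split(), 0)
print(tree)
```

The resulting tree makes the relation explicit: the circle, not the box, is the head of the phrase, and "above the black box" modifies it. That relational structure is exactly what a flat vector of co-occurrence counts cannot express.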
Another way to represent meaning is to use networks. For example, by representing each concept as a node in a graph and the relations between concepts as edges between nodes, you can arrive at a practical notion of meaning. ConceptNet is a project that aims to represent common-sense knowledge, and it can be viewed as a semantic network of common-sense concepts. In a sense, the meaning of a particular concept is represented through its position relative to the other concepts in the network.
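A minimal sketch of this idea, with a hand-built graph in the spirit of ConceptNet (the concepts and relation labels here are invented, not taken from the actual ConceptNet data). One crude way to operationalize "position relative to other concepts" is graph distance:

```python
from collections import deque

# Tiny invented semantic network: node -> [(relation, neighbor), ...].
network = {
    "dog":     [("IsA", "animal"), ("CapableOf", "bark")],
    "cat":     [("IsA", "animal"), ("CapableOf", "meow")],
    "animal":  [("CapableOf", "eat")],
    "car":     [("IsA", "vehicle")],
    "vehicle": [],
    "bark": [], "meow": [], "eat": [],
}

def distance(a, b):
    """Shortest number of edges from concept a to concept b (BFS, ignoring
    relation labels); a rough proxy for relatedness in the network."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for _, nxt in network.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # not connected

print(distance("dog", "animal"))   # 1
print(distance("dog", "eat"))      # 2
print(distance("dog", "vehicle"))  # None
```

Real systems use richer measures than raw path length (weighting edges by relation type, for instance), but the principle is the same: a concept means what its neighborhood says it means.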
Speaking of common sense, Cyc is another ambitious project that tries to capture common-sense knowledge, but it does so in a very different way from ConceptNet. Cyc uses a well-defined symbolic language to represent the attributes of objects and the relationships between objects in an unambiguous way. Using a very large set of rules and concepts together with an inference engine, you can draw conclusions about the world and answer questions such as “Can a horse get hurt?”, or satisfy requests such as “Bring me a picture of a sad person.”
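To illustrate the flavor of this symbolic approach, here is a miniature forward-chaining inference engine over invented facts. This is not Cyc's actual language (CycL) or engine, just a sketch of how a question like “Can a horse get hurt?” can be answered by deduction rather than lookup:

```python
# Invented fact triples: (relation, subject, object).
facts = {
    ("IsA", "horse", "mammal"),
    ("IsA", "mammal", "animal"),
    ("HasProperty", "animal", "can_get_hurt"),
}

def infer(facts):
    """Forward chaining with two rules, repeated until no new facts appear:
    1. IsA is transitive:            IsA(a,b) & IsA(b,c)         => IsA(a,c)
    2. properties inherit down IsA:  IsA(a,b) & HasProperty(b,p) => HasProperty(a,p)
    """
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for (r1, a, b) in facts:
            for (r2, c, d) in facts:
                if r1 == "IsA" and r2 == "IsA" and b == c:
                    new.add(("IsA", a, d))
                if r1 == "IsA" and r2 == "HasProperty" and b == c:
                    new.add(("HasProperty", a, d))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

kb = infer(facts)
# "Can a horse get hurt?" becomes a lookup in the deduced knowledge base:
print(("HasProperty", "horse", "can_get_hurt") in kb)  # True
```

Nothing in the original facts says horses can get hurt; the answer follows from the chain horse → mammal → animal plus property inheritance, which is the essential move that systems like Cyc perform at vastly larger scale.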