Programming difficulty

Is there an objective measure of a programming language's complexity in terms of its syntax and semantics, as opposed to how hard the language is to use?

I have read plenty of subjective comments, but no rigorous analysis.

+6
complexity-theory programming-languages
11 answers

See denotational semantics and operational semantics:

Denotational semantics is an approach to formalizing the meanings of programming languages by constructing mathematical objects (called denotations) that describe the meanings of expressions in those languages.

Operational semantics for a programming language describes how a valid program is interpreted as a sequence of computational steps. These sequences then give the meaning of the program. In the context of functional programs, the final step in a terminating sequence returns the value of the program. (In general, there can be many return values for a single program, because the program can be nondeterministic, and even for a deterministic program there can be many computation sequences, since the semantics may not specify exactly which sequence of operations arrives at that value.)
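As a toy illustration of the operational view (my own sketch, not part of the cited definitions): for a tiny arithmetic language, the "meaning" of a program can be given as the sequence of reduction steps it produces.

```python
# A toy small-step operational semantics for arithmetic expressions.
# A program is a nested tuple like ('+', 1, 2); its operational meaning
# is the sequence of reduction steps down to a value.

def step(expr):
    """Perform one computational step."""
    if isinstance(expr, int):
        return None  # already a value: no further steps
    op, left, right = expr
    if not isinstance(left, int):
        return (op, step(left), right)   # reduce the left operand first
    if not isinstance(right, int):
        return (op, left, step(right))   # then the right operand
    return left + right if op == '+' else left * right

def trace(expr):
    """The full sequence of steps - the meaning of the program."""
    steps = [expr]
    while not isinstance(expr, int):
        expr = step(expr)
        steps.append(expr)
    return steps

# ('+', ('*', 2, 3), 4) reduces: (2*3)+4 -> 6+4 -> 10
print(trace(('+', ('*', 2, 3), 4)))
```

Note the parenthetical above: if `step` could pick either operand, the same program would admit several distinct traces.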

+4

The size of a language's BNF grammar is a crude measure - just to get a taste :-)

A few examples,
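To make the "crude measure" concrete (my own sketch, using a tiny made-up grammar fragment, not a real language's grammar), one could simply count nonterminals and alternatives in a BNF-like description:

```python
# A crude sketch: compare grammar sizes by counting productions in a
# BNF-like string. The grammar below is a tiny, made-up calculator
# fragment, not a real language grammar.

def grammar_size(bnf):
    """Return (number of nonterminals, total number of alternatives)."""
    rules = [line for line in bnf.strip().splitlines() if "::=" in line]
    alternatives = sum(line.split("::=")[1].count("|") + 1 for line in rules)
    return len(rules), alternatives

calc = """
expr   ::= term | expr "+" term
term   ::= factor | term "*" factor
factor ::= NUMBER | "(" expr ")"
"""

print(grammar_size(calc))  # (3 nonterminals, 6 alternatives)
```

Running the same count over two real grammars would give a first, very rough, ranking.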

+10

It’s not clear to me that complexity is even a well-defined term in relation to a programming language.

If “objective” means “quantitative,” you can ask questions such as

  • How big is an unambiguous grammar for the language?

  • How big is a working yacc grammar?

Since almost no language has a formal semantics, it is hard to do quantitative studies there. But you could ask:

  • How large is the simplest interpreter for the language, relative to interpreters for other languages written in the same metalanguage (the language the interpreter is written in)? This measure is loosely related to Kolmogorov complexity.
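One way to make that last measure concrete (a sketch of mine, with Python as the fixed metalanguage and a deliberately minimal toy language): write the simplest interpreter you can, then use its source size as the score.

```python
import inspect

# Sketch of the "smallest interpreter" measure: implement the simplest
# interpreter for the language in a fixed metalanguage (here Python),
# then compare source sizes across languages. This toy interpreter
# handles a minimal prefix-arithmetic language only.

def evaluate(prog):
    """Interpret a prefix-notation program, e.g. ['+', 1, ['*', 2, 3]]."""
    if isinstance(prog, int):
        return prog
    op, a, b = prog
    a, b = evaluate(a), evaluate(b)
    return a + b if op == '+' else a * b

# The language's "complexity score" under this measure:
score = len(inspect.getsource(evaluate))
print(evaluate(['+', 1, ['*', 2, 3]]), score)
```

A richer language would force a longer `evaluate`, which is exactly what the measure is meant to capture; the Kolmogorov connection is that we want the *shortest* such interpreter, which is uncomputable in general.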

Beyond curiosity, it is not clear to me that this question is worth asking - it is hard to give useful answers.

+6

The best measure of language complexity I've seen is the probability that a random string will be a valid program. Perl ranks high on this scale; Ada ranks rather low.

What this metric actually tells you is another question.
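This metric is easy to estimate by Monte Carlo (my own sketch, applied to Python itself; the alphabet and string length are arbitrary choices that strongly affect the number you get):

```python
import random

# Monte Carlo estimate of "probability that a random string is a valid
# program", using Python's own parser via compile(). Alphabet and
# length are arbitrary experimental choices.

def validity_rate(trials=10000, length=5, alphabet="abc123+-*() ="):
    random.seed(0)  # reproducible
    valid = 0
    for _ in range(trials):
        s = "".join(random.choice(alphabet) for _ in range(length))
        try:
            compile(s, "<random>", "exec")
            valid += 1
        except SyntaxError:
            pass
    return valid / trials

rate = validity_rate()
print(f"{rate:.2%} of random strings parsed as Python")
```

Comparing the same experiment across languages would need each language's parser hooked in the same way, and the result still depends heavily on the chosen alphabet.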

+3

Usually, the more dynamic and abstract the syntax, semantics, or implementation, the more complex the language (not to use, as you stated).

On this basis, Java is a more complex language than C because:

  • C has simple scoping rules compared to Java's complex ones.
  • Java's types, method resolution, and overloading are more complex.
  • Things like inheritance, argument checking, and method overloading make the compilation process more complicated.

I would say that Python is simpler than Java on this basis, because its object model, although complex, reduces readily to a simpler form. The ease, in time and computation, with which a language's syntax can be translated into a simpler form could also be an angle.

On the other hand, some argue that a language such as Lisp is difficult to use but semantically very simple. The same goes for languages like Haskell.

You could measure complexity in some of the following (incomplete) ways:

  • The number of keywords, lines of code, and the semantic complexity (for example, identifier resolution) needed for a simple task - computing Fibonacci numbers could be one. Or comparing reasonably efficient implementations of common algorithms.
  • When do things happen? Are names bound late, at runtime, or resolved at compile time?
  • Can a given piece of code be read in more than one way when not all the facts about identifiers, types, and external code are given?
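The "when do things happen?" point can be seen directly in Python (my own illustration): names are looked up at call time, so rebinding a name at runtime changes the behavior of code written earlier.

```python
# Illustrating late binding: in Python the name 'helper' is resolved
# each time greet() runs, not when greet() is defined.

def greet():
    return helper()          # looked up at call time

def helper():
    return "hello"

first = greet()

def helper():                # rebinding the name at runtime
    return "goodbye"

second = greet()             # same call site, different result
print(first, second)
```

In a statically resolved language, the equivalent of the second definition would either be a compile error or leave the first call target untouched, which is one concrete sense in which "when things happen" differs between languages.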

There are a ton of ways. You could also measure the computational complexity of the compilation process for a given syntax.

Not all of these measures hold up, though. Some object models are very complex but very fast because they are built on a fast foundation. Io could be an example.

+3

I like Project Euler for evaluating this. :)

+1

The two simplest objective things to look at would be the number of defined symbols and keywords/reserved words, and the number of productions in its BNF.

Another thing you might look at, for languages that have them, is the relative sizes of their standards documents. Though some would argue that not all standards documents are written at the same level of detail.

+1

I think that if you look at the field of correctness proofs, you will find more detailed analyses of semantic complexity. Systems such as CSP (and, to a lesser extent, the lambda calculus) are designed to be amenable to this kind of analysis. The closer a language comes to expressing the underlying formal system, the simpler it is from a semantic point of view.

A counterexample would be the C programming language. It is impossible to determine exactly what a C program does without knowing which OS and hardware it will run on.

+1

As other users have noted, the keyword count is one objective measure of how complex a programming language can be. The grammar/syntax describes how complex the code structure can get (the valid combinations of keywords). And there are software-quality code metrics for determining how complex a given piece of code is.
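Both measures are easy to compute mechanically; as a sketch (mine, applied to Python), the standard library exposes the keyword list, and a crude per-snippet metric can count keyword tokens:

```python
import io
import keyword
import tokenize

# Two mechanical measures for Python: the language's keyword count,
# and a crude per-snippet metric counting keyword tokens as a proxy
# for structural complexity.

def keyword_tokens(source):
    """Count keyword tokens in a piece of Python source code."""
    toks = tokenize.generate_tokens(io.StringIO(source).readline)
    return sum(1 for t in toks
               if t.type == tokenize.NAME and keyword.iskeyword(t.string))

snippet = "for i in range(3):\n    if i % 2 == 0:\n        print(i)\n"
print(len(keyword.kwlist), keyword_tokens(snippet))  # snippet has 3 keywords
```

The keyword-count part generalizes to any language with a published reserved-word list; the token metric needs a lexer for each language being compared.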

Semantic complexity seems harder to measure. It relates to expressive power (the higher-level a programming language is, the more expressive power it has). I do not see a sound way to compare different solutions implemented in different languages as a measure of their expressive power (for example, by comparing Project Euler solutions): the problem itself and each programming-language paradigm can bias the comparison.

In the case of high-level programming languages, I assume that the number of possible implementations for a particular task (from an abstract point of view) is a good measure of semantic complexity.

In the case of low-level programming languages, it might be interesting to see how code is generated from specification languages (finding an implementation, i.e. a solution, for a given problem). Anyway, because of the limited abstraction, this measure seems closely related to software-quality code metrics.

As you can see, the more abstraction and semantic complexity, the harder it is to automate (to generate an implementation, a solution, given a specification). That is where programmer knowledge, intelligence, and psychology come in (and AI has not reached that point).

+1

How complicated a language is to use is somewhat subjective.

On the other hand, the question of how complicated a language's semantics are can be answered, but only by comparison with other languages. Even then it is not necessarily useful. For example, I would give Smalltalk a semantic complexity of 1 and C++ a complexity of 9. Yet I would bet anything that the browser you are reading this in is written in C++, not Smalltalk.

0

If such an objective measure exists, it is probably near useless for assessing the usefulness or cost of using a given language for a given purpose. It might help you rule out the likes of Brainfuck or Whitespace, but you can do that just as easily without spending resources on an objective measure: by subjectively glancing at some source code and realizing you would never want to do serious work in it.

Most languages will have many positive and negative factors that you have to weigh against the goal you are trying to achieve and the constraints you have to work under.

0
