What are the data structures at the lowest level?

I watched the SICP lecture in which Sussman demonstrates how Scheme's cons, car, and cdr could be implemented using nothing but procedures.

It went something like this:

 (define (cons x y) (lambda (m) (m x y)))
 (define (car z) (z (lambda (p q) p)))

It got me thinking: what is a data structure? When a language is created, are its data structures implemented as abstractions built out of procedures? And if they really are just procedures, which procedures sit at the very lowest level?

I suppose what I really want to know is what lies at the bottom of the chain of abstraction (unless it is abstraction all the way down). At what point does it become hardware?
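For concreteness, here is the pair-as-procedure trick from the lecture transcribed into Python (the transcription is mine; Python's lambda behaves much like Scheme's here):

```python
# A pair IS a procedure: cons captures x and y, and waits for a
# selector m that decides which one to hand back.

def cons(x, y):
    return lambda m: m(x, y)

def car(z):
    return z(lambda p, q: p)   # selector that picks the first component

def cdr(z):
    return z(lambda p, q: q)   # selector that picks the second component

pair = cons(1, 2)
print(car(pair))  # → 1
print(cdr(pair))  # → 2
```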

+7
data-structures abstraction scheme sicp
4 answers

The trick is that you don't need to care. cons, car, and cdr are abstractions, so their underlying implementation does not matter.

What you have there is called Church encoding: building everything out of functions. On modern machines we build everything out of 1s and 0s instead, but it really doesn't matter.
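To make "building everything out of functions" concrete, here is a sketch of Church encoding for booleans and numerals in Python; the helper names are my own, not standard:

```python
# Church booleans: a boolean is a function that chooses between two values.
true = lambda t, f: t
false = lambda t, f: f

def if_(cond, then, else_):
    return cond(then, else_)   # note: both branches are evaluated eagerly

# Church numerals: the number n is "apply f to x, n times".
zero = lambda f: lambda x: x

def succ(n):
    return lambda f: lambda x: f(n(f)(x))

def to_int(n):
    return n(lambda k: k + 1)(0)   # count the applications

three = succ(succ(succ(zero)))
print(to_int(three))            # → 3
print(if_(true, "yes", "no"))   # → yes
```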

Now, if you are curious how these abstractions are implemented in your specific implementation, it depends. In all likelihood your compiler/interpreter is jumping through hoops behind the scenes: allocating cons cells as tightly packed pairs of pointers (or the like), and turning your functions into 0s and 1s by generating the appropriate machine code and pairing it with a pointer to its environment.
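As a rough illustration, and not what any particular compiler actually does, here is a sketch in Python of those two ideas: a cons cell as a packed pair of fields, and a closure as code plus a pointer to its environment (all names here are mine):

```python
# A cons cell as a tightly packed pair: just two pointer-sized fields.
class ConsCell:
    __slots__ = ("car", "cdr")
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

# A closure as a pointer to code plus a pointer to its environment.
class Closure:
    def __init__(self, code, env):
        self.code = code   # stand-in for compiled machine code
        self.env = env     # the captured environment
    def __call__(self, *args):
        return self.code(self.env, *args)

# (define (adder n) (lambda (x) (+ x n))), "compiled" by hand:
# the inner lambda's body becomes code that reads n from its environment.
def adder(n):
    return Closure(lambda env, x: x + env["n"], {"n": n})

add5 = adder(5)
print(add5(3))  # → 8
```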

But, as I said, the whole beauty of building these abstractions is that, as a user, you don't need to care :)

+8

The substrate at the bottom of the abstraction chain is hardware-dependent, but it will probably be some kind of register machine with an instruction set of its own.

SICP's last chapter is all about this layer. There is no real hardware in the book, only an abstract register machine (turtles all the way down!) that resembles what might actually be going on... and which, naturally, is itself modeled in Scheme, just for extra meta-circular fun. I found it difficult but worthwhile.
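In the same spirit, here is a toy register-machine simulator sketched in Python; the instruction names are my own shorthand, not SICP's exact syntax:

```python
# A register machine: named registers, a program counter, and a tiny
# instruction set (label / assign / branch_zero / goto).

def run(program, regs):
    labels = {ins[1]: i for i, ins in enumerate(program) if ins[0] == "label"}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "assign":            # ("assign", dst, fn, src_registers)
            dst, fn, srcs = args
            regs[dst] = fn(*[regs[r] for r in srcs])
        elif op == "branch_zero":     # jump to label if register is zero
            reg, target = args
            if regs[reg] == 0:
                pc = labels[target]
                continue
        elif op == "goto":
            pc = labels[args[0]]
            continue
        pc += 1                       # labels just fall through
    return regs

# Euclid's algorithm: while b != 0: a, b = b, a mod b
gcd_program = [
    ("label", "loop"),
    ("branch_zero", "b", "done"),
    ("assign", "t", lambda a, b: a % b, ["a", "b"]),
    ("assign", "a", lambda b: b, ["b"]),
    ("assign", "b", lambda t: t, ["t"]),
    ("goto", "loop"),
    ("label", "done"),
]
print(run(gcd_program, {"a": 206, "b": 40, "t": 0})["a"])  # → 2
```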

If you are interested in such things, you might want to check out Knuth.

+2

There are several possible bottoms for the chain of abstraction. Depending on the task at hand, a computer scientist can pick whichever fits best. The most common are the Turing machine and the lambda calculus.
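For a taste of the Turing-machine bottom, here is a minimal sketch in Python: a transition table driving a tape head. The machine itself is made up for illustration; it just flips bits until it runs off the input:

```python
# A Turing machine is a transition table: (state, symbol) -> (state, write, move).

def run_tm(tape, rules, state="start"):
    tape = dict(enumerate(tape))      # sparse tape; "_" means blank
    pos = 0
    while state != "halt":
        symbol = tape.get(pos, "_")
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return [tape[i] for i in sorted(tape) if tape[i] != "_"]

# Flip every bit, then halt on the first blank.
rules = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(list("0110"), rules))  # → ['1', '0', '0', '1']
```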

0

Cool question!

It becomes hardware when someone hands it to the processor (CPU). If you have to read the chip manufacturer's manual to figure out how to do what you want to do, you are at the level you are describing.

[image: chipset (source: micro-examples.com)]

Here is a brief overview of the path from physics to hardware to user-facing code: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-002-circuits-and-electronics-spring-2007/video-lectures/lecture-1/ — although I don't think it relates to the lambda calculus. (But the lecture is just what inspired your question, not the question itself, right?)

There is a step where whoever implements the language has to interact with the processor's instructions, for example https://en.wikipedia.org/wiki/X86_instruction_listings and https://en.wikipedia.org/wiki/X86_calling_conventions .

Look into kernel programming, or think about embedded systems (in a microwave, on the wing of an airplane), ARM chips, or mobile devices: that is where people program right up against the hardware.

People who write BLAS (linear algebra libraries) sometimes drop down to that level of detail. For example, https://en.wikipedia.org/wiki/Math_Kernel_Library or https://en.wikipedia.org/wiki/Automatically_Tuned_Linear_Algebra_Software . When they say that BLAS is "tuned", what do they mean? They mean finding out facts about your particular processor and changing what the inner-inner-inner loops do so that less time is wasted on how the physical hardware is laid out.
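As a toy illustration of what "tuning" means (a sketch, not real BLAS code): the two functions below compute the same matrix product, but iterating in an order that matches the row-major memory layout is the flavor of change that tuning automates. Real tuning also picks block sizes matched to cache sizes:

```python
# Naive order: the k loop strides down B's columns, which is
# cache-unfriendly for row-major storage.
def matmul_ijk(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Reordered: the inner j loop walks B's rows contiguously.
def matmul_ikj(A, B, n):
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            for j in range(n):
                C[i][j] += a * B[k][j]
    return C

n = 8
A = [[float(i + j) for j in range(n)] for i in range(n)]
B = [[float(i - j) for j in range(n)] for i in range(n)]
assert matmul_ijk(A, B, n) == matmul_ikj(A, B, n)  # same arithmetic, same result
```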

[image: north bridge]

If I remember correctly, high-level programming languages (for example, C ;)) make no assumptions about which system they will run on, so they make generic calls that are roughly ten times slower than if they knew in advance which call to make. Every. Single. Time. This is something that can drive you crazy, but it is the classic trade-off between engineering effort and end-user time. I suppose that if you become a kernel programmer or an embedded-systems programmer, you can help put an end to all the clock cycles wasted on computers around the world; CPUs heat up wasting all that time. (Although obviously worse things are happening in the world.)

†: I just did a quick check on how large the BLAS speedups are, and yes, it can be around 15 or 20×. So I don't think I'm exaggerating or misremembering what I heard about wasted cycles. By the way, something similar happens in electricity generation: the last step (the turbine) is only about 20% efficient. Doesn't all this waste drive you crazy?! Time to become an engineer. ;)


A cool project you can check out is MenuetOS: an operating system written entirely in assembly.

Even more encouraging may be the guy who says that x86 assembly is actually quite easy and fun to learn. (!)

[image: punch-card programmer (source: iqsdirectory.com)]

You can also think back to the old days, when there was less distance between software and hardware (punch-card programming, for example). Fortunately, people wrote "higher-level" languages that are more like the way people speak and think, and less like moving a tape around. Data structures may not be the most obvious thing to arise from everyday conversation, but they are more abstract than [a sequence of GOTO and STORE instructions...].

HTH

0
