Actually, I think you do understand. First, the overarching question: how does the hardware work? The hardware runs on machine code (machine instructions, whatever term you prefer). You correctly described assembly as a representation of that machine code, and the relationship is not always, but usually close to, one to one: one asm instruction to one machine instruction. Given those bits, ones and zeros, the hardware can then perform the actions the bits describe.
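To make that one-to-one relationship concrete, here is a minimal sketch in Python, using the ARM Thumb "MOVS Rd, #imm8" encoding as the example instruction (the choice of ISA is mine, not from the question):

    # Thumb MOVS Rd, #imm8 is encoded as: 001 00 ddd iiiiiiii
    def encode_movs_imm(rd, imm8):
        assert 0 <= rd <= 7 and 0 <= imm8 <= 255
        return (0b00100 << 11) | (rd << 8) | imm8

    print(hex(encode_movs_imm(0, 5)))  # movs r0, #5  ->  0x2005

An assembler's whole job is that last step: textual instruction in, the corresponding bit pattern out.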
Now, how was the first assembler written? With pencil and paper. Typically you write the instructions in some kind of pseudo-assembly, since the language may not be fully defined yet, and then write out the bits based on the encoding, exactly as an assembler would. Then, by some mechanism, you load those bits into the computer and tell it to start.
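Continuing the sketch above: hand assembly just means you do that encoding yourself on paper, write down the resulting words, and then get them into memory somehow. The file name and loader here are purely illustrative:

    # Words computed by hand on paper (Thumb encodings as above).
    program = [
        0x2005,  # movs r0, #5   -> 001 00 000 00000101
        0x3001,  # adds r0, #1   -> 001 10 000 00000001
    ]

    # Dump the raw bits so some loader (ROM burner, monitor program,
    # front-panel switches in the old days) can put them in memory.
    with open("program.bin", "wb") as f:
        for word in program:
            f.write(word.to_bytes(2, "little"))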
Eventually, of course, this becomes tedious for larger programs, so you write a larger program that parses a language that is easier to write in, and then repeat the process with ever more complex languages and programs.
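A toy version of that "larger program", again assuming only the two Thumb instructions used above (a real assembler adds labels, directives, error handling, and the rest of the instruction set):

    # Turn the easier-to-write text form into the same bits you
    # produced by hand.
    def assemble(line):
        mnemonic, rd, imm = line.replace(",", "").split()
        op = {"movs": 0b00100, "adds": 0b00110}[mnemonic]
        return (op << 11) | (int(rd.lstrip("r")) << 8) | int(imm.lstrip("#"))

    source = ["movs r0, #5", "adds r0, #1"]
    print([hex(assemble(l)) for l in source])  # ['0x2005', '0x3001']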
Even today, depending on the team and how they work, and on the individual engineer testing the instruction decoder and so on, hand-written machine code still happens. Eventually the assembler is created and you switch to it, and sometimes a higher-level compiler comes along and you switch to that for most of the coding. But in the chip-development world you still know the encodings and will, from time to time, tweak instruction bits at the machine-code level.
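For example (my own illustration of the kind of bit-poking meant here), a decoder test might sweep one field of a known encoding rather than go through an assembler at all:

    # Sweep the Rd field of the movs encoding to exercise decoder paths.
    base = 0x2005                                       # movs r0, #5
    for rd in range(8):
        print(hex((base & ~(0b111 << 8)) | (rd << 8)))  # movs r0..r7, #5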
old_timer