An eavesdropper C noticing that B receives the same encrypted message twice is an instance of a problem called traffic analysis, which has historically been a serious one (although that was in the days before public-key encryption).
Any decent, publicly available encryption system includes some randomness. For example, with RSA as described in PKCS#1, the message to encrypt (at most 117 bytes for a 1024-bit RSA key) is prefixed with a header containing at least eight random non-zero bytes, plus a little extra data that lets the receiver unambiguously locate the padding bytes and see where the "real" data begins. The random bytes are regenerated every time, so if A sends the same message to B twice, the two encrypted messages will differ, but B will recover the same original message both times.
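As a sketch of the idea, here is the PKCS#1 v1.5 encoding layout (`0x00 0x02 || at least 8 non-zero random bytes || 0x00 || message`) built in plain Python; the function names are made up for illustration, and of course this is only the padding step, not the RSA operation itself:

```python
import os

def pkcs1_v15_pad(message: bytes, key_bytes: int = 128) -> bytes:
    """EME-PKCS1-v1_5 encoding: 0x00 0x02 <non-zero random> 0x00 <message>.
    key_bytes = 128 corresponds to a 1024-bit RSA modulus."""
    max_len = key_bytes - 11               # 117 bytes for a 1024-bit key
    if len(message) > max_len:
        raise ValueError("message too long")
    pad_len = key_bytes - 3 - len(message)  # always at least 8
    padding = bytearray()
    while len(padding) < pad_len:
        b = os.urandom(1)
        if b != b"\x00":                   # padding bytes must be non-zero
            padding += b
    return b"\x00\x02" + bytes(padding) + b"\x00" + message

def pkcs1_v15_unpad(block: bytes) -> bytes:
    # The single 0x00 separator tells the receiver where the data begins.
    assert block[0:2] == b"\x00\x02"
    sep = block.index(b"\x00", 2)
    return block[sep + 1:]

msg = b"attack at dawn"
p1, p2 = pkcs1_v15_pad(msg), pkcs1_v15_pad(msg)
assert p1 != p2            # same message, different random padding
assert len(p1) == 128
assert pkcs1_v15_unpad(p1) == msg
```

The non-zero requirement on the padding bytes is what makes the single `0x00` separator unambiguous.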
Random padding is required for public-key encryption precisely because the key is public: if encryption were deterministic, an attacker could "try" potential messages and look for a match (an exhaustive search over the space of possible messages).
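To see why, model deterministic encryption as any fixed public function of the message (here, a hash over a made-up public key stands in for unpadded RSA). Anyone holding the public key can run the same function over guesses:

```python
import hashlib

PUBLIC_KEY = b"toy-public-key"   # hypothetical; stands in for a real RSA key

def deterministic_encrypt(message: bytes) -> bytes:
    # Toy model of encryption with NO random padding:
    # the same input always yields the same output.
    return hashlib.sha256(PUBLIC_KEY + message).digest()

# The eavesdropper sees this ciphertext on the wire:
ciphertext = deterministic_encrypt(b"sell")

# Since the key is public, the attacker simply encrypts every guess:
guesses = [b"buy", b"hold", b"sell"]
recovered = [g for g in guesses if deterministic_encrypt(g) == ciphertext]
assert recovered == [b"sell"]
```

With random padding, each trial encryption of "sell" would produce a different ciphertext, so this comparison tells the attacker nothing.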
Public-key encryption algorithms often have serious limitations on data size or performance (with RSA, for example, you have a strict maximum message length that depends on the key size). It is therefore customary to use a hybrid system: public-key encryption is used to encrypt a symmetric key K (i.e., a bunch of random bytes), and K is used to symmetrically encrypt the data (symmetric encryption is fast and places no limit on input size). In a hybrid system you generate a new K for each message, which also gives you the randomness needed to avoid the problem of encrypting the same message several times under the same public key: at the public-key layer you never encrypt the same message (the same key K) twice, even if the data symmetrically encrypted with K is identical to that of a previous message. This protects you from traffic analysis even if the public-key encryption itself includes no random padding.
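A minimal sketch of the hybrid idea, with loud caveats: the SHA-256 counter-mode keystream is a toy cipher for illustration only, and `wrapped_k` merely stands in for the real RSA encryption of K (it is not decryptable). What the sketch does show is that a fresh K per message makes both parts of the ciphertext differ even for identical plaintexts:

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode (illustration only)."""
    out, ctr = bytearray(), 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def sym_encrypt(k: bytes, data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(k, len(data))))

sym_decrypt = sym_encrypt  # XOR with the keystream is its own inverse

def hybrid_encrypt(recipient_pub: bytes, data: bytes):
    k = os.urandom(32)     # fresh symmetric key K for every message
    # Stand-in for RSA-encrypting K under the recipient's public key:
    wrapped_k = hashlib.sha256(recipient_pub + k).digest()
    return wrapped_k, sym_encrypt(k, data)

msg = b"the same message, sent twice"
w1, c1 = hybrid_encrypt(b"bob-public-key", msg)
w2, c2 = hybrid_encrypt(b"bob-public-key", msg)
# A different K each time, so both the wrapped key and the body differ:
assert w1 != w2 and c1 != c2
```

In a real system the recipient would RSA-decrypt the wrapped key to recover K, then run the symmetric decryption.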
When encrypting the data symmetrically with K, the symmetric encryption should use an "initialization vector" (IV) that is generated randomly and uniformly; it is part of the encryption mode (some modes require only a non-repeating IV, with no need for random uniform generation, but CBC requires random uniform generation). This is a third layer of randomness that protects you from traffic analysis.
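The CBC chaining itself is easy to show. In this sketch the "block cipher" is just XOR with the key, which is utterly insecure but invertible, so it is enough to demonstrate how a random IV makes two encryptions of the same plaintext differ:

```python
import os

BLOCK = 16

def toy_cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # CBC: each plaintext block is XORed with the previous ciphertext
    # block (the IV for the first block) before being "enciphered".
    assert len(key) == BLOCK and len(iv) == BLOCK
    assert len(plaintext) % BLOCK == 0
    out, prev = bytearray(), iv
    for i in range(0, len(plaintext), BLOCK):
        chained = bytes(p ^ c for p, c in zip(plaintext[i:i + BLOCK], prev))
        cblock = bytes(x ^ k for x, k in zip(chained, key))  # toy "cipher"
        out += cblock
        prev = cblock
    return bytes(out)

def toy_cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    out, prev = bytearray(), iv
    for i in range(0, len(ciphertext), BLOCK):
        cblock = ciphertext[i:i + BLOCK]
        chained = bytes(x ^ k for x, k in zip(cblock, key))
        out += bytes(p ^ c for p, c in zip(chained, prev))
        prev = cblock
    return bytes(out)

key = os.urandom(BLOCK)
msg = b"sixteen byte msg" * 2
iv1, iv2 = os.urandom(BLOCK), os.urandom(BLOCK)
c1 = toy_cbc_encrypt(key, iv1, msg)
c2 = toy_cbc_encrypt(key, iv2, msg)
assert c1 != c2                          # random IV hides the repetition
assert toy_cbc_decrypt(key, iv1, c1) == msg
```

The IV is sent in the clear along with the ciphertext; its job is unpredictability, not secrecy.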
Asymmetric key agreement (e.g., static Diffie-Hellman) is a little trickier, because the key agreement yields a key K that you do not choose, and which may well be the same every single time (between a given sender and receiver). In that situation, protection against traffic analysis rests on the randomness of the symmetric-encryption IV.
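A toy static-DH computation makes the point: with fixed (static) private keys, both sides always derive the same K, so no per-message randomness comes from the key agreement itself. The parameters below are far too small for real use and are chosen purely for illustration:

```python
# Toy Diffie-Hellman (real DH uses groups of ~2048 bits or more).
p, g = 4294967291, 5                      # small prime modulus, generator

a_secret, b_secret = 123456789, 987654321  # long-term (static) private keys
A = pow(g, a_secret, p)                    # Alice's public key
B = pow(g, b_secret, p)                    # Bob's public key

k_alice = pow(B, a_secret, p)
k_bob = pow(A, b_secret, p)
assert k_alice == k_bob                    # both derive the same shared K

# Static keys mean the derived K is identical for every message:
assert pow(B, a_secret, p) == k_alice
# ...so the per-message variation must come from the symmetric IV.
```

Ephemeral DH (a fresh key pair per message) would restore randomness at the key-agreement layer, but static DH leaves it all to the IV.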
Asymmetric encryption protocols such as OpenPGP describe how the symmetric encryption, the public-key encryption and the randomness should all fit together, smoothing over the tricky details. You are strongly advised not to invent your own protocol: designing a secure protocol is hard, mainly because you cannot easily test for the presence or absence of weaknesses.