More than 15 years have passed since I last played with the LZW compression algorithm, so take the following with a grain of salt.
Given your memory limitations, this will be difficult at best. The dictionary you build will consume the vast majority of what you have (assuming code + data <= 2K).
Choose a small fixed size for your dictionary. Let's say 1,024 entries.
Let each dictionary entry take the form:

struct entry {
    intType  prevIdx;   /* index of the prefix string's entry */
    charType newChar;   /* character appended to that prefix */
};
This structure makes the dictionary recursive: for an entry to mean anything, the entry at its previous index must also be valid. Is it workable? I'm not sure. However, let's assume for the moment that it is and see where it leads us.
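One consequence of the recursive layout is how the encoder searches: "does my current string plus this character already exist?" becomes "is there an entry whose (prevIdx, newChar) pair matches?". A minimal sketch, with names of my own choosing (the question doesn't specify any), using a linear scan so no extra memory is spent on a search structure:

```c
#include <stddef.h>

/* Hypothetical concrete types for the entry sketched above. */
typedef struct { int prevIdx; char newChar; } entry;

/* Search the dictionary for an entry matching (prev, ch).
   Returns its index, or -1 if absent -- at which point the encoder
   would emit `prev` as the output code and append a new entry.
   A linear scan trades speed for zero extra memory. */
static int find_pair(const entry *dict, int count, int prev, char ch) {
    for (int i = 0; i < count; i++)
        if (dict[i].prevIdx == prev && dict[i].newChar == ch)
            return i;
    return -1;
}
```

On a machine this small, the linear scan is probably the right trade: a hash table or tree would speed up the search but eat into the very memory budget we are about to count.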
If you use standard types for the int and the char, you will run out of memory quickly, so you will want to pack things as tightly as possible. Addressing 1,024 entries requires 10 bits. Your new character will likely take 8 bits. Total = 18 bits per entry.
18 bits * 1024 entries = 18432 bits or 2304 bytes.
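Since 18-bit entries don't align on byte boundaries, getting that figure in practice means bit-level packing. Here is one sketch (function names are mine): each entry is stored as a raw 18-bit field in a flat byte buffer, written and read one bit at a time for clarity rather than speed.

```c
#include <stdint.h>

#define ENTRY_BITS 18   /* 10-bit prevIdx + 8-bit newChar */

/* Write the low 18 bits of `val` into slot `i` of a packed byte buffer,
   most significant bit first. Bit-at-a-time for clarity; a real
   implementation would move whole bytes where possible. */
static void put18(uint8_t *buf, int i, uint32_t val) {
    uint32_t bit = (uint32_t)i * ENTRY_BITS;
    for (int b = ENTRY_BITS - 1; b >= 0; b--, bit++) {
        uint32_t byte = bit >> 3;
        uint32_t off  = 7 - (bit & 7);
        if ((val >> b) & 1u) buf[byte] |=  (uint8_t)(1u << off);
        else                 buf[byte] &= (uint8_t)~(1u << off);
    }
}

/* Read the 18-bit field back out of slot `i`. */
static uint32_t get18(const uint8_t *buf, int i) {
    uint32_t bit = (uint32_t)i * ENTRY_BITS;
    uint32_t val = 0;
    for (int b = 0; b < ENTRY_BITS; b++, bit++)
        val = (val << 1) | ((buf[bit >> 3] >> (7 - (bit & 7))) & 1u);
    return val;
}
```

An entry would then be encoded as `(prevIdx << 8) | newChar` before packing, and split apart with a shift and a mask after unpacking.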
At first glance, this seems too big. What can we do? Take advantage of the fact that the first 256 entries are already known: your typical extended ASCII set, or whatever you have. That means we really only need to store 768 entries.
768 * 18 bits = 13824 bits or 1728 bytes.
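The saving works because codes below 256 never need a stored slot: a code `c < 256` simply means the single byte `c`, and only codes 256 and up index the 768-entry table (at offset `c - 256`). A hedged sketch of the decoder-side expansion using that convention, with illustrative names:

```c
#define FIRST_STORED 256   /* codes 0..255 are implicit single-byte entries */

typedef struct { int prevIdx; char newChar; } entry;

/* Expand code `c` into `out` (NUL-terminated) by walking the prevIdx
   chain until it bottoms out at an implicit literal code below 256.
   The chain yields characters last-first, so reverse at the end.
   Returns the string length. Buffer sizes are illustrative. */
static int expand(const entry *dict, int c, char *out) {
    char tmp[64];
    int n = 0;
    while (c >= FIRST_STORED) {              /* stored entry */
        tmp[n++] = dict[c - FIRST_STORED].newChar;
        c = dict[c - FIRST_STORED].prevIdx;  /* follow the prefix */
    }
    tmp[n++] = (char)c;                      /* implicit literal root */
    for (int i = 0; i < n; i++)
        out[i] = tmp[n - 1 - i];
    out[n] = '\0';
    return n;
}
```

Note the fixed `tmp[64]` bound is itself a design decision: on a 2K machine you would cap the maximum string length the dictionary is allowed to grow, or expand directly into the output buffer.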
That leaves you roughly 320 bytes to play with for your code. Naturally, you can tweak the dictionary size and see what works for you, but you will not have much room for code. Given how little space you are looking at, I would expect you to end up writing it in assembly.
Hope this helps.
Sparky