I've recently been working with Ruby's chr and ord methods, and there are a few things I don't understand.
My current project involves converting individual characters to and from ordinal values. As I understand it, if I have a single-character string such as "A" and I call ord on it, I get its position in the ASCII table, which is 65. Calling the opposite, 65.chr, gives me the character "A", so this tells me that Ruby has an ordered collection of character values somewhere, and it can use this collection to give me the position of a specific character or the character at a specific position. Maybe I'm wrong; please correct me if so.
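For what it's worth, this round trip does exactly what I expect (a minimal irb session; the # => comments are the values I get back):

'A'.ord       # => 65
65.chr        # => "A"
65.chr.ord    # => 65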
Now, I also understand that Ruby's default character encoding is UTF-8, so it can work with thousands of possible characters. So if I ask for something like this:
'好'.ord
I get the position of this character, which is 22909. However, if I call chr on this value:
22909.chr
I get "RangeError: 22909 from a char range". I can get char to work with values ββup to 255 that are ASCII extended. So my questions are:
- Why does Ruby seem to take the values for chr from the extended ASCII character set, but the values for ord from UTF-8?
- Is there any way to tell Ruby to use a different encoding with these methods? For example, tell it to use ASCII-8BIT encoding instead of whatever it defaults to?
- If the default encoding can be changed, is there a way to get the total number of characters available in the character set being used?
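To make the behaviour concrete, here is a minimal irb session reproducing what I describe above (assuming a UTF-8 source encoding; the error text is quoted from memory):

'好'.ord      # => 22909
255.chr       # => "\xFF"   (works, top of the extended ASCII range)
256.chr       # RangeError: 256 out of char range
22909.chr     # RangeError: 22909 out of char range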
ruby encoding
Jonathon Nordquist