There is no difference in performance, and there are no hidden optimizations for power-of-2 lengths.
The only thing that really affects storage is the actual data: 100 characters stored in a VARCHAR2(2000) column are stored in exactly the same way as 100 characters stored in a VARCHAR2(500) column.
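A quick way to see this for yourself (not part of the original test; vsize() reports the number of bytes Oracle actually stores, assuming a single-byte character set here):

create table t (small_col varchar2(500), big_col varchar2(2000));
insert into t values (rpad('X', 100, 'X'), rpad('X', 100, 'X'));
-- both columns report the same stored size: 100 bytes
select vsize(small_col), vsize(big_col) from t;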
Think of the length as a business constraint rather than part of the data type. The only thing that should drive your choice of length is the business restriction on the data that goes there.
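For example, if a business rule says a country code is always two characters, say so in the declaration (hypothetical table, just to illustrate the point):

create table customers (
  country_code varchar2(2 char)  -- the length expresses the business rule
);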
Edit: the one situation where the length matters is when you need an index on the column. Older Oracle versions (< 10) had a limit on the index key length, which was checked when the index was created.
And even though it is possible in Oracle 11, indexing a value of up to 4000 characters may not be the wisest choice anyway.
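For illustration, such an index is accepted in 11g as long as the key fits within the block-size-dependent limit (roughly 6398 bytes with an 8 KB block size); whether it is a good idea is another question. A minimal sketch, assuming byte-length semantics:

create table t4000 (val varchar2(4000));
create index t4000_idx on t4000 (val);  -- allowed, since 4000 bytes fit under the key-length limit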
Edit 2:
So I got curious and set up a simple test:
create table narrow (id varchar(40));
create table wide (id varchar(4000));
Then I filled both tables with rows of 40 'X' characters each. If there really were a (significant) difference in storage, it should show up somehow when retrieving the data, right?
Both tables contain exactly 1,048,576 rows.
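The fill step is not shown above; a sketch of how it could have been done, assuming the usual doubling trick (1,048,576 = 2^20, so seed one row and double the table twenty times):

insert into narrow values (rpad('X', 40, 'X'));
begin
  for i in 1 .. 20 loop
    insert into narrow select id from narrow;  -- doubles the row count each pass
  end loop;
  commit;
end;
/
-- repeat the same for the WIDE table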
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> set autotrace traceonly statistics
SQL> select count(*) from wide;
Statistics
----------------------------------------------------------
0 recursive calls
1 db block gets
6833 consistent gets
0 physical reads
0 redo size
349 bytes sent via SQL*Net to client
472 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL> select count(*) from narrow;
Statistics
----------------------------------------------------------
0 recursive calls
1 db block gets
6833 consistent gets
0 physical reads
0 redo size
349 bytes sent via SQL*Net to client
472 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
SQL>
So a full table scan performs the same on both tables. What happens when we actually select the data?
SQL> select * from wide;
1048576 rows selected.
Statistics
----------------------------------------------------------
4 recursive calls
2 db block gets
76497 consistent gets
0 physical reads
0 redo size
54386472 bytes sent via SQL*Net to client
769427 bytes received via SQL*Net from client
69907 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1048576 rows processed
SQL> select * from narrow;
1048576 rows selected.
Statistics
----------------------------------------------------------
4 recursive calls
2 db block gets
76485 consistent gets
0 physical reads
0 redo size
54386472 bytes sent via SQL*Net to client
769427 bytes received via SQL*Net from client
69907 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1048576 rows processed
SQL>
There is a slight difference in consistent gets, but that may be due to caching.
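A possible follow-up check (not part of the test above) is to compare the segment sizes, which should come out the same for both tables since the stored data is identical:

select segment_name, blocks, bytes
from   user_segments
where  segment_name in ('NARROW', 'WIDE');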