Full disclosure from the very beginning: I work on ODB. And to answer your third question: no, no ;-).
Seriously though, schema evolution is a complex problem, and it is one of the three big items on our TODO list (the other two being multi-database support and a SQL-to-C++ compiler). The good news is that we are pretty much done with multi-database support, and schema evolution is next in line.
As a rule, it is best to migrate your schema (and, if necessary, your data) to the latest version. The alternative, an application that can read several different schema versions, just does not scale in the real world.
As an example, suppose we add a data member to a class, which translates to adding a column to the corresponding table at the database schema level. The way to handle this is to make the new column NULL-able (using, for example, odb::nullable or boost::optional). The idea is that old rows, which have no data for this column, will contain NULL, which the application can detect and handle.
Next we need to update the schema in the database itself. In this case we will need to execute an ALTER TABLE ... ADD COLUMN statement, which adds the new column. Once ODB supports schema evolution, it will generate these migration statements automatically. Right now you have to write them yourself (a pain, I know). All existing rows in the table will automatically get NULL for this column.
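For the added-member example, the hand-written migration statement might look something like this (the table and column names are made up, and the exact type syntax varies between databases):

```sql
-- Upgrade schema from version 1 to version 2: add the new NULL-able
-- column; existing rows get NULL, which the application must handle.
ALTER TABLE person ADD COLUMN middle_name TEXT NULL;
```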
Thus the application will usually contain sets of such statements that migrate the schema from one version to the next: from 1 to 2, from 2 to 3, and so on. The database will store its schema version, and the application will know the latest version it expects. Right after opening the database, the application checks the stored version and, if it is lower than the application's schema version, runs the migration sets to bring the schema up to the latest version.