Assuming the source table is not under-constrained, invalid, or inconsistent, I would recommend:
- Define a fact table that records the known width of each column, as specified.
- Define a collection of Constraint objects, one per colspan record, each holding the starting column, the column span, and the total width.
- Make one pass over the entire table definition, collecting the facts and constraints.
- Then walk the fact table, and for each column whose width is unknown, scan all the constraints looking for one in which every *other* column is already known. Such a constraint yields a value for the column in question.
- Each time a new column value is found, restart from the beginning of the fact table, re-checking the unknown columns against all constraints on each scan.
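The propagation loop above can be sketched roughly as follows. This is a minimal illustration under my own assumptions, not a definitive implementation; the names (`Constraint`, `solve_widths`, the dict-based fact table) are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Constraint:
    start: int   # index of the first column the colspan covers
    span: int    # number of columns covered
    total: int   # total width of the spanned cells

    @property
    def columns(self):
        return range(self.start, self.start + self.span)

def solve_widths(known, constraints):
    """Fill in unknown column widths by repeated constraint propagation.

    known: dict mapping column index -> width (the 'fact table').
    Returns the completed fact table; gaps may remain if the
    table is under-constrained.
    """
    widths = dict(known)
    changed = True
    while changed:                    # rescan after every newly derived fact
        changed = False
        for c in constraints:
            unknown = [i for i in c.columns if i not in widths]
            if len(unknown) == 1:     # all other columns known: derive the last one
                widths[unknown[0]] = c.total - sum(
                    widths[i] for i in c.columns if i in widths)
                changed = True
    return widths

# Example: col 0 = 30; a colspan over cols 0-1 totals 70,
# and a colspan over cols 1-2 totals 90.
facts = solve_widths({0: 30}, [Constraint(0, 2, 70), Constraint(1, 2, 90)])
# facts == {0: 30, 1: 40, 2: 50}
```

The restart-on-change behavior is what makes it quadratic (or worse), but it also means a fact derived late in a scan can still unlock constraints examined earlier.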
This is an n-squared (or worse) algorithm, but it should be accurate as long as the table does not have tens of thousands of rows or columns. If the table is fully constrained, you will reach the point where all column widths are known. The advantage of a brute-force algorithm like this is that it is relatively easy to debug and should be robust.
If the table is under-constrained, you will reach a point where a full pass derives nothing new and some column widths remain unknown. If you want to handle that case, add another pass: pick an arbitrary constraint that involves an unknown column (it will necessarily involve one or more other unknown columns as well) and distribute the remaining width equally among those unknown columns. Since the choice is arbitrary, you may get a different answer on different runs... but the table is under-constrained... does it matter?
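That tie-breaking pass might look something like this sketch. The helper name and the tuple representation of constraints are my own assumptions, and integer division is used on the assumption that widths are whole units.

```python
def break_tie(widths, constraints):
    """Resolve one arbitrarily chosen constraint that still has two or
    more unknown columns by splitting its leftover width equally.

    widths: dict of column index -> known width (mutated in place).
    constraints: list of (start, span, total) tuples.
    Returns True if a constraint was resolved, False otherwise.
    """
    for start, span, total in constraints:
        cols = range(start, start + span)
        unknown = [i for i in cols if i not in widths]
        if len(unknown) >= 2:                        # truly under-constrained
            remaining = total - sum(widths.get(i, 0) for i in cols)
            share = remaining // len(unknown)        # equal split
            for i in unknown:
                widths[i] = share
            return True
    return False

# Example: cols 1 and 2 are unknown; a colspan over cols 0-2 totals 100
# with col 0 = 40, so each unknown column gets (100 - 40) / 2 = 30.
w = {0: 40}
while break_tie(w, [(0, 3, 100)]):
    pass
# w == {0: 40, 1: 30, 2: 30}
```

In practice you would alternate this with the propagation pass: each arbitrary split adds new facts, which may let ordinary propagation finish the rest deterministically.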
When you're done, you have a complete fact table with every column width, and you can generate your LaTeX code with all of the table's column widths specified.
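As a final step, the completed fact table can be turned into a LaTeX column specification. A minimal sketch, assuming fixed-width `p{...}` columns measured in points (the function name is hypothetical):

```python
def latex_colspec(widths):
    """widths: dict of column index -> width in points.
    Returns a LaTeX tabular column spec such as 'p{30pt}p{40pt}'."""
    return "".join("p{%dpt}" % widths[i] for i in sorted(widths))

print(latex_colspec({0: 30, 1: 40, 2: 50}))
# p{30pt}p{40pt}p{50pt}
```

You would then emit this inside `\begin{tabular}{...}` along with the cell contents.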