As another answer already explained, Qi has a mechanism for generating parsers on the fly, given the type of the attribute. The end-user facing bit of it is qi::auto_.

qi::auto_ - a parser instead of a grammar
This has clear advantages [1]:

- First of all, it allows users to use the parser inside their own grammars, with the skipper of their choice, and possibly using qi::locals<> (a small sketch follows this list).
- In addition, the terminal of the auto_ Qi expression is already defined, so there is no need to instantiate a grammar with a long list of template arguments.
- Finally, the parser returns an expression template, so there is no type erasure involved; combining several auto_ parsers this way is no less efficient than spelling out the grammar by hand (whereas wrapping things in qi::rule<> and qi::grammar<> does incur a performance overhead).
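To illustrate the first point, here is a small sketch of my own (not taken from the original answer): qi::auto_ dropped into an ordinary rule that uses a skipper of the user's choosing; the rule could just as well declare qi::locals<>. The rule name csv and the input are made up for the illustration, and it relies on the stock auto_ mapping for the int elements.

// Sketch (not from the original answer): auto_ inside a plain rule,
// with a user-chosen skipper (qi::blank instead of qi::space here).
#include <boost/spirit/home/qi.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace qi = boost::spirit::qi;

int main() {
    using It = std::string::const_iterator;

    // each list element is parsed by whatever parser auto_ derives from
    // the element type of the attribute (int -> int_ in this case)
    qi::rule<It, std::vector<int>(), qi::blank_type> csv = qi::auto_ % ',';

    std::string const input("1, 2, 3, 4");
    It first = input.begin(), last = input.end();

    std::vector<int> out;
    if (qi::phrase_parse(first, last, csv, qi::blank, out) && first == last)
        std::cout << "parsed " << out.size() << " ints\n";
}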
Here is how it is used directly with phrase_parse:
std::vector<std::pair<double, int> > parsed;
bool result_ = qi::phrase_parse(first, last, qi::auto_, qi::space, parsed);
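For attribute types that Spirit can already map on its own (arithmetic types, containers of them, and so on), that one-liner is all there is to it. A minimal sketch of my own (not from the original answer), relying on the stock auto_ mapping for a container of doubles:

// Sketch (not from the original answer): auto_ with a type Spirit
// already knows how to map, so no customization point is needed.
#include <boost/spirit/home/qi.hpp>
#include <iostream>
#include <string>
#include <vector>

namespace qi = boost::spirit::qi;

int main() {
    std::string const input("1.5 2.5 3.5");
    auto first = input.begin(), last = input.end();

    std::vector<double> values;
    if (qi::phrase_parse(first, last, qi::auto_, qi::space, values) && first == last)
        std::cout << "parsed " << values.size() << " doubles\n";
}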
As you can see, the call takes the skipper of your choice, and the "magic" selects a parser that matches the type of parsed. Now, to get your sample format from the OP, you need to hook into the customization point for the auto_ parser:
namespace boost { namespace spirit { namespace traits {
    template <typename T1, typename T2>
        struct create_parser<std::pair<T1, T2> > { /* see the full listing below */ };

    template <typename TV, typename... TArgs>
        struct create_parser<std::vector<TV, TArgs...> > { /* see the full listing below */ };
} } }
That is literally all that is needed. Here is a demo that parses:
VECTOR[ 1 , ( PAIR (0.97, 5), PAIR (1.75,10) ) ]
And prints the parsed data as:
Parsed:
 0.97 5
1.75 10
See it Live On Coliru
Full code listing
#include <boost/fusion/adapted.hpp>
#include <boost/spirit/home/qi.hpp>
#include <iostream>

namespace qi = boost::spirit::qi;

namespace boost { namespace spirit { namespace traits {
    // be careful copying expression templates. Boost trunk has `qi::copy` for this too, now
    #define PARSER_DEF(a)                                         \
        using type = decltype(boost::proto::deep_copy(a));        \
        static type call() { return boost::proto::deep_copy(a); }

    template<typename T1, typename T2>
        struct create_parser<std::pair<T1, T2> > {
            PARSER_DEF(lexeme [ lit("PAIR") ] >> '(' >> create_parser<T1>::call() >> ',' >> create_parser<T2>::call() >> ')');
        };

    template<typename TV, typename... TArgs>
        struct create_parser<std::vector<TV, TArgs...> > {
            PARSER_DEF(lexeme [ lit("VECTOR") ] >> '[' >> qi::omit[qi::uint_] >> ',' >> '(' >> create_parser<TV>::call() % ',' >> ')' >> ']' );
        };

    #undef PARSER_DEF
} } }

#include <boost/spirit/home/karma.hpp>
namespace karma = boost::spirit::karma;

int main() {
    std::string const input("VECTOR[ 1 ,\n"
            " ( \n"
            " PAIR (0.97, \n"
            " 5), \n"
            " PAIR (1.75,10) \n"
            " ) \n"
            "]");

    std::cout << input << "\n\n";

    auto first = input.begin();
    auto last  = input.end();

    std::vector<std::pair<double, int> > parsed;
    bool result_ = qi::phrase_parse(first, last, qi::auto_, qi::space, parsed);

    if (first != last)
        std::cout << "Remaining unparsed input: '" << std::string(first, last) << "'\n";

    if (result_)
        std::cout << "Parsed:\n " << karma::format_delimited(karma::auto_ % karma::eol, " ", parsed) << "\n";
    else
        std::cout << "Parsing did not succeed\n";
}
[1] A potential drawback is that the customization point is fixed, so you can only map one auto_ parser to any given type. Rolling your own grammar templates gives you more control and lets you (more) easily have different "parser flavours". In the end, though, it is possible to have the best of both worlds, so I would go for the convenience first (a sketch of the hand-rolled alternative follows below).
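For contrast, here is a sketch of my own (not from the original answer) of that hand-rolled alternative: an ordinary grammar for just the PAIR part of the format. Being a regular class template, it could take extra template or constructor parameters (keywords, delimiters, and so on) to produce different flavours; the name pair_grammar is made up for the illustration.

// Sketch (not from the original answer): a hand-written grammar for the
// PAIR format, as an alternative to specializing create_parser<>.
#include <boost/fusion/include/std_pair.hpp>
#include <boost/spirit/home/qi.hpp>
#include <iostream>
#include <string>
#include <utility>

namespace qi = boost::spirit::qi;

template <typename It>
struct pair_grammar : qi::grammar<It, std::pair<double, int>(), qi::space_type> {
    pair_grammar() : pair_grammar::base_type(start) {
        start = qi::lexeme[ qi::lit("PAIR") ] >> '(' >> qi::double_ >> ',' >> qi::int_ >> ')';
    }
    qi::rule<It, std::pair<double, int>(), qi::space_type> start;
};

int main() {
    std::string const input("PAIR (0.97, 5)");
    auto first = input.begin(), last = input.end();

    pair_grammar<std::string::const_iterator> g;
    std::pair<double, int> p;

    if (qi::phrase_parse(first, last, g, qi::space, p) && first == last)
        std::cout << p.first << " " << p.second << "\n";
}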