Vowpal Wabbit readable model

I trained a classifier with Vowpal Wabbit and saved it as a readable model.

My dataset had 22 features, and the readable model output was:

Version 7.2.1
Min label:-50.000000
Max label:50.000000
bits:18
0 pairs:
0 triples:
rank:0
lda:0
0 ngram:
0 skip:
options:
:0
101143:0.035237
101144:0.033885
101145:0.013357
101146:-0.007537
101147:-0.039093
101148:-0.013357
101149:0.001748
116060:0.499471
157941:-0.037318
157942:0.008038
157943:-0.011337
196772:0.138384
196773:0.109454
196774:0.118985
196775:-0.022981
196776:-0.301487
196777:-0.118985
197006:-0.000514
197007:-0.000373
197008:-0.000288
197009:-0.004444
197010:-0.006072
197011:0.000270

Can someone explain how to interpret the last part of the file (everything after options:)? I used logistic regression, and I need to see how the training iterations update my classifier so that I can tell when it has reached convergence ...

Thank you in advance:)

2 answers

As you can guess, those numbers are hash values: you have your 22 features plus the constant (intercept) feature, which is probably the entry at 116060.

The format of each line is:

hash_value:weight

Since the model was trained with bits:18, the weight table has 2^18 = 262144 slots, which is why every hash value in your output is below 262144.

To map the hash values back to the original feature names, you can use one of the following (example commands below):

  • utl/vw-varinfo (a wrapper script in the source tree), run on your training set with exactly the same options you used for training; running utl/vw-varinfo with no arguments prints a usage/help message
  • the --invert_hash readable.model option
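
Here is a minimal sketch of how these are typically invoked; train.vw and the output file name are placeholders, and --loss_function logistic simply mirrors the setup described in the question:

# dump hash_value:weight lines (what you already have):
vw --loss_function logistic -d train.vw --readable_model model.readable.txt

# summarize feature names, hash values and weights with the helper script
# from the source tree (pass the same options you trained with):
utl/vw-varinfo --loss_function logistic train.vw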

BTW: mapping the hash values back to the original feature names is not done by default because of the performance penalty. By default, vw applies a one-way hash to every feature string it sees; it does not maintain a hash map between feature names and their hash values at all.

Edit:

Another small detail that may be of interest is the very first entry after options:, namely:

:0

, "" ( , , , ) 0. , vowpal- wabbit , . :0 . , : feature_name :<value> vowpal wabbit , TRUE. IOW: , (:1), (:0). .


Vowpal Wabbit also has the --invert_hash option, which writes out a readable model with the original feature names instead of hash values.

Keep in mind that producing it is noticeably more expensive than the plain readable model, so it is better suited to inspection and debugging than to routine training runs.
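
A minimal sketch of how it can be invoked (train.vw and the output file name are placeholders; reuse whatever options you trained with):

vw --loss_function logistic -d train.vw --invert_hash model.inverted.txt

The resulting file looks like the readable model, but with the feature names resolved.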

