I honestly can't say why you aren't seeing the folder in your log_root directory if you passed all your parameters correctly. One other thing to check is that you have waited long enough. Also, when you run your TextSum training, do the detailed logs show an error, for example that no files matched, or something similar? If so, one of the paths you passed as a parameter is probably wrong. Paths are resolved relative to the directory you launch from, so make sure you are running from the root path where your workspace files are located.
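Before starting a run, it can save time to verify the paths yourself. Here is a minimal sketch; the parameter names (data_path, vocab_path, log_root) mirror the flags used by textsum, but the helper itself is my own and not part of the project:

```python
import os

def check_paths(data_path, vocab_path, log_root):
    """Report which of the paths passed to the trainer actually exist.

    data_path is often a glob pattern (e.g. "data/training-*"), so we
    check its parent directory rather than the pattern itself.
    """
    problems = []
    data_dir = os.path.dirname(data_path)
    if not os.path.isdir(data_dir):
        problems.append("data directory missing: %s" % data_dir)
    if not os.path.isfile(vocab_path):
        problems.append("vocab file missing: %s" % vocab_path)
    if not os.path.isdir(log_root):
        problems.append("log_root missing: %s" % log_root)
    return problems
```

If this returns an empty list, at least the paths exist; an entry in the list tells you which flag to fix before launching training.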
Another thing: are you training on a CPU or a GPU? If you use a CPU, it takes quite a while for the model to get to the point where it can even write data. On a GPU it is much faster, but you still need to wait until the average_loss lines start printing to your screen. Once you notice those, there is a good chance you will see your "train" folder with the data.
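If you want to watch for that programmatically, you can scan the log output for the loss lines. This is a sketch; I am assuming the log prints something like `running_avg_loss: 3.14`, so adjust the pattern to whatever your logs actually show:

```python
import re

# Assumed log line format; tweak the pattern to match your real logs.
LOSS_RE = re.compile(r"running_avg_loss:\s*([0-9.]+)")

def latest_avg_loss(log_lines):
    """Return the most recent average loss seen in the log, or None
    if no loss line has been printed yet (i.e. keep waiting)."""
    loss = None
    for line in log_lines:
        match = LOSS_RE.search(line)
        if match:
            loss = float(match.group(1))
    return loss
```

A `None` result means training has not reached the logging stage yet; any number means checkpoints should start appearing soon.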
As for the "real-time" test data, I am still looking into that myself, and now that my current data is training in the model, I will start on it too. My understanding so far is that once you have your trained model and its saved file (a pickle file or something like it), you can "serve" it using the information here: https://tensorflow.imtqy.com/serving/
At that point your model is trained, and you can query it and feed it new examples, so that over time your model becomes more capable. Again, I have not yet proven this with an example, but this is the approach I am going to try here soon.
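For reference, a request to a model behind TensorFlow Serving's REST endpoint is just a JSON POST. The sketch below only builds the URL and body; the model name, port 8501, and the "article" input key are all assumptions for illustration, not something textsum defines:

```python
import json

def build_predict_request(article_text, model_name="textsum"):
    """Build the URL and JSON body for a TensorFlow Serving REST call.

    Port 8501 is TF Serving's default REST port; the model name and the
    "article" input key are hypothetical and depend on your export.
    """
    url = "http://localhost:8501/v1/models/%s:predict" % model_name
    body = json.dumps({"instances": [{"article": article_text}]})
    return url, body
```

You would then POST `body` to `url` with any HTTP client and read the generated summary out of the JSON response.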
Regarding "model testing," you can pretty much follow the instructions in the textsum git repo: recreate the vocab file, then train. Once you get the average loss down to a fairly small value, you can start decoding your data. In the decode folder under your log_root, you will then see the generated headlines and the associated reference files (what the actual headline was). Hope this helps, and good luck!
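To eyeball the decode results, you can pair each generated headline with its reference line. This assumes the decode and ref files are line-aligned, which is how I understand textsum writes them; the exact file names vary per run, so pass in whichever pair you find in the decode folder:

```python
def read_lines(path):
    """Read a decode or ref file, dropping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def pair_headlines(decode_path, ref_path):
    """Pair generated headlines with their gold references, assuming
    the two files are line-aligned (my assumption, not guaranteed)."""
    return list(zip(read_lines(decode_path), read_lines(ref_path)))
```

Scanning a few of these pairs side by side is a quick sanity check before computing any formal metric.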
xtr33me