Hello,
The issue you are seeing comes from the fact that there is no realistic way for us to guarantee file consistency under concurrent updates from within the training activity: the storage used for the training file can vary, and it is even possible for multiple robots to update a shared network file. As such, handling concurrency is left up to the user.
What you can do is read the contents of the file into a string variable before the parallel loop, and pass that string to Intelligent Keyword Classifier Trainer via the “LearningData” argument instead of the file path. When training finishes, write the string back to the original file location, overwriting the old content. That way the file is read once and written once, and no two branches ever touch it at the same time.
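For illustration only, the read-once/write-once pattern looks like this in plain Python (the file name and the "append a line" update are stand-ins, not the actual UiPath activities; the trainer itself would be doing the in-memory update):

```python
import threading

LEARNING_FILE = "learning.json"  # illustrative path, not a real UiPath artifact

# Seed an example learning-data file so the sketch is self-contained.
with open(LEARNING_FILE, "w", encoding="utf-8") as f:
    f.write("seed")

# Step 1: read the file into a string ONCE, before the parallel work begins.
with open(LEARNING_FILE, "r", encoding="utf-8") as f:
    learning_data = f.read()

lock = threading.Lock()

def train(item):
    """Each parallel branch updates the in-memory string, never the file."""
    global learning_data
    with lock:
        learning_data += f"\n{item}"  # stand-in for the trainer's update

# Step 2: run the parallel branches against the shared string.
threads = [threading.Thread(target=train, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Step 3: write the result back ONCE, after all branches have finished.
with open(LEARNING_FILE, "w", encoding="utf-8") as f:
    f.write(learning_data)
```

The key property is that the file is touched exactly twice, outside the parallel region, so no two branches can ever race on it.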
Hope this helps.