@sueyhan previously opened a ticket for this, but then closed it without getting a response.
These are the model weights I get for training without lexical features (`python experiment.py -l settles.acl16.learning_traces.13m.csv.gz`):

```
wrong -0.2245
right -0.0125
bias   7.5365
```
I do not see how it can be correct that the `right` feature has a negative weight. A negative weight makes the predicted half-life shorter as a user accumulates more correct answers, so the model will predict a lower and lower probability of the user answering correctly.
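To make the effect concrete, here is a small sketch that plugs the weights above into the half-life regression form from the paper, h = 2^(Θ·x) and p = 2^(-Δ/h). This is an illustration only: it assumes raw right/wrong counts as features (the actual `experiment.py` feature transform may differ), and the weight values are the ones reported above.

```python
import math

# Weights reported above (training without lexical features).
WEIGHTS = {"right": -0.0125, "wrong": -0.2245, "bias": 7.5365}

def half_life(right: int, wrong: int) -> float:
    """Estimated half-life in days: h = 2^(theta . x), per the HLR model."""
    theta_x = (WEIGHTS["right"] * right
               + WEIGHTS["wrong"] * wrong
               + WEIGHTS["bias"])
    return 2.0 ** theta_x

def p_recall(delta_days: float, right: int, wrong: int) -> float:
    """Predicted recall probability: p = 2^(-delta / h)."""
    return 2.0 ** (-delta_days / half_life(right, wrong))

# With a negative weight on `right`, more correct answers shrink the
# half-life and lower the predicted recall probability:
for r in (1, 5, 20):
    print(r, round(half_life(r, 0), 1), round(p_recall(7.0, r, 0), 4))
```

Running this shows the half-life monotonically decreasing in the number of correct answers, which is the counterintuitive behavior described above.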
How can this be correct?