| Analysis | Measure | Discovery sample | Replication sample |
| --- | --- | --- | --- |
| Hierarchical logistic regression; no task-induced anxiety | Reward coefficient | β = 0.98 ± 0.03, p < 0.001 | β = 0.95 ± 0.02, p < 0.001 |
| | | β = 0.95 ± 0.03, p < 0.001 | β = 0.90 ± 0.02, p < 0.001 |
| | Punishment coefficient | β = –0.33 ± 0.04, p < 0.001 | β = –0.52 ± 0.03, p < 0.001 |
| | | β = –0.29 ± 0.04, p < 0.001 | β = –0.50 ± 0.03, p < 0.001 |
| Distribution; task-induced anxiety | | Mean = 21, SD = 14 | Mean = 22, SD = 13 |
| | | Mean = 21, SD = 14 | Mean = 22, SD = 14 |
| Correlation; task-induced anxiety and choice | | τ = –0.074, p = 0.033 | τ = –0.075, p = 0.005 |
| | | τ = –0.068, p = 0.036 | τ = –0.081, p = 0.002 |
| Hierarchical logistic regression; with task-induced anxiety | Reward coefficient | β = 0.98 ± 0.03, p < 0.001 | β = 0.95 ± 0.02, p < 0.001 |
| | | β = 0.95 ± 0.03, p < 0.001 | β = 0.90 ± 0.02, p < 0.001 |
| | Punishment coefficient | β = –0.33 ± 0.04, p < 0.001 | β = –0.52 ± 0.03, p < 0.001 |
| | | β = –0.29 ± 0.04, p < 0.001 | β = –0.50 ± 0.03, p < 0.001 |
| | Task-induced anxiety coefficient | β = –0.04 ± 0.04, p = 0.304 | β = 0.02 ± 0.03, p = 0.637 |
| | | β = –0.03 ± 0.04, p = 0.377 | β = 0.003 ± 0.03, p = 0.925 |
| | Interaction of task-induced anxiety and reward | β = –0.06 ± 0.03, p = 0.074 | β = –0.003 ± 0.02, p = 0.901 |
| | | β = –0.05 ± 0.03, p = 0.076 | β = –0.006 ± 0.02, p = 0.774 |
| | Interaction of task-induced anxiety and punishment | β = –0.10 ± 0.04, p = 0.022 | β = –0.23 ± 0.03, p < 0.001 |
| | | β = –0.10 ± 0.04, p = 0.011 | β = –0.23 ± 0.03, p < 0.001 |
| Hierarchical logistic regression; with task-induced anxiety, no interaction | Reward coefficient | β = 0.99 ± 0.03, p < 0.001 | β = 0.95 ± 0.02, p < 0.001 |
| | | β = 0.95 ± 0.03, p < 0.001 | β = 0.89 ± 0.02, p < 0.001 |
| | Punishment coefficient | β = –0.33 ± 0.04, p < 0.001 | β = –0.52 ± 0.03, p < 0.001 |
| | | β = –0.29 ± 0.04, p < 0.001 | β = –0.50 ± 0.03, p < 0.001 |
| | Task-induced anxiety coefficient | β = –0.09 ± 0.03, p = 0.012 | β = –0.08 ± 0.03, p = 0.005 |
| | | β = –0.08 ± 0.03, p = 0.016 | β = –0.10 ± 0.03, p < 0.001 |
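For reference, the sketch below illustrates how a hierarchical logistic regression of this form can be fit, assuming a long-format trial table with hypothetical columns `choice` (0/1), `reward`, `punishment`, `anxiety` (task-induced anxiety), and `subject`. It uses the variational-Bayes mixed GLM from statsmodels as a stand-in; the authors' exact specification (software, random-effects structure, covariates) may differ.

```python
# Hedged sketch of a hierarchical logistic regression of trial-wise choice on
# reward magnitude, punishment magnitude, task-induced anxiety, and their
# interactions. Column names and the random-effects structure are assumptions.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("trials.csv")  # hypothetical long-format data: one row per trial

model = BinomialBayesMixedGLM.from_formula(
    "choice ~ reward + punishment + anxiety + anxiety:reward + anxiety:punishment",
    vc_formulas={"subject": "0 + C(subject)"},  # random intercept per participant
    data=df,
)
result = model.fit_vb()  # variational Bayes approximation
print(result.summary())  # posterior means and SDs for each fixed effect
```

Dropping the two interaction terms from the formula gives the "no interaction" variant reported above.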
| Analysis | Measure | Discovery sample | Replication sample |
| --- | --- | --- | --- |
| Computational model comparison | Winning model | 2 learning rate, 2 sensitivity | 2 learning rate, 2 sensitivity |
| | | 2 learning rate, 2 sensitivity | 2 learning rate, 2 sensitivity |
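In both samples the winning model separates learning rates and sensitivities for reward and punishment. The sketch below shows one common parameterization of such a model, with Rescorla–Wagner updates under outcome-specific learning rates and a softmax over sensitivity-weighted net values; the variable names and exact likelihood are assumptions, not the paper's specification.

```python
import numpy as np

def choice_likelihood(rewards, punishments, choices,
                      lr_rew, lr_pun, sens_rew, sens_pun):
    """Two-learning-rate, two-sensitivity RL model (illustrative sketch).

    Tracks separate reward and punishment expectancies for two options and
    maps their sensitivity-weighted difference to choice probabilities.
    """
    q_rew = np.zeros(2)  # expected reward for each option
    q_pun = np.zeros(2)  # expected punishment for each option
    probs = np.empty(len(choices))
    for t, c in enumerate(choices):
        net = sens_rew * q_rew - sens_pun * q_pun  # net subjective value
        p = np.exp(net - net.max())
        p /= p.sum()                               # softmax choice rule
        probs[t] = p[c]
        # Rescorla-Wagner updates with outcome-specific learning rates.
        q_rew[c] += lr_rew * (rewards[t] - q_rew[c])
        q_pun[c] += lr_pun * (punishments[t] - q_pun[c])
    return probs
```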
| Analysis | Measure | Discovery sample | Replication sample |
| --- | --- | --- | --- |
| Correlation; task-induced anxiety and model parameters | Reward learning rate | τ = –0.019, p = 0.596 | τ = –0.010, p = 0.749 |
| | | τ = –0.006, p = 0.847 | τ = –0.012, p = 0.637 |
| | Punishment learning rate | τ = –0.088, p = 0.015 | τ = –0.064, p = 0.019 |
| | | τ = –0.073, p = 0.026 | τ = –0.074, p = 0.003 |
| | Learning rate ratio | τ = 0.048, p = 0.175 | τ = 0.029, p = 0.282 |
| | | τ = –0.037, p = 0.276 | τ = 0.045, p = 0.072 |
| | Reward sensitivity | τ = –0.038, p = 0.286 | τ = –0.028, p = 0.285 |
| | | τ = –0.047, p = 0.149 | τ = –0.023, p = 0.345 |
| | Punishment sensitivity | τ = 0.068, p = 0.051 | τ = 0.076, p = 0.004* |
| | | τ = 0.047, p = 0.153 | τ = 0.094, p < 0.001 |
| | Reward-punishment sensitivity index | τ = –0.099, p = 0.005 | τ = –0.096, p < 0.001 |
| | | τ = –0.084, p = 0.011 | τ = –0.103, p < 0.001 |
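The parameter associations above are Kendall rank correlations, which are robust to the skewed distributions typical of fitted learning rates. A minimal sketch on simulated placeholder data:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
anxiety = rng.integers(0, 51, size=300)  # placeholder anxiety ratings
pun_lr = rng.beta(2, 5, size=300)        # placeholder punishment learning rates

tau, p = kendalltau(anxiety, pun_lr)     # rank-based correlation
print(f"tau = {tau:.3f}, p = {p:.3f}")
```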
| Analysis | Measure | Discovery sample | Replication sample |
| --- | --- | --- | --- |
| Mediation | Punishment learning rate | β = –0.002 ± 0.001, p = 0.052 | β = –0.001 ± 0.003, p = 0.132 |
| | | β = –0.001 ± 0.001, p = 0.222 | β = –0.001 ± 0.003, p = 0.031† |
| | Reward-punishment sensitivity index | β = 0.009 ± 0.003, p = 0.003 | β = 0.009 ± 0.002, p < 0.001 |
| | | β = 0.006 ± 0.002, p = 0.011 | β = 0.009 ± 0.002, p < 0.001 |
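The mediation rows report indirect effects of task-induced anxiety on choice via the two model parameters. The sketch below uses a simple product-of-coefficients estimator with a percentile bootstrap on simulated placeholder variables; the paper's estimator and standard errors may differ.

```python
import numpy as np
import statsmodels.api as sm

def indirect_effect(x, m, y):
    """Product-of-coefficients estimate: a (x -> m) times b (m -> y, adjusting for x)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

rng = np.random.default_rng(1)
n = 300
anxiety = rng.normal(size=n)                   # placeholder: task-induced anxiety
mediator = 0.2 * anxiety + rng.normal(size=n)  # placeholder: punishment learning rate
choice = -0.3 * mediator + rng.normal(size=n)  # placeholder: conflict option choices

point = indirect_effect(anxiety, mediator, choice)

# Percentile bootstrap for the indirect effect.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, size=n)
    boot.append(indirect_effect(anxiety[idx], mediator[idx], choice[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```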
| Analysis | Measure | Discovery sample | Replication sample |
| --- | --- | --- | --- |
| Correlation; psychiatric symptoms and choice (overall proportion of conflict option choices) | GAD7 | τ = –0.026, p = 0.458 | τ = –0.001, p = 0.988 |
| | | τ = 0.005, p = 0.894 | τ = –0.002, p = 0.919 |
| | PHQ8 | τ = –0.02, p = 0.579 | τ = –0.013, p = 0.639 |
| | | τ = 0.014, p = 0.684 | τ = –0.012, p = 0.646 |
| | BEAQ | τ = –0.059, p = 0.010 | τ = –0.029, p = 0.286* |
| | | τ = –0.048, p = 0.151† | τ = –0.020, p = 0.423 |
| Correlation; psychiatric symptoms and task-induced anxiety | GAD7 | τ = 0.256, p < 0.001 | τ = 0.222, p < 0.001 |
| | | τ = 0.267, p < 0.001 | τ = 0.231, p < 0.001 |
| | PHQ8 | τ = 0.233, p < 0.001 | τ = 0.184, p < 0.001 |
| | | τ = 0.244, p < 0.001 | τ = 0.194, p < 0.001 |
| | BEAQ | τ = 0.15, p < 0.001 | τ = 0.176, p < 0.001 |
| | | τ = 0.172, p < 0.001 | τ = 0.17, p < 0.001 |
| Correlation; GAD7 and model parameters | Reward learning rate | τ = –0.06, p = 0.076 | τ = –0.03, p = 0.304 |
| | | τ = –0.061, p = 0.074 | τ = –0.028, p = 0.264 |
| | Punishment learning rate | τ = –0.07, p = 0.077 | τ = –0.01, p = 0.621 |
| | | τ = –0.037, p = 0.265 | τ = –0.017, p = 0.534 |
| | Learning rate ratio | τ = –0.001, p = 0.999 | τ = –0.034, p = 0.229 |
| | | τ = –0.032, p = 0.35 | τ = –0.026, p = 0.301 |
| | Reward sensitivity | τ = –0.01, p = 0.802 | τ = –0.01, p = 0.745 |
| | | τ = –0.005, p = 0.877 | τ = –0.009, p = 0.718 |
| | Punishment sensitivity | τ = 0.05, p = 0.154 | τ = 0.003, p = 0.907 |
| | | τ = 0.022, p = 0.513 | τ = 0.012, p = 0.633 |
| | Reward-punishment sensitivity index | τ = –0.05, p = 0.149 | τ = 0.01, p = 0.746 |
| | | τ = –0.023, p = 0.496 | τ = 0.007, p = 0.797 |
| Correlation; PHQ8 and model parameters | Reward learning rate | τ = –0.03, p = 0.396 | τ = –0.03, p = 0.219 |
| | | τ = –0.04, p = 0.239 | τ = –0.035, p = 0.17 |
| | Punishment learning rate | τ = –0.06, p = 0.100 | τ = –0.01, p = 0.630 |
| | | τ = –0.022, p = 0.504 | τ = –0.025, p = 0.348 |
| | Learning rate ratio | τ = 0.008, p = 0.809 | τ = –0.033, p = 0.231 |
| | | τ = –0.038, p = 0.259 | τ = –0.016, p = 0.521 |
| | Reward sensitivity | τ = –0.012, p = 0.610 | τ = 0.01, p = 0.729 |
| | | τ = –0.009, p = 0.801 | τ = 0.004, p = 0.901 |
| | Punishment sensitivity | τ = 0.05, p = 0.179 | τ = –0.002, p = 0.948 |
| | | τ = 0.008, p = 0.802 | τ = 0.012, p = 0.619 |
| | Reward-punishment sensitivity index | τ = –0.06, p = 0.123 | τ = 0.02, p = 0.557 |
| | | τ = –0.01, p = 0.773 | τ = 0.005, p = 0.867 |
| Correlation; BEAQ and model parameters | Reward learning rate | τ = –0.06, p = 0.085 | τ = –0.02, p = 0.394 |
| | | τ = –0.047, p = 0.147 | τ = –0.017, p = 0.503 |
| | Punishment learning rate | τ = –0.08, p = 0.024 | τ = –0.03, p = 0.337* |
| | | τ = –0.071, p = 0.032 | τ = –0.031, p = 0.228 |
| | Learning rate ratio | τ = 0.018, p = 0.618 | τ = –0.005, p = 0.883 |
| | | τ = 0.002, p = 0.938 | τ = 0.007, p = 0.759 |
| | Reward sensitivity | τ = –0.01, p = 0.739 | τ = 0.01, p = 0.753 |
| | | τ = –0.036, p = 0.29 | τ = 0.002, p = 0.927 |
| | Punishment sensitivity | τ = 0.07, p = 0.061 | τ = 0.02, p = 0.477 |
| | | τ = 0.05, p = 0.127 | τ = 0.025, p = 0.308 |
| | Reward-punishment sensitivity index | τ = 0.02, p = 0.034 | τ = –0.01, p = 0.745* |
| | | τ = –0.073, p = 0.028 | τ = –0.015, p = 0.548 |
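The symptom rows above repeat the same Kendall rank-correlation analysis across questionnaires (GAD7, PHQ8, BEAQ) and model parameters. A compact way to run such a battery, assuming a hypothetical per-participant summary table whose column names are placeholders:

```python
import pandas as pd
from scipy.stats import kendalltau

# Hypothetical per-participant summary table; column names are assumptions.
df = pd.read_csv("participants.csv")
scales = ["GAD7", "PHQ8", "BEAQ"]
params = ["reward_lr", "punishment_lr", "lr_ratio",
          "reward_sens", "punishment_sens", "sens_index"]

rows = []
for scale in scales:
    for param in params:
        tau, pval = kendalltau(df[scale], df[param])
        rows.append({"scale": scale, "parameter": param,
                     "tau": round(tau, 3), "p": round(pval, 3)})
print(pd.DataFrame(rows).to_string(index=False))
```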