Raquel G. Alhama
Cognitive Science and Artificial Intelligence Department, Tilburg University
Verified email at tilburguniversity.edu
Cited by
Pre-wiring and pre-training: What does a neural network need to learn truly general identity rules?
RG Alhama, W Zuidema
Journal of Artificial Intelligence Research 61, 927-946, 2018
A review of computational models of basic rule learning: The neural-symbolic debate and beyond
RG Alhama, W Zuidema
Psychonomic bulletin & review 26 (4), 1174-1194, 2019
Neural discontinuous constituency parsing
M Stanojević, RG Alhama
Proceedings of the 2017 Conference on Empirical Methods in Natural Language …, 2017
Five ways in which computational modeling can help advance cognitive science: Lessons from artificial grammar learning
W Zuidema, RM French, RG Alhama, K Ellis, TJ O'Donnell, T Sainburg, ...
Topics in cognitive science 12 (3), 925-941, 2020
How should we evaluate models of segmentation in artificial language learning?
RG Alhama, R Scha, W Zuidema
University of Groningen, 2015
The role of information in visual word recognition: A perceptually-constrained connectionist account
RG Alhama, N Siegelman, R Frost, BC Armstrong
The 41st annual meeting of the cognitive science society (cogsci 2019), 83-89, 2019
Evaluating word embeddings for language acquisition
RG Alhama, CF Rowland, E Kidd
(Online) Workshop on Cognitive Modeling and Computational Linguistics (CMCL …, 2020
Rule learning in humans and animals
R Garrido Alhama, RJH Scha, WH Zuidema
Los avances tecnológicos y la ciencia del lenguaje [Technological advances and the science of language]
M Martí, RG Alhama, M Recasens
Universidad de Santiago de Compostela, 2012
Segmentation as Retention and Recognition: the R&R model
RG Alhama, W Zuidema
Proceedings of the 39th Annual Conference of the Cognitive Science Society., 2017
Generalization in Artificial Language Learning: Modelling the Propensity to Generalize
RG Alhama, W Zuidema
Proceedings of the 7th Workshop on Cognitive Aspects of Computational …, 2016
‘Long nose’ and ‘naso lungo’: Establishing the need for retrodiction in computational models of word learning
F Zermiani, A Khaliq, RG Alhama
Many Paths to Language, 2020
Distributional semantic models for vocabulary acquisition
RG Alhama, C Rowland, E Kidd
the 26th Architectures and Mechanisms for Language Processing Conference …, 2020
Predictive generation of syntax during sentence reading
J Martorell, RG Alhama, N Molinaro, S Mancini
the XIV International Symposium of Psycholinguistics, 2019
Statistical learning shapes proficient reading: A cross-linguistic information-theoretic study
RG Alhama, N Siegelman, R Frost, BC Armstrong
the International Conference on Interdisciplinary Advances in Statistical …, 2019
Preactivating syntactic information during reading
J Martorell, RG Alhama, N Molinaro, S Mancini
the conference Psycholinguistics in Iceland: Parsing and Prediction, 2019
A perceptually-constrained visual word recognition model
RG Alhama, N Siegelman, R Frost, BC Armstrong
Architectures and Mechanisms for Language Processing (AMLaP 2019), 2019
What do neural networks need in order to generalize?
RG Alhama, W Zuidema
the MPI Proudly Presents series, 2019
Cognitive computational models of language learning [invited lecture]
RG Alhama
the Trends in AI course at the Radboud University, 2018
Memorization of sequence-segments by humans and non-human animals: the Retention-Recognition Model
RG Alhama, R Scha, W Zuidema