Vikram Ramanarayanan
2020 – today
2024
- [j10] Michael Neumann, Hardik Kothare, Vikram Ramanarayanan: Multimodal speech biomarkers for remote monitoring of ALS disease progression. Comput. Biol. Medicine 180: 108949 (2024)
- [j9] Jessica L. Gaines, Kwang S. Kim, Benjamin Parrell, Vikram Ramanarayanan, Alvincé L. Pongos, Srikantan S. Nagarajan, John F. Houde: Bayesian inference of state feedback control parameters for fo perturbation responses in cerebellar ataxia. PLoS Comput. Biol. 20(10): 1011986 (2024)
- [c61] Vanessa Richter, Michael Neumann, Vikram Ramanarayanan: Towards remote differential diagnosis of mental and neurological disorders using automatically extracted speech and facial features. ML4CMH@AAAI 2024: 101-111

2023
- [j8] Kwang S. Kim, Jessica L. Gaines, Benjamin Parrell, Vikram Ramanarayanan, Srikantan S. Nagarajan, John F. Houde: Mechanisms of sensorimotor adaptation in a hierarchical state feedback control model of speech. PLoS Comput. Biol. 19(7) (2023)
- [c60] Vikram Ramanarayanan, David Pautler, Lakshmi Arbatti, Abhishek Hosamath, Michael Neumann, Hardik Kothare, Oliver Roesler, Jackson Liscombe, Andrew Cornish, Doug Habberstad, Vanessa Richter, David Fox, David Suendermann-Oeft, Ira Shoulson: When Words Speak Just as Loudly as Actions: Virtual Agent Based Remote Health Assessment Integrating What Patients Say with What They Do. INTERSPEECH 2023: 678-679
- [c59] Michael Neumann, Hardik Kothare, Doug Habberstad, Vikram Ramanarayanan: A Multimodal Investigation of Speech, Text, Cognitive and Facial Video Features for Characterizing Depression With and Without Medication. INTERSPEECH 2023: 1219-1223
- [c58] Hardik Kothare, Michael Neumann, Jackson Liscombe, Jordan R. Green, Vikram Ramanarayanan: Responsiveness, Sensitivity and Clinical Utility of Timing-Related Speech Biomarkers for Remote Monitoring of ALS Disease Progression. INTERSPEECH 2023: 2323-2327
- [c57] Michael Neumann, Hardik Kothare, Vikram Ramanarayanan: Combining Multiple Multimodal Speech Features into an Interpretable Index Score for Capturing Disease Progression in Amyotrophic Lateral Sclerosis. INTERSPEECH 2023: 2353-2357
- [c56] Vanessa Richter, Michael Neumann, Jordan R. Green, Brian Richburg, Oliver Roesler, Hardik Kothare, Vikram Ramanarayanan: Remote Assessment for ALS using Multimodal Dialog Agents: Data Quality, Feasibility and Task Compliance. INTERSPEECH 2023: 5441-5445
- [i4] Lakshmi Arbatti, Abhishek Hosamath, Vikram Ramanarayanan, Ira Shoulson: What Do Patients Say About Their Disease Symptoms? Deep Multilabel Text Classification With Human-in-the-Loop Curation for Automatic Labeling of Patient Self Reports of Problems. CoRR abs/2305.04905 (2023)

2022
- [c55] Hardik Kothare, Oliver Roesler, William Burke, Michael Neumann, Jackson Liscombe, Andrew Exner, Sandy Snyder, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft, Jessica Huber, Vikram Ramanarayanan: Speech, Facial and Fine Motor Features for Conversation-Based Remote Assessment and Monitoring of Parkinson's Disease. EMBC 2022: 3464-3467
- [c54] Oliver Roesler, Hardik Kothare, William Burke, Michael Neumann, Jackson Liscombe, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft, Vikram Ramanarayanan: Exploring Facial Metric Normalization For Within- and Between-Subject Comparisons in a Multimodal Health Monitoring Agent. ICMI Companion 2022: 160-165
- [c53] Vanessa Richter, Michael Neumann, Hardik Kothare, Oliver Roesler, Jackson Liscombe, David Suendermann-Oeft, Sebastian Prokop, Anzalee Khan, Christian Yavorsky, Jean-Pierre Lindenmayer, Vikram Ramanarayanan: Towards Multimodal Dialog-Based Speech & Facial Biomarkers of Schizophrenia. ICMI Companion 2022: 171-176
- [c52] Hardik Kothare, Michael Neumann, Jackson Liscombe, Oliver Roesler, William Burke, Andrew Exner, Sandy Snyder, Andrew Cornish, Doug Habberstad, David Pautler, David Suendermann-Oeft, Jessica Huber, Vikram Ramanarayanan: Statistical and clinical utility of multimodal dialogue-based speech and facial metrics for Parkinson's disease assessment. INTERSPEECH 2022: 3658-3662

2021
- [c51] Rahul R. Divekar, Haley Lepp, Pravin Chopade, Aaron Albin, Daniel Brenner, Vikram Ramanarayanan: Conversational Agents in Language Education: Where They Fit and Their Research Challenges. HCI (45) 2021: 272-279
- [c50] Hardik Kothare, Vikram Ramanarayanan, Oliver Roesler, Michael Neumann, Jackson Liscombe, William Burke, Andrew Cornish, Doug Habberstad, Alaa Sakallah, Sara Markuson, Seemran Kansara, Afik Faerman, Yasmine Bensidi-Slimane, Laura Fry, Saige Portera, David Suendermann-Oeft, David Pautler, Carly Demopoulos: Investigating the Interplay Between Affective, Phonatory and Motoric Subsystems in Autism Spectrum Disorder Using a Multimodal Dialogue Agent. Interspeech 2021: 1967-1971
- [c49] Michael Neumann, Oliver Roesler, Jackson Liscombe, Hardik Kothare, David Suendermann-Oeft, David Pautler, Indu Navar, Aria Anvar, Jochen Kumm, Raquel Norel, Ernest Fraenkel, Alexander V. Sherman, James D. Berry, Gary L. Pattee, Jun Wang, Jordan R. Green, Vikram Ramanarayanan: Investigating the Utility of Multimodal Conversational Technology and Audiovisual Analytic Measures for the Assessment and Monitoring of Amyotrophic Lateral Sclerosis at Scale. Interspeech 2021: 4783-4787
- [i3] Michael Neumann, Oliver Roesler, Jackson Liscombe, Hardik Kothare, David Suendermann-Oeft, David Pautler, Indu Navar, Aria Anvar, Jochen Kumm, Raquel Norel, Ernest Fraenkel, Alexander V. Sherman, James D. Berry, Gary L. Pattee, Jun Wang, Jordan R. Green, Vikram Ramanarayanan: Investigating the Utility of Multimodal Conversational Technology and Audiovisual Analytic Measures for the Assessment and Monitoring of Amyotrophic Lateral Sclerosis at Scale. CoRR abs/2104.07310 (2021)

2020
- [j7] Yao Qian, Rutuja Ubale, Patrick L. Lange, Keelan Evanini, Vikram Ramanarayanan, Frank K. Soong: Spoken Language Understanding of Human-Machine Conversations for Language Learning Applications. J. Signal Process. Syst. 92(8): 805-817 (2020)
- [c48] Haley Lepp, Chee Wee Leong, Katrina Roohr, Michelle P. Martin-Raugh, Vikram Ramanarayanan: Effect of Modality on Human and Machine Scoring of Presentation Videos. ICMI 2020: 630-634
- [c47] Vikram Ramanarayanan: Design and Development of a Human-Machine Dialog Corpus for the Automated Assessment of Conversational English Proficiency. INTERSPEECH 2020: 419-423
- [c46] Vikram Ramanarayanan, Oliver Roesler, Michael Neumann, David Pautler, Doug Habberstad, Andrew Cornish, Hardik Kothare, Vignesh Murali, Jackson Liscombe, Dirk Schnelle-Walka, Patrick L. Lange, David Suendermann-Oeft: Toward Remote Patient Monitoring of Speech, Video, Cognitive and Respiratory Biomarkers Using Multimodal Dialog Technology. INTERSPEECH 2020: 492-493
- [i2] Vikram Ramanarayanan, Matthew Mulholland, Debanjan Ghosh: Exploring Recurrent, Memory and Attention Based Architectures for Scoring Interactional Aspects of Human-Machine Text Dialog. CoRR abs/2005.09834 (2020)
2010 – 2019
2019
- [j6] Benjamin Parrell, Vikram Ramanarayanan, Srikantan S. Nagarajan, John F. Houde: The FACTS model of speech motor control: Fusing state estimation and task-based control. PLoS Comput. Biol. 15(9) (2019)
- [c45] Rutuja Ubale, Vikram Ramanarayanan, Yao Qian, Keelan Evanini, Chee Wee Leong, Chong Min Lee: Native Language Identification from Raw Waveforms Using Deep Convolutional Neural Networks with Attentive Pooling. ASRU 2019: 403-410
- [c44] Chee Wee Leong, Katrina Roohr, Vikram Ramanarayanan, Michelle P. Martin-Raugh, Harrison Kell, Rutuja Ubale, Yao Qian, Zydrune Mladineo, Laura McCulla: Are Humans Biased in Assessment of Video Interviews? ICMI (Adjunct) 2019: 9:1-9:5
- [c43] Vikram Ramanarayanan, Matthew Mulholland, Yao Qian: Scoring Interactional Aspects of Human-Machine Dialog for Language Learning and Assessment using Text Features. SIGdial 2019: 103-109
- [i1] Chee Wee Leong, Katrina Roohr, Vikram Ramanarayanan, Michelle P. Martin-Raugh, Harrison Kell, Rutuja Ubale, Yao Qian, Zydrune Mladineo, Laura McCulla: To Trust, or Not to Trust? A Study of Human Bias in Automated Video Interview Assessments. CoRR abs/1911.13248 (2019)

2018
- [j5] Vikram Ramanarayanan, Sam Tilsen, Michael I. Proctor, Johannes Töger, Louis Goldstein, Krishna S. Nayak, Shrikanth S. Narayanan: Analysis of speech production real-time MRI. Comput. Speech Lang. 52: 1-22 (2018)
- [j4] Colin Vaz, Vikram Ramanarayanan, Shrikanth S. Narayanan: Acoustic Denoising Using Dictionary Learning With Spectral and Temporal Regularization. IEEE ACM Trans. Audio Speech Lang. Process. 26(5): 967-980 (2018)
- [c42] Vikram Ramanarayanan, Michelle LaMar: Toward Automatically Measuring Learner Ability from Human-Machine Dialog Interactions using Novel Psychometric Models. BEA@NAACL-HLT 2018: 117-126
- [c41] Keelan Evanini, Veronika Timpe-Laughlin, Eugene Tsuprun, Ian Blood, Jeremy Lee, James V. Bruno, Vikram Ramanarayanan, Patrick L. Lange, David Suendermann-Oeft: Game-based Spoken Dialog Language Learning Applications for Young Students. INTERSPEECH 2018: 548-549
- [c40] Benjamin Parrell, Vikram Ramanarayanan, Srikantan S. Nagarajan, John F. Houde: FACTS: A Hierarchical Task-based Control Model of Speech Incorporating Sensory Feedback. INTERSPEECH 2018: 1497-1501
- [c39] Vikram Ramanarayanan, David Pautler, Patrick L. Lange, Eugene Tsuprun, Rutuja Ubale, Keelan Evanini, David Suendermann-Oeft: Toward Scalable Dialog Technology for Conversational Language Learning: Case Study of the TOEFL® MOOC. INTERSPEECH 2018: 1960-1961
- [c38] Keelan Evanini, Matthew Mulholland, Rutuja Ubale, Yao Qian, Robert A. Pugh, Vikram Ramanarayanan, Aoife Cahill: Improvements to an Automated Content Scoring System for Spoken CALL Responses: the ETS Submission to the Second Spoken CALL Shared Task. INTERSPEECH 2018: 2379-2383
- [c37] Vikram Ramanarayanan, Robert Pugh, Yao Qian, David Suendermann-Oeft: Automatic Turn-Level Language Identification for Code-Switched Spanish-English Dialog. IWSDS 2018: 51-61
- [c36] Vikram Ramanarayanan, Robert Pugh: Automatic Token and Turn Level Language Identification for Code-Switched Text Dialog: An Analysis Across Language Pairs and Corpora. SIGDIAL Conference 2018: 80-88
- [c35] David Pautler, Vikram Ramanarayanan, Kirby Cofino, Patrick L. Lange, David Suendermann-Oeft: Leveraging Multimodal Dialog Technology for the Design of Automated and Interactive Student Agents for Teacher Training. SIGDIAL Conference 2018: 249-252

2017
- [c34] Vikram Ramanarayanan, David Suendermann-Oeft, Hillary Molloy, Eugene Tsuprun, Patrick L. Lange, Keelan Evanini: Crowdsourcing Multimodal Dialog Interactions: Lessons Learned from the HALEF Case. AAAI Workshops 2017
- [c33] Yao Qian, Rutuja Ubale, Vikram Ramanarayanan, Patrick L. Lange, David Suendermann-Oeft, Keelan Evanini, Eugene Tsuprun: Exploring ASR-free end-to-end modeling to improve spoken language understanding in a cloud-based dialog system. ASRU 2017: 569-576
- [c32] Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft, Keelan Evanini: Crowdsourcing ratings of caller engagement in thin-slice videos of human-machine dialog: benefits and pitfalls. ICMI 2017: 281-287
- [c31] Kirby Cofino, Vikram Ramanarayanan, Patrick L. Lange, David Pautler, David Suendermann-Oeft, Keelan Evanini: A modular, multimodal open-source virtual interviewer dialog agent. ICMI 2017: 520-521
- [c30] Vikram Ramanarayanan, David Suendermann-Oeft: Jee haan, I'd like both, por favor: Elicitation of a Code-Switched Corpus of Hindi-English and Spanish-English Human-Machine Dialog. INTERSPEECH 2017: 47-51
- [c29] Tanner Sorensen, Zisis Iason Skordilis, Asterios Toutios, Yoon-Chul Kim, Yinghua Zhu, Jangwon Kim, Adam C. Lammert, Vikram Ramanarayanan, Louis Goldstein, Dani Byrd, Krishna S. Nayak, Shrikanth S. Narayanan: Database of Volumetric and Real-Time Vocal Tract MRI for Speech Science. INTERSPEECH 2017: 645-649
- [c28] Vikram Ramanarayanan, Patrick L. Lange, Keelan Evanini, Hillary R. Molloy, David Suendermann-Oeft: Human and Automated Scoring of Fluency, Pronunciation and Intonation During Human-Machine Spoken Dialog Interactions. INTERSPEECH 2017: 1711-1715
- [c27] Vikram Ramanarayanan, Chee Wee Leong, David Suendermann-Oeft: Rushing to Judgement: How do Laypeople Rate Caller Engagement in Thin-Slice Videos of Human-Machine Dialog? INTERSPEECH 2017: 2526-2530
- [c26] Zhou Yu, Vikram Ramanarayanan, Patrick L. Lange, David Suendermann-Oeft: An Open-Source Dialog System with Real-Time Engagement Tracking for Job Interview Training Applications. IWSDS 2017: 199-207

2016
- [j3] Ming Li, Jangwon Kim, Adam C. Lammert, Prasanta Kumar Ghosh, Vikram Ramanarayanan, Shrikanth S. Narayanan: Speaker verification based on the fusion of speech acoustics and inverted articulatory signals. Comput. Speech Lang. 36: 196-211 (2016)
- [j2] Vikram Ramanarayanan, Maarten Van Segbroeck, Shrikanth S. Narayanan: Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories. Comput. Speech Lang. 36: 330-346 (2016)
- [c25] Vikram Ramanarayanan, Saad Khan: Novel features for capturing cooccurrence behavior in dyadic collaborative problem solving tasks. EDM 2016: 620-621
- [c24] Vikram Ramanarayanan, Benjamin Parrell, Louis Goldstein, Srikantan S. Nagarajan, John F. Houde: A New Model of Speech Motor Control Based on Task Dynamics and State Feedback. INTERSPEECH 2016: 3564-3568
- [c23] Yao Qian, Jidong Tao, David Suendermann-Oeft, Keelan Evanini, Alexei V. Ivanov, Vikram Ramanarayanan: Noise and Metadata Sensitive Bottleneck Features for Improving Speaker Recognition with Non-Native Speech Input. INTERSPEECH 2016: 3648-3652
- [c22] Zhou Yu, Vikram Ramanarayanan, Robert Mundkowsky, Patrick L. Lange, Alexei V. Ivanov, Alan W. Black, David Suendermann-Oeft: Multimodal HALEF: An Open-Source Modular Web-Based Multimodal Dialog Framework. IWSDS 2016: 233-244

2015
- [c21] David Suendermann-Oeft, Vikram Ramanarayanan, Moritz Teckenbrock, Felix Neutatz, Dennis Schmidt: HALEF: An Open-Source Standard-Compliant Telephony-Based Modular Spoken Dialog System: A Review and An Outlook. IWSDS 2015: 53-61
- [c20] Zhou Yu, Vikram Ramanarayanan, David Suendermann-Oeft, Xinhao Wang, Klaus Zechner, Lei Chen, Jidong Tao, Aliaksei Ivanou, Yao Qian: Using bidirectional LSTM recurrent neural networks to learn high-level abstractions of sequential features for automated scoring of non-native spontaneous speech. ASRU 2015: 338-345
- [c19] Vikram Ramanarayanan, Chee Wee Leong, Lei Chen, Gary Feng, David Suendermann-Oeft: Evaluating Speech, Face, Emotion and Body Movement Time-series Features for Automated Multimodal Presentation Scoring. ICMI 2015: 23-30
- [c18] Zisis Iason Skordilis, Vikram Ramanarayanan, Louis Goldstein, Shrikanth S. Narayanan: Experimental assessment of the tongue incompressibility hypothesis during speech production. INTERSPEECH 2015: 384-388
- [c17] Vikram Ramanarayanan, Lei Chen, Chee Wee Leong, Gary Feng, David Suendermann-Oeft: An analysis of time-aggregated and time-series features for scoring different aspects of multimodal presentation data. INTERSPEECH 2015: 1373-1377
- [c16] Alexei V. Ivanov, Vikram Ramanarayanan, David Suendermann-Oeft, Melissa Lopez, Keelan Evanini, Jidong Tao: Automated Speech Recognition Technology for Dialogue Interaction with Non-Native Interlocutors. SIGDIAL Conference 2015: 134-138
- [c15] Vikram Ramanarayanan, David Suendermann-Oeft, Alexei V. Ivanov, Keelan Evanini: A distributed cloud-based dialog system for conversational application development. SIGDIAL Conference 2015: 432-434

2014
- [j1] Adam C. Lammert, Louis Goldstein, Vikram Ramanarayanan, Shrikanth S. Narayanan: Gestural Control in the English Past-Tense Suffix: An Articulatory Study Using Real-Time MRI. Phonetica 71(4): 229-248 (2014)
- [c14] Vikram Ramanarayanan, Louis Goldstein, Shrikanth S. Narayanan: Motor control primitives arising from a learned dynamical systems model of speech articulation. INTERSPEECH 2014: 150-154
- [c13] Andrés Benítez, Vikram Ramanarayanan, Louis Goldstein, Shrikanth S. Narayanan: A real-time MRI study of articulatory setting in second language speech. INTERSPEECH 2014: 701-705
- [c12] Colin Vaz, Vikram Ramanarayanan, Shrikanth S. Narayanan: Joint filtering and factorization for recovering latent structure from noisy speech data. INTERSPEECH 2014: 2365-2369

2013
- [c11] Adam C. Lammert, Vikram Ramanarayanan, Michael I. Proctor, Shrikanth S. Narayanan: Vocal tract cross-distance estimation from real-time MRI using region-of-interest analysis. INTERSPEECH 2013: 959-962
- [c10] Zhaojun Yang, Vikram Ramanarayanan, Dani Byrd, Shrikanth S. Narayanan: The effect of word frequency and lexical class on articulatory-acoustic coupling. INTERSPEECH 2013: 973-977
- [c9] Colin Vaz, Vikram Ramanarayanan, Shrikanth S. Narayanan: A two-step technique for MRI audio enhancement using dictionary learning and wavelet packet analysis. INTERSPEECH 2013: 1312-1315
- [c8] Ming Li, Jangwon Kim, Prasanta Kumar Ghosh, Vikram Ramanarayanan, Shrikanth S. Narayanan: Speaker verification based on fusion of acoustic and articulatory information. INTERSPEECH 2013: 1614-1618
- [c7] Vikram Ramanarayanan, Adam C. Lammert, Louis Goldstein, Shrikanth S. Narayanan: Articulatory settings facilitate mechanically advantageous motor control of vocal tract articulators. INTERSPEECH 2013: 2010-2013
- [c6] Daniel Bone, Chi-Chun Lee, Vikram Ramanarayanan, Shrikanth S. Narayanan, Renske S. Hoedemaker, Peter C. Gordon: Analyzing eye-voice coordination in rapid automatized naming. INTERSPEECH 2013: 2425-2429

2012
- [c5] Vikram Ramanarayanan, Prasanta Kumar Ghosh, Adam C. Lammert, Shrikanth S. Narayanan: Exploiting speech production information for automatic speech and speaker modeling and recognition - possibilities and new opportunities. APSIPA 2012: 1-6

2011
- [c4] Vikram Ramanarayanan, Athanasios Katsamanis, Shrikanth S. Narayanan: Automatic Data-Driven Learning of Articulatory Primitives from Real-Time MRI Data Using Convolutive NMF with Sparseness Constraints. INTERSPEECH 2011: 61-64
- [c3] Shrikanth S. Narayanan, Erik Bresch, Prasanta Kumar Ghosh, Louis Goldstein, Athanasios Katsamanis, Yoon Kim, Adam C. Lammert, Michael I. Proctor, Vikram Ramanarayanan, Yinghua Zhu: A Multimodal Real-Time MRI Articulatory Corpus for Speech Research. INTERSPEECH 2011: 837-840
- [c2] Athanasios Katsamanis, Erik Bresch, Vikram Ramanarayanan, Shrikanth S. Narayanan: Validating rt-MRI Based Articulatory Representations via Articulatory Recognition. INTERSPEECH 2011: 2841-2844

2010
- [c1] Vikram Ramanarayanan, Dani Byrd, Louis Goldstein, Shrikanth S. Narayanan: Investigating articulatory setting - pauses, ready position, and rest - using real-time MRI. INTERSPEECH 2010: 1994-1997