More speakers will be added.
Jan Brase, Goettingen State and University library
Germany
Jan Brase has a degree in Mathematics and a PhD in Computer Science from the University of Hannover, Germany. From 2005 he coordinated research on digital libraries at the German National Library of Science and Technology (TIB). In 2009 he initiated the founding of DataCite, a global consortium of libraries and information institutions that supports the publication and citation of research data. He headed DataCite until 2015, when he became head of research and development at the Goettingen State and University Library (SUB). At SUB he is also the scientific director of the eResearch Alliance, the central coordination office for all research-data-related services on campus.
Dr. Brase is President of the International Council of Scientific and Technical Information (ICSTI). In 2011 he received the German Library Hi-Tech award.
Presentation: RDM at Campus
Presentation: Examples from the Gipplab in Germany
Dr. Jan Černý, Faculty of Informatics and Statistics, Prague University of Economics and Business
Czech Republic
Dr. Jan Černý is a fellow and researcher at the Prague University of Economics and Business, focused on intelligence studies, particularly the CI, TECHINT and OSINT domains. His research activities cover analysis of the external data and information environment of enterprises, early warning systems, surface web and deep web investigations, search strategy and tactics, and digital forensics. He also deals with public librarianship management, specifically the role of libraries in today’s competitive environment.
Neil Jefferies, University of Oxford
United Kingdom
Neil Jefferies is Head of Innovation in Open Scholarship Support at the Bodleian Libraries and a Director of Data Futures GmbH. He is a co-creator of IIIF and the Oxford Common File Layout, Community Manager for the SWORD protocol, a member of the Bit List Council for the Digital Preservation Coalition, and co-chair of the Research Identifier National Coordinating Committee. Currently, he is PI on the Unlocking Digital Texts project, jointly funded by the AHRC and NEH, and Technical Strategist for Early Modern Letters Online. Previously, he has been involved with projects such as The 15th Century Booktrade, Medieval Libraries of Great Britain, Broadside Ballads and the Fihrist Catalogue of Islamic Manuscripts. He teaches the “IIIF for Research” module on the MSc in Digital Scholarship and sessions on a variety of topics at the Digital Humanities at Oxford Summer School.
Presentation: UK Research Persistent Identifier Strategy
The PID presentation will look at the drivers for PID adoption in the UK, and the work that Jisc/MoreBrains have done to develop a PID strategy.
Workshop: IIIF, Annotation and Editions
The IIIF workshop will discuss how Web Annotation of IIIF resources can be made into a scholarly mechanism for analytics and for publishing annotated editions. I will expand on the work we have been doing with Invenio, Zenodo and ORCID to make this a reality, along with a demonstration, and then cover parallel work on text APIs that allows fragments of online text documents to be annotated and linked to images as well.
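The workshop materials are not reproduced here, but as a rough, hedged sketch of the kind of object under discussion, the snippet below builds a minimal W3C Web Annotation targeting a rectangular region of a IIIF canvas; the annotation id, canvas URI and selector values are placeholders invented for illustration and do not refer to any real resource.

```python
import json

# A minimal W3C Web Annotation (https://www.w3.org/TR/annotation-model/)
# targeting a rectangular region of a IIIF canvas. All identifiers and
# URIs below are placeholders for illustration only.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/1",      # placeholder annotation id
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        "type": "TextualBody",
        "format": "text/plain",
        "value": "Scribal correction in the left margin.",
    },
    "target": {
        # Placeholder IIIF Presentation API canvas URI
        "source": "https://example.org/iiif/manuscript-42/canvas/p003",
        "selector": {
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "xywh=120,450,300,180",        # x, y, width, height in pixels
        },
    },
}

# Serialise the annotation so it can be stored, shared, or gathered
# into an AnnotationPage alongside a IIIF manifest.
print(json.dumps(annotation, indent=2))
```

Groups of such annotations can be collected into AnnotationPages and published alongside a IIIF manifest, which is one way annotated editions of the kind discussed here can be made available.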
Petr Knoth, The Open University
United Kingdom
Dr. Petr Knoth leads the Big Scientific Data and Text Analytics Group (BSDTAG) at the Knowledge Media Institute, The Open University, UK. He is the founder and Head of CORE (core.ac.uk), a service with over 30 million monthly active users that provides access to the world’s largest collection of full-text open access research papers, aggregated from data providers around the world. Petr has a deep interest in the use of AI to improve research workflows and is a relentless advocate of open science. He led the team developing the fosteropenscience.eu e-learning platform, which has become widely used for training European researchers. Petr has also been involved, as a researcher and as a PI, in over 20 research projects funded by the European Commission and by national and international bodies in the areas of data science, text mining, open science and technology-enhanced learning, and has over 80 peer-reviewed publications based on this work.
Presentation: CORE-GPT: Large Language Models for question-answering over open access research
This presentation introduces CORE-GPT, an innovative platform that combines large language models (LLMs) with 34 million open-access scientific articles available through CORE. Addressing the challenge of generating credible, well-cited answers, the platform significantly reduces the potential for “hallucinations” in AI-generated text. Evaluated across the top 20 scientific domains in CORE, CORE-GPT demonstrates its effectiveness in producing reliable, in-depth answers with citations and links to original research articles. Initially designed to complement CORE Search, the tool expands the capabilities of the CORE services including recommendations and enhancing the user experience in academic libraries. By incorporating citations, links, and open-access articles, CORE-GPT fosters trustworthiness, efficiency, broad coverage, and promotes open access research, making it a highly useful tool for researchers and practitioners.
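The abstract describes a retrieve-then-generate design rather than its implementation. The toy sketch below, which uses an invented in-memory corpus and stand-in retrieval and generation steps rather than any actual CORE or CORE-GPT API, illustrates the general pattern: answer a question from retrieved open-access passages and return numbered citations that link back to the source articles.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    url: str       # link back to the open-access full text
    excerpt: str   # passage used as evidence for the answer

# Toy in-memory corpus standing in for a full-text index of open-access
# papers; titles, URLs and excerpts are invented for illustration.
CORPUS = [
    Paper("Transformer architectures for text",
          "https://example.org/oa/1234",
          "Self-attention lets a model weigh every token against every other token."),
    Paper("Evaluating citation quality in generated text",
          "https://example.org/oa/5678",
          "Grounding generated answers in retrieved sources reduces unsupported claims."),
]

def retrieve(question: str, k: int = 2) -> list[Paper]:
    """Toy retrieval step: rank papers by word overlap with the question.
    A real system would query a full-text or vector index instead."""
    q = set(question.lower().split())
    scored = sorted(CORPUS,
                    key=lambda p: len(q & set(p.excerpt.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(question: str, papers: list[Paper]) -> str:
    """Toy generation step: a real system would pass the numbered passages
    to an LLM and ask for an answer that cites them as [n]."""
    cited = "; ".join(f"{p.excerpt} [{i + 1}]" for i, p in enumerate(papers))
    return f"Q: {question}\nA (draft from sources): {cited}"

def answer_with_citations(question: str) -> None:
    """Retrieve supporting papers first, then produce a cited answer."""
    papers = retrieve(question)
    print(generate(question, papers))
    for i, p in enumerate(papers, start=1):
        print(f"[{i}] {p.title} - {p.url}")

answer_with_citations("How does grounding answers in sources reduce hallucinations?")
```

Because every statement in the answer is tied to a retrieved passage, a reader can follow the numbered citations back to the full texts, which is the property the abstract highlights as reducing hallucinations.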
Riitta Koikkalainen, The National Library of Finland
Finland
Riitta Koikkalainen makes her living as an information specialist at the National Library of Finland, as an expert on scholarly publishing and communication. As part of her job, she coordinates the work of Kotoistus, a service in-between internationalisation and localisation. She is the current editor-in-chief of Tietolinja. Riitta is also one of the founders of the philosophical magazine niin & näin and a member of its editorial board. From the very beginning of her life in academia she has had a strong interest in the sociology of knowledge: there are no meanings outside social interaction, at least none that could be (re)presented purely as such, and this makes the world a very interesting place. You can find her on Twitter @Riitta_AK and on Mastodon @RiittaK@mastodon.social.
Presentation: Long live the knowledge! Proper metadata and how it is created with URN and other persistent identifiers.
Anthony Leroy, Université libre de Bruxelles
Belgium
Anthony Leroy has been a software engineer at the Libraries of the Université libre de Bruxelles (Belgium) since 2011. He is in charge of the digitization infrastructure and the digital preservation program of the University Libraries. He coordinates the activities of the SAFE distributed preservation network, an international LOCKSS network operated by seven partner universities. He is also actively involved in various research data management activities at ULB. Anthony is an engineer in electronics and telecommunications with a PhD in microelectronics (ULB) and was a researcher for almost ten years, collaborating with several industrial partners.
Workshop: Université libre de Bruxelles – Transforming Libraries Digitization Services: going beyond scanning our own collections to become a data provider for (Digital) Humanities researchers
Ben Mcleish, Dimensions
United Kingdom
Ben Mcleish is Product Solutions Team Lead for Altmetric and Dimensions and has been at Digital Science since 2014. He specialises in the analysis of research security risk and foreign interference, research impact, and knowledge transfer and mobilisation. He has previously worked at Wiley, ProQuest and the American Chemical Society, as well as in media monitoring services.
Workshop: Leveraging new Public-Private ventures and Knowledge Transfer in Eastern Europe
Universities are being called on to mobilise their research into start-ups and commercial ventures and demonstrate their research reproducibility and impact in broader society, which would lead to better funding opportunities and economic development. Old bibliometric tools and rankings do not help with or measure this mission. In this workshop we will explore the research data gathered by the Dimensions Analytics service, analyse the true activities of a university in Slovakia, and explore other data and functions as found in various Digital Science tools around research performance and impact.
David Minor, UC San Diego Library
USA
David Minor works at the University of California, San Diego, where he is the Director of the Research Data Curation Program in the UC San Diego Library. In this role he helps define and lead the work needed for the contemporary and long-term management of digital resources. His position involves significant interaction with stakeholders on the UC San Diego campus, across the UC system, and in national initiatives. His program also includes the management of Chronopolis, a national-scale digital preservation network.
Miroslav Mizera
Slovakia
Miroslav Mizera has been working in the field of foreign and security policy and strategic communication for almost 15 years. In April 2022 he assumed the post of senior coordinator for strategic communication in the Press Department, Office of the Minister of Interior of the Slovak Republic.
In 2020–2021 he was a political, security and economic consultant. From 2019 to 2020 he worked as a consultant for the international consulting and advisory company Jones Lang LaSalle (JLL). From 2016 to 2018 he worked as Special Adviser to the State Secretary of the Ministry of Defense of the Slovak Republic. In 2016 he worked as an Advisor to the State Secretary of the Ministry of Economy of the Slovak Republic. From 2014 to 2015 he held the position of Head of the Secretariat for the Presidency of the Council of the EU at the Ministry of Defense of the Slovak Republic. From 2010 to 2014 he worked as an Advisor to the State Secretaries of the Ministry of Defense of the Slovak Republic.
He regularly speaks and participates in security and foreign policy seminars and lectures in Slovakia, the Western Balkans and Europe.
Presentation: StratCom & Disinformation
Michal Tomczyk, Clarivate
Poland
Presentation: Artificial Intelligence in Clarivate and ProQuest Solutions
In an era of rapid technological advancement, artificial intelligence (AI) plays a key role in automating and optimizing research processes and knowledge management. Solutions from Clarivate and ProQuest, as leading providers of information tools, integrate AI to deliver efficient and precise solutions for searching, analyzing, and managing data. This presentation will highlight the key features of both companies’ products that leverage AI, including automated bibliometric analysis tools, natural language processing (NLP), and machine learning to optimize search and discovery processes for scientific information. Examples of AI applications will also be discussed in the context of research data management, including trend forecasting, content personalization, and facilitating access to critical information. The presentation will showcase the innovative opportunities offered by Clarivate and ProQuest tools in the context of the future of research and academia, emphasizing how artificial intelligence supports digital transformation in science and education.
Petr Žabička, Moravian Library
Czech Republic
Petr Žabička is an expert in library automation with experience in digitisation, digital libraries, and machine learning. As an associate director at the Moravian Library, he is responsible for research and development projects. Currently, his activities focus on implementing machine learning technologies to enhance access to digitised documents. He has been involved in the PERO project, which aimed to improve the accuracy of digitised texts through the application of machine learning algorithms to optical character recognition (OCR). Previously, he led projects related to map digitisation, online access to digitised maps, and the development of the Czech library portal Knihovny.cz.
Presentation: Enhancing Czech Digital Library with AI
This presentation showcases the latest AI innovations in Czech Digital Libraries. While the integration of Large Language Models (LLMs) for voice reading, translations, and summaries addresses immediate user needs, these features are just the first step. Our work also extends to AI-generated metadata, automated detection and categorization of non-textual page elements, and improved semantic text segmentation and search capabilities. This talk will also touch on the challenge of developing a user-friendly interface that effectively integrates these AI functionalities.
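The presentation does not specify how the semantic search is implemented. As a hedged sketch of the underlying idea, the example below ranks OCR'd page snippets by vector similarity to a query; a toy bag-of-words embedding stands in for the learned multilingual model a production system would use, and the page identifiers and snippet texts are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy stand-in for a learned text embedding: a bag-of-words vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented OCR snippets standing in for segmented page text.
pages = {
    "book-17/page-3": "map of moravia engraved in the seventeenth century",
    "book-17/page-9": "a list of subscribers and printing privileges",
    "book-42/page-1": "hand drawn plan of the city of brno with fortifications",
}

def search(query: str, top_k: int = 2) -> list[tuple[str, float]]:
    """Rank pages by similarity to the query rather than by exact keyword
    match; with a real embedding model this becomes semantic search."""
    q = embed(query)
    scored = [(page_id, cosine(q, embed(text))) for page_id, text in pages.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]

print(search("old plans of brno"))
```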