The SSH Training Discovery Toolkit provides an inventory of training materials relevant for the Social Sciences and Humanities.

Use the search bar to discover materials or browse through the collections. The filters will help you identify your area of interest.



Item
Meeting Funders’ Requirements - Archiving and Data Sharing

This introductory webinar is for anyone who is involved in the collection of data and is considering making (some of) their data available in accordance with funders’ requirements. More and more funders are requiring that research data be made available after completion of the research project, usually through the archiving of data in a trusted repository. However, research teams often still lack the appropriate skills and knowledge regarding how to properly prepare their data for archiving and sharing. This webinar aims to raise awareness about relevant key data management practices for sharing, specifically regarding data documentation, gaining consent, and data anonymisation. Addressing each of these three topics, it provides a short theoretical introduction, including what FAIR means and how it is implemented, as well as practical illustrations drawing on a large-scale cross-national survey (the European Social Survey). It also provides some practical tips with respect to data archiving, in particular how to choose an appropriate archive or repository.

Finding and reusing data

This webinar is intended for everyone who wants to learn about ways of finding and reusing research data. Managing research data in a FAIR and transparent manner helps researchers meet the requirements of funding institutions and ensures the long-term reusability of their data. The webinar introduces the CESSDA ERIC Data Catalogue as a means of finding and accessing research data, helps participants understand conditions for reuse (licenses), and presents use cases. This event is part of a workshop/webinar series organised by members of the SERISS project.

Introduction to Digital Humanities

The aim of the course is to introduce digital humanities and to describe various aspects of digital content processing. The practical aims consist of introducing current data sources, annotation, pre-processing methods, software tools for data analysis and visualisation, and evaluation methods.

We have found that students are somewhat aware of digital humanities, but that it is difficult for them to dive in and, above all, to anticipate what they should learn for their future research. A more detailed goal of this course is to present some current projects, show the datasets and technologies behind them, and encourage students to explore the datasets and use the technologies on data they already know. A higher-level goal is to place knowledge of these technologies and available datasets within the research iteration loop (create hypotheses -> design instruments -> collect data -> analyse and evaluate).


Taken from Teaching with CLARIN: https://www.clarin.eu/content/introduction-digital-humanities

GATE Training Course

The training materials are all based around teaching the use of GATE, a freely available open-source toolkit for Natural Language Processing that has been widely used in both academia and industry for many different tasks.

The modules provide instruction on how to get to grips with the GATE toolkit for basic language processing as well as more advanced techniques, and cover a number of different scenarios, such as processing social media and detecting hate speech and misinformation. They include modules both for programmers who want to develop their own tools within the toolkit, and for non-programmers who simply want to make use of existing tools. The modules teach not only the use of GATE itself, but also how to adapt it to one’s own needs (for example, adapting English tools to a different language, or customising existing tools), and the basic concepts behind a number of language processing tasks, ranging from low-level ones (tokenisation, POS tagging, parsing) to more sophisticated ones (information extraction, social media analysis, hate speech detection, misinformation detection), as well as how to interpret and integrate the results of the processing. Finally, they teach programmers how to extend the toolkit itself, by adding new tools or integrating it into other systems.


Taken from Teaching with CLARIN: https://www.clarin.eu/content/gate-training-course 

Bringing synergy to better data management and research in Europe

The course includes a series of recorded videos, quizzes, and practical assignments that allow you to go through the course at your own pace. It is aimed at researchers, students, trainers, data professionals, and anyone else looking to gain basic knowledge of Open Science, EOSC, and best practices for FAIR data.

Meeting funders’ requirements – archiving and data sharing

YouTube Video: This introductory webinar is for anyone who is involved in the collection of data and is considering making (some of) their data available in accordance with funders’ requirements. This webinar aims to raise awareness about relevant key data management practices for sharing, specifically regarding data documentation, gaining consent, and data anonymisation. It provides a short theoretical introduction, including what FAIR means and how it is implemented, as well as practical illustrations drawing on a large-scale cross-national survey (the European Social Survey). It also provides some practical tips with respect to data archiving, in particular how to choose an appropriate archive or repository.

RDM for librarians

Content for a three-hour introductory RDM session for librarians. The course covers:

  • Research data and RDM
  • Data management planning
  • Data sharing
  • Skills

The materials consist of presentation slides and an accompanying handbook.

Data handling tutorials

Practical tutorials on managing and handling research data with particular software packages: SPSS, R, ArcGIS, and NVivo. The tutorials contain many practical exercises.

Core Curriculum

The lessons introduce terms, phrases, and concepts in software development and data science, how best to work with data structures, and how to use regular expressions to find and match data. We introduce the Unix-style command-line interface and teach basic shell navigation, as well as the use of loops and pipes for linking shell commands. We also introduce grep for searching and subsetting data across files; exercises cover the counting and mining of data. In addition, we cover working with OpenRefine to transform and clean data, and the benefits of working collaboratively via Git/GitHub and using version control to track your work.
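As a rough illustration of the kind of counting and mining exercise described above, the following minimal sketch counts occurrences of a pattern across a set of files. It uses Python rather than the shell tools taught in the lessons, and the directory name and search term are purely hypothetical:

import re
from pathlib import Path

# Hypothetical example: count how often a search term occurs in each .csv
# file in a "data/" directory, similar in spirit to a grep-based exercise.
pattern = re.compile(r"\bAustralia\b")  # assumed search term, for illustration only

for path in sorted(Path("data").glob("*.csv")):
    text = path.read_text(encoding="utf-8", errors="replace")
    matches = pattern.findall(text)
    print(f"{path.name}: {len(matches)} matches")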

Source
Leadership & Skills Building - LIBER

Target group: library professionals; the materials also cover leadership training.