|Research Data Management Toolkit||
This toolkit collects a wide range of resources on research data management: courses, videos, infographics, books and other materials. Because of this broad scope, it is not structured as a single online course.
|DIY Research Data Management Training Kit for Librarians||
A training kit for librarians who want to build confidence in, and an understanding of, research data management. It is based on open educational materials and covers five topics:
The kit uses the Research Data Mantra online course and selected exercises from the UK Data Archive. It further contains a training schedule, podcasts for short talks, presentation slides, evaluation forms, data curation profiles and reflective writing questions based on the experience of academic librarians who have taken the course.
Data Curation Profiles provide a complete framework for interviewing a researcher in any discipline about their research data and their data management practices.
|A Case Report: Building communities with training and resources for Open Science trainers||
To foster responsible research and innovation, research communities, institutions, and funders are shifting their practices and requirements towards Open Science, and Open Science skills are becoming increasingly essential for researchers. While general awareness of Open Science has grown among EU researchers, practical adoption still lags behind. Recognising this gap between the training that is needed and the training on offer, the FOSTER project provides practical guidance and training to help researchers learn how to open up their research within a particular domain or research environment.
Aiming for a sustainable approach, FOSTER focused on strengthening Open Science training capacity by establishing and supporting a community of trainers. The creation of an Open Science training handbook was a first step towards bringing trainers together to share their experiences and to create an open, living knowledge resource. A subsequent series of train-the-trainer bootcamps helped trainers find inspiration, improve their skills and intensify exchange within a peer group. Four trainers who attended one of the bootcamps contributed a case study on their experiences and on how they rolled out Open Science training within their own institutions.
On its platform, the project provides a range of online courses and resources on key Open Science topics. FOSTER awards gamification badges to users who complete courses, as an incentive and reward to keep learning.
This paper describes FOSTER Plus’ training strategies, shares the lessons learnt, and provides guidance on how to reuse the project’s materials and training approaches.
|Recommendations on Open Science Training||
Building on the Open Science Training Handbook and on the success of over 40 online and face-to-face events that FOSTER organized in 2017-2018, this report provides good-practice recommendations on Open Science training targeting researchers and multipliers, including train-the-trainer approaches for research support staff and librarians.
|Temporal Network Analysis with R||
Learn how to use R to analyze networks that change over time.
Temporal network analysis is still a relatively new approach in fields outside epidemiology and social network analysis. This tutorial introduces methods for visualizing and analyzing temporal networks using several libraries written for the statistical programming language R. Given the rate at which network analysis is developing, there will soon be more user-friendly ways to produce similar visualizations and analyses, as well as entirely new metrics of interest. For these reasons, this tutorial focuses as much on the principles behind creating, visualizing, and analyzing temporal networks (the “why”) as on the particular technical means by which we achieve these goals (the “how”). It also highlights some of the unhappy oversimplifications that historians may have to make when preparing their data for temporal network analysis, an area where our discipline may actually suggest new directions for temporal network analysis research.
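The tutorial itself works in R, but the core data model it builds on is simple enough to sketch in Python: a temporal network is just a set of edges stamped with onset times, so metrics like degree become window queries. The data and function names below are illustrative, not taken from the tutorial.

```python
# A temporal network as time-stamped edges: (onset, sender, receiver).
# Toy data (made up): letters exchanged among three correspondents.
edges = [
    (1, "A", "B"),
    (2, "B", "C"),
    (3, "A", "C"),
    (5, "C", "A"),
]

def degree_at(edges, node, start, end):
    """Count edges touching `node` whose onset falls in [start, end)."""
    return sum(1 for t, u, v in edges if start <= t < end and node in (u, v))

# Degree of "A" in the window [1, 4): the edges at t=1 and t=3 involve A.
print(degree_at(edges, "A", 1, 4))  # 2
```

Slicing the same edge list with different windows is what lets a temporal analysis show how a node's importance rises and falls over time, rather than collapsing everything into one static graph.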
|Introduction to Audiovisual Transcoding, Editing, and Color Analysis with FFmpeg||
This lesson introduces the basic functions of FFmpeg, a free command-line tool used for manipulating and analyzing audiovisual materials.
FFmpeg is “the leading multimedia framework able to decode, encode, transcode, mux, demux, stream, filter, and play pretty much anything that humans and machines have created” (FFmpeg Website - “About”). Many common software applications and websites use FFmpeg to handle reading and writing audiovisual files, including VLC, Google Chrome, YouTube, and many more. In addition to being a software and web-developer tool, FFmpeg can be used at the command-line to perform many common, complex, and important tasks related to audiovisual file management, alteration, and analysis. These kinds of processes, such as editing, transcoding (re-encoding), or extracting metadata from files, usually require access to other software (such as a non-linear video editor like Adobe Premiere or Final Cut Pro), but FFmpeg allows a user to operate on audiovisual files directly without the use of third-party software or interfaces.
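A typical FFmpeg task of the kind the lesson covers is transcoding: `ffmpeg -i input.mov -c:v libx264 output.mp4` re-encodes a video with the x264 codec. The sketch below builds such a command from Python and only runs it if `ffmpeg` and the input file are actually present; the file names are placeholders, not files from the lesson.

```python
import os
import shutil
import subprocess

def transcode_cmd(src, dst, vcodec="libx264"):
    """Build an ffmpeg command that re-encodes `src` into `dst`.
    -i names the input file; -c:v selects the video codec."""
    return ["ffmpeg", "-i", src, "-c:v", vcodec, dst]

cmd = transcode_cmd("input.mov", "output.mp4")
print(" ".join(cmd))  # ffmpeg -i input.mov -c:v libx264 output.mp4

# Run only if ffmpeg is installed and the placeholder input exists:
if shutil.which("ffmpeg") and os.path.exists("input.mov"):
    subprocess.run(cmd, check=True)
```

Passing the command as a list of arguments (rather than one shell string) avoids quoting problems with file names containing spaces.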
|Code Reuse and Modularity in Python||
Computer programs can become long, unwieldy and confusing without special mechanisms for managing complexity. This lesson will show you how to reuse parts of your code by writing functions, and how to break your programs into modules, in order to keep everything concise and easier to debug.
|Top 10 FAIR Data and Software Things||
The Top 10 FAIR Data & Software Things are brief guides (stand-alone, self-paced training materials), called "Things", that the research community can use to understand how to make their research data and software more FAIR. Each discipline/topic has its own specific list:
|Geospatial Data Curriculum||
This workshop is co-developed with the National Ecological Observatory Network (NEON). It focuses on working with geospatial data - managing and understanding spatial data formats, understanding coordinate reference systems, and working with raster and vector data in R for analysis and visualization.
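The raster/vector distinction at the heart of the workshop can be sketched without any GIS library: a raster stores values on a regular grid of cells, while a vector layer stores discrete geometries with coordinates. The workshop itself uses R; the toy data and helper functions below are illustrative only.

```python
# Raster: values on a regular grid of cells (e.g. elevation in metres).
raster = [
    [10, 12, 11],
    [13, 15, 14],
]

# Vector: discrete geometries with x/y coordinates in some coordinate
# reference system (points here; values are made up).
points = [(0.0, 0.0), (2.5, 1.0), (1.0, 3.0)]

def raster_mean(grid):
    """Mean cell value across a raster grid."""
    cells = [v for row in grid for v in row]
    return sum(cells) / len(cells)

def bounding_box(pts):
    """(min_x, min_y, max_x, max_y) extent of a vector point layer."""
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return (min(xs), min(ys), max(xs), max(ys))

print(raster_mean(raster))   # 12.5
print(bounding_box(points))  # (0.0, 0.0, 2.5, 3.0)
```

Comparing the two layers is only meaningful when both use the same coordinate reference system, which is why the workshop devotes time to understanding and reprojecting CRSs.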
|Social Science Curriculum||
This workshop uses a tabular interview dataset from the SAFI Teaching Database and teaches data cleaning, management, analysis and visualization. There are no pre-requisites, and the materials assume no prior knowledge about the tools. We use a single dataset throughout the workshop to model the data management and analysis workflow that a researcher would use.
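A first data-cleaning step of the kind the workshop teaches is normalising inconsistent text entries and converting missing-value codes. The rows below only loosely imitate an interview dataset; the column names and values are made up, not the actual SAFI schema.

```python
# Hypothetical interview rows with messy text and a "NULL" missing code.
rows = [
    {"village": "God ", "household_size": "7"},
    {"village": "god", "household_size": "NULL"},
    {"village": "Chirodzo", "household_size": "12"},
]

def clean(row):
    """Normalise the village name and convert numbers, mapping 'NULL' to None."""
    n = row["household_size"]
    return {
        "village": row["village"].strip().title(),
        "household_size": int(n) if n != "NULL" else None,
    }

cleaned = [clean(r) for r in rows]
print(cleaned[0])  # {'village': 'God', 'household_size': 7}
print(cleaned[1])  # {'village': 'God', 'household_size': None}
```

Once entries like "God " and "god" are collapsed to one canonical value, grouping and counting by village gives correct results, which is exactly why cleaning precedes analysis in the workshop's workflow.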