For today’s graduate students and post-doctoral researchers, conducting research often starts by trying to make sense of the many tools, technologies, and work environments used in data-intensive research and computing. Fortunately, there is help in navigating this new research landscape.
The NSF Cyber Carpentry Workshop: Data Lifecycle Training is a two-week summer training program aimed at helping graduate students understand the many aspects of the data-intensive computing environment. Even more important, the workshop will focus on bridging the gap between domain scientists and computer and information scientists so that data-intensive research is quicker, less complicated, and more productive.
The workshop will take place July 16-27, 2018, at the University of North Carolina at Chapel Hill. Travel and accommodations will be provided for accepted participants, and a certificate of completion from the UNC School of Information and Library Science (SILS) will be awarded at the end of the training.
The workshop is open to doctoral students and postdocs in basic sciences and computational sciences. Women, applicants from underrepresented groups, and persons with disabilities are especially encouraged to apply. Applications must be submitted by 5 p.m. Pacific Time on March 15 to receive full consideration. For more information and a link to the application form, visit the UNC Cyber Carpentry Training website.
The Cyber Carpentry workshop is supported by the National Science Foundation (NSF) through a grant awarded to Arcot Rajasekar, Frances McColl Distinguished Term Professor at UNC SILS.
Workshop topics will be taught by researchers who participated in the successful DataNet Federation Consortium (DFC), an NSF-funded project to develop national data management infrastructure in support of collaborative, multidisciplinary research. Drawing on their own expertise and their experiences with the DFC from 2013 through 2017, instructors will give students an overview of best data management practices, data science tools, methods for performing end-to-end data-intensive computing, and data lifecycle management, while promoting reproducible science and data reuse.