
Seminar Series

To make more effective use of the expertise available at the institute for model implementation, and to foster co-operation, information exchange, and the sharing of model components across Research Domains and modelling groups, PIK has set up an organizational framework. Among other things, this includes a regular seminar series on software development methodologies, code instrumentation, and debugging and profiling tools, organised by PIK and supported by the Berlin and Potsdam universities with which PIK has co-operation agreements.

PIK particularly encourages PhD students to attend these seminars. For other PIK seminars, see the seminar series "Science & Pretzels" and the Training Courses for PhD students.


Modelling Strategy Seminar Programme:

Date, time and location: second Thursday of each month, 2 pm, Cupola A31 (exceptions may apply)

Please send general comments, feedback on individual seminars, and proposals for the seminar programme to Michael Flechsig. The slides of previous seminars are available only from the PIK Intranet.

Date | Speaker | Title & Abstract

 

Seminar History

Date | Speaker | Title, Abstract & Slides
Jun 2,
2016
Stefan
Schinkel,
RD3
The [zi:n] project - a story of big data, citizen science and the economy
This talk will give a short overview of the [zi:n] project and the closely related agent-based economic model acclimate. The focus will be on technology: we will introduce the three-tiered nature of the project, review the programming languages, frameworks and other parts of the software stack behind [zi:n], and discuss the challenges we face in dealing with (the lower bound of) big data. Particular attention will be given to design choices, data formats, development culture and the guidelines we aim to establish in order to produce high-quality software.
NOTE: This talk is focused on the technology involved, not the scientific concepts underlying the project.
Download Slides
May 18,
2016
Simon
Hirsbrunner,

University of Siegen
DFG-Graduiertenkolleg Locating Media
Seeing levels rise: online mappings of sea level rise from a visual communication perspective
Considering the major spatial implications of environmental change, maps have always been an important format for the visualization of climate change impacts. Moreover, powerful simulation technologies enable the imagination of increasingly detailed prospects for changes and risks on regional and local scales - and such information cries out to be mapped.
Easy-to-use online mapping tools offer new possibilities for dynamically navigating the local past, present and future under climate change, and for making temperature rise, rainfall changes or sea level rise more tangible. What happens when such devices of simulation and digital "time-space travel" leave the scientific context and become broadly available to urban planners, decision makers and society as a whole? How do people perceive such visualizations of likely futures? And how do they deal with the frictions between present and simulated local realities?
In my presentation, I will share some insights from my recent field research on visualizations of sea level rise and flood risks in California. During my stay in San Francisco, I had the opportunity to conduct a series of interviews with developers of digital sea level rise mappings at NOAA, the Pacific Institute, and the Bay Conservation and Development Commission. My presentation of preliminary results of this survey will cover topics such as sense of place, experienced past vs. informed future, as well as collaborative knowledge production.
Download Slides
Mar 17,
2016
Stefan
Schneider,
PIK-IT
Private Cloud Services at PIK
A private cloud service has been provided by IT-Services at PIK for a year now and is already used by a few dozen of PIK's staff. The intention of this presentation is to bring some basic information about this service to a wider audience.
The idea to provide this service arose when cloud storage services like Dropbox or Google Drive became more and more popular on the one hand, while information privacy concerns, fuelled by international espionage affairs, grew on the other. The service should provide a file-hosting and sharing platform which ensures that the data is kept locally and under no third party's control. What is offered now is a service that provides access to one's own data hosted in the cloud via different clients (PCs, notebooks, smartphones, tablets), together with the ability to share files with co-workers and even with externals. Moreover, a main feature of the cloud in use is to provide calendars, which can be synchronized between devices in one's office and mobile devices while on the move. In this presentation the main features of the service (synchronizing, sharing, and the use of calendars) and how they are accessed will be shown. Some use cases and their technical implementation with different clients for mobile and office devices will also be introduced.
Download Slides
May 21,
2015
Ciaron
Linstead,

Karsten
Kramer

PIK-IT
PIK's new high-performance supercomputing cluster
PIK's new high-performance supercomputing cluster will be coming into service in summer this year. While architecturally similar to the current IBM iDataPlex cluster (64-bit Linux on Intel x86_64 processors), there are several significant changes which developers and users will need to prepare for.
This talk will cover what hardware, filesystems, software development tools and libraries will be available on the new cluster and give an overview of SLURM, the replacement for the LoadLeveler batch queuing system.
Significant C/C++/Fortran compiler options related to new Intel Haswell CPU instruction sets, in particular Advanced Vector Extensions (AVX), will be covered.
Download Slides Talk K. Kramer
Download Slides Talk C. Linstead
Feb 19,
2015
Dominik
Reusser,
RD2

Anna-Lena
Lamprecht,
Uni Potsdam
Visual programming of workflows for climate impact assessment: A proof of concept application with simple impacts of sea-level rise as featured on ci:grasp.
The Climate Impacts: Global and Regional Adaptation Support Platform (ci:grasp - www.cigrasp.org) has been developed by the Climate Change and Development group. It is a web-based climate information service for exploring climate change related geo-information. Assessment of impacts of climate change as featured on ci:grasp involves the processing of large and heterogeneous datasets. In the presentation, we show how the GIS operations necessary for the preparation of the information on ci:grasp can be made available at a user-accessible level, so that users can easily define and perform multi-objective workflows tailored to their specific needs. Concretely, we introduce a proof-of-concept implementation based on the jABC process modeling and execution framework, which features graphical representations of the modeled computational processes and supports a particularly agile workflow development style.
We would like to discuss with you whether this technology is suitable for increasing the productivity of the research process. Moreover, we offer you the opportunity to gain hands-on experience by experimenting with the workflow framework. You will be able to execute and customize workflows and generate your own results.
Download Slides
Nov 27,
2014
Dominik
Reusser,
RD2

Martin
Hammitzsch,
gfz-CeGIT
SciForge: Is your code worth a full citation?
The development of software used for research at PIK is being discussed with a view to strengthening quality and reproducibility. Often, the effort put into good maintenance of software is not sufficiently recognised. However, the community is actively working on concepts and solutions enabling researchers to publish software, cite it and be credited for it.
Software must meet the quality criteria of the scientific discourse to be a valuable and citeable contribution to science. Solutions also need to be developed regarding versioning and documentation, traceability, reproducibility and reusability. Furthermore, the archiving of source code and executables, the use of persistent identifiers, and metrics measuring productivity, impact, and recognition have to be addressed.
SciForge is a network and a currently running project at GFZ addressing these questions. In the joint seminar, the current debate on scientific software publication will be presented and discussed.
Download Slides
Oct 22,
2014
Jonathan
Donges,
RD1
and
Planetary
Boundary
Lab,
Stockholm
Resilience
Center
Pythonic Functional Network Analysis and Modeling in the Geosciences: The pyunicorn Package
Functional networks are a powerful tool for analyzing large geoscientific datasets such as global fields of climate time series originating from observations or model simulations. pyunicorn (pythonic unified complex network and recurrence analysis toolbox) is an open-source, fully object-oriented and easily parallelizable package written in Python. It allows for constructing functional networks (aka climate networks) representing the structure of statistical interrelationships in large datasets and, subsequently, investigating this structure using advanced methods of complex network theory such as measures for networks of interacting networks, node-weighted statistics or network surrogates. Additionally, pyunicorn makes it possible to study the complex dynamics of geoscientific systems as recorded by time series by means of recurrence networks and visibility graphs. The range of possible applications of the package is outlined drawing on several examples from climatology. I will show how pyunicorn can be used along with other popular scientific Python and network analysis packages such as numpy, scipy, scikit-learn, networkx etc. Also, possibilities to integrate high-performance C++ code in Python using the scipy.weave package will be presented.
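The core idea behind functional (climate) network construction described above can be sketched in a few lines of plain numpy: link nodes whose time series are strongly correlated, then analyze the resulting graph. This is a toy illustration of the concept with synthetic data, not the pyunicorn API.

```python
import numpy as np

# Toy stand-in for gridded climate time series: 6 "nodes", 500 time steps.
rng = np.random.default_rng(42)
data = rng.standard_normal((6, 500))
data[1] += 0.8 * data[0]          # make nodes 0 and 1 statistically linked
data[3] += 0.8 * data[2]

# Functional (climate) network: link nodes whose series are strongly correlated.
corr = np.abs(np.corrcoef(data))
np.fill_diagonal(corr, 0.0)       # no self-loops
adjacency = (corr > 0.5).astype(int)

# A first network measure: the degree of each node.
degree = adjacency.sum(axis=0)
print(degree)                     # nodes 0-3 should each have degree >= 1
```

In pyunicorn the same steps (similarity measure, thresholding, network measures) are wrapped in dedicated classes, but the underlying construction follows this pattern.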
Download Slides
Sep 11,
2014
Matthias
Büchner,
RD2
ESGF Based Data Publishing in ISI-MIP
During the ISI-MIP Fast Track phase, a huge collection of global impact modelling data from multiple modelling groups worldwide and from several sectors has been collected and published on an ESGF server at PIK. Not only has PIK turned into a source of climate impact scenario data, we have also shown that the ESGF infrastructure and services are able to serve gridded non-CMIP5 netCDF data sets as well. The seminar will give an overview of the setup, publishing and maintenance activities, as well as data policies for an ESGF server, its integration in the global grid, and its possible use for other projects at PIK which intend to publish harmonized gridded data sets to global communities.
Download Slides
Jun 19,
2014
Stefan
Petri,
RD1
An Introduction to the TotalView Debugger
The talk gives an overview of tools such as TotalView and valgrind and their interaction with the compiler to answer questions like
- Why does my program crash?
- Why does my program run and run and run instead of producing a result?
- Why does it print garbage instead of real numbers?
- Why does it use so much memory?
- Why is it so slow, and how can I make it faster?
Download Slides
Jun 12,
2014
Simon
Kiertscher,
Potsdam
University
On Bitcoins
Recently, Bitcoins hit the media when MtGox, formerly the biggest trading platform for Bitcoins, announced that it had lost a massive amount of Bitcoins due to a bug in its wallet system. The finance market describes Bitcoins as a highly speculative investment. But how does Bitcoin actually work? This presentation will give an overview of digital currencies and, taking Bitcoin as their most famous example, a close look at how it works and how you can mine your own Bitcoins at home.
Download Slides
May 22,
2014
Mahe
Perrette,
RD2
dimarray: A python Package to Manipulate numpy Arrays with Dimensions, Including netCDF I/O
After a general introduction to Python and numpy arrays, the dimarray package will be presented with examples related to the analysis of the CMIP5 archive and sea level rise projections (website: https://github.com/perrette/dimarray). In Python, the numpy package provides a unified way of manipulating data arrays. The newly developed dimarray package is an attempt to ease the manipulation of arrays with dimensions. It offers a number of features that make it useful for geophysicists, such as handy indexing (e.g. a[1950, 'RCP45']), time series alignment and netCDF I/O.
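The indexing style quoted in the abstract, a[1950, 'RCP45'], can be illustrated with a small toy class that maps axis labels to positions. This is only a sketch of the idea of dimension-aware arrays, not the real dimarray API; the sea level numbers below are made up for illustration.

```python
import numpy as np

class LabeledArray:
    """Toy illustration of the idea behind dimarray: a numpy array whose
    axes carry labels, so elements are addressed as a[1950, 'RCP45']
    instead of by positional index. (Not the real dimarray API.)"""

    def __init__(self, values, axes):
        self.values = np.asarray(values)
        self.axes = [list(ax) for ax in axes]

    def __getitem__(self, labels):
        # Translate each axis label into its positional index.
        idx = tuple(ax.index(lab) for ax, lab in zip(self.axes, labels))
        return self.values[idx]

years = [1950, 2000, 2050]
scenarios = ['RCP45', 'RCP85']
slr = LabeledArray([[0.0, 0.0], [0.08, 0.09], [0.25, 0.35]],
                   axes=[years, scenarios])

print(slr[2050, 'RCP85'])   # 0.35
```

dimarray additionally handles alignment, slicing and netCDF I/O on top of this labeling idea.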
Download Worksheet
Mar 13,
2014
Frank
Wechsung,
RD2

TBD

cancelled

Feb 18,
2014
Thomas
Nocke,
RD4
ClimateImpactsOnline: The Details Behind a Web Portal Development for Climate Impact Information
The talk will introduce the ClimateImpactsOnline portal and, based on that, discuss procedural and technical issues arising in such a portal development. These include
- technical portal details (software basis) and visualization aspects (visualization techniques used, uncertainty visualization, color coding),
- experiences of working together with an external partner (WetterOnline),
- experiences as a Climate-KIC project, and commercialization aspects,
- feedback we gained both from official authorities and from individual users, and our strategy for addressing this feedback,
- data processing details: handling multiple, large and heterogeneous ensemble data sets from observations as well as from (regional) climate and climate impact model simulations, and aggregating them to best fit into a portal.
Finally, future portal development directions will be discussed.
Download Slides
Jan 9,
2014
Ciaron
Linstead,
PIK-IT
Introduction to Source Code Management with Subversion
In this talk I will present the motivations for using a revision control system such as Subversion for managing source code, as well as an overview of a typical software development workflow using Subversion at PIK.
Download Slides
Dec 9,
2013
Susanne
Rolinski,
RD2
R Training Courses at PIK - An Introduction
(In co-operation with PIK's Training Course Programme for PhD students)
Download Slides
Dec 5,
2013
Marian
Leimbach,
RD3
Solution Algorithms of Large-scale Integrated Assessment Models
Integrated assessment (IA) models on climate change are a widely used class of models for evaluating climate policies. As they integrate modules from different domains in a single model framework, they are often characterized by high numerical complexity. Although there are attempts to run such models in a modularized and distributed framework, the standard approach is still to apply monolithic models. Complexity and non-linearity pose a big challenge for the solution algorithm of such models. A large part of IA models are formulated as optimization models that embed energy system modules, climate modules and land use modules around a welfare-maximizing economic model. For such models we can principally distinguish between algorithms that yield a market or decentralized solution and algorithms that yield a social planner solution. Deviations between these two solution concepts are due to the existence of externalities. Nevertheless, absence or internalization of externalities may result in the same solution and provide a basis for comparing the principally different solution algorithms with respect to their numerical performance. In this paper we present and compare a Negishi algorithm that provides a globally optimal solution and a Nash algorithm that provides a market solution. The Nash algorithm follows the standard Walrasian auctioneer, who sets prices on international markets based on the deviation between supply and demand. The number of different markets poses a big challenge for the Nash algorithm. A penalty term included in each actor's budget constraint implies a reaction to expected price changes and helps the Nash algorithm to converge and to outperform the Negishi algorithm in an environment without externalities.
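The Walrasian auctioneer mentioned in the abstract can be sketched for a single market: the price is adjusted in proportion to excess demand until supply and demand balance. The demand and supply functions below are purely illustrative, and real IA models iterate over many coupled markets, but the price-update rule is the same tatonnement idea.

```python
# Minimal tatonnement sketch: the auctioneer raises the price when demand
# exceeds supply and lowers it otherwise, until the market clears.
# Demand and supply functions are purely illustrative.

def demand(p):
    return 10.0 - p

def supply(p):
    return 2.0 * p

price = 1.0
for _ in range(200):
    excess = demand(price) - supply(price)
    if abs(excess) < 1e-9:
        break
    price += 0.1 * excess   # price adjustment proportional to excess demand

print(round(price, 4))      # converges to the equilibrium price 10/3
```

The step size (here 0.1) controls convergence; the penalty term described in the abstract plays an analogous stabilizing role in the multi-market Nash algorithm.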
Download Slides
Nov 14,
2013
Dim
Coumou,
RD1
Development of the Aeolus 1.0 Atmosphere Model: An Overview
Aeolus is a novel Statistical-Dynamical Atmosphere Model (SDAM) developed and maintained within RD1. It is part of the Earth System Model of Intermediate Complexity (EMIC) Climber-4, which also contains dynamical ocean, vegetation, and ice models as well as other types of atmosphere models. In this talk I will discuss several technical aspects of Aeolus and the challenges involved in developing such a model given limited resources. First of all, I will outline the code's design. Aeolus is written in C++ with a clear object-oriented and modular code design. The rationale behind its design has been to strictly separate the implementation of data storage, gridding (i.e. discretization) and computations. This way, future additions to any of these can in principle be easily accommodated without limiting the maintainability of the code. Further, several safety mechanisms have been built into the code to avoid bugs and/or to detect them at an early stage. These include explicit read-write functionality, design by contract and variable range checking. I will conclude my talk with a short discussion of some of the tools which proved very helpful during development, as well as some "lessons learned".
Download Slides
Oct 10,
2013
Jacob
Schewe,
RD3
The ISI-MIP Project: Climate Impacts across Sectors - Uncertainties along the Modelling Chain
The Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP) offers a framework to compare climate impact projections in different sectors and at different scales. Consistent climate and socioeconomic input data provide the basis for a cross-sectoral integration of impacts projections. The project is designed to enable quantitative synthesis of climate change impacts at different levels of global warming. The first, fast-tracked phase of ISI-MIP was initiated at PIK in late 2011, bringing together more than 30 global impacts modelling groups from around the world. It has resulted in numerous publications and an extensive database of climate change impacts on agriculture, biomes, water, health, and coastal infrastructure, which is now being made accessible to the wider research community. A second phase has just been launched and will include regional models as well, cover additional sectors, and specifically explore the impacts of extreme events and the interaction between impacts in different sectors. We present the project framework and some key results from the fast-track phase.
Download Slides
Jul 10,
2013
Jan Philipp
Dietrich,
RD3
Setting up a "Model Operations" Group - Tasks, Structure, Problems, Realization
When working with big models, code management and technical coordination become an increasing challenge. For this reason, at the end of 2011 RD3 decided to establish a new, so-called "model operations" group specifically dedicated to these types of issues. In this seminar we will share the experiences we have gathered with this new format over the past 1.5 years. We will present how the group was set up, what we have achieved so far and what will come next. Furthermore, there will be the chance to discuss whether this format might also be useful for other working groups or research domains at PIK.
Download Slides
May 15,
2013
Christopher
Reyer,
RD2
A trip to Bayesland: An Introduction to Bayesian Calibration and Bayesian Model Comparisons
Bayesian methods are receiving more and more attention in the scientific world due to their focus on probabilistic thinking, the quantification of uncertainties, and their strength as a data-assimilation technique. I provide a short history of Bayesian statistics as opposed to frequentist statistics and briefly explain some underlying theory on Bayesian calibration (BC) and Bayesian model comparison (BMC). Then I will present examples of how we use BC and BMC to quantify uncertainties in forest ecosystem modelling at PIK using SimEnv. I conclude by highlighting some curious real-life applications of Bayesian thinking and invite the audience on a short trip to Bayesland.
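The general recipe behind Bayesian calibration, posterior ∝ likelihood × prior, can be shown with a grid over a single model parameter. This is a generic sketch of the method, not of the SimEnv workflow or the forest models mentioned in the talk; the toy "model" and noise level are made up.

```python
import numpy as np

# Grid-based Bayesian calibration of one model parameter:
# posterior ∝ likelihood × prior, evaluated on a grid of candidate values.

rng = np.random.default_rng(0)
true_growth = 0.3
x = np.arange(1, 11)
obs = true_growth * x + rng.normal(0, 0.2, 10)   # noisy synthetic observations

theta = np.linspace(0.0, 1.0, 501)               # candidate parameter values
prior = np.ones_like(theta) / theta.size         # flat prior

# Gaussian likelihood of the observations under each candidate parameter
log_like = np.array([-0.5 * np.sum((obs - t * x) ** 2) / 0.2 ** 2
                     for t in theta])
post = prior * np.exp(log_like - log_like.max())
post /= post.sum()                               # normalize the posterior

estimate = theta[np.argmax(post)]
print(round(estimate, 2))                        # close to the true value 0.3
```

Bayesian model comparison extends this by integrating the likelihood over each model's parameters (the marginal likelihood) and comparing models via their ratio, the Bayes factor.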
Download Slides
Mar 19,
2013
Michael
Hauhs,
University of Bayreuth,
BayCEER
Ecosystem Modelling from a Theoretical Perspective
The modelling of ecological systems comprising both living and non-living parts is usually perceived as applied science. Dynamical system theory as developed in physics has been used widely, treating ecosystems as state-based systems governed by a set of evolution equations. Results, however, have not (yet) lived up to expectations set by geosciences such as meteorology. Especially predictions of ecosystem dynamics do not extend beyond existing empirical models. This has resulted in a turn towards data-driven approaches in ecosystem modelling.
Here, theoretical rather than technical limitations to ecological modelling are reviewed. The traditional approach is firstly rephrased in category theory. Secondly, it can be characterised as dual to a novel approach derived in theoretical computer science. Thirdly, a tentative interpretation of this new formalisation is provided in terms of biological and ecological notions. The new approach emphasizes aspects of behaviour. Viewing an ecosystem as a black box is no longer an inevitable result of complexity, but a theoretical goal in proper specification of (interactive) behaviour. Examples will be given for agent-based models.
Download Slides
Mar 14,
2013
Michael
Flechsig,
RD4
SimEnv: Hands on Models, Experiments, Analyses & More
SimEnv is a multi-run simulation environment for evaluating models, mainly for quality assurance and scenario analyses. In this seminar I show by example how to interface models, define experiments, distribute them on the compute cluster, post-process and analyze experiment output, and visualize it with SimEnvVis.
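The multi-run idea at the heart of such an environment can be sketched as a parameter sweep: run a model once per point in an experiment design and collect the outputs for post-processing. This is only a toy illustration of the concept, not SimEnv's actual interface; the logistic model and parameter values are invented.

```python
import itertools

# Toy multi-run experiment: run a model over a full factorial design of
# two parameters and collect the results for post-processing.

def model(growth, capacity, steps=50):
    """Illustrative discrete logistic-growth model."""
    x = 1.0
    for _ in range(steps):
        x += growth * x * (1.0 - x / capacity)
    return x

design = itertools.product([0.1, 0.2, 0.3], [50.0, 100.0])
results = {(g, k): model(g, k) for g, k in design}

for params, final in sorted(results.items()):
    print(params, round(final, 2))
```

SimEnv adds the pieces a sketch like this omits: experiment description, distribution of runs across the compute cluster, and structured storage and analysis of the output.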
Download Slides
Feb 7,
2013
Ciaron
Linstead,
PIK-IT
Introduction to Source Code Management with Subversion
In this talk I will present the motivations for using a revision control system such as Subversion for managing source code, as well as an overview of a typical software development workflow using Subversion at PIK.
Download Slides
Jan 15,
2013
Thomas
Nocke,
Magnus
Heitzler,
RD4
Visualization of Climate Simulation Data at PIK
Visualization is an important method to analyze climate simulation data and to present scientific results to the research community and the public. The sizes and the structure of climate simulations (multi-model, multi-ensemble, multi-variate) lead to a variety of challenges for their visualization (e.g. how to present high-dimensional data or uncertainties). In this context, the talk will provide an overview of the available visualization methods, software and hardware facilities at PIK and their potentials for visual data analysis. Recent developments of the climate data visualization system SimEnvVis will be presented in detail.
Download Slides
Nov 8,
2012
Mario
Schmitz,
HNE Eberswalde
System Dynamics Modeling for the Assessment of Strategies for Sustainable Development: Methodological Analysis and Implementation
The assessment of strategies for sustainable development is an ill-structured decision situation, because it is characterized by conflicting goals, interests, perceptions and a missing consensus about goal definitions. In this presentation the potential of system dynamics (SD) is explored as a trans-disciplinary systems thinking intervention in a decision situation. First, a catalog is presented defining criteria from different disciplines which a decision-support-system in this context should satisfy. Then, SD is tested against this set of criteria in order to provide an analytical basis for the implementation of a web-based simulator presented in this talk.
Download Slides
Sep 17,
2012
Per
Nyberg,
Cray Inc.
The Climate Knowledge Discovery Initiative
As we enter the age of data intensive science, knowledge discovery in simulation based science rests upon analyzing massive amounts of data. Geoscientists gather data faster than they can be interpreted. They possess powerful tools for stewardship and visualization, but not for data intensive analytics to understand causal relationships among simulated events. Tools that employ a combination of high-performance analytics, with algorithms motivated by network science, nonlinear dynamics and statistics, as well as data mining and machine learning, could provide unique insights into challenging features of the Earth system, including extreme events and chaotic regimes. Using complex networks has been identified as one very promising solution. By representing the climate system as networks, the understanding of observed climate phenomena, complex relationships in the global climate system, and anticipation of the consequences of climate change can be improved. The breakthroughs needed to address these challenges will come from collaborative efforts involving several disciplines, including end-user scientists, computer and computational scientists, computing engineers, and mathematicians. The Climate Knowledge Discovery effort is a community initiative to educate climate researchers about the potential of using knowledge discovery tools and semantic technologies, and to conduct research into ways and means of applying advanced analytical techniques to multi-disciplinary climate model data. As scientific research becomes increasingly multi-disciplinary, integrating the efforts of globally diverse human and digital resources from different organizations, HPC facilities and data archives, the storage, middleware and analysis technologies required to enable this collaboration will evolve tremendously.
Download Slides
Sep 13,
2012
Jana
Schwanitz,
RD3
Validating ReMIND - Lessons Learned
ReMIND, like other integrated assessment models, is used for advising policy-makers and informing the global society about human impacts on climate change. It is therefore legitimate to ask how much one can trust these global simulation models. The talk will present the lessons we learned from applying different validation methods to ReMIND in order to judge the model's output and behavior. Among these are: comparisons with long-term trends and patterns (stylized facts), the performance of diagnostic and shock experiments, the introduction of a "new feature protocol", and the development of a validation routine to keep track of model changes.
Download Slides
Jun 14,
2012
Nicola
Botta,
RD4
A Pragmatic Approach to Software Construction
I present two standard techniques for software construction -- abstract data type (ADT) analysis and design by contract (DBC) -- from a practitioner's viewpoint. I show how DBC can be applied in a number of concrete examples and discuss the advantages and the limitations of this approach.
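Design by contract, one of the two techniques the abstract names, makes a routine's assumptions and guarantees explicit and checks them at its boundaries. The talk's own examples are not reproduced here; this is a minimal generic sketch in Python with an invented interpolation function.

```python
# Minimal design-by-contract sketch: the precondition and postcondition of
# the routine are stated in the docstring and checked with assertions.

def interpolate(x0, x1, t):
    """Linear interpolation between x0 and x1.

    Precondition:  0.0 <= t <= 1.0
    Postcondition: result lies between min(x0, x1) and max(x0, x1)
    """
    assert 0.0 <= t <= 1.0, "precondition violated: t outside [0, 1]"
    result = (1.0 - t) * x0 + t * x1
    assert min(x0, x1) <= result <= max(x0, x1), "postcondition violated"
    return result

print(interpolate(10.0, 20.0, 0.25))   # 12.5
```

A contract violation fails loudly at the call boundary instead of propagating a silently wrong value, which is exactly the early-detection benefit argued for in the talk.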
Download Slides
May 10,
2012
Ciaron
Linstead,
PIK-IT
Advanced Features of the iDataPlex Cluster and LoadLeveler Workload Scheduler:
How to Get your Jobs to the Top of the Queue

I will present some of the features of the IBM LoadLeveler that allow for higher job priority, more robust job management (e.g. multiple jobsteps), the difference between using the total_tasks/blocking pairing and the nodes/tasks_per_node pairing for process distribution and the implications for memory-bound parallel applications. For parallel applications launched with Intel's mpiexec/mpirun, I will show some useful options for debugging and tuning. I will also briefly demonstrate running distributed and parallel Matlab tasks on the cluster from a Matlab client.
Download Slides
May 7,
2012
Joint gfz-PIK
Workshop
Open Source, Licenses & More: How to Outreach Model Source Code and Software
In the course of research a multitude of algorithms, models, and software is created; sometimes, existing code is adapted to fit new purposes. This creation and adaptation of code in a scientific context gives rise to a number of questions:
- How can models and software be published?
- Which licence models are available and which are suitable in a scientific context?
- Which distribution channels and platforms are available for the distribution of source code from models and software?
The aim of this workshop is to get an overview of the types of code created at the three Institutes and determine the requirements towards software publications. The outcomes of this workshop will be the basis for upcoming workshops targeted at specific aspects of software publication and will be used to develop consulting and publication services for software to be offered through the Library of Wissenschaftspark Albert Einstein.
Download Slides
Apr 12,
2012
Markus
Wrobel,
RD2
Models, Data, and Human - Computer Interaction
Graphical user interfaces can in principle foster effective usage of model related resources, with potential applications ranging from facilitated data access for modellers to flexible outreach of model projections to scientific and non-scientific audiences. Using the Java programming language as example, the talk will give an introduction to how basic principles of object-oriented software construction can be applied in developing desktop-based and web-based graphical user interfaces. It further will outline some insights from the field of Human-Computer Interaction to illustrate why developing appropriate user interfaces often is a non-trivial challenge.
Download Slides
Mar 7,
2012
Jan Philipp
Dietrich,
RD3
Documentation, Bug-Tracking, Planning: The Project Management Software Redmine as a Tool for Structured Model Development
Redmine is a project management software that helps you organize and coordinate day-to-day model work. It consists of several modules, including a wiki for model documentation, a bug and feature tracker for model development, SVN integration, a roadmap module and a calendar. To illustrate the benefits Redmine can offer, I will explain its basic features and present its current application in the development and maintenance of the MAgPIE model.
No Slides - online presentation
Feb 10,
2012
Karsten Kramer,
PIK-IT
An Introduction to IT-Services
IT - which stands for information technology - is a basic service provided for all employees of the institute. Although the main focus of the IT group traditionally has been high performance computing and data management, services cover a variety of other very important aspects of computing, notably a personal computer helpdesk, data networks, e-mail, printing, server hosting and video conferencing. In this talk I will briefly introduce the range of IT services available at PIK and the people behind them. This lecture may thus be of particular interest for people new at the institute. After the talk there will be a chance to discuss strengths and shortcomings of IT services in general.
Download Slides
Jan
12,
2012

Stefan
Petri,
RD1
Experiences from Coupling Components of the Climber-4 Model
Aeolus atmosphere
--- Grid abstractions, variables, and parallelization
--- Encapsulation, verification, testing and tools
MOM4 and FMS
--- Grids, mosaics, and interpolation
--- Coupling interface
LPJ vs. FMS
--- Time step after time step
Ice in the graphics card
--- CUDA
--- Performance Analysis
Download Slides
Dec
9,
2011
Joachim Glauer /
Michael Flechsig,
PIK-IT /
RD4
Metadata Management at PIK - Annotate your Model Output!
Annotating model output data produced at PIK is a prerequisite for state-of-the-art (i) result documentation, (ii) reproducibility, (iii) long-term storage, (iv) in-house communication and accessibility, and (v) data outreach. After a short introduction to the theme, the web interface of the PIK metadatabase is introduced in an online presentation. The interface allows for metadata definition and update, browsing and retrieval, and comes with a report functionality.
Download Slides
Nov
10, 2011
Ciaron Linstead,
PIK-IT
An Introduction to Cluster Computing at PIK
This talk will cover the basics of getting started with high performance computing at PIK, including a summary of available tools, example serial and parallel applications, preparing jobs and submitting them to the batch queuing system, and LoadLeveler commands for managing jobs.
Download Slides
