This is the home page for Darshan, a scalable HPC I/O characterization tool. Darshan is designed to capture an accurate picture of application I/O behavior, including properties such as patterns of access within files, with minimum overhead. The name is taken from a Sanskrit word for “sight” or “vision”.
Darshan can be used to investigate and tune the I/O behavior of complex HPC applications. In addition, Darshan’s lightweight design makes it suitable for full-time deployment for workload characterization of large systems. We hope that such studies will help the storage research community to better serve the needs of scientific computing.
Darshan was originally developed on IBM Blue Gene series computers at the Argonne Leadership Computing Facility, but today it is portable across a wide variety of platforms and is deployed in production at computing facilities around the world.
You will find current news about the Darshan project posted below. Additional documentation and details about Darshan are available from the links at the top of this page.
Using Darshan with non-MPI applications (e.g., AI/ML frameworks)
Darshan is an application-level I/O characterization tool that has traditionally been used in the HPC community for understanding file access characteristics of MPI applications. However, in recent years Darshan has been redesigned to relax its dependence on MPI so that it can support instrumentation of other programming models and runtime environments that are gaining traction in HPC. In this article, we cover some of these new improvements to Darshan and describe best practices for general instrumentation of applications that don’t use MPI, ranging from serial applications to Python multiprocessing frameworks (e.g., PyTorch, Dask, etc.).
- Darshan enhancements for non-MPI usage
- Best practices for non-MPI instrumentation in Darshan
- Example Darshan runtime library configuration
- Future work
Darshan enhancements for non-MPI usage
Support for non-MPI instrumentation in Darshan began with our 3.2.0 release (thanks in large part to contributions from Glenn Lockwood, Microsoft). These changes revolved around adopting new mechanisms for bootstrapping the Darshan library when a process launches and shutting down the Darshan library when a process terminates. Traditionally, this was handled by intercepting the `MPI_Init`/`MPI_Finalize` routines that MPI applications conveniently call at application startup/shutdown. To support more general mechanisms, Darshan adopted GCC constructor/destructor attributes [1] for its startup/shutdown routines.
Beyond this initial redesign, additional changes have recently been made to the Darshan library based on our experiences in instrumenting various non-MPI applications (e.g., workflow systems, Python multiprocessing packages). These changes are outlined below.
- Processes that call `fork()`
  - Problem: Child processes from `fork()` calls inherit their parent’s memory, including Darshan library state. This can lead to duplicate accounting of the parent’s I/O statistics in the child process’s log.
  - Solution: Use `pthread_atfork()` handlers to get hooks into child process initialization, allowing Darshan library state to be reinitialized. Initial support provided in Darshan’s 3.3.1 release.
- Processes that terminate abruptly using `_exit()` calls
  - Problem: Some multiprocessing frameworks use fork-join models that call “immediate” exit routines (i.e., `_exit()`). For example, we have observed this behavior in some configurations of Python’s `multiprocessing` package, which is commonly used by PyTorch and other frameworks. This immediate exit routine is generally used to prevent child processes from interfering with resources that may still be used by the parent process (e.g., by flushing buffers, calling `atexit` handlers, etc.). But immediate exit also bypasses the Darshan library’s destructor routine, which finalizes Darshan and writes out its log file.
  - Solution: Darshan has been updated to intercept calls to `_exit()` in the same way it would traditionally intercept `MPI_Finalize()` for MPI applications. This change enables Darshan to cleanly shut down before the process starts its immediate termination. Initial support provided in Darshan’s 3.4.5 release.
- Processes that terminate abruptly via kill signals
  - Problem: Some multiprocessing frameworks use fork-join models that simply terminate child processes via kill signals. We have also observed this behavior in some configurations of Python’s `multiprocessing` package. Termination via kill signals (e.g., `SIGTERM`) similarly bypasses Darshan’s typical shutdown procedure. Unfortunately, the only mechanism to interpose Darshan’s shutdown before this signal is a signal handler, but the Darshan shutdown procedure is not async-signal-safe (i.e., it cannot be safely called in a signal handler).
  - Solution: Darshan has a longstanding optional feature to store its log data in memory-mapped files as the application executes, instead of storing this data on the heap and writing it out to a log file at process termination. This feature was originally envisioned to support cases where MPI applications don’t call `MPI_Finalize()` (e.g., because they hit their wall-time limit on a batch-scheduled system), but it also helps preserve Darshan data in cases like this where processes are abruptly terminated (see the example after this list). Initial support provided in Darshan’s 3.1.0 release.
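To illustrate the last of these protections, the minimal shell sketch below runs a fork-join style Python workload under Darshan’s non-MPI mode and then looks for any per-process mmap logs left behind by children that exited via `_exit()` or were killed. The library path, script name, and log file naming pattern are placeholders/assumptions for illustration only; the environment variables and build options used here are covered in the next section.

```sh
# Assumes a darshan-runtime build with the mmap logs feature enabled
# (see "Best practices" below) and a dynamically-linked Python.
export LD_PRELOAD=/path/to/darshan/lib/libdarshan.so   # placeholder path
export DARSHAN_ENABLE_NONMPI=1

# Run a fork-join style workload, e.g., one using Python's multiprocessing
# package (script name is a placeholder).
python3 my_dataloader_test.py

# Children that exited abruptly leave their instrumentation data in mmap
# log files under /tmp (the default location); exact file names vary.
ls /tmp/*darshan*
```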
Best practices for non-MPI instrumentation in Darshan
- To take advantage of all of the extensions detailed above, use a Darshan release version >= 3.4.5.
- When building darshan-runtime, enable the mmap logs feature to help protect against processes that abruptly terminate via kill signals.
  - For Spack builds, use the `+mmap_logs` variant.
  - For darshan-runtime source builds, use the `--enable-mmap-logs` configure option.
- To interpose the Darshan library, you have two options [2]:
  - Set `LD_PRELOAD=/path/to/darshan/lib/libdarshan.so` to ensure Darshan instrumentation wrappers can intercept application I/O routines.
    - This option is necessary for Python applications, as there is no way to directly link the Darshan library into the Python binary.
  - Directly link the Darshan library on the command line using `-ldarshan` when building your application.
    - Darshan should precede all other libraries to ensure it’s first in link ordering; otherwise it may not intercept application I/O calls.
- Enable Darshan’s non-MPI mode by setting `DARSHAN_ENABLE_NONMPI=1` in your environment.
  - Non-MPI mode requires this variable to be explicitly set so Darshan doesn’t inadvertently generate log files for extraneous commands (e.g., `ls`, `git`, etc.).
  - Instrumenting specific applications can then be accomplished by simply running a command like: `DARSHAN_ENABLE_NONMPI=1 <binary> <cmd_args>`
- If necessary, consider using Darshan library configuration files to increase Darshan’s default memory/record limits, to enable/disable certain Darshan modules, or to limit Darshan instrumentation to files matching some pattern (e.g., a mount point prefix, a file extension suffix).
  - This is particularly helpful for Python applications, which tend to access tons of shared libraries (.so), Python compiled code (.pyc), etc., which can quickly exhaust Darshan’s record memory.
  - If using traditional Darshan tools like `darshan-parser` or the PyDarshan job summary tool, an error message is reported if Darshan ran out of memory, in which case a configuration file is needed to help ensure Darshan allocates and uses a sufficient amount of memory.
  - See the next section for example usage of config files.
- After your Darshan-instrumented application terminates, check the `/tmp` directory (the default output location for Darshan mmap log files) for any Darshan logs generated by processes that terminated abruptly.
  - We recommend copying these log files somewhere permanent and converting them to Darshan’s standard compressed format to save space using the `darshan-convert` utility, e.g.: `darshan-convert /tmp/logfile.darshan /path/to/darshan/log/dir/logfile.darshan` (see also the consolidated example after this list).
  - Processes that terminate normally do not output logs to `/tmp`; they instead output logs in standard compressed format to your standard Darshan log output directory.
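Putting the preceding steps together, a minimal end-to-end invocation might look like the sketch below. This is illustrative only: the install prefix, application command, and log directory are placeholders, and the build step assumes the mmap logs option discussed above.

```sh
# 1. Build darshan-runtime with the mmap logs feature (choose one):
spack install darshan-runtime+mmap_logs         # Spack variant from above
# ./configure --enable-mmap-logs ...            # or source build option

# 2. Preload Darshan and enable non-MPI mode for a single command
#    (library path and application command are placeholders):
DARSHAN_ENABLE_NONMPI=1 \
LD_PRELOAD=/path/to/darshan/lib/libdarshan.so \
python3 my_workload.py

# 3. After the run, recover any mmap logs left in /tmp by abruptly terminated
#    processes and convert them to Darshan's standard compressed format:
darshan-convert /tmp/logfile.darshan /path/to/darshan/log/dir/logfile.darshan
```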
Example Darshan runtime library configuration
Darshan runtime library configuration options can be expressed in a configuration file that is passed to Darshan at runtime by setting the following environment variable: `DARSHAN_CONFIG_PATH=/path/to/darshan.conf`
An example configuration file is given below that demonstrates the types of settings you can control within the Darshan runtime library. Not all settings may be needed, depending on your workload and your use case, and often some experimentation is needed to determine appropriate settings. This is a necessary trade-off: Darshan is designed for low-overhead, comprehensive instrumentation of applications, so increasing default memory limits or restricting the scope of instrumentation are not its default operational modes.
# allocate 4096 file records for POSIX and MPI-IO modules
# (Darshan only allocates 1024 per-module by default)
# NOTE: MODMEM setting may need to be bumped independent of this setting,
# as it does not force Darshan to use a larger instrumentation buffer
MAX_RECORDS 4096 POSIX,MPI-IO
# in this case, we want all modules to ignore record names
# with a ".pyc" or a ".so" file extension
# NOTE: multiple regex patterns can be provided at once, separated by commas
# NOTE: the '*' specifier can be used to apply settings for all modules
NAME_EXCLUDE \.pyc$,\.so$ *
# bump up Darshan's default record memory usage to 8 MiB
MODMEM 8
# bump up Darshan's default name record memory usage to 2 MiB
# NOTE: Darshan uses separate memory for storing record names (i.e., file names)
# that can also be exhausted, so this must be bumped independently of
# MODMEM in the case where lots of file name data is captured
NAMEMEM 2
# default modules not of interest can be disabled like this
MOD_DISABLE STDIO
# non-default modules like DXT tracing modules can be enabled like this
MOD_ENABLE DXT_POSIX,DXT_MPIIO
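As a brief illustration, the configuration above could be saved to a file and applied to a non-MPI run as in the sketch below; the paths and application command are placeholders, not part of Darshan itself.

```sh
# Point Darshan at the runtime configuration file shown above.
export DARSHAN_CONFIG_PATH=/path/to/darshan.conf

# Launch the (hypothetical) application with Darshan preloaded in non-MPI mode.
DARSHAN_ENABLE_NONMPI=1 \
LD_PRELOAD=/path/to/darshan/lib/libdarshan.so \
python3 my_workload.py
```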
More extensive details on the Darshan configuration file format are provided HERE.
Future work
To help with the analysis of Darshan log data from multiprocessing frameworks that generate numerous Darshan logs, we are working to extend Darshan analysis tools to support aggregation of this data into single summary outputs. This will enable more comprehensive analysis of these frameworks, similar to how Darshan already condenses data from all processes of an MPI application into a single, concise summary. We expect our next release (3.4.7) to include some capabilities for analyzing data from multiple logs. Stay tuned for updates on this ongoing work.
1. https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html ↩︎
2. Darshan’s non-MPI mode only works for dynamically-linked executables and requires a compiler that supports GCC constructor/destructor attributes (most do). ↩︎
Darshan 3.4.6 release is now available
Darshan version 3.4.6 is now officially available for download HERE. This point release includes a couple of important new capabilities and bug fixes:
- Added enhancements to Darshan’s Lustre instrumentation module to capture more extensive details on Lustre striping configurations
- Lustre file records now composed of potentially multiple component counter sets (e.g., LUSTRE_COMP1_*, LUSTRE_COMP2_*, etc.)
- Allows full characterization of newer Lustre striping features, including progressive file layouts, data-on-metadata, file-level redundancy, and self-extending layouts
- Fixed bugs in Darshan’s log compression/decompression routines that are triggered when using the zlib-ng software package, a new implementation of the zlib compression library
- darshan-runtime bug fix corrects problematic log compression strategy
- darshan-util bug fix corrects logs already generated with this issue
- Fixed bug leading to hangs when parsing improperly formatted Darshan runtime library configuration settings
We have also released PyDarshan 3.4.6.0 on PyPI, though this is just to track the 3.4.6 darshan-util library. There are no modifications to PyDarshan functionality.
Documentation for Darshan and PyDarshan is available HERE.
Please report any questions, issues, or concerns with this release on our Slack instance, using the Darshan-users mailing list, or by opening an issue on our GitHub.
Darshan 3.4.5 now available
Darshan version 3.4.5 is now officially available for download HERE. This point release includes a couple of important new capabilities and bug fixes:
- Added capability for Darshan’s runtime library to properly shut down in non-MPI applications that call `_exit()` directly
  - This behavior has been commonly observed in the Python `multiprocessing` package, which has traditionally prevented Darshan from properly instrumenting applications that use it (e.g., the PyTorch DataLoader)
- Added optional integration with the LDMS data/metrics collection system, allowing realtime analysis of Darshan instrumented I/O operations
- For more details, see https://ovis-hpc.readthedocs.io/en/latest/ldms/ldms-streams.html#darshan
- Contributed by Sara Walton (SNL), Ana Luisa Solorzano (Northeastern), and the LDMS team
- Fixed bug preventing instrumentation of `fscanf()` calls on some systems
- Fixed bug in HDF5 module causing any call to HDF5’s `H5Pset_fapl_mpio()` routine to fail
We have also released PyDarshan 3.4.5.0 on PyPI, though this is just to track the 3.4.5 darshan-util library. There are no modifications to PyDarshan functionality.
Documentation for Darshan and PyDarshan is available HERE.
Please report any questions, issues, or concerns with this release using the Darshan-users mailing list or by opening an issue on our GitHub.
Darshan 3.4.4 now available
Darshan version 3.4.4 is now officially available for download HERE. This point release includes a few minor bug fixes:
- Fixed bug leading to inconsistent heatmap record shapes when Darshan shared file reductions are disabled
- Also added a darshan-util library fix to resolve this inconsistency on already impacted logs (any generated with 3.4.0+ versions of Darshan)
- Added workaround for potential undefined symbol errors for ‘H5FD_mpio_init’ when LD_PRELOADing an HDF5-enabled runtime library
- Bug triggered by 1.13+ versions of HDF5
We have also released PyDarshan 3.4.4.0 on PyPI, though this is just to track the 3.4.4 darshan-util library. There are no modifications to PyDarshan functionality.
Documentation for Darshan and PyDarshan is available HERE.
Please report any questions, issues, or concerns with this release using the darshan-users mailing list or by opening an issue on our GitHub.
Darshan 3.4.3 now available
Darshan version 3.4.3 is now officially available for download here: https://www.mcs.anl.gov/research/projects/darshan/download/. This point release includes a few minor bug fixes for darshan-runtime libraries:
- Added new configure option `--with-username-env` to allow specification of an env variable to use to find the username associated with a job (e.g., SLURM_JOB_USER)
- Fixed bug causing crashes for applications that call fork() and use Darshan app exclusions settings
- Fixed bug related to not closing open HDF5 file ID when instrumenting H5Fflush() calls
More notably, we have also released PyDarshan 3.4.3.0 on PyPI, with this release including a number of improvements/changes to the log analysis package and corresponding tools:
- PyDarshan job summary tool improvements:
- Added new module overview table
- Added new file count summary table
- Added new plot of POSIX module sequential/consecutive accesses
- Included PnetCDF `wait` time in I/O cost figures
- Dropped default generation of DXT-based heatmaps and added a new cmdline option to force generate them (`--enable_dxt_heatmap`)
- Dropped usage of scientific notation in “Data access by category” plot
- Made captions, axis labels, and annotations clearer and easier to read
- Integrated Python support for darshan-util accumulator API for aggregating file records and calculating derived metrics
- Added backend routine `accumulate_records`, which returns a derived metric structure and a summary record for an input set of records
- Added backend routine `_df_to_rec` to allow conversion of a DataFrame of records into raw byte arrays to pass into the darshan-util C library (e.g., for using accumulator API)
- Fixed bug allowing binary wheel installs to prefer darshan-util libraries found in LD_LIBRARY_PATH
- Fixed bug in DXT heatmap plotting code related to determining the job’s runtime
- Updated docs for installation/usage of PyDarshan
- Dropped support for Python 3.6
For reference, an example report generated by the updated PyDarshan job summary tool can be found here: https://www.mcs.anl.gov/research/projects/darshan/docs/e3sm_io_report.html.
Documentation for Darshan and PyDarshan is available here: https://www.mcs.anl.gov/research/projects/darshan/documentation/.
Please report any questions, issues, or concerns with this release using our mailing list, or by opening an issue on our GitHub: https://github.com/darshan-hpc/darshan.
Join us on Slack
Follow the invitation below to join Darshan’s new Slack workspace:
https://join.slack.com/t/darshan-io/shared_invite/zt-1n6rhkqu8-waSQCVWYDrUpBdcg_1DwqQ
We hope this workspace will provide another opportunity for the Darshan team and users to engage, whether it be about bug reports, usage questions, feature requests, project roadmap, etc. The Darshan team will also use this workspace to get user feedback on upcoming Darshan enhancements and other changes, as well as to announce new software releases.
Hope to see you there!
Darshan 3.4.2 release is now available
Darshan version 3.4.2 is now officially available for download here. This point release includes important bug fixes for Darshan’s new PnetCDF module:
- Fixed segfault when defining scalar variables in PnetCDF module
- Fixed bug attributing all PnetCDF variable instrumentation to the first variable instrumented
- Fixed memory corruption (and potential segfault) when reading/writing high-dimensional PnetCDF variables using vara/vars/varm interfaces
- Fixed crashes related to using PnetCDF vard interfaces with input MPI_DATATYPE_NULL datatypes
Note that these bugs can only be triggered by the PnetCDF module released in Darshan version 3.4.1, which is disabled by default. There should be no impact on Darshan 3.4.1 configurations that did not explicitly enable PnetCDF instrumentation.
We have also released PyDarshan 3.4.2.0 on PyPI, though this is just to track the 3.4.2 darshan-util library. There are no new modifications to PyDarshan functionality.
Documentation for Darshan and PyDarshan is available here.
Please report any questions, issues, or concerns with this release using the darshan-users mailing list, or by opening an issue on our GitHub: https://github.com/darshan-hpc/darshan.
Darshan 3.4.1 release is now available
Darshan version 3.4.1 is now officially available for download here. This release includes the following new features, bug fixes, etc.:
- Added comprehensive instrumentation of PnetCDF APIs via PNETCDF_FILE and PNETCDF_VAR modules (contributed by Wei-Keng Liao)
- disabled by default, enabled by passing `--enable-pnetcdf-mod` to configure
- Modified Darshan log format to support a max of 64 instrumentation modules, since the current version of Darshan reached the old max (16)
- Modified Darshan to report job start/end times at nanosecond granularity (previously only second granularity was possible)
- Added support for instrumenting H5Oopen family of calls
- Modified HDF5 module extraction of dataspace selection details
- Extraction of point selections now possible regardless of HDF5 version
- H5S_ALL selections are no longer counted as regular hyperslab accesses
- Fixed bug causing no instrumentation of child processes of fork() calls (reported by Rui Wang)
- Deprecated `--file-list` and `--file-list-detailed` options in darshan-parser
- Added “darshan_accumulator” API to the logutils library
- _create(), _inject(), _emit(), and _destroy()
- generalizes the mechanism for producing summation records and derived metrics for sets of records from a given module
- refactored darshan-parser to use new API
- implemented support for accumulators in POSIX, STDIO, and MPIIO modules
- Fixed memory leak in darshan-util helper functions used by PyDarshan
- darshan_log_get_name_records
- darshan_log_get_filtered_name_records
- Integrated the µnit Testing Framework in darshan-util
- implemented unit tests for darshan_accumulator API
We have also released PyDarshan 3.4.1.0 on PyPI, which includes a number of improvements:
- Fixed memory leaks in the following backend CFFI bindings (reported by Jesse Hines):
- log_get_modules
- log_get_mounts
- log_get_record
- log_get_name_records
- log_lookup_name_records
- Added PnetCDF module information to job summary tool
- Testing modifications:
- Switched to use of context managers for log Report objects to avoid test hangs in certain environments
- Marked tests requiring lxml package as xfail when not installed
Documentation for Darshan and PyDarshan is available here.
Please report any questions, issues, or concerns with this release using the darshan-users mailing list, or by opening an issue on our GitHub: https://github.com/darshan-hpc/darshan.
Darshan 3.4.0 release is now available
Darshan version 3.4.0 is now officially available for download here. This release is a follow-up to our recent 3.4.0-pre1 pre-release, and we believe it is stable and ready for production use. In addition to features and bug fixes introduced in 3.4.0-pre1, this full release includes the following bug fixes to Darshan libraries/tools:
- Fix segfault affecting new DARSHAN_MOD_DISABLE/ENABLE environment variables
- Fix divide-by-zero condition that can potentially be triggered by new heatmap module
- Fix potential MPI errors related to calling MPI_Type_size() on a user-supplied MPI_DATATYPE_NULL type (reported by Jim Edwards)
- cuserid() is no longer the default method for determining username, and must be manually enabled at configure time
- Fix backwards compatibility bug affecting darshan-3.0.0 logs in darshan-util C library functions used by PyDarshan
- Suppress noisy output warnings when using darshan-job-summary.pl
- Clarify units displayed by darshan-job-summary.pl (reported by Jeff Layton)
We have also released PyDarshan 3.4.0.1 on PyPI, which includes a number of improvements:
- New Darshan job summary report styling
  - HTML job summary reports can be generated using: `python -m darshan summary <logfile_path>`
- Bug fix to heatmap module plotting code caused by logs with inactive ranks
- Fix warnings related to Pandas deprecation of df.append
Documentation for Darshan and PyDarshan is available here.
Please report any questions, issues, or concerns with this release using the darshan-users mailing list, or by opening an issue on our GitHub: https://github.com/darshan-hpc/darshan.
darshan-3.4.0-pre1 release is now available
We are pleased to announce a pre-release version of Darshan 3.4.0 (3.4.0-pre1) is now available HERE. As always, please be aware that Darshan pre-releases are experimental and not recommended for full-time use in production yet. An official 3.4.0 release will be made available soon.
This release contains a number of exciting new features and enhancements to Darshan:
- Added new heatmap module to record per-process histograms of I/O activity over time for POSIX, MPI-IO, and STDIO modules
- Added comprehensive darshan-runtime library configuration support, via environment variables and/or configuration file
- Allows user to control how much memory Darshan modules use at runtime, restricts instrumentation to specific file name patterns, etc.
- See the following link for more details: https://www.mcs.anl.gov/research/projects/darshan/docs/darshan-runtime.html#_configuring_darshan_library_at_runtime
- Implemented performance optimizations to Darshan’s wrappers, locking mechanisms, and timing mechanisms
- Includes optional RDTSCP-based timers via `--enable-rdtscp` configure option
- Removed deprecated performance estimates from darshan-parser and added 2 new derived metrics when using `--perf`:
- agg_time_by_slowest (total elapsed time performing I/O by the slowest rank)
- slowest_rank_rw_only_time (total elapsed time performing read/write operations by the slowest rank)
- Adopted automake/libtool support for Darshan build (contributed by Wei-Keng Liao)
- Increased default record name memory to 1 MiB per-process to avoid recent user reports of exceeding old limit (256 KiB)
This release also marks our first stable release of the PyDarshan log analysis module, including a new PyDarshan-based job summary tool (which will ultimately replace the darshan-job-summary script). Users can get PyDarshan directly from PyPI, e.g., using ‘pip install darshan’. Documentation can be found here: https://www.mcs.anl.gov/research/projects/darshan/documentation/
Please report any questions, issues, or concerns with this pre-release using the darshan-users mailing list, or by opening an issue on our GitHub: https://github.com/darshan-hpc/darshan.