Projects using Mochi

The following external projects are using Mochi components:

  • HXHIM (LANL): Hexadimensional hashing indexing middleware
  • UnifyCR (LLNL): Distributed burst buffer file system
  • Proactive Data Containers (LBNL): Novel data abstraction for storing scientific data in an object-oriented manner
    • https://github.com/hpc-io/pdc 
    • Houjun Tang, Suren Byna, Francois Tessier, Teng Wang, Bin Dong, Jingqing Mu, Quincey Koziol, Jerome Soumagne, Venkatram Vishwanath, Jialin Liu, and Richard Warren, “Toward Scalable and Asynchronous Object-centric Data Management for HPC”, 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2018
    • Kimmy Mu, Jerome Soumagne, Houjun Tang, Suren Byna, Quincey Koziol, and Richard Warren, “A Server-managed Transparent Object Storage Abstraction for HPC”, IEEE Cluster 2018, Belfast
    • Houjun Tang, Suren Byna, Bin Dong, Jialin Liu, and Quincey Koziol, “SoMeta: Scalable Object-centric Metadata Management for High Performance Computing”, IEEE Cluster 2017
  • GekkoFS (JGU Mainz): Temporary distributed file system for HPC applications
    • https://storage.bsc.es/gitlab/hpc/gekkofs
    • Marc-André Vef, Nafiseh Moti, Tim Süß, Tommaso Tocci, Ramon Nou, Alberto Miranda, Toni Cortes, and André Brinkmann, “GekkoFS – A Temporary Distributed File System for HPC Applications”, IEEE Cluster 2018, Belfast
  • DAOS (Intel): Distributed Asynchronous Object Storage
  • IOF (Intel): POSIX I/O forwarding
  • Hermes (IIT, the HDF Group, and UIUC): Management of I/O storage tiers
    • https://github.com/HDFGroup/hcl
    • H. Devarajan, A. Kougkas, K. Bateman, and X. Sun, “HCL: Distributing Parallel Data Structures in Extreme Scales”, IEEE International Conference on Cluster Computing (CLUSTER), 2020
  • Seer (LANL): Lightweight in situ wrapper library adding in situ capabilities to simulations
    • https://github.com/lanl/seer
    • Pascal Grosset, Jesus Pulido, and James Ahrens, “Personalized In Situ Steering for Analysis and Visualization”, Proceedings of ISAV 2020: In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization
  • Chimbuko (BNL): In situ performance analysis for HPC applications
    • https://github.com/CODARcode/Chimbuko
    • Christopher Kelly, Sungsoo Ha, Kevin Huck, Hubertus Van Dam, Line Pouchard, Gyorgy Matyasfalvi, Li Tang, Nicholas D’Imperio, Wei Xu, Shinjae Yoo, and Kerstin Van Dam, “Chimbuko: A Workflow-Level Scalable Performance Trace Analysis Tool”, Proceedings of ISAV 2020: In Situ Infrastructures for Enabling Extreme-Scale Analysis and Visualization
  • DataSpaces (Rutgers): Shared tuple-space abstraction for use between HPC applications
  • CHFS (Tsukuba): Ad hoc file system for persistent memory based on consistent hashing
    • https://github.com/otatebe/chfs 
    • Osamu Tatebe, Kazuki Obata, Kohei Hiraga, and Hiroki Ohtsuji, “CHFS: Parallel Consistent Hashing File System for Node-local Persistent Memory”, Proceedings of the ACM International Conference on High Performance Computing in Asia-Pacific Region (HPC Asia 2022), 2022 (to appear)
  • SERVIZ (University of Oregon): A Shared In Situ Visualization Service
    • https://github.com/srini009/serviz 
    • S. Ramesh, H. Childs, and A. Malony, “SERVIZ: A Shared In Situ Visualization Service”, SC22: International Conference for High Performance Computing, Networking, Storage and Analysis, Dallas, TX, USA, 2022, pp. 277-290

In addition, the Mochi project itself has produced the following user-facing data services: