GRASP aims to deliver scalable, high-performance solutions for graph processing by exploiting the resources available on a heterogeneous computing cluster. The key goals of this work are to provide: an easy-to-use programming interface; a compiler and runtime system that delivers high performance; and the ability to handle large problem sizes. GRASP comprises the following components:
The long communication latencies involved in cluster-based processing heavily impact the performance of graph processing frameworks. While the communication cannot be completely eliminated, the inherent asynchrony of graph algorithms allows the inter-node latency to be tolerated by using stale vertex values. ASPIRE [OOPSLA'14] incorporates a novel relaxed memory consistency model and protocol that simultaneously tolerates communication latency and minimizes the use of stale values. CoRAL [ASPLOS'17a] is a fault-tolerance technique that performs minimal rollback and leverages the algorithmic asynchrony to progress the global execution state to a resumable state that provides accurate results. To process graphs that change over time, EvoG [TACO] incorporates optimizations for processing multiple graph snapshots via incremental processing and pipelined execution. KickStarter [ASPLOS'17b] is a dynamic dependence tracking based technique that correctly and efficiently processes streaming graphs in the presence of edge additions and deletions.
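As a rough illustration of the stale-values idea (a deliberate simplification; ASPIRE's actual consistency model and protocol are described in the paper), an asynchronous worker can read a cached, possibly stale copy of a remote vertex value instead of blocking on communication. For a monotonic computation such as shortest paths, stale reads only delay convergence, never corrupt it. All names below are illustrative, not GRASP's API:

```python
# Simulation of latency-tolerant asynchronous processing: two "nodes"
# each own part of the vertices, and reads of remotely owned values go
# through a stale cache refreshed only every REFRESH rounds, mimicking
# delayed communication. SSSP is monotonic, so the result is still exact.

INF = float("inf")
REFRESH = 3  # rounds between cache refreshes (simulated latency)

def sssp_async(graph, source, owner):
    # graph: {v: [(u, w), ...]} weighted in-edges; owner: {v: node_id}
    dist = {v: INF for v in graph}
    dist[source] = 0.0
    cache = dict(dist)                  # stale copies of remote values
    for rnd in range(100):
        if rnd % REFRESH == 0:
            cache = dict(dist)          # periodic refresh
        changed = False
        for v, in_edges in graph.items():
            best = dist[v]
            for u, w in in_edges:
                # local reads are fresh; remote reads may be stale
                du = dist[u] if owner[u] == owner[v] else cache[u]
                best = min(best, du + w)
            if best < dist[v]:
                dist[v], changed = best, True
        if not changed and cache == dist:
            break                       # fixed point with fresh cache
    return dist
```

Despite vertex 2 reading vertex 1's value through a cache that lags by up to three rounds, the computation settles on the exact shortest distances.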
While today's machines are well equipped with multiple cores and large main memories, ever-growing graph sizes pose a memory scalability challenge when processing real-world graphs on single machines. For in-memory scaling, our Input Reduction [HPDC'16] based processing transforms the original graph into a smaller graph, which is used to calculate an intermediate approximation that is usable for incremental processing over the original graph. For out-of-core graph processing, Dynamic Partitions [USENIX ATC'16] truly capture the dynamic nature of graph computation by adapting the set of edges they hold at runtime. InfiniMem [LCPC'15] enables the generation and processing of large graphs that do not fit in memory by keeping them on disk, and allows the programmer to write applications in a size-oblivious fashion by transparently handling memory and I/O management.
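The two-phase flow of input reduction can be sketched as follows (greatly simplified relative to the HPDC'16 paper, which defines the actual reduction and its correctness conditions; all function names here are hypothetical): compute on a reduced graph first, then reuse the approximate result as the starting point for incremental processing on the original graph.

```python
# Simplified input-reduction sketch: phase 1 solves SSSP on a reduced
# graph (only edges whose source is in a kept vertex set); phase 2
# refines the resulting overestimates on the full graph. Because the
# values only decrease, refinement converges to the exact answer.

INF = float("inf")

def sssp(graph, dist):
    """Iterate the graph to a fixed point from the given initial values."""
    changed = True
    while changed:
        changed = False
        for v, in_edges in graph.items():
            for u, w in in_edges:
                if dist[u] + w < dist[v]:
                    dist[v], changed = dist[u] + w, True
    return dist

def reduced(graph, keep):
    """Drop every edge whose source vertex is not in `keep`."""
    return {v: [(u, w) for u, w in es if u in keep]
            for v, es in graph.items()}

def sssp_with_reduction(graph, source, keep):
    init = {v: INF for v in graph}
    init[source] = 0.0
    approx = sssp(reduced(graph, keep), init)   # phase 1: smaller graph
    return sssp(graph, approx)                  # phase 2: refine on full graph
```

Phase 2 typically needs far fewer iterations than solving the full graph from scratch, since most values are already at or near their final answers.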
GPUs have been an attractive solution for various general-purpose processing tasks, including the processing of irregular graphs. CuSha [HPDC'14] is a CUDA-based graph processing framework that uses two novel graph representations, G-Shards and Concatenated Windows, to allow coalesced memory accesses and overcome GPU underutilization. WS-VR [PACT'15] is a multi-GPU graph processing framework that delivers high SIMD efficiency and inter-GPU communication efficiency.
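A simplified CPU-side sketch conveys the spirit of a sharded layout (the actual G-Shards representation and Concatenated Windows on the GPU are detailed in the CuSha paper): edges are grouped into shards by a window of destination vertices, so each shard's updates touch a contiguous range of vertex values, the property that enables coalesced accesses.

```python
# Illustrative shard construction (not CuSha's GPU data structure).
# Each shard holds the edges whose destination falls in one window of
# the vertex ID space, ordered by source within the shard, so a shard's
# writes stay within one contiguous slice of the value array.

def build_shards(edges, num_vertices, window):
    # edges: list of (src, dst, weight); one shard per destination window
    num_shards = (num_vertices + window - 1) // window
    shards = [[] for _ in range(num_shards)]
    for src, dst, w in edges:
        shards[dst // window].append((src, dst, w))
    for shard in shards:
        shard.sort()                    # order by source within a shard
    return shards
```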
PaRMAT [GitHub] is a tool designed to create very large RMAT graphs on machines with a limited amount of memory. It provides various options for the generated RMAT graph: directed or undirected output, disallowing duplicate edges, sorting the output, etc.
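For background, the RMAT model places each edge by recursively descending into one of the four quadrants of the adjacency matrix with probabilities (a, b, c, d). A minimal single-threaded sketch of that recursion (illustrative only; PaRMAT's memory-bounded, multi-threaded implementation and its exact options are on GitHub):

```python
import random

# Minimal RMAT edge generator. Each of `scale` steps halves the
# adjacency matrix into quadrants and picks one at random, yielding
# an edge over 2^scale vertices with the characteristic skew.

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, d=0.05, rng=random):
    src = dst = 0
    for _ in range(scale):
        r = rng.random()
        src, dst = src * 2, dst * 2
        if r < a:                                   # top-left quadrant
            pass
        elif r < a + b:                             # top-right
            dst += 1
        elif r < a + b + c:                         # bottom-left
            src += 1
        else:                                       # bottom-right
            src, dst = src + 1, dst + 1
    return src, dst

def rmat_graph(scale, num_edges, seed=0):
    rng = random.Random(seed)
    return [rmat_edge(scale, rng=rng) for _ in range(num_edges)]
```

The default (a, b, c, d) values above are the commonly used RMAT parameters; duplicate-edge removal and sorting, which PaRMAT offers as options, are omitted here for brevity.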
PnP paper on point-to-point graph queries accepted in ASPLOS'19! 11/10/2018
KickStarter paper on streaming graphs accepted in ASPLOS'17! 11/10/2016
CoRAL paper on recovery accepted in ASPLOS'17!
Evolving graph paper accepted in TACO!
Dynamic Shards paper accepted in USENIX ATC'16! 4/13/2016
Graph Reduction paper accepted in HPDC'16! 3/12/2016
CTE paper on intra-warp load balancing in GPUs accepted in IPDPS'16! 12/18/2015
CCC paper on tolerating divergence in GPUs accepted in MICRO'15! 09/01/2015
InfiniMem paper on size oblivious programming accepted in LCPC'15! 08/06/2015
WS&VR paper on scalable graph processing using GPUs accepted in PACT'15! 08/01/2015
Stash paper on scalable hashing for GPUs accepted in PACT'15! 08/01/2015
ASPIRE paper on DSM based graph processing accepted in OOPSLA'14! 05/24/2014
CuSha paper on processing graphs using GPU accepted in HPDC'14! 03/25/2014

WS&VR is available on GitHub for download!
PaRMAT is now available on GitHub for download!
CuSha is now available on GitHub for download!