This one-day workshop aims to bring together researchers interested in optimizing database performance on modern computing infrastructure by designing new data management techniques and tools.
The continued evolution of computing hardware and infrastructure imposes new challenges and bottlenecks on program performance. As a result, traditional database architectures that focus solely on I/O optimization increasingly fail to utilize hardware resources efficiently. Multi-core CPUs, GPUs, FPGAs, new memory and storage technologies (such as flash and non-volatile memory), and low-power hardware pose significant challenges for optimizing database performance. Consequently, exploiting the characteristics of modern hardware has become an essential topic of database systems research.
The goal is to make database systems adapt automatically to sophisticated hardware characteristics, thus maximizing performance transparently for applications. To achieve this goal, the data management community needs interdisciplinary collaboration with researchers from computer architecture, compilers, operating systems, and storage. This involves rethinking traditional data structures, query processing algorithms, and database software architectures to adapt to the advances in the underlying hardware infrastructure.
We seek submissions bridging database systems with computer architecture, compilers, and operating systems. We also invite papers on hardware/software co-design for modern data-intensive workloads, including but not limited to machine learning training and inference and graph analytics. As these workloads continue to grow in scale and complexity, co-design approaches that tightly integrate hardware architectures and software systems are crucial to achieving breakthroughs in performance, energy efficiency, and scalability. In particular, submissions covering topics from the following non-exclusive list are encouraged:
We invite submissions to two tracks:
Full Papers: A full paper must be no longer than six pages, excluding the bibliography; there is no limit on the length of the bibliography. Full papers describe a complete work in data management on new hardware. Accepted papers will be given ten pages (plus a bibliography) for the camera-ready version and a long presentation slot during the workshop.
Short Papers: A short paper must not exceed two pages, excluding the bibliography. Short papers describe early-stage work or summarize mature systems. Short papers will be included in the proceedings, given four pages (plus a bibliography) for the camera-ready version, and may be given a short presentation slot during the workshop.
All accepted papers (full and short) will also be presented as posters during a workshop poster session.
This year, all accepted DaMoN papers will be considered for a best paper award.
We intend to invite extended versions of a selection of DaMoN'25 papers for submission to the VLDB Journal. Extended papers that are accepted to the VLDB Journal will appear in a special section (“Best of DaMoN 2025”) within one of the regular VLDBJ issues.
Paper submission: March 21st, 2025 (11:59pm AoE; extended from March 14th, 2025)
Notification of acceptance: April 28th, 2025
Camera-ready copies: May 23rd, 2025
Workshop: June 23rd, 2025
Authors are invited to submit original, unpublished research papers that are not under consideration for publication in any other forum. Manuscripts should be submitted electronically as PDF files, using the latest ACM paper format consistent with the ACM SIGMOD formatting guidelines, to the DaMoN 2025 CMT site at https://cmt3.research.microsoft.com/DaMoN2025 (available from mid-February on). Submissions will be reviewed in a single-blind manner. Submissions of two pages or fewer, excluding the bibliography, will be reviewed as short papers; submissions of more than two and up to six pages, excluding the bibliography, will be reviewed as full papers; submissions longer than six pages, excluding the bibliography, will be desk-rejected.
Accepted papers will be included in the informal online proceedings at the website. Additionally, all accepted papers will be published online in the ACM Digital Library. Therefore, the papers must include the standard ACM copyright notice on the first page.
TBD
TBD
Exploiting Locality in Flat Memory with CXL for In-Memory Database Management Systems
Minseon Ahn (SAP Labs Korea)*; Thomas Willhalm (Intel Deutschland GmbH); Donghun Lee (SAP Labs Korea); Norman May (SAP SE); Jungmin Kim (SAP Labs Korea); Daniel Ritter (SAP SE); Oliver Rebholz (SAP SE)
Fetch Me If You Can: Evaluating CPU Cache Prefetching and Its Reliability on High Latency Memory
Fabian Mahling (Hasso Plattner Institute, University of Potsdam)*; Marcel Weisgut (Hasso Plattner Institute, University of Potsdam); Tilmann Rabl (Hasso Plattner Institute, University of Potsdam)
Path to GPU-Initiated I/O for Data-Intensive Systems
Karl B. Torp (Samsung Denmark Research Center); Simon A. F. Lund (Samsung Denmark Research Center); Pinar Tozun (IT University of Copenhagen)*
Bang for the Buck: Vector Search on Cloud CPUs
Leonardo Kuffo (CWI)*; Peter Boncz (CWI)
An Analysis of AWS Nitro Enclaves for Database Workloads
Adrian Lutsch (TU Darmstadt)*; Christian Franck (TU Darmstadt); Muhammad El-Hindi (TU Darmstadt); Zsolt István (TU Darmstadt); Carsten Binnig (TU Darmstadt)
A Wake-Up Call for Kernel-Bypass on Modern Hardware
Matthias Jasny (TU Darmstadt)*; Muhammad El-Hindi (TU Darmstadt); Tobias Ziegler (TU München); Carsten Binnig (TU Darmstadt)
De²Dup: Extended Deduplication for Multi-Tenant Databases
Alexander Krause (TU Dresden); Jannis Kowalick (TU Dresden); Johannes Pietrzyk (TU Dresden); Dirk Habich (TU Dresden)*; Wolfgang Lehner (TU Dresden)
ParaGraph: Accelerating Graph Indexing through GPU-CPU Parallel Processing for Efficient Cross-modal ANNS
Yuxiang Yang (Southern University of Science and Technology); Shiwen Chen (Southern University of Science and Technology); Yangshen Deng (AlayaDB AI & Southern University of Science and Technology); Bo Tang (Southern University of Science and Technology & AlayaDB AI)*
Insert-Optimized Implementation of Streaming Data Sketches
Pascal Pfeil (Amazon Web Services)*; Dominik Horn (Amazon Web Services); Orestis Polychroniou (Amazon Web Services); George Erickson (Amazon Web Services); Zhe Heng Eng (Amazon Web Services); Mengchu Cai (Amazon Web Services); Tim Kraska (Amazon Web Services)
Model-Driven Right-Sizing of Offloading in Data Processing Pipelines
Faeze Faghih (Technical University of Darmstadt)*; Maximilian Hüttner (Technical University of Darmstadt); Florin Dinu (Huawei Munich Research Center); Zsolt István (Technical University of Darmstadt)
The Effectiveness of Compression for GPU-Accelerated Queries on Out-of-Memory Datasets
Hamish Nicholson (EPFL)*; Konstantinos Chasialis (EPFL); Antonio Boffa (EPFL); Anastasia Ailamaki (EPFL)
Breaking the Cycle - A Short Overview of Memory-Access Sampling Differences on Modern x86 CPUs
Roland Kühn (TU Dortmund University)*; Jan Mühlig (TU Dortmund University); Jens Teubner (TU Dortmund University)
G-ALP: Rethinking Light-weight Encodings for GPUs
Sven Hepkema (CWI)*; Azim Afroozeh (CWI); Charlotte Felius (CWI); Peter Boncz (CWI); Stefan Manegold (CWI)
Uncore your Queries: Towards CPU-less Query Processing
Alexander Baumstark (TU Ilmenau)*; Laurin Martins (TU Ilmenau); Kai-Uwe Sattler (TU Ilmenau)
TU Darmstadt, Germany
carsten.binnig@cs.tu-darmstadt.de
Oracle Labs
eric.sedlar@oracle.com
EPFL, Switzerland
anastasia.ailamaki@epfl.ch
CWI, Netherlands
boncz@cwi.nl
CWI, Netherlands
stefan.manegold@cwi.nl
Columbia University, USA
kar@cs.columbia.edu