SPONSORS

SAP

Call For Papers

This one-day workshop aims to bring together researchers interested in optimizing database performance on modern computing infrastructure by designing new data management techniques and tools.

Topics of Interest

The continued evolution of computing hardware and infrastructure introduces new challenges and bottlenecks to program performance. As a result, traditional database architectures that focus solely on I/O optimization increasingly fail to utilize hardware resources efficiently. Multi-core CPUs, GPUs, FPGAs, new memory and storage technologies (such as flash and non-volatile memory), and low-power hardware impose significant challenges to optimizing database performance. Consequently, exploiting the characteristics of modern hardware has become an essential topic of database systems research.

The goal is to make database systems adapt automatically to sophisticated hardware characteristics, thus maximizing performance transparently for applications. To achieve this goal, the data management community needs interdisciplinary collaboration with researchers from computer architecture, compilers, operating systems, and storage. This involves rethinking traditional data structures, query processing algorithms, and database software architectures to adapt to the advances in the underlying hardware infrastructure.

We seek submissions bridging database systems with computer architecture, compilers, and operating systems. We also invite submissions on hardware/software co-design for modern data-intensive workloads, including but not limited to machine learning training and inference and graph analytics. As these workloads continue to grow in scale and complexity, innovative co-design approaches that tightly integrate hardware architectures and software systems are crucial to achieving breakthroughs in performance, energy efficiency, and scalability. In particular, submissions covering topics from the following non-exclusive list are encouraged:

  • database algorithms and data structures on modern hardware
  • cost models and query optimization for novel hierarchical memory systems
  • hardware systems for query processing
  • data management using co-processors
  • novel application of new storage technologies to data management
  • query processing using computing power in storage systems
  • database architectures for low-power computing and embedded devices
  • database architectures on multi-threaded and chip multiprocessors
  • performance analysis of database workloads on modern hardware
  • compiler and operating systems advances to improve database performance
  • new benchmarks for micro-architectural evaluation of database workloads
  • taking advantage of modern network capabilities for data processing
  • hardware/software co-design for modern data-intensive workloads (e.g., machine learning, graph analytics)

Submission Tracks

We invite submissions to two tracks:

  1. Full Papers: A full paper must be no longer than six pages, excluding the bibliography. There is no limit on the length of the bibliography. Full papers describe a complete work in data management for new hardware. Accepted papers will be given ten pages (plus a bibliography) for the camera-ready version and a long presentation slot during the workshop.

  2. Short Papers: A short paper must not exceed two pages, excluding the bibliography. Short papers describe early-stage work or summarize mature systems. Short papers will be included in the proceedings, given four pages (plus a bibliography) for the camera-ready version, and may be given a short presentation slot during the workshop.

All accepted papers (full and short) will also be presented as posters during a workshop poster session.

Best of DaMoN 2025

This year, all accepted DaMoN papers will be considered for a best paper award.

We intend to invite extended versions of a selection of DaMoN'25 papers for submission to the VLDB Journal. Extended papers that are accepted to the VLDB Journal will appear in a special section (“Best of DaMoN 2025”) within one of the regular VLDBJ issues.

Important Dates

Paper submission: March 21st, 2025 (11:59pm AoE; extended from March 14th, 2025)

Notification of acceptance: April 28th, 2025

Camera-ready copies: May 23rd, 2025

Workshop: June 23rd, 2025

Submission Instructions

Authors are invited to submit original, unpublished research papers that are not being considered for publication in any other forum. Manuscripts should be submitted electronically as PDF files, using the latest ACM paper format consistent with the ACM SIGMOD formatting guidelines, to the DaMoN 2025 CMT site at https://cmt3.research.microsoft.com/DaMoN2025 (available from mid-February on). Submissions will be reviewed in a single-blind manner. Submissions of up to two pages, excluding the bibliography, will be reviewed as short papers; submissions of more than two and up to six pages, excluding the bibliography, will be reviewed as full papers; submissions longer than six pages, excluding the bibliography, will be desk-rejected.

Accepted papers will be included in the informal online proceedings at the website. Additionally, all accepted papers will be published online in the ACM Digital Library. Therefore, the papers must include the standard ACM copyright notice on the first page.

Workshop Program (Preliminary)

Monday (June 23, 2025), Room: Charlottenburg I/II


10:00-11:00 Session 1: Opening & Keynote

10:00-10:05 Opening by Workshop Organizers

10:05-11:00 Keynote Talk: (Thomas Neumann, TU Munich)


11:00 - 11:30 Coffee Break / Poster Session


11:30-13:00 Session 2: Memory & I/O

11:30-11:42 Fetch Me If You Can: Evaluating CPU Cache Prefetching and Its Reliability on High Latency Memory. Fabian Mahling (Hasso Plattner Institute, University of Potsdam)*; Marcel Weisgut (Hasso Plattner Institute, University of Potsdam); Tilmann Rabl (Hasso Plattner Institute, University of Potsdam).

11:42-11:54 Breaking the Cycle - A Short Overview of Memory-Access Sampling Differences on Modern x86 CPUs. Roland Kühn (TU Dortmund University)*; Jan Mühlig (TU Dortmund University); Jens Teubner (TU Dortmund University).

11:54 - 12:06 Exploiting Locality in Flat Memory with CXL for In-Memory Database Management Systems. Minseon Ahn (SAP Labs Korea)*; Thomas Willhalm (Intel Deutschland GmbH); Donghun Lee (SAP Labs Korea); Norman May (SAP SE); Jungmin Kim (SAP Labs Korea); Daniel Ritter (SAP SE); Oliver Rebholz (SAP SE).

12:06 - 12:18 A Wake-Up Call for Kernel-Bypass on Modern Hardware. Matthias Jasny (TU Darmstadt)*; Muhammad El-Hindi (TU Darmstadt); Tobias Ziegler (TU München); Carsten Binnig (TU Darmstadt).

12:18 - 12:30 Path to GPU-Initiated I/O for Data-Intensive Systems. Karl B. Torp (Samsung Denmark Research Center); Simon A. F. Lund (Samsung Denmark Research Center); Pinar Tozun (IT University of Copenhagen).

12:30-13:00 Invited Talk 1 (Fresh Thinking): (Michal Friedman, ETH Zurich)


13:00 - 14:30 Lunch Break


14:30-16:00 Session 3: Accelerators & Modern Workloads

14:30 - 14:42 Model-Driven Right-Sizing of Offloading in Data Processing Pipelines. Faeze Faghih (Technical University of Darmstadt)*; Maximilian Hüttner (Technical University of Darmstadt); Florin Dinu (Huawei Munich Research Center); Zsolt István (Technical University of Darmstadt).

14:42 - 14:54 Bang for the Buck: Vector Search on Cloud CPUs. Leonardo Kuffo (CWI)*; Peter Boncz (CWI).

14:54 - 15:06 The Effectiveness of Compression for GPU-Accelerated Queries on Out-of-Memory Datasets. Hamish Nicholson (EPFL)*; Konstantinos Chasialis (EPFL); Antonio Boffa (EPFL); Anastasia Ailamaki (EPFL).

15:06 - 15:18 G-ALP: Rethinking Light-weight Encodings for GPUs. Sven Hepkema (CWI)*; Azim Afroozeh (CWI); Charlotte Felius (CWI); Peter Boncz (CWI); Stefan Manegold (CWI).

15:18 - 15:30 ParaGraph: Accelerating Graph Indexing through GPU-CPU Parallel Processing for Efficient Cross-modal ANNS. Yuxiang Yang (Southern University of Science and Technology); Shiwen Chen (Southern University of Science and Technology); Yangshen Deng (AlayaDB AI & Southern University of Science and Technology); Bo Tang (Southern University of Science and Technology & AlayaDB AI).

15:30 - 16:00 Invited Talk 2 (Industry Talk): (Carlo Curino, Microsoft Jim Gray Labs)


16:00 - 16:30 Coffee Break / Poster Session


16:30 - 17:40 Session 4: New Architectures & Cloud

16:30 - 16:45 SAP: Sponsor Talk

16:45 - 16:57 Uncore your Queries: Towards CPU-less Query Processing. Alexander Baumstark (TU Ilmenau)*; Laurin Martins (TU Ilmenau); Kai-Uwe Sattler (TU Ilmenau).

16:57 - 17:09 De²Dup: Extended Deduplication for Multi-Tenant Databases. Alexander Krause (TU Dresden); Jannis Kowalick (TU Dresden); Johannes Pietrzyk (TU Dresden); Dirk Habich (TU Dresden)*; Wolfgang Lehner (TU Dresden).

17:09 - 17:21 Insert-Optimized Implementation of Streaming Data Sketches. Pascal Pfeil (Amazon Web Services)*; Dominik Horn (Amazon Web Services); Orestis Polychroniou (Amazon Web Services); George Erickson (Amazon Web Services); Zhe Heng Eng (Amazon Web Services); Mengchu Cai (Amazon Web Services); Tim Kraska (Amazon Web Services).

17:21 - 17:33 An Analysis of AWS Nitro Enclaves for Database Workloads. Adrian Lutsch (TU Darmstadt)*; Christian Franck (TU Darmstadt); Muhammad El-Hindi (TU Darmstadt); Zsolt István (TU Darmstadt); Carsten Binnig (TU Darmstadt).

17:33 - 17:40 Closing by Workshop Organizers.


17:40 - 18:00 Poster Session


Keynote & Invited Talks

Keynote Talk

Title Coming Soon

Thomas Neumann, TU Munich

Abstract: Coming soon

About the Speaker: Coming soon.

Fresh Thinking Talk

The Power of an Instruction

Michal Friedman, ETH Zurich

Portrait of Michal Friedman

Abstract: Over the past few decades, the evolution of instructions, serving as the primary interface exposed by new hardware architectures, has significantly influenced how we write and optimize software. These instructions have even become so deeply embedded in libraries and abstractions that they are often invisible to the typical programmer.

Yet, selecting and leveraging the right instructions can have a substantial impact on both performance and correctness of the systems built on top of them. In this talk, I will explore the influence of modern instruction set extensions such as AVX for vectorized computation, CLFLUSH for cache management, and PREFETCH for reducing memory latency, and how they shape algorithm design, data layout and system behaviour.

Furthermore, I will outline potential advancements, including mechanisms such as eliminating redundant cache flushes and introducing multi-word compare-and-swap operations. We will explore how such extensions could simplify synchronization and reduce overhead, particularly in disaggregated and high-concurrency environments. These proposals aim to push the boundaries of what low-level primitives can offer, demonstrating how the smallest units of computation can shape how we design and reason about modern systems.
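The instruction-level techniques the abstract mentions can be made concrete in a few lines of C. The sketch below is illustrative only and not material from the talk: it uses the SSE/SSE2 intrinsics `_mm_prefetch`, `_mm_clflush`, and `_mm_sfence` (baseline on x86-64) to show software prefetching during a scan and explicit cache-line flushing of the kind persistent-memory code relies on; the function names are our own.

```c
#include <emmintrin.h>  /* _mm_clflush (SSE2, baseline on x86-64) */
#include <xmmintrin.h>  /* _mm_prefetch, _mm_sfence (SSE) */
#include <stddef.h>
#include <stdint.h>

/* Scan an array, issuing a software prefetch one cache line
 * (16 four-byte ints) ahead of the current position. Prefetch is
 * only a hint, so touching an address past the end is harmless. */
int64_t sum_with_prefetch(const int32_t *a, size_t n) {
    int64_t total = 0;
    for (size_t i = 0; i < n; ++i) {
        if (i % 16 == 0)
            _mm_prefetch((const char *)(a + i + 16), _MM_HINT_T0);
        total += a[i];
    }
    return total;
}

/* Store a value, then explicitly evict its cache line and fence:
 * the CLFLUSH-style discipline persistent-memory code depends on. */
void store_and_flush(int32_t *p, int32_t v) {
    *p = v;
    _mm_clflush(p);
    _mm_sfence();  /* order the flush before subsequent stores */
}
```

Whether the prefetch actually helps depends on the access pattern and the hardware prefetcher; that such choices, buried below most programming abstractions, nevertheless shape system behaviour is exactly the talk's point.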

About the Speaker: Michal Friedman is an Assistant Professor in the Systems Group of the Department of Computer Science at ETH Zurich. Her research interests include systems, concurrent computing, programming languages, and sustainable computing. Her work focuses on designing system fundamentals, across software and hardware, that improve the performance and efficiency of next-generation computing platforms and emerging technologies while guaranteeing correctness. Prior to that, she was a postdoctoral researcher in the Systems Group. She completed her Ph.D. in Computer Science at the Technion, generously supported by the Azrieli Foundation Fellowship, and earned her B.Sc. summa cum laude, also from the Technion's Computer Science Department.

Industry Talk

Title Coming Soon

Carlo Curino, Microsoft Jim Gray Labs

Abstract: Coming soon

About the Speaker: Coming soon.

Improving the Performance of the Vector Engine Index in SAP HANA

SAP

Abstract: SAP HANA Cloud features a Vector Engine used in Retrieval-Augmented Generation (RAG) to improve the response quality of AI-powered business applications with external knowledge. We share the results of a micro-architectural analysis of index construction time and show how, based on this analysis, tuning the code significantly improved index construction performance. Key to these improvements were NUMA optimizations and harnessing the memory prefetcher of modern Intel CPUs.

About the Speaker: Coming soon.

Accepted Papers

Program Committee

Chairs

Portrait of Carsten Binnig

Carsten Binnig

TU Darmstadt, Germany
carsten.binnig@cs.tu-darmstadt.de

Portrait of Eric Sedlar

Eric Sedlar

Oracle Labs
eric.sedlar@oracle.com

Members

  • Anastasia Ailamaki, EPFL, Switzerland
  • Yannis Chronis, Google, USA
  • Muhammad El-Hindi, Technische Universität Darmstadt, Germany
  • Jana Giceva, Technische Universität München, Germany
  • Norman May, SAP SE, Germany
  • Beng Chin Ooi, National University of Singapore, Singapore
  • Orestis Polychroniou, Amazon Web Services, USA
  • Danica Porobic, Oracle, Switzerland
  • Tilmann Rabl, Hasso Plattner Institute / University of Potsdam, Germany
  • Kenneth Ross, Columbia University, USA
  • Rathijit Sen, Microsoft, USA
  • Utku Sirin, Harvard University, USA
  • Rebecca Taft, Cockroach Labs, USA
  • Pinar Tozun, IT University of Copenhagen, Denmark
  • Tianzheng Wang, Simon Fraser University, Canada
  • Huanchen Zhang, Tsinghua University, China
  • Tobias Ziegler, Technische Universität München, Germany

Steering Committee

Portrait of Anastasia Ailamaki

Anastasia Ailamaki

EPFL, Switzerland
anastasia.ailamaki@epfl.ch

Portrait of Peter Boncz

Peter Boncz

CWI, Netherlands
boncz@cwi.nl

Portrait of Stefan Manegold

Stefan Manegold

CWI, Netherlands
stefan.manegold@cwi.nl

Portrait of Ken Ross

Ken Ross

Columbia University, USA
kar@cs.columbia.edu