Timeline of Invited Talks at DaMoN


Data Management Challenges on New Computer Architectures
Doug Carmean (Intel)


Information Management and System/Storage Technology — Evolution or Revolution?
The role and design of systems responsible for the management of information in the enterprise is changing. The kind of information that is being managed is changing, as is the way the information is analyzed and made available to users in the enterprise.
Berni Schiefer (IBM)
Berni Schiefer is a DB2 Distinguished Engineer at IBM. He has responsibility for DB2 performance benchmarking and solutions development, including the BCU. He joined the IBM Toronto Lab in 1985 and worked on SQL/DS and, at the IBM Almaden Research Lab, the Starburst experimental relational database prior to working on DB2. His current focus is on introducing advanced technology into DB2, with particular emphasis on processors, performance, XML, Linux, virtualization and autonomics.


How Do DBMS Take Advantage of Future Computer Systems?
Historically, CMOS scaling provided a certain level of performance enhancement automatically. However, that “free” performance enhancement from device scaling will come to an end, even though CMOS scaling will continue for several more generations. Multi-core has been one architectural feature to improve chip-level performance. Partially because of the power dissipation limit, each core of a multi-core chip becomes simpler and smaller and offers weaker single-thread performance. In this talk, we will explain how to avoid potential performance bottlenecks when running typical DBMS software on a massive multi-core chip. For a high-end transaction system, the main memory cost is easily several times the CPU cost; the storage cost is even higher than the main memory cost. We will examine how potential future memory technologies (such as phase-change memory) may impact computer system architecture. A new class of high-volume transaction systems is emerging. Each transaction is relatively simple, but the potential revenue per transaction may be very low. Thus, transaction systems designed for banking-like applications may not be suitable for this new type of application. We will describe the problem and encourage researchers and practitioners to come up with cost-effective solutions.
Honesty Young (IBM)
Dr. Honesty Young earned his Ph.D. in Computer Science from the University of Wisconsin-Madison. Currently he is the Deputy Director and the CTO of IBM China Research Lab. He helped build the first parallel database prototype inside IBM. He led an effort that achieved leadership TPC database benchmark results. He has initiated and managed projects in storage appliances and controllers. He spent a year at IBM Research Division Headquarters as a technical staff member. Dr. Young has published more than 40 journal and conference papers, including one best paper and one invited paper. He was the Industrial Program Chair of the Parallel and Distributed Information Systems (PDIS) conference, taught two tutorials at key conferences, and served on the program committees of eight conferences. He is an IBM Master Inventor.


Amorphous Data Parallelism
Client-side applications running on multicore processors are likely to be irregular programs that deal with complex, pointer-based data structures such as graphs and trees. In her 2007 Turing Award lecture, Fran Allen raised an important question about such programs: do irregular programs have data parallelism, and if so, how do we exploit it on multicore processors? In this talk, we argue using concrete examples that irregular programs have an amorphous data-parallelism that arises from the use of iterative algorithms that manipulate worklists of various sorts. We then describe the approach taken in the Galois project to exploit this parallelism. There are three main aspects to the Galois system: (1) a small number of syntactic constructs for packaging amorphous data-parallelism as iterations over ordered and unordered sets, (2) assertions about methods in class libraries, and (3) a runtime system for managing the exploitation of amorphous data-parallelism. We present experimental results that demonstrate that the Galois approach is practical, and discuss ongoing work on this system.
Keshav Pingali (University of Texas at Austin)
Keshav Pingali is the W.A. "Tex" Moncrief Chair of Computing in the Computer Sciences department at the University of Texas at Austin. He received the B.Tech. degree in Electrical Engineering from IIT Kanpur, India in 1978, the S.M. and E.E. degrees from MIT in 1983, and the Sc.D. degree from MIT in 1986. He was on the faculty of the Department of Computer Science at Cornell University from 1986 to 2006, where he held the India Chair of Computer Science. Pingali's research has focused on programming languages and compiler technology for program understanding, restructuring, and optimization. His group is known for its contributions to memory-hierarchy optimization; some of these have been patented. Algorithms and tools developed by his projects are used in many commercial products such as Intel's IA-64 compiler, SGI's MIPSPro compiler, and HP's PA-RISC compiler. In his current research, he is investigating optimistic parallelization techniques for multicore processors, and language-based fault tolerance. Among other awards, Pingali has won the President's Gold Medal at IIT Kanpur (1978), the IBM Faculty Development Award (1986-88), the NSF Presidential Young Investigator Award (1989-94), the Ip-Lee Teaching Award of the College of Engineering at Cornell (1997), and the Russell Teaching Award of the College of Arts and Sciences at Cornell (1998). In 2000, he was a visiting professor at IIT Kanpur, where he held the Rama Rao Chaired Professorship. Since 2007, he has been the co-Editor-in-Chief of the ACM Transactions on Programming Languages and Systems (TOPLAS).
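The worklist pattern described in the abstract can be sketched in a few lines of C++. The sketch below is a sequential stand-in, not Galois itself: the "activity" (a shortest-path edge relaxation) is a hypothetical example chosen for concreteness, and Galois would instead iterate over an unordered set, executing pending activities speculatively in parallel and rolling back those that conflict.

```cpp
#include <cassert>
#include <deque>
#include <limits>
#include <vector>

// Amorphous data-parallelism in miniature: an unordered worklist of
// pending activities, where executing one activity may create new ones.
struct Edge { int to; int weight; };

std::vector<int> worklist_sssp(const std::vector<std::vector<Edge>>& graph,
                               int source) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(graph.size(), INF);
    dist[source] = 0;

    std::deque<int> worklist{source};          // pending activities
    while (!worklist.empty()) {
        int u = worklist.front();              // pick any activity; order
        worklist.pop_front();                  // does not affect the result
        for (const Edge& e : graph[u]) {       // each activity touches only a
            if (dist[u] + e.weight < dist[e.to]) {  // local graph neighborhood
                dist[e.to] = dist[u] + e.weight;
                worklist.push_back(e.to);      // relaxation spawns new work
            }
        }
    }
    return dist;
}
```

Because activities touching disjoint neighborhoods commute, a runtime like Galois can run many of them optimistically at once; the sequential loop above is the semantics it must preserve.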


Sweet Sixteen: How Well is Transactional Memory Aging?
The term “Transactional Memory” was coined back in 1993, but even today there is a vigorous debate about its merits. This debate sometimes generates more heat than light: terms are not always well-defined and criteria for making judgments are not always clear. In this talk, I will try to impose some order on the conversation. TM itself can encompass hardware, software, speculative lock elision, and other mechanisms. The benefits sought encompass simpler implementations of highly-concurrent data structures, better software engineering for concurrent platforms, enhanced performance, and reduced power consumption. We will look at various terms in this cross-product and evaluate how we are doing. So far.
Maurice Herlihy (Brown University)
Maurice Herlihy has an A.B. in Mathematics from Harvard University and a Ph.D. in Computer Science from M.I.T. He has served on the faculty of Carnegie Mellon University and the staff of DEC Cambridge Research Lab. He is the recipient of the 2003 Dijkstra Prize in Distributed Computing, the 2004 Gödel Prize in theoretical computer science, the 2008 ISCA Influential Paper Award, the 2012 Edsger W. Dijkstra Prize, and the 2013 Wallace McDowell Award. He received a 2012 Fulbright Distinguished Chair in the Natural Sciences and Engineering Lecturing Fellowship, and he is a fellow of the ACM, the National Academy of Inventors, the National Academy of Engineering, and the American Academy of Arts and Sciences.


Trends in Storage Technologies
This presentation highlights some of the leading-edge topics in storage technology research today. The main focus will be online storage and specifically solid-state-based storage-class memory (SCM), which has the potential of revolutionizing architectures for data storage. In this context, we will review the application of Flash to enterprise storage and discuss high-potential follow-on technologies in this space and their implications for the memory/storage hierarchy. Moreover, we will also look at the other extreme, namely storage technologies for archival data, and discuss how the explosive growth of such data and the subsequent ultra-high capacity requirements affect incumbent technologies such as magnetic tape and optical archives. Finally, we'll touch upon the impact of these disruptive technologies on tiered storage.
Dr. Evangelos Eleftheriou (IBM)
Dr. Eleftheriou received a B.S. degree in electrical engineering from the University of Patras, Greece, in 1979, and the M.Eng. and Ph.D. degrees in electrical engineering from Carleton University, Ottawa, Canada, in 1981 and 1985, respectively. In 1986, he joined IBM Research – Zurich, where he currently manages the Storage Technologies Department, which focuses on phase-change memories, scanning-probe techniques and metrology, solid-state drive technology and systems, as well as tape drive technology. In January 2002, Dr. Eleftheriou was elected a Fellow of the IEEE. He was co-recipient of the 2003 IEEE Leonard G. Abraham Prize Paper Award and co-recipient of the Eduard Rhein Technology Award in 2005. The same year, he became an IBM Fellow and was inducted into the IBM Academy of Technology. In 2009, he was co-recipient of the IEEE Transactions on Control Systems Technology Outstanding Paper Award and the IEEE Control Systems Technology Award.


The Coming Revolution in Data-Centric Data Centers
We are entering an exciting era for systems design. Digital information is increasing at exponential rates and new applications are being developed to extract fresh insights from all this data. At the same time, we are seeing interesting inflection points in technology: faster, more complex processors are being replaced by simpler, power-efficient multicores; traditional memory and storage technologies are being challenged by new non-volatile memories like phase-change RAM and memristors; optics is replacing electrical communications. Consequently, traditional approaches to designing "better, faster, cheaper" systems will need to be (and are being) rethought, at both the hardware and software levels. In this talk, I will discuss these recent trends, their implications for future hardware redesigns, and the immense opportunities ahead for new solutions that cross-cut the technology, architecture, and software layers.
Parthasarathy Ranganathan (HP)
Partha Ranganathan is a distinguished technologist at Hewlett Packard Labs, where he currently leads a research program on future data-centric data centers. His research interests are in energy efficiency and systems architecture and modeling. He has worked extensively in these areas, including key contributions around energy-aware user interfaces, heterogeneous multi-core processors, power capping and federated enterprise power management, energy modeling and benchmarking, disaggregated blade server architectures, and most recently, storage hierarchy and systems redesign for non-volatile memory. Dr. Ranganathan's work has led to several commercial products and has been featured in the New York Times, Wall Street Journal, Slashdot, and several other venues. He was named one of the world's top young innovators by MIT Technology Review, and has received Rice University's Outstanding Young Engineering Alumni award. Dr. Ranganathan received his B.Tech degree from the Indian Institute of Technology, Madras and his M.S. and Ph.D. from Rice University, Houston.


Redrawing the Boundary Between Software and Storage for Fast Non-Volatile Memories
The emerging technology of fast non-volatile memories (NVMs) such as Phase Change Memory (PCM) and Spin-Torque Transfer Magnetic RAM (STT-RAM) promises to fill the gap between main memory and block-oriented storage, which has existed for over three decades. However, the performance characteristics and access mechanisms of NVMs differ significantly from those of traditional DRAM and storage devices, necessitating the redesign of software and storage systems to fully exploit their potential. In this talk, I will present an overview of our recent work on rethinking the boundary between software and storage systems to take advantage of fast NVMs. I will discuss new storage architectures, algorithms, and data structures that enable efficient and scalable use of NVMs in a variety of computing environments.
Steven Swanson (University of California San Diego)
Steven Swanson is an Associate Professor in the Department of Computer Science and Engineering at the University of California, San Diego and the director of the Non-Volatile Systems Laboratory. His research interests include the systems, architecture, security, and reliability issues surrounding non-volatile, solid-state memories. He also co-leads the Science and Engineering of Non-Volatile Systems (SENS) center, a major research collaboration between industry and academia that is exploring the fundamental physics, materials, and designs underlying tomorrow's non-volatile memories. He is also a founder of the Non-Volatile Memories Workshop, the premier venue for research in these areas. He received his PhD from the University of Washington in 2006 and was a founder of the Center for Research in Emerging Technologies, a joint research center between the University of Washington and Microsoft Research. He is the recipient of a Sloan Fellowship, a Microsoft Research New Faculty Fellowship, an NSF CAREER Award, and a selection as one of Scientific American's 50 world-changing researchers.


Netezza Performance Architecture
In this talk I will explore hardware and software co-design as a strategy for developing novel, high-value data management products on new hardware, through the lens of the early history of Netezza Corporation. In a little over ten years, Netezza went from a standing start to a successful public offering and then a $1.7B acquisition by IBM. By combining field-programmable gate arrays, loosely coupled multi-processing based on low-power CPUs, and data structures specialized for analytical query processing with low demand for administration and database tuning on the part of the user, Netezza successfully disrupted the market for data warehousing solutions. The key to this success was a novel combination of hardware and software that yielded market-leading price-performance and market-leading ease-of-use. Both of these characteristics were directly tied to key technical decisions that re-imagined both database management software and hardware, a process referred to here as 'deep co-design'.
Daniel J. Feldman (Netezza)
Daniel J. "Dan" Feldman is a Senior Fellow and the former Chief Scientist at Netezza, where he worked on the design and optimization of data warehouse appliance systems. He has over thirty years of experience in computer systems architecture and performance analysis, and has worked on a wide range of systems including large-scale multiprocessors, network processors, and high-performance network systems. Before joining Netezza, he was a member of the research staff at the IBM Thomas J. Watson Research Center, where he worked on high-performance networking, network processors, and related topics. He has published over sixty technical papers and holds over twenty patents in these areas. He is a Senior Member of the IEEE, and received his Ph.D. in Electrical Engineering from Stanford University.


The Picosecond is Dead; Long Live the Picojoule
For decades, CMOS technology provided exponential improvements in transistor density and energy consumption. However, power constraints have become the primary challenge in scaling transistor performance, leading to the end of Dennard scaling. As a result, computer architects are increasingly focusing on energy efficiency rather than raw performance as the primary design goal. This talk will explore the implications of this shift for computer architecture, focusing on energy-efficient architectures, near-threshold computing, and the co-design of hardware and software for energy efficiency.
Christos Kozyrakis (Stanford University)
Christos Kozyrakis is an Associate Professor of Electrical Engineering and Computer Science at Stanford University. His research interests are in computer architecture, systems, and energy-efficient computing. He has published more than 100 papers in major conferences and journals and is the recipient of an NSF CAREER Award, an IBM Faculty Award, Google Faculty Awards, the 2010 ACM SIGARCH Maurice Wilkes Award, and the 2018 IEEE Computer Society Edward J. McCluskey Technical Achievement Award. He has also received several best-paper awards and nominations at ASPLOS, ISCA, MICRO, and HPCA. He is an ACM Fellow and an IEEE Fellow.


Rethinking Memory System Design for Data-Intensive Computing
The memory system is a fundamental performance and energy bottleneck in almost all computing systems. Recent system design, application, and technology trends that require more capacity, bandwidth, efficiency, and predictability out of the memory system make it an even more important system bottleneck. At the same time, DRAM and flash technologies are experiencing difficult technology scaling challenges that make the maintenance and enhancement of their capacity, energy-efficiency, and reliability significantly more costly with conventional techniques. In this talk, we examine some promising research and design directions to overcome challenges posed by memory scaling. Specifically, we discuss three key solution directions: 1) enabling new memory architectures, functions, interfaces, and better integration of the memory and the rest of the system, 2) designing a memory system that intelligently employs multiple memory technologies and coordinates memory and storage management using non-volatile memory technologies, 3) providing predictable performance and QoS to applications sharing the memory/storage system. If time permits, we might also briefly touch upon our ongoing related work in combating scaling challenges of NAND flash memory.
Onur Mutlu (Carnegie Mellon University)
Onur Mutlu is the Strecker Early Career Professor at Carnegie Mellon University. His broader research interests are in computer architecture and systems, especially in the interactions between languages, system software, compilers, and microarchitecture, with a major current focus on memory systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. Prior to Carnegie Mellon, he worked at Microsoft Research, Intel Corporation, and Advanced Micro Devices. He was a recipient of the IEEE Computer Society Young Computer Architect Award, the Intel Early Career Faculty Award, faculty partnership awards from various companies, including Facebook, Google, HP, Intel, IBM, Microsoft and Samsung, a number of best paper recognitions at various computer systems venues, and a number of "computer architecture top pick" paper selections by IEEE Micro magazine. For more information, please see his webpage.


Looking Beyond Exaflops and Zettabytes
We are seeing an unprecedented convergence of massive compute with massive data. This confluence has the potential to significantly impact how we do computing and what computing can do for us. In this talk I will discuss some of the application-level opportunities and system-level data management challenges at the intersection of traditional high-performance computing and emerging data-intensive computing.
Pradeep Dubey (Intel)
Pradeep Dubey is an Intel Fellow and Director of the Parallel Computing Lab (PCL), part of Intel Labs. His research focus is computer architectures to efficiently handle new compute- and data-intensive application paradigms for the future computing environment. Dubey previously worked at IBM's T.J. Watson Research Center and Broadcom Corporation. He has made contributions to the design, architecture, and application performance of various microprocessors, including IBM PowerPC, Intel i386, i486, Pentium, Xeon, and the Xeon Phi line of processors. He holds over 36 patents, has published over 100 technical papers, won the Intel Achievement Award in 2012 for Breakthrough Parallel Computing Research, and was honored with the Outstanding Electrical and Computer Engineer Award from Purdue University in 2014.


Dr. Eng Lim Goh (HPE)
Dr. Eng Lim Goh joined SGI in 1989, becoming a chief engineer in 1998 and then chief technology officer in 2000. After the acquisition, HPE appointed him vice president and SGI chief technology officer. He oversees technical computing programs with the goal of developing the next-generation computer architecture for the new many-core era. His current research interest is in the progression from data-intensive computing to analytics, machine learning, artificial specific-to-general intelligence, and autonomous systems. He continues his studies in human perception for user interfaces and virtual and augmented reality.

The Latest Advances in GPU Architectures and New Programming Model Features
GPU architectures are approaching a terabyte per second of memory bandwidth which, coupled with high-throughput computational cores, creates an ideal device for data-intensive tasks. We'll discuss GPU accelerator fundamentals as well as best practices for developing applications for modern GPU architectures. Both the architecture and the programming model have evolved over the past few years to help developers achieve high performance more quickly and with less effort. New features, such as Unified Memory, have been introduced to simplify development on heterogeneous architectures and provide seamless processing of large out-of-core data workloads. New libraries, such as nvGraph, make it possible to build interactive, high-throughput graph and data analytics applications. An overview of existing tools and libraries will be covered to help you get started with GPU programming.
Nikolay Sakharnykh (Nvidia)
Nikolay Sakharnykh is a senior developer technology engineer at NVIDIA, where he works on accelerating HPC and data analytics applications on GPUs. He joined NVIDIA in 2008 as a graphics engineer, working on making video games run faster and enabling new visual effects. At the same time, CUDA started to pick up, and he got excited about the general compute capabilities of GPUs. After a few years, his professional interests shifted towards more serious applications in HPC. Now he's exploring GPU applications for graph and data analytics and new memory management techniques.


How Persistent Memory Changes the Server Environment
New memory technologies bring with them an explosion in memory capacities, offering multiple terabytes per CPU socket. But more than that, this new, large-capacity memory is persistent! Andy will describe how this technology changes the server environment seen in data centers and clouds. He will explain the value of persistent memory, what it means to applications such as databases, and summarize what application vendors are doing to prepare for it. Andy will describe the work done by SNIA, the Storage Networking Industry Association, to align the industry on a unified programming model for persistent memory. He'll show libraries and applications that have built on that model and describe the value they've demonstrated.
Andy Rudoff (Intel)
Andy Rudoff is a Senior Principal Engineer at Intel Corporation, focusing on non-volatile memory programming. He is a contributor to the SNIA NVM Programming Technical Work Group. His more than 30 years of industry experience includes design and development work in operating systems, file systems, networking, and fault management at companies large and small, including Sun Microsystems and VMware. Andy has taught various operating systems classes over the years and is a co-author of the popular UNIX Network Programming textbook.
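The SNIA programming model Andy describes boils down to: memory-map a persistent-memory file, update it with ordinary loads and stores, then explicitly make the stores durable. The sketch below approximates this on any POSIX system, using an ordinary file with mmap() and msync() as a stand-in; on a real DAX-mapped persistent-memory file, PMDK's pmem_persist() (a user-space cache-line flush plus fence) would replace the msync() call. The file path and function name are illustrative, not from the talk.

```cpp
#include <cassert>
#include <cstring>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// Map a file, write a record via plain stores, then flush it to
// durable media. With persistent memory the map is direct (no page
// cache) and the flush is a CPU cache-line flush, not a syscall.
bool persist_record(const char* path, const char* record, size_t len) {
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) return false;
    if (ftruncate(fd, static_cast<off_t>(len)) != 0) { close(fd); return false; }

    void* base = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { close(fd); return false; }

    std::memcpy(base, record, len);            // update via ordinary stores
    bool ok = msync(base, len, MS_SYNC) == 0;  // make the stores durable

    munmap(base, len);
    close(fd);
    return ok;
}
```

The key point of the model is the last two steps: stores alone do not guarantee durability, so applications (or libraries built on the model, such as PMDK's libpmemobj) must insert explicit flush and ordering points.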

Active Heterogeneous Hardware and its Impact on System Design
The rise of hardware heterogeneity and the potential to offload compute closer to data (e.g., storage and memory) or to push operations down to where data moves (e.g., onto the network, or via acceleration within the chip) opens both exciting opportunities and significant challenges for system software like databases that wants to make efficient use of future hardware. One of the main questions is then: who absorbs that complexity, especially as we move to the “noisy” cloud? In my talk, I will argue that addressing such a challenge requires an effort that is beyond what can typically be done within a single layer of the system stack. My proposal calls for a holistic approach by opening up the interfaces and customising the system stack for modern data processing workloads.
Jana Giceva (Imperial College)
Jana Giceva is an assistant professor in the Department of Computing at Imperial College London, where she is part of the LSDS (Large-Scale Data and Systems) group. Prior to that, she completed her MSc and PhD in the Systems Group at ETH Zurich, where she was advised by Gustavo Alonso and co-advised by Timothy Roscoe. Her research interests are in systems support for big data and data science to enable efficient use of modern and future hardware. The scope of her research spans multiple systems areas: from the data processing layer to operating systems, including hardware accelerators for data processing. She is the recipient of the ETH medal for her PhD dissertation, awarded in 2017, and the Google European PhD Fellowship in operating systems in 2014.

Scaling Database Systems to High-performance Computers
Analyzing massive datasets quickly requires scaling foundational data processing algorithms to the unprecedented compute, network and I/O concurrency of a modern datacenter. However, the software building blocks that are readily available today have largely been designed for high-performance computing applications and are profoundly unsatisfactory for I/O-intensive analytics. This talk highlights specific research challenges that need to be overcome to scale data processing to warehouse-scale computers, with particular focus on how to better utilize RDMA-capable networks, non-uniform network topologies, massively parallel file systems and NVMe-based storage in a disaggregated datacenter.
Spyros Blanas (Ohio State University)
Spyros Blanas is an assistant professor in the Department of Computer Science and Engineering at The Ohio State University. His research interest is high-performance database systems, and his current goal is to build a database system for high-end computing facilities. He has received the IEEE TCDE Rising Star Award and a Google Research Faculty award. He received his Ph.D. at the University of Wisconsin-Madison, and part of his Ph.D. dissertation was commercialized in Microsoft's flagship data management product, SQL Server, as the Hekaton in-memory transaction processing engine.

Designing Data Management Systems in the Age of Dark Silicon
Dennard scaling, which enabled keeping the power density of transistors constant, does not hold anymore. Even though we will be able to keep packing more cores into processors, we won't be able to power all of them up simultaneously. This trend is referred to as dark silicon, and it fundamentally alters the focus of hardware design. In this new era, the focus needs to shift toward optimizing energy per instruction. This talk focuses on the implications of dark silicon and emerging hardware for the design of data management systems.
Pınar Tözün (IT University of Copenhagen)
Pınar Tözün is an Associate Professor at the IT University of Copenhagen. Before ITU, she was a research staff member at IBM Almaden Research Center. Prior to joining IBM, she received her PhD from EPFL. Her research focuses on HTAP engines, performance characterization of database workloads, and scalability and efficiency of data management systems on modern hardware. She received a Jim Gray Doctoral Dissertation Award Honorable Mention in 2016.


Performance Scaling with Innovative Compute Architectures and FPGAs
Performance scaling and power efficiency with traditional computing architectures become increasingly challenging as next-generation technology nodes provide diminishing performance and energy benefits. FPGAs, with their reconfigurable circuits, can tailor the hardware to the application through customized arithmetic and innovative compute and memory architectures, thereby exposing further potential for performance scaling. This has stimulated significant interest in their exploitation for compute-intensive applications. During this talk, we discuss some examples of these innovative customized compute architectures in the context of data processing and show how they unleash new levels of performance scalability and compute efficiency.
Michaela Blott (Xilinx)
Michaela Blott is a Distinguished Engineer at Xilinx Research, where she heads a team of international scientists driving research into new application domains for Xilinx devices, such as machine learning, in both embedded and hyperscale deployments. She graduated from the University of Kaiserslautern in Germany and brings over 25 years of experience in computer architecture, FPGA and board design, working in both research institutions (ETH Zurich and Bell Labs) as well as development organizations.

Dark Silicon — a Currency We Do Not Control
The breakdown of Dennard scaling changed the game of processor design: no longer can the entire die be filled with "always-on" components - some regions must be powered up and down at runtime to prevent the chip from overheating. Such "dim" or "dark" silicon is the new currency of chip design, raising the question: what functionality should be implemented in dark silicon? Viable candidates are any non-essential units that support important applications. Naturally, database researchers were quick to claim this resource, arguing that it should be used to implement instructions and primitives supporting database workloads. In this talk, we argue that, due to economic constraints, such a design is unlikely to be implemented in mainstream server chips. Instead, chip designers will spend silicon on high-volume market segments such as AI, security or graphics/AR, which require a different set of primitives. Consequently, database researchers need to find uses for the actual functionality of chips rather than wishing for features that are economically infeasible. Let us develop innovative ways to exploit the "hardware we have, not the hardware we wish to have at a later time". In the talk, we discuss examples of creative use of hardware for data management purposes, such as TLBs for MVCC, transactional memory for statistics collection and hardware graphics shaders for data analytics. We also highlight some processor functionality that still calls for creative use, such as many floating point instructions, integrated sound processors and some of the model-specific registers.
Holger Pirk (Imperial College)
Holger Pirk is an assistant professor ("Lecturer" in traditional English terms) in the Department of Computing at Imperial College London. As such, he is a member of the Large-Scale Data and Systems Group. He is interested in all things data: analytics, transactions, systems, algorithms, data structures, processing models and everything in between. While most of his work targets "traditional" relational databases, his declared goal is to broaden the applicability of data management techniques. This means targeting new platforms like GPUs or FPGAs but also new applications like compilers, games and AI. Before joining Imperial, Holger was a postdoc in the Database group at MIT CSAIL. He spent his PhD years in the Database Architectures group at CWI in Amsterdam, resulting in a PhD from the University of Amsterdam in 2015.

Building Real Database Systems on Real Persistent Memory“Real” persistent memory, such as Intel Optane DC PMM, offers high density, persistence and speed in between flash and DRAM. This changes the way we deal with storage devices in database systems - it is byte-addressable like memory, yet it is also persistent. Systems researchers have been keen in exploring its use since more than 10 years ago, to build persistent indexes, new file systems, persistent queues, faster logging and better replication approaches. Yet almost all previous work had to be done in simulated environments. Now it is time to look back, rethink, and devise practical, innovative ways of exploiting real persistent memory in database systems. In this talk, we discuss our recent experience with real Optane DC PMMs and the implications and future roles of persistent memory in database systems. In particular, we highlight the challenges and issues that were not well understood in simulated environments, such as programming model and resource contention between DRAM and persistent memory.
Tianzheng Wang (Simon Fraser University)
Tianzheng Wang is an assistant professor in the School of Computing Science at Simon Fraser University in Canada (since Fall 2018). He works on the boundary between software and hardware to build better systems by fully utilizing the underlying hardware. His current research focuses on database systems and related systems areas that impact the design of database systems, such as operating systems, distributed systems, and synchronization. He is also interested in storage, mobile, and embedded systems. Tianzheng Wang received his Ph.D. in computer science from the University of Toronto in 2017, advised by Ryan Johnson and Angela Demke Brown. Prior to joining Simon Fraser University, he spent one year (2017-2018) at Huawei Canada Research Centre (Toronto) as a research engineer.
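The programming-model challenge the abstract mentions boils down to a new discipline: stores to persistent memory are byte-addressable, but they are not durable until the relevant cache lines are explicitly flushed and fenced. The following sketch is an analogy only, assuming nothing from the talk: on real Optane DC PMM the persist step would be a cache-line flush plus fence (e.g. `pmem_persist()` in Intel's PMDK `libpmem`); here a memory-mapped file and `flush` stand in for it.

```python
import mmap
import os
import struct
import tempfile

PAGE = 4096  # map one page as our stand-in for a persistent region


def open_pmem_like(path: str) -> mmap.mmap:
    """Map a file as a byte-addressable, persistently backed region."""
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    os.ftruncate(fd, PAGE)
    region = mmap.mmap(fd, PAGE)
    os.close(fd)  # the mapping stays valid after closing the fd
    return region


def durable_store(region: mmap.mmap, offset: int, value: int) -> None:
    # Byte-addressable update, just like an ordinary store to PMem...
    region[offset:offset + 8] = struct.pack("<q", value)
    # ...but it only becomes durable after an explicit persist barrier
    # (CLWB + fence on real hardware; msync-backed flush here).
    region.flush(0, PAGE)


path = os.path.join(tempfile.mkdtemp(), "pmem.img")
region = open_pmem_like(path)
durable_store(region, 0, 42)
print(struct.unpack("<q", region[0:8])[0])  # -> 42
```

Forgetting the flush/fence step is exactly the class of bug that simulated environments tended to hide, which is one reason experience on real devices matters.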


A Vision for Expandable Data Management Infrastructure and Acceleration with Heterogeneous Configurable Systems
Coherently attached FPGAs will unlock the full potential of their configurable fabric by enabling expandable memory and bringing together the four components of a data management system - storage, memory, compute, and network. In this configuration, the CPU can access the memory or storage attached to the FPGA coherently, while the reconfigurable fabric supports look-aside and inline acceleration on the CPU/storage, CPU/memory, CPU/network, and network/storage paths. Computational storage as well as computational memory will be facilitated by the same fabric that resides at the core of these heterogeneous compute systems. In this talk, we will present a vision of how heterogeneous compute systems centered around FPGAs can help with TCO, performance, and power density, and the use cases that support such a vision.
José Roberto Alvarez (Intel)
José Roberto Alvarez is a Senior Director at the Intel Programmable Solutions Group in San Jose, California, where he leads the Technology and Innovation CTO Office, defining and implementing long-term FPGA research strategy and roadmaps. He started his career at Philips Laboratories, and throughout his career he has been deeply engaged in architecting, designing, and implementing technology products for a variety of industries, including broadcast, embedded, consumer, post-production, and computer graphics, for companies including Philips, S3, Broadcom, Maxim, Xilinx, and four successful start-ups in Silicon Valley.


Introduction to the Arm Neoverse N and V Series: Cloud-to-Edge Infrastructure SoCs
In this talk I will present the Neoverse IP roadmap, detailing some of the characteristics of our IPs that make them a great choice for developing high-performance SoCs, from power-constrained edge appliances all the way up to systems targeting HPC and cloud deployments. Additionally, this talk will touch upon the state of cloud and software applications, and will provide some pointers to users who want to extract more performance and value from Arm Neoverse instances that are now easily accessible in various cloud environments.
Andrea Pellegrini (Arm)
Andrea Pellegrini leads the performance and workloads team for the Infrastructure Line of Business at Arm. Andrea is based in Austin, TX, USA and joined Arm to work on Arm servers in 2016, after spending 3 years at Intel, where he was an architect for IO virtualization. Andrea obtained a PhD in computer architecture from the University of Michigan, Ann Arbor, and holds Master's and Bachelor's degrees in computer engineering from the Università di Bologna, Italy.

Extend, Not Just Accelerate!
The hardware in today's datacenters and clouds is changing at a dizzying pace. Heterogeneity, accelerators, and disaggregated architectures are becoming commonplace and change the way we design and operate databases. It has never been so easy to add an accelerator to a database but, in this talk, I will make the case that there is an alternative approach to be considered. Instead of building yet another analytics accelerator, we should use specialized hardware to offer new functionality in databases; functionality that makes them more secure, private, and reliable! By using specialized hardware, the cost of such new functionality could be hidden, making future databases just as fast as today's while offering added benefits.
Zsolt István (IT University of Copenhagen)
Zsolt István is an Associate Professor at the IT University of Copenhagen. Before that, he was an Assistant Research Professor at the IMDEA Software Institute in Madrid. Zsolt works at the intersection of databases, distributed systems, and FPGA programming. He has a PhD in Computer Science from the Systems Group at ETH Zurich, Switzerland.

Cloud-native Databases: Opportunities and Challenges
Organizations are moving their databases to the cloud due to lower cost, elastic resource allocation, availability, etc. The cloud brings unique opportunities and challenges that are unprecedented in conventional databases, requiring us to revisit the software and hardware stacks of a DBMS to fully exploit its performance and cost potential. This talk focuses on a particular architectural feature of cloud-native databases — storage disaggregation, where computation and storage are independently managed and connected through the network. Disaggregation enables independent scaling of resources, but incurs long IO latency and bandwidth bottlenecks since the storage is remote. I will share our recent papers (choosing cloud-DBMS [VLDB'19], pushdownDB [ICDE'20], and FlexPushdownDB [VLDB'21]) addressing these challenges. I will also share my thoughts on potential research questions and solutions in this domain.
Xiangyao Yu (University of Wisconsin-Madison)
Xiangyao Yu is an Assistant Professor at the University of Wisconsin-Madison. His research interests include (1) transactions and HTAP, (2) new hardware for databases, and (3) cloud-native databases. Before joining UW-Madison, he finished his postdoc (2019) and PhD (2017) at MIT and his bachelor's degree (2012) at Tsinghua University.
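The pushdown idea in the abstract above can be made concrete with a toy sketch (an illustration of the concept only, not the implementation from the pushdownDB or FlexPushdownDB papers): when storage is disaggregated, evaluating the filter inside the storage tier ships only matching rows over the network, instead of fetching the whole table to the compute tier.

```python
# Simulate a disaggregated storage tier that can optionally evaluate a
# predicate before shipping rows to the compute tier over the network.

def storage_scan(table, predicate=None):
    """Scan inside the storage tier; return rows and bytes 'shipped'."""
    rows = [r for r in table if predicate is None or predicate(r)]
    shipped = sum(len(repr(r)) for r in rows)  # crude network-cost proxy
    return rows, shipped


table = [(i, i % 100) for i in range(10_000)]  # (order_id, amount)
is_big = lambda r: r[1] >= 95                  # selective filter (5%)

# No pushdown: ship every row, then filter at the compute tier.
all_rows, bytes_no_push = storage_scan(table)
result_no_push = [r for r in all_rows if is_big(r)]

# Pushdown: evaluate the filter inside the storage tier, ship matches.
result_push, bytes_push = storage_scan(table, is_big)

assert result_no_push == result_push  # same answer, far less movement
print(f"data movement reduced about {bytes_no_push / bytes_push:.0f}x")
```

The trade-off, of course, is that the storage tier now spends CPU cycles on filtering, which is exactly the kind of cost/performance balance the cited work explores.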


In Computer Architecture, We Don't Change the Questions, We Change the Answers
When I was a new professor in the late 1980s, my senior colleague Jim Goodman told me, "On the computer architecture PhD qualifying exam, we don't change the questions, we only change the answers." More generally, I now augment this to say, "In computer architecture, we don't change the questions, application and technology innovations change the answers, and it's our job to recognize those changes." Eternal questions this talk will sample are how best to handle the following interacting factors: compute, memory, storage, interconnect/networking, security, power, cooling, and one more. The talk will not provide the answers but will leave that as an audience exercise. I will dive a little more into compute and memory, as in-progress trends provide both challenges and opportunities for creating tremendous value from (large) data.
Mark D. Hill (Microsoft Azure and University of Wisconsin-Madison)
Mark D. Hill is Partner Hardware Architect with Microsoft Azure (2020-present), where he leads software-hardware pathfinding. He is also the Gene M. Amdahl and John P. Morgridge Professor Emeritus of Computer Sciences at the University of Wisconsin-Madison (https://www.cs.wisc.edu/~markhill/), following his 1988-2020 service in Computer Sciences and Electrical and Computer Engineering.

Accelerating Video Database Systems using Emerging Hardware Technologies
Over the last decade, advances in deep learning have led to a resurgence of interest in automated analysis of videos at scale. This approach poses many challenges, ranging from the high computational overhead associated with deep learning models to the types of queries that the user may ask. In this talk, I will present EVA, an end-to-end video database system that we are developing at Georgia Tech for tackling these challenges using novel query optimization and machine learning techniques. I will then discuss opportunities for the community to help accelerate video database systems using their expertise in leveraging emerging hardware technologies.
Joy Arulraj (Georgia Institute of Technology)
Joy Arulraj is an Assistant Professor of Computer Science at the Georgia Institute of Technology. His research focuses on developing systems for efficiently and effortlessly querying video datasets by synthesizing techniques from data systems and machine learning. His research has been recognized with the IEEE TCDE Early Career Award (2022) and the ACM SIGMOD Jim Gray Doctoral Dissertation Award (2019).

What the Primacy of Economics Means for Hardware and Software
Using several historical examples, I will argue that both hardware and software are downstream from economics. Economics has been called "the dismal science", and harsh economic realities can prevent technological breakthroughs. At the same time, however, economic thinking can also help overcome seemingly inescapable tradeoffs that we face when building software systems. It may also be our only hope for managing the proliferation of complex heterogeneous hardware, in particular in the cloud.
Viktor Leis (Friedrich-Alexander-Universität Erlangen-Nürnberg)
Viktor Leis is a Professor for Data Management at Friedrich-Alexander University Erlangen-Nürnberg, Germany. His research revolves around designing high-performance data management systems and includes core database topics such as query processing, query optimization, index structures, and storage.


Memory: The DaMoN Demon
Memory technology limitations bedevil current computing systems, and data management does not escape them. As silicon scaling delivers ever faster compute, memory falls further behind, exposing capacity, bandwidth, and power deficiencies in our systems. Seeing these issues, computer architects propose memory hierarchy changes, only to find most applications shrink-wrapped to the current hierarchy and unable to change. Data management applications provide an innovation bright spot, with researchers and developers ready to co-optimize from application to hardware to deliver improved performance. Hierarchy improvements often matter here first. Sitting squarely at this confluence, DaMoN serves to engender such co-optimizations. With this in mind, we will set a memory technology baseline using memory silicon trends and constraints. Additionally, we will set a memory system baseline summarizing current system memory architectures and issues. Next, we will look at the undeniable influence AI is exerting on the hierarchy. Taking memory technology, systems, and applications together, we will speculate on the memory hierarchy changes to expect - potentially creating opportunities for future data management applications. One such change is already visible in CXL-enabled memory hierarchies, envisioned to deliver higher capacity and perhaps more. Finally, we will speculate further on system optimizations around memory that are the subject of current research, like memory sharing and near-memory computing. Active audience engagement is encouraged, as the goal of this presentation is a maximally productive DaMoN focused on vanquishing the memory demon.
Frank Hady (Intel)
Frank Hady is an Intel Fellow responsible for memory and storage hierarchy innovation within Intel's Office of the CTO Systems Architecture and Engineering Group. He is a long-time system researcher, happiest when delivering innovations that span hardware and software. Over Frank's three-decade career, he has contributed to the creation, delivery, and proliferation of fundamental systems technologies.

Cost-Intelligent Data Analytics in the Cloud
For decades, database research has focused on optimizing performance under a fixed amount of resources. As more and more database applications move to the public cloud, we argue that it is time to make cost a first-class citizen when solving database optimization problems. In this talk, I will introduce the concept of “cost intelligence” and then sketch the architecture of a cloud data warehouse designed toward this goal. The project is in its early stages, and we would appreciate your valuable feedback.
Huanchen Zhang (Tsinghua University)
Huanchen Zhang is an Assistant Professor in the IIIS (Yao Class) at Tsinghua University. His research interest is in database management systems, particularly indexing, data compression, and cloud databases.


NVMe and Data Systems: A Decade and Counting
NVMe is synonymous with modern storage. It was introduced as a means to efficiently expose Solid-State Drives as PCIe 3.0 peripherals. With NVMe, I/Os were no longer the bottleneck. Initially, the challenge for operating system and database system designers was to accommodate radically faster storage devices. Then, SSDs evolved to meet a range of cost/performance requirements. Accordingly, NVMe 2.0 introduced new transport models, storage models, and cross-layer optimizations. This diversity introduced new challenges. Today, NVMe passthru and Flexible Data Placement enable data systems designers to shape how data is stored, instead of designing their systems around the characteristics of opaque storage devices. Computational storage was supposed to further improve the ability of system designers to specialize storage devices to fit their workloads. However, device memory management became a challenge. We discuss the proposed standard and speculate on the role NVMe may play in future data systems, in a context where CXL emerges, PCIe 7.0 is being standardized, and power consumption is the bottleneck.
Philippe Bonnet (IT University of Copenhagen)
Philippe Bonnet is a professor at the IT University of Copenhagen. He contributed to the uFlip Benchmark, the Linux multiqueue block layer, the Linux framework for Open-Channel SSDs, the OX architecture for computational storage, the xNVMe library, and Delilah, a prototype for eBPF offload on computational storage. Philippe is co-author of the book “Principles of Database and Solid State Drive Co-Design” with Alberto Lerner. He is currently a trustee of the VLDB Endowment and chair of the ACM EIG on Reproducibility and Replicability.

Computer Architecture in Flux: The Central Processing Unit Is No Longer Central
We start with a review of the instability of modern hardware, given the slowing of Moore’s Law, the end of Dennard scaling, and the rise of the demand for AI cycles versus traditional applications. Data is becoming more critical than compute due to its increasing cost and slowing capacity curves for memory and storage. Data location and movement are now central to cost and performance. To build robust systems in light of these changes, we must shift the focus of hardware and software design from processing to the memory, storage, and network components.
David Patterson (University of California Berkeley)
David Patterson is a UC Berkeley Pardee professor emeritus, a Google distinguished engineer, and the RISC-V International Vice-Chair. His most influential Berkeley projects likely were RISC (Reduced Instruction Set Computer) and RAID (Redundant Array of Inexpensive Disks). His best-known book is Computer Architecture: A Quantitative Approach. He and his co-author John Hennessy shared the 2017 ACM A.M. Turing Award and the 2022 NAE Charles Stark Draper Prize for Engineering. The Turing Award is often referred to as the “Nobel Prize of Computing” and the Draper Prize is considered a “Nobel Prize of Engineering.”

Effortless Locality Through On-the-fly Data Transformation
What if we could access any layout and ship only the relevant data through the memory hierarchy by transparently converting rows to (arbitrary groups of) columns? We capitalize on the reinvigorated trend of hardware specialization to propose Relational Fabric, a near-data vertical partitioner that allows memory or storage components to perform on-the-fly transparent data transformation. By exposing an intuitive API, Relational Fabric pushes vertical partitioning to the hardware, which has a profound impact on the process of designing and building data systems. (A) There is no need for data duplication and layout conversion, making hybrid systems viable using a single layout. (B) It simplifies the memory and storage manager. (C) It reduces unnecessary data movement through the memory hierarchy, allowing for better hardware utilization and, ultimately, better performance. In this talk, I will introduce the Relational Fabric vision and present our initial results on in-memory systems. I will also share some of the challenges of building this hardware and the opportunities it brings for simplicity and innovation in the data system software stack, including physical design, query processing, and concurrency control, and conclude with ongoing work on data transformation for general workloads, including matrix and tensor processing.
Manos Athanassoulis (Boston University)
Manos Athanassoulis is an Assistant Professor of Computer Science at Boston University, Director and Founder of the BU Data-intensive Systems and Computing Laboratory, and co-director of the BU Massive Data Algorithms and Systems Group. His research is in the area of data management, focusing on building data systems that efficiently exploit modern hardware (computing units, storage, and memories), are deployed in the cloud, and can adapt to the workload both at setup time and, dynamically, at runtime. Before joining Boston University, Manos was a postdoctoral researcher at Harvard School of Engineering and Applied Sciences. Manos obtained his PhD from EPFL, Switzerland, and spent one summer at IBM Research, Watson. Manos’ work is published in top conferences and journals of the community, like ACM SIGMOD, PVLDB, ACM TODS, VLDBJ, and others, and has been recognized by awards like “Best Demonstration” in VLDB 2023, “Best of SIGMOD” in 2017, “Best of VLDB” in 2010 and 2017, and “Most Reproducible Paper” at SIGMOD in 2016. Manos has been acting as a program committee member and technical reviewer in top data management conferences and journals for the past 12 years, having received the “Distinguished PC Member Award” for SIGMOD 2018 and SIGMOD 2023. He is currently an associate editor for ACM SIGMOD Record, co-chair of ACM SIGMOD 2023 Availability and Reproducibility, and co-chair of ICWE 2023 Industrial Track. His work is supported by several awards, including an NSF CRII award, an NSF CAREER award, a Facebook Faculty Research Award, multiple RedHat Collaboratory Research Incubation Awards, and a Cisco Research Award.
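The vertical partitioning that the Relational Fabric abstract describes in hardware can be sketched in software (a conceptual illustration only; function and field names below are illustrative, not the system's API): rows are stored once, and any requested column group is materialized on the fly, so only the touched attributes move up the memory hierarchy.

```python
# Sketch of on-the-fly row-to-column-group transformation: the base
# table keeps a single row layout, and each query materializes just
# the column group it needs during the transfer.

def project_column_groups(rows, schema, groups):
    """Convert a row layout into one columnar array per requested group."""
    index = {name: pos for pos, name in enumerate(schema)}
    out = {}
    for group in groups:
        positions = [index[name] for name in group]
        out[group] = [tuple(row[p] for p in positions) for row in rows]
    return out


schema = ("id", "price", "qty", "note")
rows = [(1, 9.5, 3, "a"), (2, 1.25, 7, "b"), (3, 4.0, 2, "c")]

# Ship only (price, qty) for an aggregate; 'id' and 'note' never move.
parts = project_column_groups(rows, schema, [("price", "qty")])
revenue = sum(p * q for p, q in parts[("price", "qty")])
print(revenue)  # 9.5*3 + 1.25*7 + 4.0*2 = 45.25
```

Doing this transformation near the data, rather than in the query engine, is what removes the need for a second, duplicated columnar copy of the table.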