Imperial College London
Department of Computing

GiMMiK - Generating Bespoke Matrix Multiplication Kernels for Various Hardware Accelerators; Applications in High-Order Computational Fluid Dynamics

Author: Bartosz D. Wozniak
Supervisor: Prof. Paul H. J. Kelly
Co-Supervisor: Dr. Peter E. Vincent

June 7, 2014
Abstract

Matrix multiplication is a fundamental linear algebra routine ubiquitous in all areas of science and engineering. Highly optimised BLAS libraries (cuBLAS and clBLAS on GPUs) are the most popular choices for an implementation of the General Matrix Multiply (GEMM) in software. However, the performance of library GEMM is poor for small matrix sizes. In this thesis we consider a block-by-panel type of matrix multiplication, where the block matrix is typically small (e.g. dimensions of 96 × 64), motivated by an application in PyFR, the most recent implementation of Flux Reconstruction schemes for high-order fluid flow simulations on unstructured meshes. We show how prior knowledge of the operator matrix can be exploited to generate highly performant kernel code, which outperforms the cuBLAS and clBLAS GEMM implementations. We present GiMMiK, a generator of bespoke matrix multiplication kernels for the CUDA and OpenCL platforms. GiMMiK generates code by fully unrolling the matrix-vector product. The generated kernels embed the values of the operator matrix directly in the code to benefit from the use of the constant cache and compiler optimisations. Further, we reduce the number of floating-point operations by removing multiplications by zeros. We are able to achieve speedups for individual PyFR matrices of up to 9.98 (2.2) times on the Tesla K40c and 63.3 (3.7) times on the GTX 780 Ti in double (single) precision. Using GiMMiK as the matrix multiplication kernel provider allows us to achieve a speedup of up to .72 (2.4) for an example simulation of an unsteady flow over a cylinder executed with PyFR in double (single) precision on the Tesla K40c.

A general paper, "GiMMiK - Generating Bespoke Matrix Multiplication Kernels for Various Hardware Accelerators; Applications in High-Order Computational Fluid Dynamics" by Bartosz D. Wozniak, Freddie D. Witherden, Peter E. Vincent and Paul H. J. Kelly, has been prepared for submission to the Computer Physics Communications journal, based on the findings in this report. It is available upon request.
Acknowledgements

I would like to thank Prof. Paul Kelly, Dr. Peter Vincent and Freddie Witherden for their dedication and the amount of time they have contributed to this project. I am grateful to Dr. David Ham and Dr. Francis Russell for providing the initial guidance to steer the project in the right direction. Lastly, I would like to acknowledge and thank my friends and family for their unrelenting support and guidance throughout my time at Imperial.
Contents

Abstract
Acknowledgements

1 Introduction
1.1 Context
1.2 Objectives
1.3 Contributions

2 Background
2.1 Flux Reconstruction
2.1.1 Flux Reconstruction Approach in 1D
2.1.2 Flux Reconstruction Approach in 2D
2.2 Matrix Representation of FR Schemes
2.3 Characteristics of Operator Matrices in FR
2.4 General-Purpose Computing on GPUs
2.4.1 CUDA Programming Model
2.4.2 OpenCL Programming Model
2.4.3 Basic Performance Optimisation Strategies for GPGPU
2.5 cuBLAS, clBLAS and cuSPARSE

3 Related Work
3.1 Anatomy of High-Performance Matrix Multiplication
3.2 Automatically Tuned Linear Algebra Software
3.3 SD++ Solver
3.4 Basic Linear Algebra Compiler
3.5 Other Work

4 Methodology
4.1 Benchmarking Infrastructure
4.2 Optimisations Assessment
4.2.1 Value Embedding
4.2.2 Common Sub-Expression Elimination
4.2.3 Sparsity Elimination
4.2.4 Cleanup Code
4.3 GiMMiK

5 Evaluation
5.1 Kernels Benchmarking
5.2 Quality Assessment
5.3 Hardware Performance Assessment
5.4 Performance Improvements of PyFR
5.5 Limitations

6 Conclusions & Future Work
6.1 Summary
6.2 Future Work

Bibliography

A Characteristics of the Operator Matrices
B Results for Individual Optimisations
C Plots of Results for Individual Optimisations
D Benchmarking Results for GiMMiK Kernels
E Plots of Benchmarking and Profiling Results for GiMMiK Kernels
F Speedups of Individual Matrices Stacked Together to Mimic PyFR
G Performance Bounds for GiMMiK Kernels
List of Tables

4.1 Specification of the experimental hardware
5.1 Speedups of a PyFR simulation in single and double precision
A.1 Characteristics of quadrilateral matrices
A.2 Characteristics of hexahedral matrices
A.3 Characteristics of triangular matrices
A.4 Characteristics of tetrahedral matrices
B.1 Hypothesis testing, double precision, Tesla K40c
B.2 Hypothesis testing, single precision, Tesla K40c
B.3 Hypothesis testing, double precision, GTX 780 Ti
B.4 Hypothesis testing, single precision, GTX 780 Ti
D.1 Benchmarking results, double precision, Tesla K40c
D.2 Benchmarking results, single precision, Tesla K40c
D.3 Benchmarking results, double precision, GTX 780 Ti
D.4 Benchmarking results, single precision, GTX 780 Ti
D.5 Benchmarking results, double precision, FirePro W9100
F.1 Speedups of individual matrices stacked together to mimic PyFR
List of Figures. Block-by-panel matrix multiplication..................... 2. CUDA and OpenCL platform models..................... 6 2.2 Mapping CUDA and OpenCL execution model onto the platform model.. 8 3. GEMM decomposition into GEBP....................... 24 4. Results of value embedding, double precision, β =............ 32 4.2 Results of common sub-expression elimination, double precision, β =.. 34 4.3 Results of sparsity elimination, double precision, β =........... 36 4.4 Cleanup code for partial tiles.......................... 37 4.5 Effects of cleanup code in cublas, double precision, β =........ 37 4.6 GiMMiK s interface............................... 39 4.7 Sample kernel code generated by GiMMiK for the CUDA platform..... 39 5. Performance of GiMMiK, double precision, β =, CUDA, Tesla K4c.. 43 5.2 Performance of GiMMiK, single precision, β =, CUDA, Tesla K4c... 44 5.3 Performance of GiMMiK, double precision, β =, CUDA, GTX 78 Ti. 45 5.4 Performance of GiMMiK, single precision, β =, CUDA, GTX 78 Ti.. 46 5.5 Performance limiting factors for GiMMiK, double precision........ 49 5.6 The mesh used for the simulation of an unsteady flow over a cylinder... 5 5.7 Results of the simulation of an unsteady flow over a cylinder........ 52 5.8 Effects of useful memory bandwidth on speedup achieved by GiMMiK... 54 C. Results of value embedding........................... 86 C.2 Results of common sub-expression elimination................ 87 C.3 Results of sparsity elimination......................... 88 C.4 Results of cleanup code avoidance....................... 89 E. Performance of GiMMiK, double precision, β, CUDA, Tesla K4c.. 8 E.2 Performance of GiMMiK, single precision, β, CUDA, Tesla K4c... 9 E.3 Performance of GiMMiK, double precision, β, CUDA, GTX 78 Ti. ix
E.4 Performance of GiMMiK, single precision, β, CUDA, GTX 78 Ti.. E.5 Performance of GiMMiK, double precision, β, OpenCL, Tesla K4c. 2 E.6 Performance of GiMMiK, single precision, β, OpenCL, Tesla K4c.. 2 E.7 Performance of GiMMiK, double precision, β, OpenCL, GTX 78 Ti 3 E.8 Performance of GiMMiK, single precision, β, OpenCL, GTX 78 Ti. 3 E.9 Performance of GiMMiK, double precision, β, OpenCL, Tesla K4c. 4 E. Performance of GiMMiK, single precision, β, OpenCL, Tesla K4c.. 4 E. Performance of GiMMiK, double precision, β, OpenCL, GTX 78 Ti 5 E.2 Performance of GiMMiK, single precision, β, OpenCL, GTX 78 Ti. 5 E.3 Performance of GiMMiK, single precision, β =, OpenCL, FirePro W96 F. Speedups of 3 rd order matrices stacked together to mimic PyFR...... 9 G. Performance limiting factors for GiMMiK, single precision......... 22 x
Chapter 1

Introduction

Matrix multiplication is ubiquitous in all spheres of science and engineering, hence the need for efficient and performant implementations of such operations in software. A lot of effort has been put into building and optimising Basic Linear Algebra Subprograms (BLAS) libraries. The General Matrix Multiplication (GEMM) subroutine of level-3 BLAS is among the most popular choices for an implementation of the matrix product. However, GEMM is very generic and usually performs best with large problem sizes [6, 7, 2]. In situations where the matrices are known a priori, a faster implementation can be achieved. In this thesis we are interested in developing a highly performant matrix product routine for a block-by-panel (see Figure 1.1) type of matrix multiplication, where the operator matrix is typically small (e.g. 96 × 64 elements). This is motivated by an application in Flux Reconstruction [9] schemes for high-order fluid flow simulations on unstructured grids. However, we believe that the usefulness of our research goes beyond the area of Computational Fluid Dynamics (CFD) and can impact other fields of engineering as well.

In this thesis we present GiMMiK, a generator of matrix multiplication kernels. GiMMiK analyses a given operator matrix and generates optimised and highly performant CUDA and OpenCL kernel code that can run across a variety of hardware accelerators.

Figure 1.1: Diagram representing the block-by-panel type of matrix multiplication. In this type of matrix product the operator matrix is typically small and square, while the operand and output matrices are fat.
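The core idea can be illustrated with a short sketch (an illustration of the technique only; GiMMiK's actual interface and generated code differ): given an operator matrix, emit a fully unrolled CUDA kernel in which each output row is a sum of literal-coefficient products, with multiplications by zero omitted entirely.

```python
import numpy as np

def generate_kernel(a, name="mm_kernel"):
    """Emit CUDA C source for c = a @ b with the entries of the operator
    matrix `a` embedded as literals; terms with a zero coefficient are
    dropped entirely, saving floating-point operations."""
    m, k = a.shape
    rows = []
    for i in range(m):
        terms = [f"{float(a[i, j])!r}*b[{j}*ldb + idx]"
                 for j in range(k) if a[i, j] != 0.0]
        rows.append(f"    c[{i}*ldc + idx] = {' + '.join(terms) or '0.0'};")
    return (f"__global__ void {name}(const double *b, double *c,\n"
            f"                       int n, int ldb, int ldc)\n"
            "{\n"
            "    int idx = blockIdx.x*blockDim.x + threadIdx.x;\n"
            "    if (idx < n) {\n" + "\n".join(rows) + "\n    }\n}\n")

A = np.array([[2.0, 0.0],
              [0.5, 3.0]])
print(generate_kernel(A))
```

Each thread handles one column of the fat panel B; note that the zero coefficient in the first row of A generates no code at all.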
1.1 Context

Within the area of Computational Fluid Dynamics (CFD) there is a growing need for efficient and accurate high-order schemes for numerical simulations. High-order methods can potentially deliver better accuracy at a similar computational cost to low-order methods. Unfortunately, existing high-order methods are less robust and harder to implement than their low-order counterparts, which prevents their adoption in industry and, to a lesser extent, in academia. In 2007 Huynh [9] presented the Flux Reconstruction (FR) approach, a unifying mathematical framework allowing an efficient development of high-order schemes. The details of this approach are described in Section 2.1. Huynh showed how well-known high-order schemes such as Discontinuous Galerkin (DG) methods and Spectral Difference (SD) methods can be cast within the Flux Reconstruction framework. In 2009 Huynh [] showed how FR can be applied to diffusion problems. Most importantly, the FR framework allows for the development of new schemes with favourable properties. Their main advantages over traditional high-order schemes are improved robustness, accuracy, stability and simplicity of implementation. Further work by Vincent et al. [2] and Castonguay et al. [3] resulted in a new class of energy-stable schemes for solving conservation law problems for quadrilateral, hexahedral and triangular element meshes. Williams et al. [22] have extended the schemes to tetrahedral elements. Castonguay et al. [4] in 2011 were the first to present a high-order compressible viscous flow solver for mixed unstructured grids based on the Flux Reconstruction approach, designed to run on clusters of GPUs. The most recent development in the area of Flux Reconstruction is PyFR, an open-source framework for solving advection-diffusion type problems on streaming architectures [23].
The very nature of Flux Reconstruction methods allows many of the computation steps to be cast into matrix-matrix multiplication operations, as described in Section 2.2. For this reason a GPU implementation of these schemes is very attractive, as the devices exhibit inherently high floating-point performance and memory bandwidth, suitable for the arithmetically intensive linear algebra operations. There are a number of highly optimised BLAS libraries which can be employed to compute the required matrix products. NVIDIA cuBLAS GEMM [4] or the OpenCL clBLAS GEMM [] are the obvious candidates for the GPU platform. However, already in 2011 Castonguay et al. [4] identified the need for bespoke matrix multiplication kernels to achieve high performance of their solver. The problem with available, highly optimised BLAS libraries is not their sub-optimal implementation but rather their general nature. Experimental data suggests that BLAS GEMM performs especially well and achieves near peak performance for large, square matrices [6, 7].
Each time step of the simulation performed by PyFR amounts to a repeated application of the same set of five operator matrices to the data, combined with some element-local transformations [23]. The exact characteristics of these matrices depend on many numerical method choices, i.e. the shape and dimensionality of the mesh elements, the desired order of accuracy and the type of equations used to solve the problem. Casting the computation steps to matrix multiplication operations enables us to navigate all the numerical scheme choices freely, without incurring any performance penalty due to an unoptimised implementation of the solver. The parameters of the numerical schemes dictate the size of the operator matrices, which is typically small. For hexahedral meshes it ranges from (4 × 8) to (96 × 64) and up to (1029 × 343) for the first, third and sixth order of accuracy correspondingly. The full specification of the characteristics of the matrices used by the PyFR solver across 6 orders of accuracy is available in Appendix A and further discussed in Section 2.3. The operator matrices stay constant for the duration of the simulation and are also known in advance, which opens up an opportunity to analyse them and generate bespoke, highly specialised kernels for each matrix to improve over the performance of state-of-the-art BLAS libraries.

1.2 Objectives

The aim of this project is to investigate the performance improvement achievable through the use of bespoke matrix multiplication kernels over the state-of-the-art BLAS GEMM implementations for a block-by-panel type of matrix multiplication characteristic of PyFR. We pick CUDA and OpenCL as the development platforms of choice and will evaluate the performance of our optimisations on two modern industry-grade GPUs: the Tesla K40c from NVIDIA and the FirePro W9100 from AMD, and also on a consumer-grade NVIDIA GeForce GTX 780 Ti to further explore the applicability of our optimisations on commodity hardware.
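A back-of-envelope calculation suggests why this block-by-panel product behaves differently from large GEMM. The sketch below (our own illustration, not taken from the thesis) computes the flop/byte ratio of C = AB per panel column, assuming the operator A is embedded in the kernel so that only a column of B is read and a column of C is written:

```python
def arithmetic_intensity(m, k, nnz=None, dtype_bytes=8):
    """Flops per byte of global-memory traffic for one panel column of
    C = A @ B when the m-by-k operator A lives in the kernel itself.
    `nnz` accounts for sparsity: zero entries cost no flops."""
    flops = 2 * (nnz if nnz is not None else m * k)  # one mul + one add per entry
    traffic = (k + m) * dtype_bytes                  # read a B column, write a C column
    return flops / traffic

# A 96 x 64 third-order operator in double precision
dense = arithmetic_intensity(96, 64)
sparse = arithmetic_intensity(96, 64, nnz=1024)  # hypothetical 1/6 density
print(dense, sparse)  # 9.6 and 1.6 flop/byte
```

With realistic sparsity the intensity falls well below the compute/bandwidth balance point of contemporary GPUs, which is why memory bandwidth rather than peak floating-point throughput tends to limit these kernels.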
In Chapter 4 we describe the methodology used to generate our bespoke multiplication kernels and the various software optimisations we have attempted. We have taken a systematic approach to evaluate each of the proposed optimisations in order to incorporate the successful ones into GiMMiK. The studied techniques involve loop unrolling, sparsity elimination, and common sub-expression elimination. We have also investigated the relative advantages and drawbacks of using different types of available memory to store the operator matrix. We aim to present a comprehensive set of evidence for the success or failure of the proposed optimisations obtained through benchmarking of our kernels on a well-diversified suite of matrices with different sizes and sparsity patterns.

Chapter 5 presents the empirical analysis of GiMMiK's kernels and gives the final
assessment of the performance improvements our kernels are able to realise over cuBLAS and clBLAS GEMM for individual matrices. As the last step we demonstrate the usefulness of our proposed solution by plugging our kernels into PyFR and investigating the final performance improvement the solver is able to achieve during an example compressible fluid flow simulation on an unstructured mesh.

1.3 Contributions

The following list summarizes the contributions of this thesis:

- We show how, with prior knowledge of the operator matrix, we are able to generate matrix multiplication kernel code which performs better than the state-of-the-art cuBLAS and clBLAS GEMM. We achieve speedups of up to 9.98 (2.2) times on the Tesla K40c and 63.3 (3.7) times on the GTX 780 Ti in double (single) precision for individual PyFR matrices in the block-by-panel type of product.

- We present GiMMiK, an open-source Python library for generating matrix multiplication kernels for the CUDA and OpenCL platforms, available for download at https://github.com/bartwozniak/gimmik.

- We propose a series of software optimisation techniques which we speculate can bring performance improvements when applied to our CUDA and OpenCL kernels, and incorporate the successful ones into GiMMiK. In a systematic way each optimisation is exhaustively evaluated on a well-diversified set of operator matrices extracted from the PyFR solver. The benchmark spans a range of matrix sizes and sparsity patterns.

- By incorporating GiMMiK into PyFR we are able to grant significant performance improvements of up to .72 (2.4) in double (single) precision on a single Tesla K40c, which allows us to reduce the computational time of an exemplary compressible unsteady flow simulation from a matter of weeks to a matter of days. Through this performance improvement we can further influence the numerical method choices and allow for better quality results.
- A general paper, "GiMMiK - Generating Bespoke Matrix Multiplication Kernels for Various Hardware Accelerators; Applications in High-Order Computational Fluid Dynamics" by Bartosz D. Wozniak, Freddie D. Witherden, Peter E. Vincent and Paul H. J. Kelly, has been prepared for submission to the Computer Physics Communications journal, based on the findings in this report. It is available upon request.

- We believe that the methodology applied in this study can give a valuable insight into efficient implementations of small-scale linear algebra kernels on GPUs.
Chapter 2

Background

In this chapter we summarize the basic principles behind the Flux Reconstruction approach to high-order fluid flow simulations implemented by PyFR, which is the motivating subject of our investigation. In Section 2.2 we show how operations performed in the FR schemes can be cast into matrix multiplication problems. Later, in Section 2.3 we characterise the operator matrices used in PyFR and explain how they vary with the numerical method choices made for each simulation. At the end of this chapter, we give an introduction to General-Purpose Computing on GPUs and explain the basics of the CUDA and OpenCL programming models (Section 2.4). Lastly, we describe the three state-of-the-art BLAS libraries which are predominantly used on the GPU platform and will serve as the benchmark for our investigation.

2.1 Flux Reconstruction

The following subsections will describe the basic steps underlying the Flux Reconstruction approach and demonstrate how it can be applied to one- and multi-dimensional domains as described by Castonguay et al. [4].

2.1.1 Flux Reconstruction Approach in 1D

For the purpose of 1D domains let us consider the following 1D scalar conservation law within an arbitrary domain Ω:

\[
\frac{\partial u}{\partial t} + \frac{\partial f}{\partial x} = 0, \tag{2.1}
\]

where x is a spatial coordinate, t is time, u = u(x, t) is a conserved scalar quantity and f = f(u, ∂u/∂x) is the flux in the x direction. Now consider partitioning Ω into N
non-overlapping elements Ω_n:

\[
\Omega = \bigcup_{n=1}^{N} \Omega_n. \tag{2.2}
\]

Within each element Ω_n we will represent the exact solution u by a function u^δ_n = u^δ_n(x, t), which is a degree p polynomial within the element and zero outside. Similarly, we will represent the exact flux f by a function f^δ_n = f^δ_n(x, t), which is a degree p + 1 polynomial within the element and zero outside. Thus, the total approximate solution u^δ and flux f^δ over the domain Ω can be written as

\[
u^\delta = \sum_{n=1}^{N} u^\delta_n \approx u, \qquad
f^\delta = \sum_{n=1}^{N} f^\delta_n \approx f. \tag{2.3}
\]

To simplify the implementation, it is advantageous to cast each Ω_n to a standard element Ω_S = {ξ | −1 ≤ ξ ≤ 1} via an invertible mapping Θ_n(ξ):

\[
x = \Theta_n(\xi) = \left(\frac{1 - \xi}{2}\right) x_n + \left(\frac{1 + \xi}{2}\right) x_{n+1}. \tag{2.4}
\]

This mapping allows us to solve the following transformed equation within the standard element:

\[
\frac{\partial \hat{u}^\delta}{\partial t} + \frac{\partial \hat{f}^\delta}{\partial \xi} = 0, \tag{2.5}
\]

where

\[
\hat{u}^\delta = \hat{u}^\delta(\xi, t) = J_n\, u^\delta_n(\Theta_n(\xi), t), \qquad
\hat{f}^\delta = \hat{f}^\delta(\xi, t) = f^\delta_n(\Theta_n(\xi), t),
\]

and J_n = (x_{n+1} − x_n)/2.

The Flux Reconstruction approach can be applied to equation (2.5) and consists of seven stages. The first stage defines a set of p + 1 solution points within the standard element Ω_S and specifies the form of the approximate solution û^δ as a polynomial of degree p of the form

\[
\hat{u}^\delta = \sum_{i=1}^{p+1} \hat{u}^\delta_i\, l_i, \tag{2.6}
\]

where l_i is the 1D Lagrange polynomial associated with the i-th solution point and û^δ_i represents the value of û^δ at the solution point ξ_i.

In the second stage a common interface solution at the two ends of an element is calculated. To do this we calculate the approximate solution at the boundaries using equation (2.6). The common interface solution, denoted û^δI, can be computed using values from both sides of the interface. The exact methodology for calculating the interface solutions depends on the nature of the equations that are being solved.

The third stage involves the construction of a corrected solution gradient q̂^δ, which
approximates the solution gradient within the element. In order to define q̂^δ we start by considering correction functions g_L(ξ) and g_R(ξ), which in some sense approximate zero within Ω_S and satisfy the following:

\[
g_L(-1) = 1, \quad g_L(1) = 0, \quad g_R(-1) = 0, \quad g_R(1) = 1, \quad g_L(\xi) = g_R(-\xi).
\]

The corrected gradient q̂^δ is defined as

\[
\hat{q}^\delta = \frac{\partial \hat{u}^\delta}{\partial \xi}
+ (\hat{u}^{\delta I}_L - \hat{u}^\delta_L)\, \frac{\partial g_L}{\partial \xi}
+ (\hat{u}^{\delta I}_R - \hat{u}^\delta_R)\, \frac{\partial g_R}{\partial \xi}, \tag{2.7}
\]

where û^δI_L and û^δI_R are the transformed common solutions at the left and right interfaces obtained in the previous step, and û^δ_L = û^δ(−1) and û^δ_R = û^δ(1) are the values of the approximate solution at the left and right interfaces obtained from equation (2.6). The exact form of g_L and g_R will not be considered in this thesis.

The fourth stage is concerned with the definition of the approximate transformed discontinuous flux within element Ω_S, denoted f̂^δD. It is defined at the solution points described in the first stage and can be computed as

\[
\hat{f}^{\delta D} = \sum_{i=1}^{p+1} \hat{f}^{\delta D}_i\, l_i, \tag{2.8}
\]

where the coefficient f̂^δD_i is the value of the transformed flux at the solution point ξ_i, evaluated from the approximate solution û^δ and the corrected gradient q̂^δ.

The fifth stage involves calculating the numerical interface fluxes at either end of the standard element Ω_S. To do so we must first obtain the approximate solution and the corrected gradient at these points within each element using equations (2.6) and (2.7). The exact way of computing the common interface fluxes using values from both sides of the interface again depends on the nature of the equations that are being solved.

In the sixth stage, a correction flux f̂^δC is added to the discontinuous flux f̂^δD. We require the summation to be equal to the interface flux found in the previous stage. For this we define correction functions h_L(ξ) and h_R(ξ), analogous to those from stage three. Likewise, they need to satisfy

\[
h_L(-1) = 1, \quad h_L(1) = 0, \quad h_R(-1) = 0, \quad h_R(1) = 1,
\]
and h_L(ξ) = h_R(−ξ). The correction flux can now be defined as

\[
\hat{f}^{\delta C} = (\hat{f}^{\delta I}_L - \hat{f}^{\delta D}_L)\, h_L + (\hat{f}^{\delta I}_R - \hat{f}^{\delta D}_R)\, h_R, \tag{2.9}
\]

where f̂^δD_L = f̂^δD(−1) and f̂^δD_R = f̂^δD(1). The approximate total transformed flux f̂^δ can be constructed as follows:

\[
\hat{f}^\delta = \hat{f}^{\delta D} + \hat{f}^{\delta C}
= \hat{f}^{\delta D} + (\hat{f}^{\delta I}_L - \hat{f}^{\delta D}_L)\, h_L + (\hat{f}^{\delta I}_R - \hat{f}^{\delta D}_R)\, h_R. \tag{2.10}
\]

The last stage involves calculating the divergence of f̂^δ at the solution points using the expression

\[
\frac{\partial \hat{f}^\delta}{\partial \xi}(\xi_i)
= \sum_{j=1}^{p+1} \hat{f}^{\delta D}_j \frac{\partial l_j}{\partial \xi}(\xi_i)
+ (\hat{f}^{\delta I}_L - \hat{f}^{\delta D}_L)\, \frac{\partial h_L}{\partial \xi}(\xi_i)
+ (\hat{f}^{\delta I}_R - \hat{f}^{\delta D}_R)\, \frac{\partial h_R}{\partial \xi}(\xi_i), \tag{2.11}
\]

which can be used to approximate the evolution of û^δ in time by using a suitable temporal discretisation of

\[
\frac{d \hat{u}^\delta_i}{d t} = -\frac{\partial \hat{f}^\delta}{\partial \xi}(\xi_i). \tag{2.12}
\]

2.1.2 Flux Reconstruction Approach in 2D

This section will describe how Flux Reconstruction can be applied to quadrilateral elements, as it was first detailed by Vincent et al. [2]. The extension to hexahedral elements is straightforward. For the purpose of demonstrating the Flux Reconstruction approach for 2D quadrilateral elements let us consider the 2D scalar conservation law

\[
\frac{\partial u}{\partial t} + \nabla_{xy} \cdot \mathbf{f} = 0, \tag{2.13}
\]

with an arbitrary domain Ω, where f = (f, g), f = f(u, ∇u) is the flux in the x direction and g = g(u, ∇u) is the flux in the y direction. Again, we partition the domain into N non-overlapping quadrilateral elements Ω_n such that

\[
\Omega = \bigcup_{n=1}^{N} \Omega_n. \tag{2.14}
\]

Similarly to the 1D case we will transform each element into a standard element with a mapping

\[
\begin{pmatrix} x \\ y \end{pmatrix}
= \sum_{i=1}^{K} M_i(\xi, \eta) \begin{pmatrix} x_i \\ y_i \end{pmatrix}, \tag{2.15}
\]
where K is the number of points used to define the shape of the physical element, (x_i, y_i) are the coordinates of these points and M_i(ξ, η) are the shape functions. After this transformation the evolution of u^δ_n within Ω_n can be solved using

\[
\frac{\partial \hat{u}^\delta}{\partial t} + \nabla_{\xi\eta} \cdot \hat{\mathbf{f}}^\delta = 0, \tag{2.16}
\]

where

\[
\hat{u}^\delta = J_n\, u^\delta_n(\Theta_n(\xi, \eta), t), \tag{2.17}
\]

\[
\hat{\mathbf{f}}^\delta = (\hat{f}^\delta, \hat{g}^\delta)
= \left( y_\eta f^\delta_n - x_\eta g^\delta_n,\;
-y_\xi f^\delta_n + x_\xi g^\delta_n \right), \tag{2.18}
\]

and J_n, x_ξ, x_η, y_ξ and y_η depend on the shape of element n and can be evaluated using equation (2.15).

Next, for quadrilateral elements, (p + 1)^2 solution points are defined within the standard element and (p + 1) flux points on each edge (a total of 4(p + 1)). The approximate solution can be written as

\[
\hat{u}^\delta = \sum_{i=1}^{p+1} \sum_{j=1}^{p+1} \hat{u}^\delta_{i,j}\, l_i(\xi)\, l_j(\eta), \tag{2.19}
\]

where l_i(ξ) and l_j(η) are the 1D Lagrange polynomials associated with the 1D solution points at (ξ_i, η_j). The corrected gradient q̂^δ = (q̂^δ_ξ, q̂^δ_η) consists of a component in each direction, ξ and η, and is obtained using the 1D correction functions (g_L, g_R) and (g_B, g_T) as

\[
\hat{q}^\delta_\xi(\xi_i, \eta_j) = \frac{\partial \hat{u}^\delta}{\partial \xi}(\xi_i, \eta_j)
+ (\hat{u}^{\delta I}_L - \hat{u}^\delta_L)\, \frac{\partial g_L}{\partial \xi}(\xi_i)
+ (\hat{u}^{\delta I}_R - \hat{u}^\delta_R)\, \frac{\partial g_R}{\partial \xi}(\xi_i),
\]
\[
\hat{q}^\delta_\eta(\xi_i, \eta_j) = \frac{\partial \hat{u}^\delta}{\partial \eta}(\xi_i, \eta_j)
+ (\hat{u}^{\delta I}_B - \hat{u}^\delta_B)\, \frac{\partial g_B}{\partial \eta}(\eta_j)
+ (\hat{u}^{\delta I}_T - \hat{u}^\delta_T)\, \frac{\partial g_T}{\partial \eta}(\eta_j), \tag{2.20}
\]

where û^δI_R, û^δI_L, û^δI_T and û^δI_B are the transformed common interface values of the approximate solution at the flux points located along the lines ξ = ξ_i and η = η_j. The values of the solution at the flux points within each element (û^δ_L, û^δ_R, û^δ_B and û^δ_T) are computed using equation (2.19). The corrected gradient in the entire element is then constructed as

\[
\hat{q}^\delta = \sum_{i=1}^{p+1} \sum_{j=1}^{p+1} \hat{q}^\delta_{i,j}\, l_i(\xi)\, l_j(\eta). \tag{2.21}
\]

Values for the discontinuous flux at the solution points (f̂^δD_{i,j}) can be found directly from the approximate solution û^δ and the corrected gradient q̂^δ, and hence we obtain the
formula for the discontinuous flux:

\[
\hat{\mathbf{f}}^{\delta D}(\xi, \eta) = \sum_{i=1}^{p+1} \sum_{j=1}^{p+1} \hat{\mathbf{f}}^{\delta D}_{i,j}\, l_i(\xi)\, l_j(\eta). \tag{2.22}
\]

The divergence of the discontinuous flux is thus

\[
\nabla_{\xi\eta} \cdot \hat{\mathbf{f}}^{\delta D}(\xi, \eta)
= \frac{\partial \hat{f}^{\delta D}}{\partial \xi} + \frac{\partial \hat{g}^{\delta D}}{\partial \eta}
= \sum_{i=1}^{p+1} \sum_{j=1}^{p+1} \hat{f}^{\delta D}_{i,j}\, \frac{\partial l_i(\xi)}{\partial \xi}\, l_j(\eta)
+ \sum_{i=1}^{p+1} \sum_{j=1}^{p+1} \hat{g}^{\delta D}_{i,j}\, l_i(\xi)\, \frac{\partial l_j(\eta)}{\partial \eta}. \tag{2.23}
\]

The divergence of the transformed correction flux, ∇_ξη · f̂^δC = ∂f̂^δC/∂ξ + ∂ĝ^δC/∂η, at the solution point (ξ_i, η_j) is computed with the 1D methodology in each direction as

\[
\frac{\partial \hat{f}^{\delta C}}{\partial \xi}(\xi_i, \eta_j)
= (\hat{f}^{\delta I}_L - \hat{f}^{\delta D}_L)\, \frac{\partial h_L}{\partial \xi}(\xi_i)
+ (\hat{f}^{\delta I}_R - \hat{f}^{\delta D}_R)\, \frac{\partial h_R}{\partial \xi}(\xi_i),
\]
\[
\frac{\partial \hat{g}^{\delta C}}{\partial \eta}(\xi_i, \eta_j)
= (\hat{g}^{\delta I}_B - \hat{g}^{\delta D}_B)\, \frac{\partial h_B}{\partial \eta}(\eta_j)
+ (\hat{g}^{\delta I}_T - \hat{g}^{\delta D}_T)\, \frac{\partial h_T}{\partial \eta}(\eta_j), \tag{2.24}
\]

where f̂^δI_L, f̂^δI_R, ĝ^δI_B and ĝ^δI_T are the transformed common interface fluxes computed at the flux points located along the lines ξ = ξ_i and η = η_j. The transformed discontinuous fluxes at the flux points within each element (f̂^δD_L, f̂^δD_R, ĝ^δD_B and ĝ^δD_T) are computed using equation (2.22).

Analogous to the 1D case, the total transformed flux is found as a sum of a discontinuous component f̂^δD and a correction component f̂^δC:

\[
\hat{\mathbf{f}}^\delta = \hat{\mathbf{f}}^{\delta D} + \hat{\mathbf{f}}^{\delta C}. \tag{2.25}
\]

Using equations (2.25), (2.23) and (2.24) we can progress the solution in time with

\[
\frac{d \hat{u}^\delta_{i,j}}{d t}
= -\left( \frac{\partial \hat{f}^\delta}{\partial \xi}(\xi_i, \eta_j)
+ \frac{\partial \hat{g}^\delta}{\partial \eta}(\xi_i, \eta_j) \right). \tag{2.26}
\]

For brevity, the case of triangular elements will not be discussed in this thesis, but can be found in [3].
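All of the expansions above are linear in the nodal values û^δ_{i,j}, which is what allows each stage to be expressed as a constant matrix applied to a vector of point values. As a small illustration (our own sketch, not PyFR code), the matrix that evaluates a nodal polynomial at a new set of points is built from the Lagrange basis:

```python
import numpy as np

def lagrange_interp_matrix(src, dst):
    """L[i, j] = l_j(dst[i]); multiplying L by values at `src` evaluates
    the unique degree-(len(src)-1) interpolant at the points `dst`."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    L = np.ones((dst.size, src.size))
    for j, xj in enumerate(src):
        for k, xk in enumerate(src):
            if k != j:
                L[:, j] *= (dst - xk) / (xj - xk)
    return L

# p = 2: three solution points in [-1, 1], evaluated at the element interfaces
sol_pts = [-0.5, 0.0, 0.5]
L = lagrange_interp_matrix(sol_pts, [-1.0, 1.0])
print(L @ np.asarray(sol_pts) ** 2)  # exact for degree <= p: recovers x**2 at +-1
```

Because the points are fixed for the whole simulation, L is a constant matrix, which is precisely what makes the matrix formulation of the next section possible.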
2.2 Matrix Representation of FR Schemes

The aim of this section is to illustrate how Flux Reconstruction schemes can be cast to a form allowing for an efficient implementation on GPUs using matrix multiplications. For a more detailed explanation see [4, 23]. For this purpose let us use the compressible Navier-Stokes equations as an example. Consider the matrix [Û_s] of dimensions N_s × 5 (where N_s is the number of solution points per element and 5 is the number of Navier-Stokes equations), which stores the approximate solution at the solution points:

\[
[\hat{U}_s] =
\begin{pmatrix}
\hat{\rho}^\delta_1 & \widehat{\rho u}^\delta_1 & \widehat{\rho v}^\delta_1 & \widehat{\rho w}^\delta_1 & \widehat{\rho e}^\delta_1 \\
\hat{\rho}^\delta_2 & \widehat{\rho u}^\delta_2 & \widehat{\rho v}^\delta_2 & \widehat{\rho w}^\delta_2 & \widehat{\rho e}^\delta_2 \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\hat{\rho}^\delta_{N_s} & \widehat{\rho u}^\delta_{N_s} & \widehat{\rho v}^\delta_{N_s} & \widehat{\rho w}^\delta_{N_s} & \widehat{\rho e}^\delta_{N_s}
\end{pmatrix} \tag{2.27}
\]

Further, consider the matrix [Û_f] of dimensions N_f × 5 (where N_f is the number of flux points per cell) that stores the approximate solution at the flux points:

\[
[\hat{U}_f] =
\begin{pmatrix}
\hat{\rho}^\delta_1 & \widehat{\rho u}^\delta_1 & \widehat{\rho v}^\delta_1 & \widehat{\rho w}^\delta_1 & \widehat{\rho e}^\delta_1 \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
\hat{\rho}^\delta_{N_f} & \widehat{\rho u}^\delta_{N_f} & \widehat{\rho v}^\delta_{N_f} & \widehat{\rho w}^\delta_{N_f} & \widehat{\rho e}^\delta_{N_f}
\end{pmatrix} \tag{2.28}
\]

The first step in the Flux Reconstruction scheme is to compute the discontinuous approximate solution û^δ at the flux points from the solution points. We can do this with the following operation:

\[
[\hat{U}^D_f] = \mathrm{M0}\, [\hat{U}_s], \tag{2.29}
\]

where M0 is of dimension N_f × N_s, depends on the type of elements used and is the same for all elements of the same type. To find the values of the common solution at the cell interfaces we need to loop over all the flux point pairs and compute the value û^δI. Allow [Û^I_f] to denote the transformed common interface solution for all elements. Now, to compute the corrected gradient q̂^δ we still need to find the discontinuous solution gradient ∇̂ û^δ.
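The shapes involved can be sanity-checked with a few lines of NumPy (sizes and values below are arbitrary stand-ins, not a real element type): a single N_f × N_s operator acts on all five conserved variables at once, and on all elements of the same type when their states are stacked side by side into a fat panel.

```python
import numpy as np

ns, nf, n_elems = 4, 8, 1000     # stand-in sizes, not a real element type
rng = np.random.default_rng(0)

M0 = rng.random((nf, ns))        # one operator shared by all like elements

U_s = rng.random((ns, 5))        # cf. eq. (2.27): one element, 5 equations
U_f_D = M0 @ U_s                 # cf. eq. (2.29)
assert U_f_D.shape == (nf, 5)

# Stacking all elements side by side yields the fat B panel of Figure 1.1
U_s_all = rng.random((ns, 5 * n_elems))
assert (M0 @ U_s_all).shape == (nf, 5 * n_elems)
```

This stacking is what turns each FR stage into the block-by-panel product that GiMMiK targets.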
Consider defining a matrix [Q̂^D_s] of dimension (3N_s) × 5 to store the discontinuous gradient at the solution points, which can be obtained from

\[
[\hat{Q}^D_s] = \mathrm{M4}\, [\hat{U}_s], \tag{2.30}
\]

where M4 is of dimension (3N_s) × N_s and again depends on the type of elements used and is the same for all elements of the same type. After the transformed common approximation value [Û^I_f] has been calculated for each flux point pair, the corrected
gradient can be computed for the solution points using the following equation:

\[
[\hat{Q}_s] = \mathrm{M6} \left( [\hat{U}^I_f] - [\hat{U}^D_f] \right) + [\hat{Q}^D_s], \tag{2.31}
\]

where the matrix M6 is of dimension (3N_s) × N_f.

Next, we want to compute the transformed discontinuous flux f̂^δD at the solution points. This operation depends on the approximate solution û^δ and the corrected gradient q̂^δ, which were computed in the previous steps. It is computed independently for each point in the mesh and the results are stored in a matrix denoted by [F̂^D_s]. The discontinuous flux is then used to compute the divergence using

\[
[(\mathrm{div}\, \hat{F})^D_s] = \mathrm{M1}\, [\hat{F}^D_s], \tag{2.32}
\]

where M1 is of dimension N_s × (3N_s).

In order to evaluate the common interface flux we require the discontinuous solution and the corrected gradient at the flux points within each element. The corrected gradient at the flux points can be computed with the M0 matrix as follows:

\[
[\hat{Q}_f] = \mathrm{M5}\, [\hat{Q}_s]
\quad \text{where} \quad
\mathrm{M5} =
\begin{pmatrix}
\mathrm{M0} & 0 & 0 \\
0 & \mathrm{M0} & 0 \\
0 & 0 & \mathrm{M0}
\end{pmatrix}. \tag{2.33}
\]

To find the common values of the interface flux we need to loop over all the flux point pairs. The result is stored in the matrix [F̂^I_f]. To compute the correction flux f̂^δC we need to find the discontinuous flux at the flux points, denoted by the matrix [F̂^D_f], which can be obtained from

\[
[\hat{F}^D_f] = \mathrm{M2}\, [\hat{F}^D_s], \tag{2.34}
\]

where M2 is of dimension N_f × (3N_s).

In the penultimate step, before the solution is progressed in time, the divergence of the total approximate flux is computed using the following formula:

\[
[(\mathrm{div}\, \hat{F})_s] = \mathrm{M3} \left( [\hat{F}^I_f] - [\hat{F}^D_f] \right) + [(\mathrm{div}\, \hat{F})^D_s], \tag{2.35}
\]

where M3 is of dimension N_s × N_f.

Witherden et al. [23] in their paper on PyFR show that these steps can be easily extended and applied to any number of dimensions and other element types. Further, they demonstrate how some of the matrix operations can be reordered and grouped together to achieve better performance in their implementation. Consider equation (2.31), which
can be rewritten using equations (2.29) and (2.30) in the following way

[Q̂_s] = M_6 ([Û_f^I] − M_0 [Û_s]) + M_4 [Û_s], (2.36)

and simplified to

[Q̂_s] = M_6 [Û_f^I] + (M_4 − M_6 M_0) [Û_s]. (2.37)

This results in a new matrix M_460 = M_4 − M_6 M_0, which can be computed before the simulation and reduces the number of arithmetic operations required to produce the desired solution. Similarly, equation (2.35) can be rewritten using equations (2.32) and (2.34) in the following way

[(div F̂)_s] = M_3 ([F̂_f^I] − M_2 [F̂_s^D]) + M_1 [F̂_s^D], (2.38)

and simplified to

[(div F̂)_s] = M_3 [F̂_f^I] + (M_1 − M_3 M_2) [F̂_s^D]. (2.39)

This again results in a new matrix M_132 = M_1 − M_3 M_2, which can be found prior to the computation and hence reduces the number of required arithmetic operations. The labelling of the operator matrices M_0–M_6 follows the naming convention used in PyFR and will be used throughout the rest of this thesis.

2.3 Characteristics of Operator Matrices in FR

Consider matrix multiplication of the form C ← αAB + βC, where A is the operator matrix, B is the data and C is the output. The operator matrices described in Section 2.2 all exhibit a similar set of characteristics. Their size depends on the exact equations that are being solved, the type of the elements they are evaluated on and the degree of accuracy (the number of solution or flux points within each element). For quadrilateral and hexahedral elements the operator matrices are sparse due to the tensor product formulation of the solution points within each element, while for triangular and tetrahedral elements they are dense. Furthermore, these operator matrices are known a priori and remain constant for the duration of the computation. They are typically small, with sizes ranging from 6 × 3 to 1029 × 343 across 6 orders of accuracy. From a practical point of view, the most relevant matrices are those for the third order of accuracy with dimensions 96 × 64.
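The benefit of precomputing the merged operator can be checked numerically. The sketch below uses random stand-in matrices with the third-order hexahedral dimensions quoted above (N_s = 64, N_f = 96, five field variables) and verifies that the single-operator form matches the two-step evaluation; the actual PyFR operators are of course not random.

```python
import numpy as np

rng = np.random.default_rng(0)
N_s, N_f, N_v = 64, 96, 5          # third-order hexahedra; 5 field variables

# Random stand-ins for the PyFR operator matrices (constant in practice).
M0 = rng.standard_normal((N_f, N_s))
M4 = rng.standard_normal((3 * N_s, N_s))
M6 = rng.standard_normal((3 * N_s, N_f))

U_s  = rng.standard_normal((N_s, N_v))   # solution at the solution points
U_fI = rng.standard_normal((N_f, N_v))   # common interface values

# Two-step evaluation of the gradient correction ...
Q_two_step = M6 @ (U_fI - M0 @ U_s) + M4 @ U_s

# ... versus the merged operator M460 = M4 - M6 M0, computed once up front.
M460 = M4 - M6 @ M0
Q_merged = M6 @ U_fI + M460 @ U_s

assert np.allclose(Q_two_step, Q_merged)
```

Merging trades two operator applications (M_0 then M_4) for a single application of M_460 against the same operand, which is where the saving in arithmetic operations comes from.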
The width of the B matrix depends on the number of elements in the mesh. In this investigation we have used B
of 5,000 elements wide for all benchmarking runs. This is representative of a small, non-trivial fluid flow simulation. The entire multiplication therefore takes the form of a block-by-panel operation.

2.4 General-Purpose Computing on GPUs

In the early days of General-Purpose Computing on Graphics Processing Units (GPGPU) it was impossible to write efficient, compute-bound matrix multiplication routines due to the lack of a developed memory hierarchy on the devices. This changed with the introduction of the NVIDIA Compute Unified Device Architecture (CUDA) in 2007, which among other changes brought a fully fledged memory hierarchy to the GPUs. Further, around the same time NVIDIA released their Tesla series of products targeted specifically at High Performance Computing (HPC), offering very high single and double precision floating-point performance. Shortly after, in 2008, ATI/AMD released their series of GPUs implementing the non-proprietary OpenCL standard, also targeting the HPC sector [2]. GPUs are specialised in compute-intensive, highly parallel computation, as opposed to more general-purpose CPUs. They are particularly well suited to stream processing, where the same kernel function can be executed independently on many different data elements in parallel. The high arithmetic intensity of GPU computations and the high degree of parallelism make it possible to hide memory latency with computation rather than with a hierarchy of caches. There are many parameters associated with GPU programming which need to be carefully chosen to ensure that there is always some computation available to be scheduled on the GPU processing units while fetching data from memory. The following subsections give an introduction to the two dominant general-purpose GPU computing platforms: CUDA and OpenCL.

2.4.1 CUDA Programming Model

Compute Unified Device Architecture (CUDA) is a proprietary technology available only on NVIDIA GPUs.
It comes with a programming environment which uses C with a minimal set of extensions as a high-level implementation language, allowing developers to access CUDA resources. More detail on the language can be found in the CUDA C Programming Guide [5]. CUDA GPUs are characterised by their Compute Capability. The GPUs used for the purpose of our investigation (Tesla K40c and GTX 780 Ti) are both of Compute Capability 3.5.
Kernels, Blocks and Threads

CUDA developers implement kernels: C functions which are executed N times by N different CUDA threads. For convenience, CUDA threads are grouped together into one-, two- or three-dimensional structures called blocks. Blocks, similarly to threads, are organised into a one-, two- or three-dimensional structure called the grid. Each thread within a block can be identified using its thread index. The block index and the block dimension are also accessible to each thread and can be used to compute a global index for each thread within the grid. The number of blocks in the grid is usually dictated by the size of the problem a programmer tries to solve. The number of threads in a block is limited by the amount of resources requested by the kernel. The number of blocks in the grid and threads in the block used to execute a given kernel is called the execution configuration. Carefully selecting values for these parameters can have a large effect on the overall performance of the code.

Platform Model

The CUDA Platform Model is depicted in Figure 2.1. NVIDIA GPUs are built from an array of multithreaded Streaming Multiprocessors (SMs). Each SM consists of a large number of CUDA cores (192 on the Kepler architecture). When a kernel is launched, each block is allocated to a single SM and remains there for its lifetime. Blocks execute concurrently and each SM executes a large number of threads at the same time. Instructions are pipelined to leverage instruction-level parallelism within a single thread. The mapping between the execution model and the platform model is depicted in Figure 2.2. Threads execute in groups of 32 called warps. Multiple warps can execute in parallel on a single SM. All threads in a warp start at the same program address and execute the same instructions, but maintain individual program counters and their execution paths can diverge. This is known as the Single Instruction, Multiple Threads (SIMT) paradigm.
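The global index computation described above is commonly written as blockIdx.x * blockDim.x + threadIdx.x in CUDA C. A minimal Python model of this 1D indexing (illustrative only, no GPU required):

```python
# Toy model of CUDA's 1D indexing: each thread derives a global index from
# its block index, the block dimension and its thread index, mirroring the
# common idiom blockIdx.x * blockDim.x + threadIdx.x.
def global_index(block_idx: int, block_dim: int, thread_idx: int) -> int:
    return block_idx * block_dim + thread_idx

# An execution configuration of 4 blocks of 256 threads covers 1024 items,
# each exactly once.
ids = [global_index(b, 256, t) for b in range(4) for t in range(256)]
assert ids == list(range(1024))
```

In a real kernel this index is what each thread uses to select, for example, the column of B and C it is responsible for.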
When threads diverge, all execution paths are serialised until they converge again. It is very beneficial for performance to avoid divergence by writing code with the minimum number of branching instructions. The execution context (program counters, registers, etc.) for each warp is stored on-chip for the lifetime of the warp, which allows warp scheduling to happen at no cost. The number of warps resident on a multiprocessor is limited by the number of registers and the amount of shared memory requested by the kernel. Devices of Compute Capability 3.x have a dedicated L1 cache for each multiprocessor as well as an L2 cache shared between all SMs. Additionally, each multiprocessor has a read-only data cache of 48 KB to speed up reads from device memory. Devices of Compute Capability 3.5 can access it directly, while devices with lower Compute
Capability access it through a texture unit (the cache is hence referred to as the texture cache). The multiprocessors also have a read-only constant cache that is shared by all functional units.

[Figure 2.1: CUDA (labelled in green) and OpenCL (labelled in black) platform models, relating the host, compute device (CUDA-enabled GPU), compute unit (CUDA streaming multiprocessor) and processing element (CUDA core). Reproduced from [7].]

Memory Hierarchy

Global memory resides in device memory and is visible to every thread. It is accessed via 32-, 64-, or 128-byte memory transactions. When a warp of threads reads or writes global memory, the accesses are coalesced into a number of transactions depending on the size of the words accessed and the scatter of the memory addresses. The fewer transactions issued, the higher the bandwidth utilisation. Therefore, it is important to maximise coalescing by following the most favourable access patterns and using data types of sizes meeting the alignment requirements. The best performance is achieved when a warp collectively reads consecutive memory locations aligned at a 128-byte boundary. On devices of Compute Capability 3.x global memory reads are not cached in the L1 cache. In addition, global memory can also be cached in the designated read-only data cache. Local memory is private to each thread and is typically used when register spilling is necessary or when variables are too large to fit in the register space (e.g. arrays). Local memory resides in device memory and has the same high latency and low bandwidth as global memory. On devices of Compute Capability 2.x and higher, local memory accesses are always cached in the L1 and L2 caches. Further, local memory is subject to the same alignment requirements as global memory. Shared memory has a much lower latency than global memory, because it is located on-chip. It is visible to every thread within a given block. To achieve high bandwidth,
shared memory is divided into equally-sized memory banks, which can be accessed simultaneously by a number of threads. Thus, the best memory access pattern for a warp is one in which each thread accesses data from a different bank. Otherwise, a bank conflict occurs and the accesses have to be serialised. A special case occurs when all threads in a warp access the same data element, which can be broadcast so that the accesses do not need to be serialised. There are two addressing modes for shared memory (64-bit and 32-bit), which allow successive words of 64 or 32 bits to map to successive banks. Shared memory is equivalent to a user-managed cache. Constant memory is read-only, resides in device memory and is cached in the constant cache. It can be accessed by all threads. Texture memory is also read-only, resides in device memory and is cached in the texture cache. The texture cache is optimised for 2D spatial locality. It can be accessed by all threads.

2.4.2 OpenCL Programming Model

Open Computing Language (OpenCL) [7] is a framework providing developers with a language based on C99 to implement kernels and execute them across heterogeneous systems of CPUs, GPUs, FPGAs and other hardware accelerators. OpenCL is a unified platform for parallel programming on devices such as high-performance servers, personal computers or even mobile phones.

Kernels, Work-Groups and Work-Items

As on the CUDA platform, OpenCL developers partition their problem into coarse-grain sub-problems and implement kernels to solve them. Submitting a kernel for execution on an OpenCL device defines an index space such that one kernel instance, known as a work-item, is executed for each point in this space. Work-items are aggregated into work-groups, which correspond to CUDA blocks. Work-items in a given work-group execute concurrently on the processing elements of a single compute unit.
Similarly to CUDA, the index space can be 1-, 2-, or 3-dimensional, and work-items can be identified by either their global ID or the combination of a work-group ID and their local ID within the group. The developer specifies the number of work-items required for a given computation. OpenCL allows the programmer to further define the division of work-items into work-groups (the explicit model, useful when work-items need to share Local Memory) or leave this decision to the OpenCL implementation (the implicit model).
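The relationship between global, work-group and local IDs can be sketched in a few lines of Python; this is an illustrative model of a 1D index space in the spirit of OpenCL's get_global_id, get_group_id and get_local_id, not the OpenCL API itself.

```python
# Toy model of a 1D OpenCL index space: a work-item's global ID equals
# group_id * local_size + local_id, and the pair is recovered with divmod.
def to_global(group_id: int, local_size: int, local_id: int) -> int:
    return group_id * local_size + local_id

def to_group_local(global_id: int, local_size: int):
    return divmod(global_id, local_size)   # (work-group ID, local ID)

local_size = 64
assert to_global(3, local_size, 5) == 197
assert to_group_local(197, local_size) == (3, 5)

# Round trip over an index space of 4 work-groups of 64 work-items.
for g in range(4 * local_size):
    grp, loc = to_group_local(g, local_size)
    assert to_global(grp, local_size, loc) == g
```

Under the explicit model the programmer fixes local_size; under the implicit model the OpenCL runtime chooses it, but the same relationship holds.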
Platform Model

The OpenCL Platform Model (largely similar to the CUDA one, as depicted in Figure 2.1) consists of the host, which executes the host program according to its native model and submits commands (memory transfers, synchronisation barriers or kernel launches) to the OpenCL devices it is connected to. Each of the compute devices can contain multiple compute units, which execute a single stream of instructions as Single Instruction, Multiple Data (SIMD) units. Further, each compute unit can have multiple processing elements, which can execute as Single Program, Multiple Data (SPMD) units maintaining their own program counters (equivalent to CUDA SIMT). The host maintains a command-queue to coordinate execution of kernels on the devices. The mapping of the execution model onto the platform model is shown in Figure 2.2.

[Figure 2.2: Mapping the kernel execution model onto the platform model, with OpenCL terms paired with their CUDA counterparts: work-item and CUDA thread; processing element and CUDA core; work-group and CUDA block; compute unit and CUDA streaming multiprocessor; kernel execution instance and CUDA grid; compute device and CUDA-enabled GPU. Reproduced from [24].]

OpenCL is designed for heterogeneous computing and hence does not put any requirements on how the model is implemented. For the purposes of our investigation it is worth taking a look at the AMD Hawaii architecture (corresponding to the tested FirePro W9100). The Hawaii devices have 4 Shader Engines of 11 compute units each. Each compute unit consists of 64 shaders (corresponding to the processing elements).
Each compute unit has its own dedicated on-chip L1 cache and all 4 Shader Engines share an L2 cache [5], which is largely similar to what CUDA offers.

Memory Hierarchy

Global Memory is available to all work-items across all work-groups and corresponds to CUDA global memory. Depending on the capabilities of the device, accesses to this memory may be cached. Constant Memory remains constant throughout the kernel execution and can only be allocated by the host. It corresponds to CUDA constant memory and can similarly be accessed by all work-items. Local Memory is private to each work-group. Depending on the device, it might be implemented as a dedicated region of memory or be mapped onto sections of global memory. It corresponds to CUDA shared memory. Private Memory is the OpenCL equivalent of CUDA local memory and is private to each work-item.

2.4.3 Basic Performance Optimisation Strategies for GPGPU

Maximising Resource Utilisation. It is important to select the execution parameters in such a way as to enable the allocation of a large number of CUDA blocks (work-groups) simultaneously on the device. This includes partitioning the workload into enough chunks and splitting each into an appropriate number of threads (work-items) to keep the resource requirements of each block as small as possible. The SMs (compute units) rely on thread-level parallelism to maximise utilisation of their functional units, therefore it is crucial that the scheduler always has a warp ready to execute. The most common reason why a warp is not ready to execute is that its operands need to be fetched from memory and are not yet available. Readiness can also be limited by dependencies between instructions (one thread waiting for another thread to write some data). By ensuring that a large number of blocks can reside simultaneously on each compute unit, we explicitly facilitate latency hiding by providing a large pool of warps ready for execution.
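These residency limits can be illustrated with a toy occupancy estimate. The per-SM figures below are illustrative placeholders loosely in the range of a Kepler-class multiprocessor, not exact hardware values.

```python
# Toy estimate of how many blocks can be resident on one SM at once, limited
# by the registers and shared memory each block requests. The per-SM limits
# here are illustrative assumptions, not exact hardware figures.
def resident_blocks(regs_per_thread, smem_per_block, threads_per_block,
                    sm_regs=65536, sm_smem=49152, sm_max_threads=2048):
    by_regs = sm_regs // (regs_per_thread * threads_per_block)
    by_smem = sm_smem // smem_per_block if smem_per_block else float("inf")
    by_threads = sm_max_threads // threads_per_block
    return min(by_regs, by_smem, by_threads)

# A lean kernel: many blocks fit, giving the scheduler plenty of warps.
assert resident_blocks(32, 0, 256) == 8
# Heavy register use cuts residency, and with it the pool of ready warps.
assert resident_blocks(128, 0, 256) == 2
```

The same trade-off recurs throughout this thesis: optimisations that spend registers or shared memory to save memory traffic also shrink the pool of warps available for latency hiding.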
The number of registers and the amount of shared memory (local memory) used by each thread (work-item) can also limit the number of warps able to reside simultaneously on a multiprocessor. Maximising Memory Throughput. The techniques for optimising memory throughput rely on efficient use of shared memory (local memory in OpenCL), the L1/L2 caches, the texture cache and the constant cache, as well as minimising the number of reads from device memory. Further, to fully utilise the available memory bandwidth one needs to coalesce
and align all accesses to device memory in order to issue the smallest possible number of memory transactions and reduce the overhead of instruction replays. For CUDA devices of Compute Capability 2.x and higher the same on-chip memory is used for both shared memory and the L1 cache. The proportion of memory dedicated to each space can be configured for each kernel call. Maximising Instruction Throughput. Instruction throughput can be increased by minimising thread divergence within warps, reducing the number of instructions, and by trading precision for speed through the use of single rather than double precision arithmetic or intrinsic functions (less accurate but faster versions of standard arithmetic functions). In this thesis we intend to investigate both single and double precision, but are not in a position to make an argument as to whether the precision of a fluid flow simulation can be traded off for speed.

2.5 cublas, clblas and cusparse

The most recent release of PyFR targets the CUDA and OpenCL platforms and defaults to the use of the cublas and clblas GEMM for matrix multiplication operations when executing on GPUs. These libraries are widely considered to be the best available GPU BLAS alternatives and hence are the subject of this investigation. cublas is NVIDIA's highly optimised dense BLAS library [4], which is the most popular choice for CUDA GPUs. Regretfully, cublas is not open-source and the implementation details are hidden from developers. However, through profiling one can develop an intuition about the basic techniques used by cublas to provide a fast GEMM implementation. As expected, we notice a high utilisation of shared memory by cublas, which suggests the routine employs some form of tiling to perform the matrix product. clblas is the popular OpenCL dense BLAS implementation [1], which can run across a variety of hardware accelerators.
It is open-source and therefore provides developers with complete information regarding the implementation of the matrix multiplication routine. clblas is particularly interesting as it can tune its performance to a particular system. The auto-tuning utility comes packaged with the library and works by executing different variants of the GEMM routine in search of the one that performs best on the given system. In our investigation we tuned the clblas library prior to running the benchmarking suite on all devices.
cusparse is NVIDIA's sparse BLAS library for CUDA [6]. It is designed to provide the best performance for GEMM in cases where the matrices are largely sparse. It requires the matrices to be stored in a compressed format such as Compressed Sparse Row (CSR) or Coordinate List (COO), but this does not pose a significant overhead as the matrices in our problem remain constant for the duration of the simulation. One might think that cusparse should be preferred over cublas as the provider of matrix multiplication kernels in PyFR for simulations running on hexahedral and quadrilateral meshes (these element types correspond to sparse operator matrices). However, Georgiou [7] found that the use of cusparse for small matrices like those used in PyFR delivers worse performance than the dense cublas GEMM. For this reason we will not consider cusparse in this investigation.
Chapter 3 Related Work

General Matrix Multiplication is a well-studied problem and many techniques exist that deliver near-peak performance for the typical use cases. We give an overview of these techniques in Section 3.1. It is far less common to challenge the implementation of BLAS libraries in order to deliver higher-performance GEMM routines. Our investigation is motivated by a particular application in PyFR, which means that the operator matrices and the type of the matrix product have certain characteristics (size, sparsity, etc.) that are known in advance and can be exploited in order to outperform the state-of-the-art GEMM implementations. Sections 3.1 and 3.2 present some of the most fundamental optimisation techniques underlying the implementation of modern high-performance BLAS libraries on various platforms. Section 3.3 outlines an approach taken in the development of the SD++ solver (the first solver for Flux Reconstruction schemes), where the authors develop bespoke matrix multiplication kernels to grant performance improvements to their application. Section 3.4 summarises an approach to implementing high-performance, small-scale BLAS by generating code for fixed-size linear algebra expressions, where the matrices in question are also known in advance. Section 3.5 gives examples of other work in the field of linear algebra on the GPU platform, which suggests a series of techniques for achieving high-performance matrix multiplication kernels. The methodology used in our investigation will be further discussed in Chapter 4 and the evaluation of our findings in Chapter 5.

3.1 Anatomy of High-Performance Matrix Multiplication

A large amount of work on General Matrix Multiplication (GEMM) routines has been based on the findings of Goto and van de Geijn [8] from 2008. Although their paper is most applicable to CPUs, it gives some invaluable insights into how the matrix product can be efficiently engineered on any platform.
The authors illustrate a layered approach
to implementing GEMM, capable of optimally amortising the additional cost of moving data between different levels of the memory hierarchy. They show how the implementation of matrix multiplication can be decomposed into multiplications with submatrices (known as blocking). The computation is cast into multiple calls to inner-kernels, which perform the multiplication of blocks. The authors identify six inner-kernels that are considered as building blocks for high-performance GEMM, one of which is the block-by-panel type of matrix product.

[Figure 3.1: Two (out of six) ways suggested by Goto in which the general matrix multiplication C = AB can be successively decomposed; both use a block-by-panel matrix product at the last level of decomposition. Reproduced from [8].]

Figure 3.1 illustrates a decomposition of GEMM in two (out of the total six) ways suggested by Goto, which utilise block-by-panel matrix product kernels at their lowest levels. The idea presented in Goto's paper is that if the lowest-level kernels can attain high performance, then so will the main case of GEMM. Further, other linear algebra operations can often be cast in terms of these special-shape matrix multiplication inner-kernels. In the aforementioned paper the authors explain how the sizes of the matrices used in the inner-kernels should be chosen to allow for high utilisation of the memory caches. Packing of submatrices into contiguous memory is shown to improve the utilisation of the TLB and serves as a means of bringing the entries of the matrices into the memory cache. Unfortunately, these optimisations are of little relevance to the GPU platform, as the GPUs do not use virtual memory and the L2 cache is so highly contested by thousands of parallel threads that it is difficult to predict its performance.
However, the GPU's shared memory (known as local memory in OpenCL) can be used as an explicitly managed cache. Further, tiling for register reuse is also considered in the paper and demonstrated to bring performance gains to the kernels. The paper by Goto and van de Geijn identifies a series of key factors for achieving a high-performance GEMM implementation on any platform and stresses the importance of the six inner-kernels as building blocks of GEMM in the implementation of various other routines such as the level-3 BLAS.
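The layered decomposition can be sketched as a NumPy toy in which C = AB is accumulated from block-by-panel inner-kernel calls. In a real implementation the block sizes would be chosen to fit the target cache; the sizes here are arbitrary.

```python
import numpy as np

# Goto-style blocking sketch: C = A @ B computed as a sum of block-by-panel
# products, A[row-block, k-block] times B[k-block, :]. Block sizes mb and kb
# are arbitrary here; a tuned implementation picks them to fit the cache.
def blocked_gemm(A, B, mb=32, kb=16):
    m, k = A.shape
    C = np.zeros((m, B.shape[1]))
    for i in range(0, m, mb):
        for p in range(0, k, kb):
            # Inner kernel: a small block of A times a panel of B.
            C[i:i + mb] += A[i:i + mb, p:p + kb] @ B[p:p + kb]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((96, 64))    # operator-sized block
B = rng.standard_normal((64, 200))   # wide data panel
assert np.allclose(blocked_gemm(A, B), A @ B)
```

In our setting the operator matrix A is small enough that the whole of it plays the role of a single block, which is exactly the block-by-panel inner-kernel case.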
3.2 Automatically Tuned Linear Algebra Software

Hand-optimised BLAS libraries are expensive and time-consuming to produce, hence they only exist where there is a large enough market to justify the costs. To build a high-performance BLAS library a programmer needs to know all the details of the memory hierarchy, cache sizes, functional units, registers, etc. of the targeted platform. Unfortunately, some vendors do not expose these details, making it much harder to write high-performance code. In 1998 Clint Whaley and Jack Dongarra suggested a solution to this problem: the Automatically Tuned Linear Algebra Software (ATLAS) [2]. The authors suggest using code generation coupled with timing routines to perform a search for the optimal implementation of GEMM on a given hardware platform. The methodology for building GEMM is similar to the one given by Goto in [4] and described in Section 3.1. Whaley suggests that the GEMM routine can be built out of smaller inner-kernels, which need to be optimised for a given platform. ATLAS performs the generation of inner-kernels during the library installation process (auto-tuning). The process times the execution of a number of different kernels to find the one that performs best in the given scenario. As discussed in Section 2.5, clblas comes with auto-tuning software, which is able to adjust the library's performance to the given hardware platform. According to Whaley, code generation needs to account for numerous parameters such as blocking factors, loop unrolling depths, software pipelining strategies, loop ordering, register allocation and instruction scheduling. Further, it needs to be aware of the hardware characteristics in order to guide its search through the parameter space. The hardware specification can be given by users or approximated through microbenchmarking. The authors identify a set of problems associated with this approach.
Firstly, it is impractical for the auto-tuning facility to exhaustively test all kernels in the parameter space, hence the optimal solution may not always be found and depends on the information the installer has about the hardware characteristics. They also stress the problem of executing GEMM on very small problem sizes, where the naive 3-loop implementation can outperform the optimised code. Lastly, the authors discuss strategies for generating the cleanup code that takes care of data falling outside of the tiling boundaries. The size of the tiles is known during code generation, hence it is possible to generate cleanup code for all possible shapes of tiles with dimensions smaller than the blocking size. However, this approach does not scale well and would produce large binaries. The proposed solution is a compromise between generating a number of optimised routines for assumed tile shapes and a complementary set of generic routines which accept general tile parameters but exhibit worse performance.
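The generate-time-select loop at the heart of this approach, including a cleanup loop for data falling outside the unrolling boundary, can be sketched in Python. This is a toy auto-tuner over unroll factors for a dot product; real auto-tuners such as ATLAS search far larger parameter spaces.

```python
import time

# Generate candidate "kernels": dot products manually unrolled by a factor,
# with a cleanup loop for the remainder outside the unrolling boundary.
def make_kernel(unroll):
    def kernel(xs, ys):
        acc = 0.0
        n = len(xs) - len(xs) % unroll
        for i in range(0, n, unroll):
            for u in range(unroll):          # unrolled body
                acc += xs[i + u] * ys[i + u]
        for i in range(n, len(xs)):          # cleanup code
            acc += xs[i] * ys[i]
        return acc
    return kernel

# ATLAS-style selection: time every candidate and keep the fastest.
def autotune(candidates, xs, ys, reps=5):
    def cost(k):
        t0 = time.perf_counter()
        for _ in range(reps):
            k(xs, ys)
        return time.perf_counter() - t0
    return min(candidates, key=cost)

xs = ys = [1.0] * 1000
candidates = [make_kernel(u) for u in (1, 2, 4, 8)]
best = autotune(candidates, xs, ys)
assert best in candidates
assert best(xs, ys) == 1000.0
```

Which candidate wins depends on the machine, which is the point: the search replaces the hand-tuning a vendor library would otherwise require.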
3.3 SD++ Solver

Castonguay et al. in their paper [4] describe the development of a GPU-enabled compressible viscous flow solver which utilises the Flux Reconstruction approach. The authors found that custom kernels for matrix multiplication gave a 4% performance increase compared to the cublas GEMM library. The proposed solution made use of the texture memory (cached on chip) and shared memory to benefit from data reuse. According to Castonguay, the win over cublas can be attributed to the three following factors:

- The use of custom kernels made it possible to reduce the total number of fetches from global memory. The kernels could also be modified to perform multiple algebraic transformations on the entries of B and C without the need to launch any separate kernels; this, however, did not maintain the GEMM interface.
- Since the operator matrices are small and remain constant throughout the computation, they could be loaded into the texture memory, granting very fast access times.
- Due to heavy data reuse, elements of B were loaded into shared memory prior to each multiplication. Shared memory is significantly faster than global memory and can act as a fast user-managed cache.

The proposed algorithm worked in the following way. The B matrix was naturally partitioned along its width into cells. Each block of threads was assigned at least one cell. Each thread within a block was responsible for computing an entire row of the output matrix C. As claimed by the authors, this approach, as opposed to the intuitive one thread per element of C, increased the register pressure but decreased the number of fetches of A from global memory and also allowed for a higher degree of instruction-level parallelism. Two different methods were adopted to perform the multiplication depending on whether the operator matrix was dense or sparse. As an optimisation in the dense case, matrix A was stored in the texture memory in column-major form.
This ensured that all threads within a warp accessed sequential locations of the texture memory. In the sparse case, the ELLPACK format was used to pack the operator matrices into contiguous memory, which made it possible to reduce the number of floating-point operations performed by the kernels by eliminating the zero entries. Castonguay did not investigate the possibility of increasing the performance of their kernels through hierarchical tiling. We believe that tiling the matrix product could reduce the number of registers required by each kernel and hence increase the
occupancy, as the demand for resources would drop. This optimisation has the potential to decrease the register pressure while simultaneously keeping the number of memory fetches the same and maintaining the same degree of instruction-level parallelism.

3.4 Basic Linear Algebra Compiler

Spampinato and Püschel [9] have identified the need for high-performance, small-scale basic linear algebra computation, which is currently unattainable with the available state-of-the-art vendor BLAS libraries, as these perform best for large problem sizes. The authors present the Basic Linear Algebra Compiler (LGen), which takes as input a fixed-size linear algebra expression and outputs a highly optimised C function. The code generation consists of three steps. First, a DSL description of the problem is input and a tiling decision (possibly hierarchical) is made based on the sizes of the matrices. The input gets translated into a second DSL, which encodes the tiling decisions and makes the access patterns explicit. In step two, loop-level optimisations are performed. These optimisations do not require any sophisticated analysis since they utilise the mathematical representation of the computation. Loop-level optimisations employed in LGen include loop merging and loop exchange. The second DSL representation is then translated into a C intermediate representation (C-IR). In the third stage, code-level optimisations such as loop unrolling and translation into SSA form are performed. Lastly, the C-IR code is unparsed into C. The performance results obtained using the generated code are fed back to the generator, which uses auto-tuning to refine its initial tiling decisions and perhaps produce even faster code. The performance of the code produced by LGen for small matrices is competitive with, and often better than, that of other available libraries.
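A toy illustration of such fixed-size code generation: given a matrix known at generation time, emit fully unrolled source with the entries embedded as literals and multiplications by zero dropped. For ease of testing the sketch emits Python rather than C; a generator like LGen, or the one developed in this thesis, emits C or CUDA/OpenCL C instead.

```python
# Toy fixed-size code generator: the operator matrix is known when the code
# is generated, so its entries become literals in a fully unrolled
# matrix-vector product and zero entries contribute no instructions at all.
def generate(A, name="kernel"):
    lines = [f"def {name}(b):"]
    for i, row in enumerate(A):
        terms = [f"{a!r}*b[{j}]" for j, a in enumerate(row) if a != 0.0]
        lines.append(f"    c{i} = {' + '.join(terms) if terms else '0.0'}")
    lines.append(f"    return [{', '.join(f'c{i}' for i in range(len(A)))}]")
    return "\n".join(lines)

A = [[2.0, 0.0, 0.5],
     [0.0, 0.0, 0.0],
     [1.0, 3.0, 0.0]]
src = generate(A)
assert "b[1]" not in src.split("\n")[1]   # zero entry in row 0 eliminated

namespace = {}
exec(src, namespace)                      # compile and run the generated code
assert namespace["kernel"]([1.0, 2.0, 4.0]) == [4.0, 0.0, 7.0]
```

Embedding the entries removes all loads of A at run time, and sparsity elimination removes the useless flops, at the cost of code size growing with the matrix, which is acceptable precisely because the operators here are small and fixed.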
Knowing the size and structure of the matrices in question prior to the computation directly corresponds to the problem addressed in this thesis. The authors of LGen suggest that exploiting the structure of the matrices can have a significant effect on performance, but leave this claim without any empirical evidence. Further, the optimisations employed in LGen are largely designed for the CPU platform and hence cannot be directly ported to the GPUs. Nevertheless, the authors present a number of insights which can prove invaluable when building a matrix multiplication kernel generator for our purposes. For the purposes of the FR schemes and PyFR it is particularly desirable to optimise the matrix-matrix multiplication routine alone, rather than a combination of linear algebra expressions. Nevertheless, as already identified by Castonguay [4], there is scope to combine the application of the matrix product with some of the point-local operations performed by PyFR during each time step of a simulation. While this might expose an
opportunity for numerous software optimisations to be applied and bring performance improvements to the solver, it could also largely reduce the clarity of the implementation.

3.5 Other Work

In 2010 Nath et al. [2] identified the factors necessary for improving the performance of the cublas GEMM routine once the Fermi architecture had been introduced. We recognise that our study is performed on the newer Kepler architecture, which differs significantly from the one considered by Nath, hence their suggested implementation may not be faster than the up-to-date version of cublas. Nevertheless, the points made by the authors remain valuable and relevant. Nath suggests that since registers are much faster than shared memory it might be beneficial to tile for them and hence increase the reuse of data that already resides in the fastest available memory. This might be even more true on the newer architecture, with its largely increased number of registers available to each thread. The authors also recognise the benefits of using texture memory to store entries of the operator matrix. Despite solving a different problem, Jhurani et al. [] also recognise the difficulty of reaching peak performance with cublas GEMM when multiplying small matrices, due to the reduced reuse of elements once they have been copied from global memory into shared memory. This insight reinforces the idea that in the case of small operator matrices the tiling choices are crucial and can have a large impact on performance. It is critical to maximise the reuse of data once it is copied into registers, and selecting an appropriate tiling scheme with multiple levels of blocking can prove invaluable in developing a fast matrix multiplication routine for small matrices. In this investigation we will explore software optimisation techniques which should realise this objective.
Chapter 4

Methodology

Due to the vast domain of numerical method choices available for each PyFR simulation, which dictate the characteristics of the operator matrices used (see Section 2.3), we require the matrix multiplication kernels to be auto-generated rather than written and optimised by hand. To accomplish this we have developed GiMMiK, an open-source Python library capable of generating matrix multiplication kernel code for CUDA and OpenCL. As part of the preliminary analysis we considered a series of software optimisations which we speculated would improve the performance of our kernels and enable us to outperform cublas and clblas. These optimisations include: using the constant memory to store the operator matrix, reducing common sub-expressions, sparsity elimination, and avoidance of the cleanup code. We have taken a systematic approach to evaluating each of the proposed optimisations in order to find the successful ones and incorporate them into GiMMiK. By design, each of our kernels computes an entire matrix-vector product. Loop unrolling serves as a basis for all of the applied optimisations. We note that the desired matrix product is of the form C ← αAB + βC and the values of α and β need to be handled with care. Generating kernels with embedded values of the operator matrix allows us to pre-multiply them with α to reduce the required
number of floating-point operations. A special case occurs when β = 0, where we do not need to load entries of the output matrix at all. For this reason, each of the experiments mentioned in this thesis is run twice to investigate the effects of our optimisations in both cases, β = 0 and β ≠ 0. In Section 4.2 we propose and experimentally evaluate the four aforementioned ways in which we believe our kernels can outperform BLAS GEMM. Subsequently, we pick the successful optimisations and incorporate them into GiMMiK as described in Section 4.3. Section 4.1 describes the experimental setup used to benchmark our kernels.

4.1 Benchmarking Infrastructure

We have developed a C++/Python benchmarking infrastructure in order to evaluate the performance of our bespoke kernels and compare it with the cublas and clblas GEMM implementations. The set of matrices used for benchmarking was extracted from the PyFR solver for quadrilateral, hexahedral, triangular and tetrahedral meshes across 6 orders of accuracy. The full specification of the matrices is available in Appendix A. The memory layout, the operator matrices and the problem size were set up to mirror those in PyFR. All matrices used in benchmarking are stored in row-major order and are padded to ensure coalesced and aligned memory accesses by all threads. The operand and output matrices were selected to be 50,000 elements wide to provide a representative problem size. Throughout a simulation, PyFR keeps the operand matrix in device memory at all times. It is copied onto the device only at the beginning of the simulation, hence the cost of memory transfers from the host to the device can be neglected for the purpose of our investigation. Further, the time required to generate and compile our bespoke kernels can also be neglected, as it is significantly smaller than the time required to complete any meaningful fluid flow simulation.
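The row padding mentioned above follows a simple rounding rule; the helper below is an illustrative sketch (not GiMMiK's actual code), assuming rows are padded to a multiple of the 32-thread warp width so that each row starts on an aligned boundary:

```python
def padded_width(width, alignment=32):
    """Round a row length up to the nearest multiple of the alignment
    so that every row starts on an aligned boundary and a warp's
    accesses to consecutive elements are coalesced."""
    return -(-width // alignment) * alignment  # ceiling division
```

For example, a 90-element row would be padded to 96 elements, while an already aligned width is left unchanged.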
The timing of the kernels is done in a standard way using CUDA and OpenCL profiling events. Every reported value in this investigation is an average of 3 kernel executions and is reproducible within 2%. The average was taken to reduce the random error in our observations. 3 un-timed runs were executed before each timing run to eliminate bias due to idle under-clocking of the GPUs. The benchmarks were executed on the NVIDIA Tesla K40c and GeForce GTX 780 Ti, which share the same Kepler (compute capability 3.5) architecture, and the AMD FirePro W9100. The full specification of the devices, including the versions of the OpenCL and CUDA runtimes, is given in Table 4.1. To provide a comprehensive suite of results, each benchmarking run was executed in single and double precision with β = 0 as well as β ≠ 0.
                           Tesla K40c    GTX 780 Ti    FirePro W9100
    Architecture           Kepler        Kepler        Hawaii
    Compute Capability     3.5           3.5           N/A
    ECC                    ON            N/A           N/A
    Peak Memory Bandwidth  288 GB/s      336 GB/s      320 GB/s
    Peak Double-Precision  1.43 TFLOPs   210 GFLOPs    2.62 TFLOPs
    Peak Single-Precision  4.29 TFLOPs   5.04 TFLOPs   5.24 TFLOPs
    CUDA Version           6.0           6.0           N/A
    OpenCL Version         2.0           2.0           2.0

Table 4.1: Specification of the experimental hardware.

4.2 Optimisations Assessment

The aim of this section is to systematically evaluate four ways in which we believe our kernels can outperform cublas and clblas GEMM in a block-by-panel type of matrix multiplication. For each of these optimisations we have designed an experiment to test whether it can grant the speculated performance benefit. The generated kernels were benchmarked using the infrastructure described in Section 4.1, with the exception that data was only gathered for the CUDA platform on NVIDIA GPUs, as NVIDIA has discontinued support for profiling OpenCL kernels on CUDA GPUs in the latest versions of the toolkit. After evaluating the experimental results we select the successful optimisations and incorporate them into our final solution, GiMMiK.

4.2.1 Value Embedding

Hypothesis

We speculate that encoding the values of the operator matrix directly in the kernel code, as opposed to loading them from memory, will improve the performance of our kernels. It is expected to reduce the required number of accesses to global memory and expose the opportunity for the compiler to efficiently reuse values of the operator matrix by retaining them in registers (i.e. eliminating the memory access latency entirely). Further, embedding values in the code is realised by storing them in the constant memory. This is advantageous as the compiler has explicit knowledge of the usage pattern of those values and can find an optimal memory storage layout to minimise bank conflicts and best utilise the constant cache.
With this approach, for the case of small matrices, the entries of the column of the operand matrix required by each thread are also likely to remain in registers and be reused efficiently while computing the entire column of the output matrix.
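To make the idea concrete, the sketch below (a hypothetical helper, not GiMMiK's actual source) shows how a generator can fully unroll one output entry and embed the α-scaled operator values as literal constants, so the compiled kernel reads A from the instruction stream and constant bank rather than from a global-memory array:

```python
def embed_row(r, row, alpha=1.0):
    """Emit one fully unrolled CUDA statement for output entry r,
    embedding the alpha-scaled operator values as literal constants
    instead of loads from a global-memory array."""
    terms = ' + '.join('%r * b[i + %d * bstride]' % (alpha * v, k)
                       for k, v in enumerate(row))
    return 'c[i + %d * cstride] = %s;' % (r, terms)
```

Using `%r` preserves the full decimal representation of each double, and pre-multiplying by α at generation time removes the scaling multiplications from the kernel entirely.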
Experiment

To verify our assumption we generate and benchmark two types of kernels. The first type reads the operator matrix from global memory. On the Kepler architecture global memory loads are not cached in the L1 cache, making the read-only data cache the only suitable way of caching this data. This cache is known as the texture cache on older architectures and has to be explicitly managed by the programmer. The CUDA compiler (nvcc) will convert loads into loads cached in the read-only data cache whenever it can; hence, to better control the experimental variables, we use it explicitly. The second type of kernel has the values embedded in the code and relies on the constant memory and the constant cache to bring them efficiently into registers.

Discussion

Experimental results show that for a majority of the benchmark matrices we achieve a significant speedup by embedding values directly into the kernel code. The cases where the reported execution times of the non-embedding kernels are smaller are mostly within the 2% reproducibility margin and hence considered insignificant. Figure 4.1 illustrates the relative performance of these two types of kernels in double precision for β = 0. The complementing set of graphs for single precision and β ≠ 0 is available in Figure C.1. Raw experimental data for this experiment is available in Appendix B.

Figure 4.1: Comparison of kernels embedding values of the operator matrix in the code and those accessing it through the texture unit, for double precision and β = 0 on (a) the Tesla K40c and (b) the GTX 780 Ti. Positive and neutral effects of value embedding on the set of benchmark matrices are indicated in green.

Unfortunately, NVIDIA does not provide details of the specification of the constant cache, hence any explanation of these results is at best speculative. The experimental data suggests that the constant cache offers better performance and locality for the type of memory accesses we require.
This is most likely realised through the
compiler, which is able to pick an optimal memory storage layout for the constants in order to avoid memory bank conflicts and fully utilise the constant cache. Profiling reveals that whenever we achieve a particularly large speedup through embedding values in the code, the non-embedding kernels attain a low hit rate in the read-only data cache. This might be due to a particularly unfavourable sparsity pattern, which causes the data to be frequently evicted from the cache. To neutralise the effects of particular sparsity patterns on the performance of the cache, we could attempt to pack the operator matrix into a contiguous chunk of memory, as suggested by Goto [8]. However, the experimental data for purely dense matrices shows that even when the matrix is stored in a contiguous block of memory the performance of the constant memory/cache is superior to that of the read-only data cache.

4.2.2 Common Sub-Expression Elimination

Hypothesis

Due to the mathematical formulation of the solution points in various element types, values along the rows of the operator matrix occasionally repeat. This exposes the opportunity to reduce common sub-expressions and save on the number of floating-point operations required to compute the matrix product. This can be achieved by summing up the elements of the operand matrix corresponding to the repeated values prior to multiplication by the common term, as the following example illustrates:

    c_ij = ... + a*b_xj + ... + a*b_yj + ... = ... + a*(b_xj + b_yj) + ...

which shows how one multiplication by the common term a can be eliminated. The reduction of common sub-expressions can bring further performance gains to our kernels.

Experiment

To examine whether common sub-expression reduction is beneficial we benchmark two types of kernels, both of which embed all values from the operator matrix directly in the code and eliminate all multiplications by zeros.
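A generator can discover such repeated values by grouping each row's column indices by value; the sketch below (an illustration, not GiMMiK's implementation) returns the groups from which a*(b_xj + b_yj + ...) terms would be emitted:

```python
from collections import defaultdict

def group_common_terms(row):
    """Map each distinct non-zero value in a row of the operator
    matrix to the list of columns where it occurs; every group of
    size n > 1 saves n - 1 multiplications."""
    groups = defaultdict(list)
    for col, val in enumerate(row):
        if val != 0.0:
            groups[val].append(col)
    return dict(groups)
```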
Only the second kernel reduces common sub-expressions, by summing up entries of B prior to the multiplication by the value from A.

Discussion

Experimental data revealed that common sub-expression elimination did not bring the expected performance improvement. The majority of the kernels carrying out the elimination perform worse than, or within the 2% reproducibility margin of, those that do not. Figure 4.2 illustrates the relative performance of these two types of kernels in double precision for β = 0. The complementing set of graphs for
single precision and β ≠ 0 is available in Figure C.2. Raw experimental data for this experiment is available in Appendix B.

Figure 4.2: Comparison of kernels eliminating common sub-expressions from the operator matrix and those performing no such optimisation, for double precision and β = 0 on (a) the Tesla K40c and (b) the GTX 780 Ti. Positive effects of common sub-expression elimination on the set of benchmark matrices are indicated in green.

When explicitly eliminating floating-point multiplications from the kernel by combining common sub-terms together, we trade off the possibility for the compiler to efficiently reuse elements of the operand matrix. To illustrate this, consider an operator matrix in which an arbitrary constant a appears in one row at columns 0 and 3, and in another row at columns 0 and 1:

    A = ( a  0  0  a  0  ... )
        ( a  a  0  0  0  ... )
        ( ...                )

When eliminating common sub-expressions we would generate the following two subterms when computing the col column of the output:

    subterm_1 = b[0 * bstride + col] + b[3 * bstride + col]
    subterm_2 = b[0 * bstride + col] + b[1 * bstride + col]

The two sums have a common term b[0 * bstride + col], which will be loaded from memory twice, unless it is found in the cache or the compiler has kept it in a register when the second sum is computed. Profiling has revealed that reduction of common sub-expressions increased the memory traffic and the number of global memory load and
store instructions executed by our kernels. This suggests that the compiler neither changes the order in which these sums are computed to increase the likelihood of hitting in the cache, nor retains sub-expressions in registers for later reuse. Further, we note an increased number of subterms generated for kernels with reduced common sub-expressions, which increases the register pressure and causes the kernels to demand more resources from the GPU. This, again, has a negative impact on performance and further explains why this optimisation is not a successful one. However, Figures 4.2 and C.2 show that on the GTX 780 Ti a significantly larger fraction of the kernels benefit from common sub-expression elimination. This is because the card offers higher memory bandwidth than the Tesla K40c, hence it is more forgiving of the extra memory loads this optimisation generates. Further, in the case of double precision, the performance cap placed on the floating-point rate means that any reduction in the number of executed arithmetic operations has the potential to bring large performance gains.

4.2.3 Sparsity Elimination

Hypothesis

Thanks to full unrolling we can eliminate all sparsity from the operator matrix by considering only the non-zero entries in the multiplication. In PyFR, quadrilateral and hexahedral element meshes correspond to matrices with large sparsity factors due to the tensor-product formulation of the solution points. We hypothesise that while BLAS GEMM is typically limited by the floating-point performance of the device, decreasing the number of required arithmetic operations (combined with an efficient memory access pattern) can decrease the running time of our kernels.

Experiment

To verify the effectiveness of sparsity elimination we compare two types of kernels, both of which embed all values from the operator matrix directly in the code. All multiplications by zeros were eliminated from the second kernel only.
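In generated code, eliminating sparsity amounts to skipping zero entries while unrolling; the following minimal sketch (a hypothetical emitter mirroring the idea, not GiMMiK's exact code) produces the statement for one output entry from the non-zero operator values only:

```python
def emit_row_sparse(r, row):
    """Emit the statement for output entry r considering only the
    non-zero operator values, so multiplications by zero never
    appear in the kernel source."""
    terms = ['%r * b[i + %d * bstride]' % (v, k)
             for k, v in enumerate(row) if v != 0.0]
    return 'c[i + %d * cstride] = %s;' % (r, ' + '.join(terms) or '0.0')
```

A row of all zeros collapses to a single constant store, which corresponds to the case of whole zero columns discussed below where entire rows of the operand matrix never need to be read.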
Both types of kernels are benchmarked using the infrastructure described earlier to verify the effectiveness of the proposed optimisation.

Discussion

In the case of sparse matrices, elimination of zero entries allows us to greatly decrease the number of floating-point operations performed by the kernel. Figure 4.3 illustrates the effectiveness of sparsity elimination in double precision for β = 0. The complementing set of graphs for single precision and β ≠ 0 is available in Figure C.3. We note that this optimisation has no effect on dense kernels. Raw experimental data for this experiment is available in Appendix B. This optimisation increases the utilisation of the available memory bandwidth and allows our kernels to execute significantly faster. Profiling has revealed that through the
Figure 4.3: Comparison of kernels eliminating sparsity from the operator matrix and those performing all multiplications by zeros, for double precision and β = 0 on (a) the Tesla K40c and (b) the GTX 780 Ti. Positive and neutral effects of sparsity elimination on the set of benchmark matrices are indicated in green.

removal of unnecessary floating-point operations our kernels become fully bound by the available memory bandwidth in both single and double precision. Further, some of our benchmark matrices contain whole columns of zeros, which effectively means that the dimensionality of these matrices decreases and entire rows of the operand matrix do not need to be loaded from memory at all. This not only further decreases the number of arithmetic operations but also reduces the amount of memory that has to be read by our kernels. Naturally, the fully dense triangular and tetrahedral element matrices do not benefit from this optimisation. The raw experimental data indicates a strong positive correlation between the size and sparsity factor of the matrices and the speedup achieved through the removal of multiplications by zero.

4.2.4 Cleanup Code

Hypothesis

Lastly, we recognise that the tiling scheme used by cublas and clblas might be a limiting factor. GEMM implementations in BLAS libraries often utilise some form of tiling to efficiently reuse data from the cache, registers or, in the case of GPUs, shared memory. Poor tiling choices result in the need to execute cleanup code over the elements of the matrix that fall outside the tile boundaries (as illustrated in Figure 4.4). This cleanup code is known to be poorly optimised. For small operator matrices such as those used in PyFR, the performance penalty due to poor tiling choices can be particularly significant. Our fully unrolled kernels do not incur any of these costs.
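The cost of the cleanup path grows with the fraction of partial tiles; the helper below (an illustrative sketch with an assumed tile shape, not taken from any BLAS source) counts full versus partial tiles for a given blocking factor:

```python
def tile_counts(m, n, tile_m, tile_n):
    """Count (full, partial) tiles when an m x n matrix is blocked
    into tile_m x tile_n tiles; the partial fringe tiles are the
    ones a tiled GEMM must handle with cleanup code."""
    full = (m // tile_m) * (n // tile_n)
    total = -(-m // tile_m) * -(-n // tile_n)  # ceiling divisions
    return full, total - full
```

For a 96 x 64 operator matrix and hypothetical 32 x 32 tiles every tile is full, whereas a small matrix such as 9 x 9 consists of a single partial tile, so the cleanup path handles the entire product.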
Figure 4.4: Diagram of a blocked matrix product C = AB. The red area represents partial tiles, whose dimensions are not a multiple of the blocking factor. These partial tiles are multiplied using the cleanup code.

Experiment

To examine what effect the cleanup code in cublas has on performance, we benchmark fully unrolled kernels, with sparsity eliminated and values of the operator matrix embedded in the code, against BLAS GEMM and against a naive 3-loop implementation of the matrix product. The naive implementation serves as a reference to determine the extent to which the performance of BLAS is dominated by the cleanup code.

Discussion

Figure 4.5 illustrates the performance of cublas GEMM and the naive 3-loop implementation in double precision for β = 0. The complementing set of graphs for single precision and β ≠ 0 is available in Figure C.4 and all experimental data for this experiment is available in Appendix B. We see that for a large fraction of the smaller benchmark matrices the performance of cublas GEMM is often worse than that of a naive matrix product. The analysis of the experimental data confirms our conjecture that the performance of BLAS GEMM is heavily affected by the poorly optimised cleanup code for small matrices.

Figure 4.5: Comparison of NVIDIA cublas and the naive 3-loop matrix multiplication kernel for double precision and β = 0 on (a) the Tesla K40c and (b) the GTX 780 Ti. Cases where cublas performs better are indicated in green.
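For reference, the naive 3-loop product used as the baseline in this experiment is, in its serial form (on the GPU each thread would compute one output entry or one column of C):

```python
def naive_mm(a, b):
    """Naive 3-loop matrix product C = A * B over nested lists:
    no tiling, no cleanup code, one accumulator per output entry."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for l in range(k):
                acc += a[i][l] * b[l][j]
            c[i][j] = acc
    return c
```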
Additionally, we observe that our bespoke kernels achieve very impressive speedups over BLAS for the smallest of the benchmarked matrices (see Chapter 5). This provides further evidence for the hypothesis that the cleanup due to inefficient tiling in cublas and clblas GEMM for small matrix sizes results in a large overhead and hence allows our bespoke kernels to perform significantly better.

4.3 GiMMiK

Having performed the experiments described in Section 4.2 we are in a position to generate highly performant kernels for use in the PyFR solver. We have incorporated the successful optimisations into GiMMiK. We conclude that to achieve the best performance our kernels will eliminate sparsity from the operator matrix to reduce the number of floating-point operations. Through the preliminary analysis we found this optimisation to be highly beneficial for matrices with high sparsity factors (corresponding to hexahedral and quadrilateral meshes). Further, GiMMiK will embed all non-zero values directly in the kernel code to benefit from the fast constant cache and compiler optimisations. This optimisation is expected to increase the performance of kernels for all element types. Lastly, we avoid the need to execute any cleanup code through loop unrolling, which proves beneficial especially for the smallest matrices (low orders of accuracy) across all element types. GiMMiK's kernels will not reduce common sub-expressions, as this optimisation was found to have a negative effect on performance due to the increased number of memory loads necessary to compute the subterms and the increased register pressure caused by the larger number of temporary variables needed in the code. GiMMiK was built as a stand-alone Python package with a simple interface (see Figure 4.6), which can be readily installed and used across a number of systems.
It incorporates all of the successful optimisations discussed in this thesis and enables them by default. Common sub-expression elimination can also be applied on demand but, as discussed previously, is not expected to benefit the performance of our kernels. Figure 4.7 shows sample code generated by GiMMiK for the CUDA platform for a given operator matrix and β = 0. One of the design goals was to make GiMMiK's kernels interchangeable with BLAS GEMM; however, the very nature of some of the applied optimisations meant that we could not maintain exactly the same interface. GiMMiK includes support for the α and β coefficients and allows the matrices to be strided in memory. The only difference from BLAS GEMM is that the latter additionally allows the A and B matrices to be optionally transposed inside the routine. Chapter 5 aims to present a final set of benchmarking results obtained through the
    import gimmik.generator as gen
    from gimmik.platform import Platform

    ...

    # Generate kernel
    kernel = gen.generateKernel(data,
                                alpha=2.0,
                                beta=3.0,
                                double=True,    # Precision
                                reduced=False,  # CSE
                                platform=Platform.OPENCL)  # Platform

Figure 4.6: GiMMiK's interface.

    A = ( 0           0           0.599769 )
        ( 0           0           0.634486 )
        ( 0.798785    0.959466    0        )

(a) Operator matrix input to GiMMiK.

    __global__ void
    gimmik_mm(const double* __restrict__ b, double* __restrict__ c,
              const int width, const int bstride, const int cstride)
    {
        int index = blockDim.x * blockIdx.x + threadIdx.x;
        if (index < width) {
            const double* b_local = b + index;
            double* c_local = c + index;

            const double subterm_0 = b_local[2 * bstride];
            const double subterm_1 = b_local[1 * bstride];
            const double subterm_2 = b_local[0 * bstride];

            c_local[0 * cstride] = 0.5997695358467 * subterm_0;
            c_local[1 * cstride] = 0.63448574767476 * subterm_0;
            c_local[2 * cstride] = 0.9594662866473 * subterm_1
                                 + 0.79878527597 * subterm_2;
        }
    }

(b) Kernel code generated by GiMMiK.

Figure 4.7: Sample kernel code generated by GiMMiK for the CUDA platform for the given operator matrix and β = 0.
use of GiMMiK to generate bespoke matrix multiplication kernels for a set of PyFR matrices. We will present the final speedups our kernels achieve over cublas and clblas. Further, we will critically assess their performance in terms of the percentage of the peak capabilities of the hardware they are able to utilise. Lastly, we will plug GiMMiK into PyFR to investigate the absolute performance improvements to the solver attainable through the use of our package as the matrix multiplication kernel provider for fluid flow simulations.
Chapter 5

Evaluation

At this stage we have systematically evaluated a series of software optimisations and incorporated the successful ones into GiMMiK. As the next step we use the same experimental setup as described in Section 4.1 to evaluate the final performance of our bespoke kernels. All details of the benchmarked operator matrices are available in Appendix A. All experimental results obtained for individual matrices across all element types are available in Appendix D and are discussed in Section 5.1. Further, in Section 5.2 we assess the quality of our solution by examining how much of the peak performance of the device our kernels are able to achieve. As the final step, in Section 5.4 we explore the performance benefits the use of GiMMiK grants to our motivating application, PyFR, and evaluate the limitations of our solution in Section 5.5.

5.1 Kernels Benchmarking

We find that our bespoke CUDA kernels are able to outperform cublas GEMM in nearly all cases for quadrilateral, hexahedral and triangular element matrices in double and single precision on both the GTX 780 Ti and Tesla K40c, with β = 0 as well as β ≠ 0. These matrices are either sparse, or small and dense. In the case of the larger tetrahedral matrices the library implementation proves superior. This is expected, as these matrices are dense and sufficiently large for cublas to perform well. Additionally, in the case of single precision we find a small number of large hexahedral matrices where cublas is slightly more performant than our kernels, as the library GEMM implementation can fully utilise the inherently higher single-precision floating-point performance of the devices without it being a limiting factor. Similarly, we find that our OpenCL kernels are able to outperform clblas GEMM in all cases for quadrilateral, hexahedral and triangular element matrices in double and single precision, with β = 0 as well as β ≠ 0, on both the GTX 780 Ti and Tesla K40c. We
have only benchmarked our kernels in double precision with β = 0 on the FirePro W9100 and found that a significantly smaller portion of the benchmarked matrices was able to achieve a speedup over clblas GEMM. On the FirePro W9100 some of the larger sparse matrices and mid-range dense matrices, which showed performance improvements on the CUDA platform, proved to perform worse than clblas. The top plots in Figures 5.1 to 5.4 illustrate the achieved speedups across a variety of matrix sizes and sparsity patterns corresponding to the benchmark matrices from PyFR for 6 orders of accuracy and β = 0 on the CUDA platform. The complementing set of plots for the OpenCL platform and β ≠ 0 is available in Appendix E. We conclude that sparse matrices and particularly small matrices benefit the most from the optimisations employed by GiMMiK. This is expected, since GiMMiK reduces the number of floating-point operations required to compute the product as well as avoiding the poorly optimised cleanup code, which dominates the performance of GEMM for small matrices. Further, we observe cases of matrices with exactly the same size and sparsity that achieve different speedups. These matrices differ in their width and height, which affects how well BLAS performs, due to the inefficiencies in the tiling choices it is able to make for these particular dimensions; it has little effect on the performance of our kernels. In the case of double precision on the GTX 780 Ti we note particularly impressive speedups, due to the artificial cap imposed by NVIDIA on the peak double-precision floating-point rate of the consumer-grade card. Through the use of GiMMiK we are able to significantly reduce the number of floating-point operations required and hence better utilise the available resources. Lastly, we want to compare the relative performance of the CUDA and OpenCL kernels when executed on the same device (GTX 780 Ti and Tesla K40c).
We observe that in double precision, on both the GTX 780 Ti and Tesla K40c, our CUDA kernels generally perform better than the OpenCL kernels for a majority of the benchmark matrices, apart from the larger dense matrices corresponding to higher-order tetrahedral meshes. This is, however, within the performance region where cublas and clblas operate better than our bespoke kernels. In single precision our CUDA kernels perform worse than OpenCL for the majority of larger dense and sparse matrices, corresponding to tetrahedral and hexahedral meshes respectively. The inability to profile OpenCL kernels on CUDA devices limits our understanding of the performance differences between these two platforms. We speculate that because NVIDIA's OpenCL implementation is 32-bit (CUDA is 64-bit), and hence uses fewer registers to perform address calculations and to store pointers, it might reduce the register pressure affecting the performance of our kernels for large matrices. Through this observation we develop a general belief that the CUDA kernels should be favoured on NVIDIA's platform.
Figure 5.1: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision and β = 0 on the Tesla K40c.
Figure 5.2: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision and β = 0 on the Tesla K40c.
Figure 5.3: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision and β = 0 on the GTX 780 Ti.
Figure 5.4: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision and β = 0 on the GTX 780 Ti.
5.2 Quality Assessment

Performance of BLAS libraries is typically bound by the floating-point capabilities of the hardware [2], because GEMM routines do not exploit any sparsity or redundancy in the data. To assess the quality of GiMMiK we want to empirically determine the performance-limiting factor for each of the generated kernels. First, we calculate the ratio of the peak floating-point performance to the peak memory bandwidth for each device, using values reported by its vendor. Next, for each kernel on each device we calculate the ratio of the achieved floating-point rate to the achieved memory bandwidth. We compare the two ratios to determine the limiting factor for the performance of the kernel:

    a = achieved flops / achieved bandwidth
    p = peak flops / peak bandwidth

such that when a < p the kernel is bound by the available memory bandwidth, and otherwise by the floating-point performance of the device. The achieved floating-point rate is defined in terms of the number of floating-point operations needed to perform the matrix product and the time taken to execute the kernel. We use the NVIDIA profiler (nvprof) to count the number of floating-point operations executed by our kernels. This is more accurate than computing this number algebraically, due to the reduction in the number of operations resulting from the compiler optimisations and the elimination of sparsity. The achieved memory bandwidth is computed in terms of the time taken to execute the matrix product and the total amount of memory that needs to be read and written by the kernel. Similarly to the floating-point rate, the achieved memory bandwidth was obtained through profiling. We note that the metric reported by the profiler is inclusive of any traffic to and from local memory. Profiling of our kernels has revealed that, through the applied optimisations, all kernels for sparse matrices (hexahedral and quadrilateral meshes) become bound by the available memory bandwidth.
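The comparison of the two ratios can be sketched as a small helper. The peak figures used below are illustrative vendor-style numbers only (roughly those quoted for a Tesla K40c: about 1.43e12 double-precision flop/s and 288e9 B/s), and the achieved figures are hypothetical profiler outputs, not measurements from this study:

```python
def limiting_factor(achieved_flops, achieved_bw, peak_flops, peak_bw):
    """Classify a kernel as memory-bound or flop-bound by comparing the
    achieved flops/bandwidth ratio (a) against the device's peak ratio (p)."""
    a = achieved_flops / achieved_bw   # achieved ratio
    p = peak_flops / peak_bw           # device (peak) ratio
    return "memory" if a < p else "flops"

# A hypothetical sparse kernel that streams data faster than it computes
# ends up classified as memory-bandwidth bound.
print(limiting_factor(2.0e11, 2.0e11, 1.43e12, 288e9))  # memory
```

When a equals p the kernel sits exactly on the balance point of the device; the classification above arbitrarily assigns that case to the flop-bound side.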
Small dense kernels do not require enough floating-point operations to utilise a large percentage of the available floating-point rate and hence are also bound by the available memory bandwidth. Kernels for larger dense matrices are limited by the floating-point performance of the device. These results hold in both single and double precision across all devices on the CUDA platform. Due to the artificially reduced double-precision floating-point rate on the GTX 780 Ti, a larger portion of our dense kernels turn out to be bound by the available floating-point rate on that device. We also find a small number of larger matrices which, despite being predominantly sparse, do not contain enough zeros to overcome the performance cap. Figure 5.5 illustrates these findings for kernels in double precision with β = 0; complementing figures for single precision are available in Appendix G. Unfortunately, we were unable to profile
5.3. HARDWARE PERFORMANCE ASSESSMENT 48 the OpenCL kernels on CUDA GPUs as NVIDIA discontinued support for OpenCL profiling in the latests releases of their toolkit. Nevertheless, we expect the results to conform to the described trends. The bottom two plots in Figures 5. to 5.4 and Appendix E illustrate the achieved percentage of the peak floating-point rate and memory bandwidth by GiMMiK s kernels for various size and sparsity patterns corresponding to the benchmarked PyFR matrices. By the analysis of these plots we notice that the utilisation of the available memory bandwidth dramatically drops for large dense matrices (also for very large sparse matrices) and results in performance degradation of GiMMiK s kernels up to the point where they perform worse than library GEMM. Further, not unexpectedly, dense matrices achieve higher utilisation of the available floating-point rate than sparse matrices. We observe a significantly higher percentage utilisation of the double precision floating-point rate on the GTX 78 Ti due to the artificial cap imposed by NVIDIA on their consumer-grade hardware. 5.3 Hardware Performance Assessment Having developed an understanding of the factors limiting the performance of our kernels, we can now evaluate the performance of the three pieces of hardware we have used for our investigation. However, comparing the absolute performance of the three devices with each other would not be very meaningful, as they all have different specifications. One could normalize the achieved results by the market price of each GPU and its running cost in order to find what hardware should be used to build a cost-optimal system for a given application. This investigation, however, is beyond the scope of this thesis. Nevertheless, it is insightful to compare the case of the consumer-grade and the industry-grade cards together. Our study shows that the largest speedups are obtained for the consumer-grade GPU for both cases of dense and sparse matrices. 
This is expected for the memory-bandwidth-bound kernels, as the GTX 780 Ti offers higher memory bandwidth than the Tesla K40c. Further, in the dense cases the artificial FLOPS limit in double precision on the GTX 780 Ti poses a big disadvantage to BLAS GEMM, and hence results in a much larger speedup achieved by our kernels. In terms of absolute performance, the GTX 780 Ti executes GiMMiK's kernels faster than the Tesla K40c in all cases of sparse matrices in double precision, which are bound by the available memory bandwidth. Further, the GTX 780 Ti is superior in all cases in single precision due to its higher hardware specification.
Figure 5.5: Memory-bandwidth-bound (blue) and floating-point-rate-bound (red) double precision, β = 0 kernels for a set of benchmark matrices. (a) Tesla K40c; (b) GTX 780 Ti.
5.4 Performance Improvements of PyFR

In order to investigate the effects of the applied optimisations on the performance of PyFR as a whole, we first take a look at the way the matrix multiplication kernels are applied to the data at each time step of the simulation (we only consider the CUDA platform in this study). For this purpose we use the same notation as in Section 2.2 to refer to the operator matrices. Each of M132, M3, M460 and M6 is applied to the data once, while M0 is applied 3 times on a 2D mesh and 4 times on a 3D mesh. In PyFR all applications of M0, M132 and M460 use β = 0, while the applications of M3 and M6 use β = 1. As explained previously in Section 2.3, the operator matrices exhibit different characteristics for quadrilateral, hexahedral, triangular and tetrahedral elements; hence, there is a need for a distinct set of kernels for each element type present in the mesh. We use the experimental data obtained through benchmarking of the individual matrices to construct a series of stacked plots, from which we can obtain an expected speedup for each of the matrix multiplication steps executed by PyFR for each element type. The calculated results and plots for the third order of accuracy are available in Appendix F. This, however, does not translate directly to the final speedup of a simulation, as it does not account for a series of other operations performed by the solver (e.g. point-local transformations, writing solution files, checking for NaNs, etc.). The percentage of the time PyFR spends on matrix multiplication increases with the desired order of accuracy and amounts to approximately 56%, 66% and 80% for the second, third and fourth order simulations [23]. Hence, we expect to achieve larger speedups for higher orders of accuracy through the use of GiMMiK.
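Since matrix multiplication accounts for only part of the runtime, the whole-simulation speedup follows Amdahl's law. A minimal sketch, using the matrix-multiplication time fractions quoted above; the 2x kernel speedup is an arbitrary illustrative value, not a measured one:

```python
def simulation_speedup(mm_fraction, mm_speedup):
    """Amdahl's law: overall speedup when only the matrix multiplication
    step, a fraction mm_fraction of the runtime, is accelerated."""
    return 1.0 / ((1.0 - mm_fraction) + mm_fraction / mm_speedup)

# Fractions of time PyFR spends on matrix multiplication for orders 2-4.
for order, frac in [(2, 0.56), (3, 0.66), (4, 0.80)]:
    print(order, round(simulation_speedup(frac, 2.0), 2))
```

As the fraction grows with the order of accuracy, the same kernel speedup yields a larger whole-simulation speedup, which is why higher orders benefit more from GiMMiK.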
The aforementioned stacked plots also include theoretical limits under which GEMM cannot perform; they are defined in terms of the minimum number of floating-point operations and the amount of memory traffic any dense matrix multiplication routine has to perform. In the case where the operator matrix A has dimensions M × K and the operand matrix B has dimensions K × N, GEMM needs to perform approximately 2MNK floating-point operations. It also needs to write the MN elements of C and read the KN elements of B. In the case when β ≠ 0 we also need to read C and perform an additional MN floating-point multiplications with the β coefficient. We can now combine the theoretical amount of memory that needs to be moved and the number of floating-point operations that need to be performed with the peak memory bandwidth and floating-point rate of each device to compute theoretical lower bounds on the performance of any GEMM routine. By analysing the stacked plots and the data in Table F.1 we see that, through the use of GiMMiK for the sparse cases, we are sometimes able to perform the matrix multiplication step of a PyFR simulation under the theoretical limits of the hardware. We
CHAPTER 5. EVALUATION 5 observe this phenomenon for a majority of kernels executed in double precision on the GTX 78 Ti, but also a small number of kernels on the Tesla K4c. This is possible when the multiplication step is composed of individual operator matrices that contain whole columns of zeros, which effectively means that the dimensionality of these matrices decreases and entire rows of the operand matrix do not need to be loaded from memory at all. It also decreases the amount of floating-point operations required to be executed by the kernel. To investigate the effective performance improvement that our bespoke kernels bring to PyFR we have executed an example real-world simulation of a compressible unsteady flow over a cylinder. For our purpose we have not dealt with the physical feasibility of the solution, but solely with the performance of the solver. For full details regarding the setup of the simulation please refer to the original paper on PyFR [23]. The simulation was executed over an unstructured mesh of 46, 6 hexahedral elements depicted in Figure 5.6 on the Tesla K4c and GTX 78 Ti GPUs. Achieved speedups of PyFR for this simulation for 6 orders of accuracy and across our two devices on the CUDA platform are summarized in Table 5.. Isosurfaces of density captured during this simulation executed using GiMMiK as a matrix multiplication kernel provider for PyFR are illustrated in Figure 5.7. Through the use of bespoke matrix multiplication kernels, we are able to grant significant performance benefits to PyFR. We note speedups between.35 and.72 for an example fluid flow PyFR simulation executed in double precision on a tetrahedral mesh across 6 orders of accuracy on a single Tesla K4c. In practical terms it means that if one had a simulation which takes hours to execute in the fourth order of accuracy on a tetrahedral mesh in double precision, they could expect to execute it in as little as 58 hours using GiMMiK. 
Figure 5.6: Cross section in the x-y plane of the unstructured cylinder mesh used to execute the simulation of an unsteady flow over a cylinder.

Figure 5.7: Isosurfaces of density captured during a simulation of an unsteady flow over a cylinder executed with PyFR using GiMMiK as the matrix multiplication kernel provider.

Table 5.1: Achieved speedups of a PyFR simulation of an unsteady flow over a cylinder across 6 orders of accuracy in single and double precision on the Tesla K40c and GTX 780 Ti.

Our results have the potential to influence the numerical method choices made for various types of fluid simulations, while the performance increase, to some extent, can allow for the use of higher order meshes and finer time steps. This would improve the physical properties of the simulation and increase the quality of the produced solutions. Unfortunately, this is not very straightforward. Increasing the order of accuracy of a simulation requires other parameters to change as well (e.g. the granularity of the time step needs to be finer for higher orders of accuracy). In order to develop a good understanding of how the numerical method choices can be influenced by the performance increase, we would need to perform a rigorous experimental investigation accounting for the physical soundness and feasibility of the produced solutions. This, however, is beyond the scope of this project.
5.5 Limitations

Figures 5.1 to 5.4 and Appendix E identify a set of GiMMiK's kernels which do not benefit from our proposed optimisations and fail to achieve any speedup over cublas GEMM. These kernels correspond to the larger dense benchmark matrices in both single and double precision. Additionally, in single precision we note decreased performance of our kernels for large sparse matrices. Through profiling we observe that kernels which fail to achieve a speedup attempt to utilise a very large (often the maximum available) number of registers, which decreases their occupancy on the multiprocessors and increases register pressure due to the large number of temporary variables used. Further, CUDA GPUs employ register scoreboarding to facilitate out-of-order execution [13], which promotes the usage of a larger number of registers to avoid data hazards. Increased register pressure can lead to a large amount of register spillage into memory. In the case of GiMMiK's kernels for larger matrices this spillage is so severe that the data cannot be retained in the L1 or even L2 caches. As a consequence, it needs to be written to and read from local memory. This increases memory traffic, decreases occupancy and, as a consequence, decreases the performance of our kernels. It also explains why the achieved fraction of the peak memory bandwidth and floating-point rate of our kernels for large matrices drops. We believe that to overcome this limitation we can tile the matrix product in such a way as to bring the register pressure down and hence achieve higher occupancy and reduce spillage of registers into local memory. Figure 5.8 further illustrates the effects of the achieved memory bandwidth and the amount of data spillage into local memory on the speedups of GiMMiK over cublas. It shows that the speedups over cublas shrink as the observed useful memory bandwidth decreases (device memory bandwidth exclusive of traffic due to local memory).
In the case of double precision on the GTX 780 Ti we see that the observed useful memory bandwidth is often lower than in the remaining cases due to the cap placed on the floating-point rate, which becomes a significant factor affecting the performance of our kernels. From the plots we see that the smallest speedups are obtained by kernels which utilise a smaller percentage of the available memory bandwidth. Useful memory bandwidth was calculated using the total device memory bandwidth and the local memory overhead as reported by nvprof.
Figure 5.8: Plots of speedup against useful memory bandwidth (exclusive of local memory traffic) expressed as a percentage of the peak memory bandwidth, for kernels generated for the set of benchmark matrices and β = 0. Speedups less than 1 are denoted red. (a) Double precision, Tesla K40c; (b) single precision, Tesla K40c; (c) double precision, GTX 780 Ti; (d) single precision, GTX 780 Ti.
Figure 5.8 (continued): Plots of speedup against useful memory bandwidth (exclusive of local memory traffic) expressed as a percentage of the peak memory bandwidth, for kernels generated for the set of benchmark matrices and β ≠ 0. Speedups less than 1 are denoted red. (e) Double precision, Tesla K40c; (f) single precision, Tesla K40c; (g) double precision, GTX 780 Ti; (h) single precision, GTX 780 Ti.
Chapter 6

Conclusions & Future Work

The aim of this chapter is to give a summary of our findings and address the initial list of objectives set for this investigation. In Section 6.2 we discuss the scope for further investigation of the performance improvements GiMMiK can deliver to other applications, suggest a series of software optimisations our bespoke kernels could benefit from, and attempt to address the limitations of our solution described in Section 5.5.

6.1 Summary

The aim of this project was to investigate the performance improvement over state-of-the-art BLAS GEMM achievable through the use of bespoke matrix multiplication kernels for the block-by-panel type of matrix multiplication characteristic of PyFR. In Chapter 2 we provide details about the context of our study and describe the implementation platforms of our choice: CUDA and OpenCL. Subsequently, in Chapter 3 we give an overview of the work that lays the foundations for many high-performance implementations of GEMM in software. We also investigate other research in the field of small-scale linear algebra and draw insights from the described techniques to apply in our investigation. In Chapter 4 we detail the methodology used in this study. We describe the benchmarking infrastructure used and provide a systematic evaluation of a series of software optimisations in order to find the successful ones and incorporate them into GiMMiK. Chapter 5 provides an empirical evaluation of the performance GiMMiK's kernels are able to achieve for the set of benchmark matrices extracted from the PyFR solver. Lastly, we plug GiMMiK into PyFR and investigate the performance benefits it grants for an example fluid flow simulation. In this thesis we have made the following contributions: We have demonstrated that through the generation of bespoke matrix multiplication kernels with prior knowledge of the operator matrix we are able to outperform
highly optimised state-of-the-art cublas and clblas libraries and grant speedups of up to 9.98 (2.2) times on the Tesla K40c and 63.3 (3.7) times on the GTX 780 Ti in double (single) precision for individual PyFR matrices.

We present a series of software optimisation techniques which allow us to achieve this result for the block-by-panel type of matrix multiplication. We evaluate each of the proposed optimisations in a systematic way by benchmarking it across a wide variety of matrices with different sizes and sparsity patterns in order to assess its usefulness and success.

We have presented GiMMiK, an open-source Python library for generating highly performant matrix multiplication kernel code for the CUDA and OpenCL platforms, which incorporates all of the successful optimisations discussed in this thesis. The software is available for download at https://github.com/bartwozniak/GiMMiK. The generated kernels in our proposed solution consist of a fully unrolled matrix inner product with a reduced number of floating-point operations through the removal of sparsity. Further, our kernels embed the operator matrix directly in the code to benefit from the constant cache and compiler optimisations. This design allows us to avoid the execution of any cleanup code due to inadequate tiling choices and to efficiently handle values of the α and β coefficients. GiMMiK, by default, does not attempt to reduce common sub-expressions, because this optimisation was found to have a negative effect on performance.

We show how, through the use of bespoke matrix multiplication kernels, we are able to grant significant performance benefits to applications within the area of CFD. We report speedups of up to 1.72 (2.4) for an example fluid flow PyFR simulation executed on a tetrahedral mesh in double (single) precision on a single Tesla K40c.
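The kernel-generation strategy summarised above can be illustrated with a toy emitter. This is a simplified sketch, not GiMMiK's actual implementation; the emitter, its name and the kernel signature are invented for illustration. It unrolls the product row by row, embeds the operator values as literals, and skips zero entries entirely so no multiplication by zero is ever issued:

```python
def generate_kernel(A, name="mm_kernel"):
    """Emit a toy CUDA C kernel for C = A*B with the m x k operator A
    baked into the source: one thread per column of B, fully unrolled,
    zero entries of A elided from the generated expressions."""
    m, k = len(A), len(A[0])
    lines = [f"__global__ void {name}(const double* b, double* c, int n)",
             "{",
             "    int j = blockIdx.x * blockDim.x + threadIdx.x;",
             "    if (j >= n) return;"]
    for i in range(m):
        # Keep only nonzero terms; the operator value becomes a literal.
        terms = [f"{A[i][kk]!r} * b[{kk} * n + j]"
                 for kk in range(k) if A[i][kk] != 0.0]
        rhs = " + ".join(terms) if terms else "0.0"
        lines.append(f"    c[{i} * n + j] = {rhs};")
    lines.append("}")
    return "\n".join(lines)

# A tiny diagonal operator: each output row reduces to a single term.
print(generate_kernel([[1.5, 0.0], [0.0, -2.0]]))
```

For sparse PyFR operators this elision is what removes the wasted flops; for dense operators the generated source simply becomes a fully unrolled dot product per output row.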
These results can influence the numerical method choices made for various types of fluid simulations, while the performance increase, to some extent, can allow for the use of higher order meshes and finer time steps, improving the physical properties and increasing the quality of the results of the simulation. A general paper, "GiMMiK - Generating Bespoke Matrix Multiplication Kernels for Various Hardware Accelerators; Applications in High-Order Computational Fluid Dynamics", by Bartosz D. Wozniak, Freddie D. Witherden, Peter E. Vincent and Paul H. J. Kelly, has been prepared for submission to the Computer Physics Communications journal, based on the findings in this report. It is available upon request. We believe that numerous other applications, possibly outside of the area of CFD, can also benefit
from the use of GiMMiK as the kernel provider for matrix multiplication operations performed on the CUDA and OpenCL platforms. Further, we are confident that the methodology applied in this study can give valuable insight into efficient implementations of small-scale linear algebra kernels on GPUs.

6.2 Future Work

CPU Implementation. In the initial steps of our investigation we considered the CPU as an implementation platform for our kernels; however, keeping in mind the particular application of our kernels in Flux Reconstruction schemes, the GPU platform became the subject of our study. Nevertheless, there also exists scope for performance improvement over CPU BLAS libraries such as ATLAS, GotoBLAS or Intel MKL. Further, GiMMiK already supports the OpenCL platform and one might be interested to evaluate its performance across a set of hardware accelerators such as the Intel Xeon Phi.

Instruction Reordering. In Section 5.5 we have already described the problem our kernels encounter for larger matrix sizes. The dramatically increasing number of registers requested by our kernels leads to large amounts of spillage into local memory and harms overall performance. The Kepler architecture statically schedules instructions within the warp [18]; it is NVIDIA's compiler that attempts to find the most efficient schedule for instructions. From the disassembled cubin files we see that the nvcc compiler is not capable of efficiently interleaving the arithmetic operations performed by our kernels with the memory writes and reads. We could reorder instructions explicitly in the code and reuse temporaries to aid the compiler in allocating a smaller number of registers. Further, we speculate that an appropriate tiling scheme could prove successful in reducing the register pressure as well.

Figure 6.1: Proposed simple split-in-two tiling scheme.
Register Tiling Through Graph Partitioning. We have also suggested the use of tiling to overcome the problems introduced by the reduction of common sub-expressions. As discussed in Section 3.1, blocking is one of the fundamental techniques used by many BLAS implementations in order to increase data reuse from the memory caches and registers. Simple tiling, such as splitting the operand matrix into upper and lower halves (see Figure 6.1), would reduce the number of subterms in the generated code and hence decrease the number of registers our kernels require. However, this scheme would not solve the problem of repeated loads of elements of B and might even reduce the scope for the reduction of common sub-expressions. To aid this, we could use graph partitioning algorithms (e.g. hypergraph partitioning) to find the most efficient and balanced split of the entries in the operator matrix across two separate threads. Consider a matrix with the sparsity pattern depicted in Figure 6.2. An efficient way to tile this product would be to assign columns 0, 1, 5 and 6 to one thread and columns 2, 3 and 4 to another. Such an allocation would allow us to load each entry of the B operand matrix exactly once, reduce common sub-expressions and leave each thread with a similar workload in terms of the number of memory reads, writes and arithmetic operations the thread needs to perform.

Figure 6.2: Example sparsity pattern suitable for graph partitioning tiling.

Arithmetic Operations Aggregation. We have described in Section 3.3 the approach Castonguay et al. took in the development of their bespoke kernels for use in the SD++ solver. They were able to combine multiple arithmetic transformations into a single kernel to increase data reuse and reduce the required memory traffic. Such an optimisation is very dependent on the particular application one would like to use GiMMiK for.
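A crude stand-in for the graph partitioning proposed above is a greedy balanced split of the operator's columns across two threads. This sketch balances only the per-column nonzero counts as a proxy for work; real hypergraph partitioning would additionally account for shared sub-expressions and memory reuse, and the example matrix below is hypothetical:

```python
def greedy_column_split(A):
    """Greedily assign each column of A to one of two threads so that
    the total number of nonzero entries handled by each is balanced."""
    k = len(A[0])
    nnz = [sum(1 for row in A if row[c] != 0.0) for c in range(k)]
    parts, loads = ([], []), [0, 0]
    # Heaviest columns first, always placed onto the lighter thread.
    for c in sorted(range(k), key=lambda c: -nnz[c]):
        t = 0 if loads[0] <= loads[1] else 1
        parts[t].append(c)
        loads[t] += nnz[c]
    return parts, loads

# Hypothetical 3x4 operator: columns 0 and 3 dense, 1 and 2 sparse.
A = [[1.0, 1.0, 0.0, 1.0],
     [1.0, 0.0, 1.0, 1.0],
     [1.0, 0.0, 0.0, 1.0]]
parts, loads = greedy_column_split(A)
print(parts, loads)  # ([0, 1], [3, 2]) [4, 4]
```

Because each column goes to exactly one thread, every entry of the corresponding B row is loaded by exactly one thread, matching the load-once property described in the text.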
In order to further investigate the available performance improvements to PyFR we could attempt a similar optimisation, but would simultaneously need to weigh the benefits it brings against the costs due to the reduced clarity of the codebase. Further, to make this optimisation more generic, we could use techniques such as delayed evaluation to first build up an entire arithmetic transformation and then optimise it and generate highly performant code. We could look into some of the techniques used by LGen (see Section 3.4) to realise this proposition.
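The delayed-evaluation idea can be sketched in a few lines. This is an invented toy, not LGen or GiMMiK code: operations are recorded as a tree instead of being evaluated, so the whole transformation exists as data that could be optimised before any code is emitted.

```python
class Expr:
    """Minimal delayed-evaluation node: records an operation instead of
    performing it, so a whole arithmetic transformation can be inspected
    and optimised before code generation."""
    def __init__(self, op, args):
        self.op, self.args = op, args

    def __add__(self, other):
        return Expr("add", [self, other])

    def __mul__(self, other):
        return Expr("mul", [self, other])

def var(name):
    """A leaf node standing for a named operand."""
    return Expr("var", [name])

def emit(e):
    """Walk the recorded tree and emit a single fused expression string."""
    if e.op == "var":
        return e.args[0]
    sym = {"add": "+", "mul": "*"}[e.op]
    return "(" + f" {sym} ".join(emit(a) for a in e.args) + ")"

# Build A*x + b symbolically; nothing is evaluated until emit() runs.
tree = var("A") * var("x") + var("b")
print(emit(tree))  # ((A * x) + b)
```

An optimiser could rewrite the tree (fuse transformations, share sub-trees) between construction and emission, which is the point of delaying evaluation.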
In-Place Multiplication. The design of our kernels allows for an in-place matrix multiplication, i.e. C ← αAC + βC. A discussion with the authors of PyFR suggested that this might be useful to reduce the memory usage of the solver, as such operations are not uncommon and BLAS GEMM cannot perform in-place multiplication. Further, this type of matrix product can be considered an optimisation for the CPU platform, as it reduces the cache footprint. To our knowledge, this path of investigation has not been considered before and might be an interesting one to explore.

Using GiMMiK to Build GEMM. Goto and van de Geijn [8] illustrate in their paper how block-by-panel matrix multiplication kernels can be used as a building block for a general matrix product where all of the matrices involved are large. We have summarised this claim in Section 3.1. To use GiMMiK as the inner kernel of GEMM we would need to give up its highly specialised nature. In order to maintain the high performance of our kernels we could embed the sparsity pattern in the kernel, but rely on the read-only data cache to keep entries of the operator matrix quickly accessible. In Section 4.2 we have demonstrated that embedding values of the operator matrix directly in the code and utilising the constant cache to bring them into registers is not the game-changing optimisation used by our kernels; hence the performance penalty should not be severe. This would not make our GEMM implementation completely general, but suitable for use in applications repeatedly multiplying operator matrices with the same sparsity pattern.

Commodity Hardware. Lastly, we have demonstrated that we are able to achieve impressive speedups over cublas GEMM on commodity hardware.
In order to answer the question of whether consumer-grade GPUs could be used to perform high-order fluid flow simulations instead of expensive industry-grade cards, we would need to perform detailed benchmarking of a number of simulations, taking into account their physical feasibility and accuracy, and considering the exploitation costs of the hardware.
Bibliography

[1] clBLAS: OpenCL BLAS. http://clmathlibraries.github.io/clblas/, August 2013. [Online; accessed May 2014].

[2] History of CUDA, OpenCL, and the GPGPU. https://www.olcf.ornl.gov/kb_articles/history-of-the-gpgpu/, November 2013. [Online; accessed 23-May-2014].

[3] P. Castonguay, P. E. Vincent, and A. Jameson. A New Class of High-Order Energy Stable Flux Reconstruction Schemes for Triangular Elements. J. Sci. Comput., 51(1):224-256, April 2012.

[4] P. Castonguay, D. Williams, P. E. Vincent, M. López, and A. Jameson. On the Development of a High-Order, Multi-GPU Enabled, Compressible Viscous Flow Solver for Mixed Unstructured Grids. June 2011.

[5] Charlie Demerjian. A long look at AMD's Hawaii and Volcanic Islands architecture. http://semiaccurate.com/23//23/long-look-amds-hawaii-volcanic-islands-architecture/, October 2013. [Online; accessed 23-May-2014].

[6] P. Estival and L. Giraud. Performance and accuracy of the matrix multiplication routines: CUBLAS on NVIDIA Tesla versus MKL and ATLAS on Intel Nehalem. Technical report.

[7] M. Georgiou. Developing Optimized Matrix Multiplication Kernels for CPU and GPU Platforms: Application to High Order CFD. Master's thesis, Imperial College of Science, Technology and Medicine, September 2013.

[8] K. Goto and R. A. van de Geijn. Anatomy of high-performance matrix multiplication. ACM Trans. Math. Softw., 34(3):12:1-12:25, May 2008.

[9] H. T. Huynh. A Flux Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin Methods. AIAA paper 2007-4079, 2007.

[10] H. T. Huynh. A Reconstruction Approach to High-Order Schemes Including Discontinuous Galerkin for Diffusion. AIAA paper 2009-403, 2009.

[11] C. Jhurani and P. Mullowney. A GEMM interface and implementation on NVIDIA GPUs for multiple small matrices. CoRR, abs/1304.7053, 2013.

[12] R. Nath, S. Tomov, and J. Dongarra. An improved MAGMA GEMM for Fermi graphics processing units. International Journal of High Performance Computing Applications, 24(4):511-515, 2010.

[13] NVIDIA. Kepler GK110 whitepaper. http://www.nvidia.com/content/pdf/kepler/nvidia-kepler-gk-architecture-whitepaper.pdf, 2012. [Online; accessed 24-May-2014].

[14] NVIDIA. cuBLAS Library User Guide. http://docs.nvidia.com/cuda/cublas/index.html, February 2014. [Online; accessed 25-May-2014].

[15] NVIDIA. CUDA C Programming Guide. http://docs.nvidia.com/cuda/pdf/CUDA_C_Programming_Guide.pdf, February 2014. [Online; accessed 23-May-2014].

[16] NVIDIA. cuSPARSE Library User Guide. http://docs.nvidia.com/cuda/cusparse/index.html, February 2014. [Online; accessed 25-May-2014].

[17] Khronos OpenCL Working Group and A. Munshi. The OpenCL Specification. http://www.khronos.org/registry/cl/specs/opencl-2..pdf, February 2014. [Online; accessed 23-May-2014].

[18] Ryan Smith. NVIDIA GeForce GTX 680 review: Retaking the performance crown. http://www.anandtech.com/show/5699/nvidia-geforce-gtx-68-review, March 2012. [Online; accessed 23-May-2014].

[19] D. G. Spampinato and M. Püschel. A Basic Linear Algebra Compiler. In International Symposium on Code Generation and Optimization (CGO), 2014.

[20] P. E. Vincent, P. Castonguay, and A. Jameson. A New Class of High-Order Energy Stable Flux Reconstruction Schemes. J. Sci. Comput., 47(1):50-72, April 2011.

[21] R. C. Whaley and J. J. Dongarra. Automatically Tuned Linear Algebra Software. In Proceedings of the 1998 ACM/IEEE Conference on Supercomputing, SC '98, pages 1-27, Washington, DC, USA, 1998. IEEE Computer Society.

[22] D. M. Williams and A. Jameson. Energy stable flux reconstruction schemes for advection-diffusion problems on tetrahedra. Journal of Scientific Computing, pages 1-39, 2013.

[23] F. D. Witherden, A. M. Farrington, and P. E. Vincent. PyFR: An Open Source Framework for Solving Advection-Diffusion Type Problems on Streaming Architectures using the Flux Reconstruction Approach. arXiv preprint arXiv:1312.1638, 2013.

[24] C. Woolley. Introduction to OpenCL. http://www.cc.gatech.edu/~vetter/keeneland/tutorial-2-4-4/6-intro_to_opencl.pdf. [Online; accessed 4-May-2014].
Appendix A

Characteristics of the Operator Matrices

Table A.1: Height, width and sparsity factor of quadrilateral matrices (orders 1-6) obtained using the Gauss-Legendre and Gauss-Legendre-Lobatto methods.

Table A.2: Height, width and sparsity factor of hexahedral matrices (orders 1-6) obtained using the Gauss-Legendre and Gauss-Legendre-Lobatto methods.

Table A.3: Height, width and sparsity factor of triangular matrices (orders 1-6) obtained using the Williams-Shunn method.

Table A.4: Height, width and sparsity factor of tetrahedral matrices (orders 1-6) obtained using the Shunn-Ham method.
Appendix B

Results for Individual Optimisations

The tables in this appendix give the raw experimental data obtained by benchmarking the set of kernels described in Section 4.2. The results for kernels with the optimisations switched off are identical to those of GiMMiK's kernels, and are not duplicated in the following tables. See Appendix D for the benchmarking results for GiMMiK.
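To make the benchmarked kernel variants concrete, the sketch below generates a fully unrolled CUDA C kernel with the operator-matrix values embedded as literal constants, optionally removing multiplications by zero entries (the sparsity elimination of Section 4.2.3). This is an illustrative toy in the spirit of GiMMiK, not its actual code generator; all names and the kernel signature are ours:

```python
def unrolled_kernel_source(mat, name="mm_kernel", skip_zeros=True):
    """Emit CUDA C source for a fully unrolled block-by-panel
    multiplication C = A*B, where the operator matrix A (a list of
    rows) is embedded in the code as literal constants."""
    m, k = len(mat), len(mat[0])
    lines = [
        f"__global__ void {name}(const double* __restrict__ b,"
        " double* __restrict__ c, int n, int ldb, int ldc)",
        "{",
        "    int j = blockIdx.x*blockDim.x + threadIdx.x;  // column of B/C",
        "    if (j >= n) return;",
    ]
    for i in range(m):
        # One fully unrolled dot product per output row; zero entries
        # of the operator matrix are skipped when skip_zeros is set.
        terms = [f"{mat[i][p]!r}*b[{p}*ldb + j]"
                 for p in range(k)
                 if not (skip_zeros and mat[i][p] == 0.0)]
        rhs = " + ".join(terms) if terms else "0.0"
        lines.append(f"    c[{i}*ldc + j] = {rhs};")
    lines.append("}")
    return "\n".join(lines)

a = [[1.0, 0.0],
     [0.5, 2.0]]
src = unrolled_kernel_source(a)
```

The SPA variant in the tables corresponds to `skip_zeros=False` (all multiplications by zero are kept), while the NAI variant replaces the unrolled body with a conventional 3-loop implementation.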
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.33.27.3.69.57.49.55.69 M32.35.3.35.6.47.4.47.6 M3.36.3.35.6.47.4.47.6 M46.3.27.3.69.57.49.55.69 M6.43.37.42.5.67.59.68.5 M.59.49.55.88.94.8.93.88 M32.8.68.78.289.4.88.3.289 M3.6.5.58.24.82.74.86.24 M46.72.62.7.295.26.7.24.295 M6.77.7.79.376.3.4.33.376 M.23.75.85.49.74.7.34.49 M32.22.7.4.746.229.55.97.747 M3.94.75.86.49.34.5.34.49 M46.232.4.32.89.32.9.22.8 M6.26..32.89.222.89.29.89 M.299.8.26.739.234.6.9.74 M32.683.86.23.845.55.265.48.846 M3.7.6.22.792.233.66.2.792 M46.874.82.2.868.52.35.365.867 M6.83.62.94.538.334.283.344.537 M.42.48.7.238.446.228.332.238 M32.34.268.48 3.64.896.386.8 3.62 M3.3.44.69.27.328.23.279.27 M46.675.265.3 3.674.324.52.793 3.674 M6.248.225.267 2.53.486.4.49 2.528 M.674.95.223.935.599.283.446.936 M32 2.46.382.72 6.78.578.678.849 6.779 M3.4.86.223 2.3.432.37.397 2.32 M46 2.8.368.43 6.779 2.33.7.233 6.783 M6.35.299.353 3.994.648.53.75 3.997 (a) Quadrangles, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.52.46.55.88.88.77.93.88 M32.82.68.77.289.3.87.3.289 M3.6.5.58.24.82.74.86.24 M46.72.62.7.295.26.8.25.295 M6.77.7.79.375.3.4.33.376 M.74.65.84.49.9.5.33.49 M32.2.9.4.746.22.56.239.747 M3.92.75.86.49.33.5.33.49 M46.2.3.32.89.286.9.29.89 M6.26..32.89.22.89.29.8 M.94.84.22.739.49.32.9.739 M32.584.88.236.846.52.267.49.844 M3.28.7.23.792.2.7.2.79 M46.738.8.2.867.497.34.365.868 M6.82.6.95.538.334.283.344.538 M.6..7.238.8.6.334.238 M32.7.278.455 3.62.853.486.4 3.63 M3.37.44.69.272.329.23.279.27 M46.485.265.334 3.674.236.5.793 3.675 M6.25.225.267 2.529.483.399.489 2.529 M.37.23.224.935.23.89.445.935 M32 2.244.378.222 6.774.478.678.843 6.78 M3.34.9.224 2.3.45.345.397 2.32 M46 2.349.367.494 6.782.865.697.234 6.783 M6.35.297.353 3.995.648.547.76 3.995 (b) Quadrangles, Gauss-Legendre-Lobatto Method Table B.: Results of experiments conducted to verify hypothesis stated in Section 4.2. 
TEX corresponds to kernels using the texture cache (Section 4.2.1), SPA to kernels without sparsity elimination (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2), and NAI is the naive 3-loop implementation (Section 4.2.4). Results for quadrilateral element matrices in double precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.82.74.86.333.5.34.53.333 M32.9.82.93.292.2.99.7.292 M3.93.8.92.292.2.99.6.292 M46.8.74.83.333.5.34.53.333 M6.28..32.85.97.69.29.85 M.485.2.226 2.6.697.376.397 2.58 M32.477.286.438 3.95.535.4.955 3.95 M3.328.2.32 2.76.44.285.463 2.76 M46.595.263.36 3.22.983.532.556 3.2 M6.36.324.48 6.66.684.522.2 6.63 M.333.398.56 8.53.45.962 2.446 8.497 M32 3.772.823 5.82 6.5 2.427.982 27.4 6.52 M3.785.47.6 8.464.8.733 2.293 8.464 M46 3.537.632.963 6.96 3.89.795 4.76 6.96 M6.824.723.627 25.26 2.22.779 6.293 25.25 M 5.95.723 4.8 25.64 3.465.67 7.279 25.66 M32 4.4 4.4 29.73 62.89 6.958 3.846 27.2 62.86 M3 2.269.69 6.697 25.39 2.8.337 6.6 25.4 M46 9.58.496 6.23 63.77 7.692 3.682 7.25 63.77 M6 2.39.32 4.687 75.53 4.25 3.44 5.4 75.45 M.5 4.793 62.28 7.547 5.29 62.24 M32 33.49.7 86.3 5.32 9.426 85.99 M3 5.728.39 62.28 4.3 2.37 62.26 M46 26.33 5.67 86.56 6.7 9.8 86.6 M6 5.83 2.2 86.53 7.647 5.838 86.6 M 2.8.37 34.34 3.43.37 34.23 M32 69.47 22.79 468.79 3.6 7.65 468.8 M3.57 3.652 34.43 7.264 4.644 34.49 M46 52.36.72 469.8 29.72 8.5 468.92 M6.8 3.9 42. 
3.32 9.22 42.26 (c) Hexahedra, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.29.9.228 2.59.48.322.397 2.58 M32.59.29.437 3.97.545.423.955 3.94 M3.335.2.33 2.76.442.284.464 2.75 M46.595.263.36 3.22.983.533.557 3.2 M6.36.323.47 6.65.684.522.3 6.65 M.425.376.52 8.499.39.665 2.45 8.495 M32 2.87.773 6.97 6.5 2.6.97 28.27 6.5 M3.87.42.623 8.462.22.734 2.288 8.464 M46 2.62.632.83 6.96 2.666.777 4.696 6.96 M6.87.76.639 25.27 2.83.3 6.29 25.26 M.696.62 2.287 25.65.67.63 7.34 25.64 M32 2.92 3.549 3.22 62.85 6.77 3.45 2.55 62.83 M3 2.5.674 6.265 25.39 2.3.329 6.52 25.4 M46 8.65.44 8.54 63.76 7.38 3.69 7.27 63.75 M6 2.39.39 4.925 75.5 4.249 3.43 49.98 75.5 M.6.95 2.6 62.32 2.462.996 3.26 62.28 M32 29.7.5 86.4 4.54 8.43 86.2 M3 6.23.345 62.26 4.242 2.366 62.32 M46 2.23 4.959 86.6 5.3 8.639 86.54 M6 5.48.97 86.53 7.43 5.759 86.54 M 2.86.273 34.23 3.56 2.749 34.26 M32 59.75 22.9 468.82 28.78 6.54 468.77 M3 8.839 3.235 34. 6.885 4.32 34.3 M46 47.44.8 469.2 28.9 6.74 468.94 M6.8 3.4 42.2 3.32 9.235 4.8 (d) Hexahedra, Gauss-Legendre-Lobatto Method Table B.: Results of experiments conducted to verify hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.), SPA to kernels without sparsity elimnation (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). Results for hexahedral element matrices in double precision on Tesla K4c. Reported values are averages of 3 runs reproducible within 2% in [ms]. APPENDIX B. RESULTS FOR INDIVIDUAL OPTIMISATIONS 73
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.27.2.23.52.44.38.43.52 M32.27.25.27.44.36.32.36.44 M3.29.24.28.44.37.32.36.44 M46.2.23.52.42.38.43.52 M6.37.27.3.83.54.45.5.83 M.5.34.39.6.72.6.66.6 M32.6.46.53.44.73.62.69.44 M3.52.37.42.4.66.54.6.4 M46.59.42.47.35.88.72.8.35 M6.69.49.55.88.96.8.9.88 M.4.5.57.24.23.84.95.24 M32.273.77.87.342.247.3.4.342 M3.4.53.59.22.5.8.9.22 M46.88.69.78.337.92.2.37.336 M6.58.75.84.394.69.26.44.394 M.328.7.8.374.264.2.27.374 M32.8.7.35.68.792.89.86.68 M3.32.7.8.374.255.4.26.374 M46.75.9.23.742.6.82.26.742 M6.33.9.2.742.284.85.25.742 M.663.95.9.67.545.44.62.67 M32 2.7.27.22.327.887.463.36.327 M3.592.94.5.68.442.58.67.68 M46.762.63.87.356.459.268.36.356 M6.97.48.62.93.65.262.28.92 M.635.24.43.98.332.8.27.97 M32 5.54.444.44 2.95 3.399.75.86 2.94 M3.4.22.36.885.739.25.224.885 M46 6.23.275.324 2.267 4.589.48.475 2.265 M6.555.94.23.759.93.345.384.759 (e) Triangles, Williams-Shunn Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.44.37.42..78.67.75. M32.47.4.48.85.57.52.58.85 M3.52.43.48.85.58.52.59.85 M46.37.42..77.67.75. M6.75.56.63.239.2.88..239 M.74.82.9.42.9.45.59.42 M32.22.2.8.494.22.4.52.494 M3.9.87.99.42.96.26.29.42 M46.24.94.4.58.23.68.9.58 M6.53.3.48.99.378.24.244. M.42.49.69.27.973.322.289.28 M32 2.543.249.33.695.658.446.67.694 M3.278.69.83.45.93.349.35.46 M46 2.79.24.235.84.39.37.44.85 M6 3.7.264.294 3.389.74.77.76 3.39 M 6.345.38.36 2.99 5.2.866.669 2.99 M32 2. 2.88.257 5.32 8.97 2.978.652 5.32 M3 4.95.49.433 2.973 2.967.79.958 2.973 M46 2.76.792.62 5.273 9.268.982.27 5.273 M6 8.64.458.65 8.838 4.77.359 2.558 8.838 M 6..294.479 6.532 9.795.6 2.8 6.533 M32 44.69 9.95 6.756 2.68 38.2 4.28 2.78 2.68 M3 3.47 2.67.54 6.499 8.35 3.33.977 6.53 M46 34.43 2.37 2.95 3.2 2.6 4. 
5.86 3.2 M6 2.97.747 2.382 9.43 9.748 2.654 5.395 9.39 M 35.52 2.95 2.785 2.94 2.53 4.322 3.665 2.95 M32 2.5 27.3 26.8 28.27 5.74 45.7 5.56 28.28 M3 32.2 6.254 2.88 2.89 23.7 7.535 3.538 2.88 M46 85.79 6.79 7.236 29.7 52.5 9.66.9 29.8 M6 42.25 4.2 4.623 38.52 22.77 5.38.4 38.53 (f) Tetrahedra, Shunn-Ham

Table B.1: Results of experiments conducted to verify the hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.1), SPA to kernels without sparsity elimination (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2), and NAI is the naive 3-loop implementation (Section 4.2.4). Results for triangular and tetrahedral element matrices in double precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.6.7.5.32.32.5 M32.2.2.44.27.27.44 M3.2.2.44.27.27.44 M46.6.7.5.32.33.5 M6.24.23.83.37.38.83 M.28.28.35.5.5.35 M32.4.4.2.57.56.2 M3.3.3.47.45.46.47 M46.36.36.2.65.66.2 M6.4.4.259.69.7.259 M.43.43.28.7.75.28 M32.73.72.53.96.99.53 M3.43.43.28.74.72.28 M46.64.64.55.2.5.552 M6.64.64.552.9.4.55 M.64.64.528...527 M32..9.25.67.65.2 M3.6.6.563.4.7.563 M46.2.5.327.87.9.326 M6.97.95.7.77.8.7 M.87.88.849.33.37.849 M32.57.29 2.3.3.347 2.36 M3.82.82.897.43.47.897 M46.47.5 2.57.287.27 2.58 M6.27.29.784.246.249.784 M.2.5.278.64.85.278 M32.22.33 4.337.659.524 4.36 M3.7.9.456.87.93.454 M46.2.222 4.448.394.457 4.448 M6.7.69 2.833.328.332 2.83 (a) Quadrangles, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.26.28.35.48.5.35 M32.4.4.2.55.58.2 M3.3.3.47.46.46.47 M46.36.36.2.65.66.2 M6.4.4.259.69.7.259 M.37.43.28.63.7.28 M32.72.72.53.97.99.53 M3.43.43.28.74.7.28 M46.64.65.55.2.23.552 M6.66.64.552.2.4.55 M.49.63.527.79.98.527 M32..22.2.65.68.29 M3.6.6.563.5.8.563 M46.3.4.327.85.9.327 M6.97.95.7.77.8.7 M.59.87.849.94.26.849 M32.57.229 2.37.296.353 2.36 M3.83.82.897.42.48.897 M46.48.75 2.55.287.336 2.57 M6.32.29.784.246.248.784 M.7.4.278.7.65.277 M32.23.63 4.335.649.692 4.346 M3.8.8.455.86.93.456 M46.28.25 4.447.393.634 4.453 M6.7.69 2.833.329.332 2.83 (b) Quadrangles, Gauss-Legendre-Lobatto Method Table B.2: Results of experiments conducted to verify hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.), SPA to kernels without sparsity elimnation (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). Results for quadrilateral element matrices in single precision on Tesla K4c. Reported values are averages of 3 runs reproducible within 2% in [ms]. APPENDIX B. 
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.43.43.236.8.8.236 M32.5.48.28.63.63.28 M3.5.49.28.62.63.28 M46.43.43.236.8.8.236 M6.65.66.6.6.9.6 M.5.4.535.99.22.537 M32.6.2.97.36.295.969 M3.7.45.352.29.94.349 M46.47.53 2.298.28.39 2.299 M6.89.25 4.29.353.5 4.2 M.227.265 5.468.569.625 5.475 M32.46 2.369.47.889 2.374.47 M3.23.448 5.344.684.492 5.345 M46.358.482.9.45.53.9 M6.45.35 5.96.755 2.9 5.96 M.392 2.25 6.29.529 2.656 6.28 M32 2.247.54 39.94 3.23 3.56 39.94 M3.393 4.6 6.5.279 4.27 6.4 M46.725 3.72 4.45 3.586 4.898 4.46 M6.757 2.97 48. 3.39 4.35 47.99 M.636 4.768 39.42 2.65 39.42 M32 7.682 37.37 9.33 7.66 9.34 M3.85 9.993 39.42 2.392 39.44 M46.963 5.89 8.5 6.928 8.2 M6.427 5.789 8.4 6.9 8.6 M 4.22 85.5 5.654 85.5 M32 4.97 3.28 4.47 3.34 M3.83 85.39 4.5 85.42 M46 5.597 298.37 2.5 298.25 M6 2.29 255.74 9.5 255.82 (c) Hexahedra, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M..2.537.95.22.535 M32.58.22.969.355.292.968 M3.8.46.352.2.94.35 M46.49.53 2.297.3.39 2.298 M6.88.26 4.23.353.5 4.2 M.25.255 5.468.526.588 5.464 M32.398 2.86.47.85 2.86.47 M3.232.45 5.352.692.488 5.343 M46.357.728.9.38.288.92 M6.46.37 5.96.78 2.58 5.96 M.347.93 6.29.638 2.393 6.29 M32 2.264 3.35 39.93 2.463 2.52 39.94 M3.398 4.66 6.5.249 4.273 6.5 M46.727 5.94 4.45 3.554 6.68 4.45 M6.757 2.887 48.2 3.393 4.34 48. M.59 4.72 39.42.62 5.266 39.44 M32 6.23 45.3 9.33 6.43 9.23 M3.746 9.988 39.42 2.396 39.42 M46.852 7.43 8.2 6.773 8.6 M6.32 5.846 8. 5.627 8.2 M.67 4.35 85.53 2.89 85.52 M32 5.56 3.23 3.86 3.36 M3.76 85.39 3.72 85.37 M46 5.38 298.26.47 298.26 M6 2.67 255.77 9.2 255.84 (d) Hexahedra, Gauss-Legendre-Lobatto Method Table B.2: Results of experiments conducted to verify hypothesis stated in Section 4.2. 
TEX corresponds to kernels using the texture cache (Section 4.2.1), SPA to kernels without sparsity elimination (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2), and NAI is the naive 3-loop implementation (Section 4.2.4). Results for hexahedral element matrices in single precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.4.4.37.24.25.37 M32.6.6.32.2.2.32 M3.6.7.32.2.2.32 M46.3.3.37.24.24.37 M6.7.9.59.3.3.59 M.2.2.84.38.38.84 M32.28.28..38.38. M3.23.24.8.34.34.8 M46.24.24.98.46.46.97 M6.27.28.35.5.5.35 M.29.29.43.52.52.43 M32.47.46.24.66.66.24 M3.3.32.53.5.5.53 M46.4.4.235.74.74.235 M6.43.43.272.8.77.272 M.4.4.26.69.69.26 M32.69.7.485.99.9.485 M3.4.42.26.73.73.26 M46.6.62.53...52 M6.6.6.52.7.9.52 M.57.57.429.95.95.429 M32.5.8.892.85.87.89 M3.54.53.436.97.97.436 M46.87.88.956.75.75.956 M6.83.84.823.58.6.823 M.74.75.658.8.8.658 M32.74.62.43.37.372.428 M3.68.68.624.26.26.624 M46.32.32.66.237.238.64 M6.7.5.238.26.22.238 (e) Triangles, Williams-Shunn Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.22.22.73.43.43.73 M32.26.26.6.32.32.6 M3.26.26.6.33.33.6 M46.22.22.73.43.43.73 M6.32.3.66.54.54.66 M.46.46.28.9.9.28 M32.62.62.35.84.84.35 M3.52.52.285.7.7.285 M46.53.54.359...359 M6.74.74.777.38.29.777 M.85.83.84.82.82.84 M32.8.6.98.4.49.98 M3.94.96.766.325.325.766 M46..4.256.222.223.255 M6.5.5 2.259.46.35 2.257 M.65.68 2.68.57.57 2.66 M32.433.599 3.36.94.957 3.32 M3.239.27.92.653.653.92 M46.27.277 3.662.52.54 3.662 M6.268.54 5.77.798.824 5.722 M.84.759 4.236.625.625 4.227 M32 2.62 2.66 8.56 2.827 2.875 8.54 M3.87.797 4.3.28.278 4.29 M46.37.42 8.428.847.862 8.444 M6.632.236 2.32 2.595 2.664 2.3 M 2.34.459 8.226 2.879 2.882 8.224 M32 9.44 6.62 7.96.52.47 7.97 M3 2.239 2.64 8.22 2.62 2.624 8.27 M46 6.94 5.695 8.46 6.889 6.9 8.46 M6 3.23 4.36 24.53 4.598 4.656 24.52 (f) Tetrahedra, Shunn-Ham Table B.2: Results of experiments conducted to verify hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.), SPA to kernels without sparsity elimnation (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). 
Results for triangular and tetrahedral element matrices in single precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.27.24.28.69.38.36.38.6 M32.24.24.27.63.3.3.36.63 M3.24.25.39.63.32.32.36.63 M46.2.2.23.69.37.36.37.68 M6.29.27.35.22.45.44.55.22 M.4.36.47.22.62.62.79.22 M32.57.52.8.295.69.67.7.294 M3.42.4.59.26.56.57.78.25 M46.5.46.64.33.85.82..32 M6.52.52.7.384.85.86.37.383 M.89.58.86.443.24.88.58.443 M32.64.93.27.773.62.24.36.854 M3.67.6.9.4.9.85.58.443 M46.74.86.4.796.28.42.36.878 M6.88.82.3.796.43.37.36.878 M.27.86.62.844.69.2.297.74 M32.527.6.523.829.42.225.748.827 M3.4.8.23.755.69.23.293.754 M46.644.48.38.842.364.244.7 2.96 M6.24.2.7.483.23.27.568.683 M.333.5.255.257.332.9.495.294 M32.974.282.944 3.693.696.357.594 3.697 M3.25.2.385.445.235.74.48.268 M46.285.255.44 3.869.98.447.474 3.756 M6.74.74.23 2.53.34.285.97 2.53 M.449.53.44.99.443.234.722.989 M32.624.466.798 6.925.39.696 2.645 6.779 M3.325.43.567 2..32.226.684 2.288 M46 2.49.49.887 6.938.52.634 2.424 6.934 M6.28.227.3 4.2.56.382.529 3.989 (a) Quadrangles, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.34.34.4.22.56.57.8.22 M32.56.5.79.294.69.67.7.266 M3.42.39.6.25.56.57.78.86 M46.5.46.66.33.83.82..274 M6.52.54.6.384.86.85.38.347 M.49.48.73.443.76.78.59.443 M32.47.93.259.772.57.28.336.853 M3.65.6.2.4.89.87.6.443 M46.43.88.86.797.26.4.35.878 M6.85.84.9.796.42.37.36.878 M.6.6.4.843.97.96.297.762 M32.469.56.593.882.399.222.737.824 M3..84.22.756.33.26.294.755 M46.553.39.445.843.356.239.7.842 M6.24.8.63.484.23.23.564.684 M.77.77.24.296.7.6.53.293 M32.94.298.85 3.699.648.457.668 3.698 M3.2..369.269.238.72.468.268 M46.88.225.87 3.828.924.432.477 3.754 M6.73.7.24 2.532.296.289.93 2.879 M.9.9.394.99.37.38.8.988 M32.628.434 2.23 6.786.66.66 2.839 6.78 M3.28.38.67 2..299.272.74 2.285 M46.827.386.524 6.94.399.65 2.57 6.935 M6.28.25.37 3.99.58.39.347 3.99 (b) Quadrangles, Gauss-Legendre-Lobatto Method Table B.3: Results of experiments conducted to verify 
the hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.1), SPA to kernels without sparsity elimination (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2), and NAI is the naive 3-loop implementation (Section 4.2.4). Results for quadrilateral element matrices in double precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.57.54.67.355.2.99.3.355 M32.59.63.96.328.74.73.25.293 M3.62.62.2.3.74.75.25.292 M46.55.54.64.326.99.99.3.37 M6.85.82.64.888.26.24.34.865 M.354.53.372 2.44.53.3.747 2.4 M32.353.242.2 3.34.45.34.294 3.3 M3.265.5.746 2.98.335.26.84 2.27 M46.445.95.583 3.22.78.47.2 3.25 M6.25.24.76 6.37.462.373 2.3 6.38 M.896.32.87 9.355.945.86 3.442 8.77 M32 2.348.79 5.379 7.5.523.9 9.55 7.3 M3.628.35 2.587 8.658.699.623 3.739 8.67 M46 2.546.479.5 7.54 2.6.552 6.868 7.54 M6.573.52 2.56 25.8.58.437 9.896 25.77 M 3.64.68 3.725 26.32 2.23.26.47 26.3 M32 8.575 3.3 23.93 65.8 4.47 2.84 37.76 65.7 M3.566.52 8.563 26.25.425.49 4.58 26.26 M46 6.483.25 7.68 65.74 5.48 3.45 25.83 65.7 M6.64.838 5.24 78.55 2.98 2.53 44.5 78.49 M 7.463 3.62 65. 5. 3.848 64.9 M32 2.27 8.28 94.54 9.26 6.942 93.64 M3 4.58.95 64.99 3.9 2.58 64.89 M46 6.94 4.42 94.93.86 7.32 94.72 M6 3.84.454 94.8 4.888 4.635 94.66 M 3.7 7.398 4.23 8.586 7.547 4.2 M32 44.4 6.47 49.63 8.82 2.93 49.88 M3 6.52 2.755 4.7 4.832 3.854 39.94 M46 33.67 8.84 489.6 9.28 3.99 489.8 M6 6.42 2.2 42.3 8.436 6.83 42.3 (c) Hexahedra, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.45.44.23 2.43.279.232.82 2.272 M32.37.239.8 3.59.45.348.422 3.33 M3.263.5.747 2.2.335.26.897 2.98 M46.445.2.586 3.22.78.47.25 3.277 M6.247.239.762 6.32.46.373 2.558 6.33 M.28.274.2 8.788.76.492 3.55 8.783 M32.937.64 6.25 7.5.559.873 2.5 7.5 M3.66.35 2.595 8.624.783.625 3.369 8.622 M46.87.448 2.94 7.56.946.5 6.864 7.56 M6.576.522 2.6 25.8.387.98 9.957 25.78 M.45.44 3.259 26.33.9.78.53 26.32 M32 7.78 2.854 25.7 65.24 4.39 2.773 38.89 65.7 M3.377.493 8.472 26.26.379.5 4.95 26.24 M46 5.662.52 5.23 65.73 4.974 2.96 25.82 65.69 M6.639.848 5.385 78.52 3.7 2.524 44.8 78.48 M.676.657.4 64.97.58.393 85.47 64.9 M32 8.7 7.69 94.73 8.87 6.32 94.36 M3 4.5.98 65. 
3.42.826 64.88 M46 4.99 3.993 94.88 9.93 6.723 94.63 M6 3.488.443 94.86 4.938 4.385 94.56 M.349.9 4.8 2.272.944 4.8 M32 37.85 6.2 49.28 7.6 2.5 49.74 M3 5.754 2.446 4.8 4.793 3.296 39.93 M46 3.83 8.355 489.37 8.9 2.8 489.67 M6 6.479 2.227 49.58 8.44 6.824 42.3 (d) Hexahedra, Gauss-Legendre-Lobatto Method Table B.3: Results of experiments conducted to verify hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.), SPA to kernels without sparsity elimnation (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). Results for hexahedral element matrices in double precision on GTX 78 Ti. Reported values are averages of 3 runs reproducible within 2% in [ms]. APPENDIX B. RESULTS FOR INDIVIDUAL OPTIMISATIONS 79
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.2.4.27.42.3.25.28.42 M32.9.7.9.34.25.2.25.34 M3.22.8.2.38.27.22.25.38 M46.4.6.47.28.25.28.47 M6.28.22.28.76.36.3.37.75 M.43.36.35.3.53.45.5.3 M32.5.43.49.34.62.54.59.34 M3.46.37.38.7.52.47.49.7 M46.49.43.45.4.65.6.63.4 M6.55.44.62.22.68.63.79.22 M.97.69.7.28.2.89.89.28 M32.22.9.5.345.96.27.3.346 M3..7.72.28.99.87.89.27 M46.58.6.8.36.56.34.36.36 M6.2.85.99.423.32.3.52.423 M.272.27.27.394.29.46.45.393 M32.928.237.273.677.629.274.277.657 M3.25.6.27.357.23.39.46.346 M46.6.233.247.72.54.273.28.69 M6.275.55.83.72.243.93.272.69 M.544.25.27.649.453.229.24.648 M32 2.6.54.52.338.394.65.59.335 M3.49.99.25.597.36.226.234.596 M46.254.54.526.358.4.544.56.356 M6.674.26.3.63.498.357.45.29 M.42.325.352.9.54.357.357.898 M32 3.63.953.984 2.257 2.59.8.54 2.258 M3.8.32.37.88.538.374.352.88 M46 4.222.93.943 2.288 3.69.948.982 2.285 M6.24.49.438.99.659.526.655.964 (e) Triangles, Williams-Shunn Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.34.3.29.89.55.52.5.89 M32.28.32.35.78.4.4.46.78 M3.38.36.37.89.48.46.46.89 M46.27.27..5.49.5. 
M6.5.45.57.256.7.67.98.257 M.44.97.22.43.8.37.58.43 M32.85.2.66.457.95.44.2.457 M3.77.7.42.37.83.33.52.37 M46.84.4.33.488.95.59.94.487 M6.397.52.237.9.3.222.423.89 M.78.43.43.222.746.485.47.348 M32.892.549.77.73.257.627.83.78 M3.956.364.464.67.796.454.499.67 M46.42.532.65 2.2.49.677.7.776 M6 2.54.53.77 3.475.446.83.444 3.782 M 4.27.98.232 3.65 3.763.239.32 3.6 M32 7.926 2.3 2.8 5.2 5.87 2.853 2.44 5.96 M3 3.33.959.25 3.3 2.3.35.398 3.6 M46 8.538.887 2.3 5.36 6.368.826 2.372 5.36 M6 5.723.338.767 9.2 3.42.62 3.623 9.9 M.79 2.268 2.453 6.738 6.776 2.4 2.74 6.737 M32 28.95 7.86 5.936 3.3 24.5.92 9.5 3.5 M3 9.83 2.493 2.438 6.66 5.574 3.29 2.72 6.656 M46 23.3 4.82 5.72 3.45 3.54 5.426 6.39 3.46 M6 4.69 2.952 3.866 9.9 7.7 3.63 8.5 9.92 M 23.9 4.454 4.85 3.28 4.46 5.533 5.435 3.29 M32 78.36 2.66 8.94 29.58 73.46 33.4 33.3 29.6 M3 2.46 5.62 4.949 3.24 5.4 7.66 5.642 3.23 M46 57.56.3.59 29.92 34.84 2.2 3.49 29.89 M6 28.4 6.323 7.637 39.68 5.27 7.268 5.59 39.6 (f) Tetrahedra, Shunn-Ham Table B.3: Results of experiments conducted to verify hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.), SPA to kernels without sparsity elimnation (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). Results for triangular and tetrahedral element matrices in double precision on GTX 78 Ti. Reported values are averages of 3 runs reproducible within 2% in [ms]. APPENDIX B. RESULTS FOR INDIVIDUAL OPTIMISATIONS 8
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.3...38.23.2.22.38 M32.4.3.4.33.8.8.8.38 M3.4.3.4.37.8.8.8.37 M46..2..42.22.2.22.42 M6.7.6.5.7.26.25.25.7 M.23.2.9.4.36.33.34.4 M32.3.26.27.78.39.37.37.78 M3.22.2.2.25.32.3.3.26 M46.27.24.24.78.48.43.44.78 M6.29.26.26.22.5.45.47.222 M.39.28.28.238.59.46.47.239 M32.93.46.48.45.6.63.72.452 M3.33.28.28.238.5.48.47.239 M46.73.42.4.47.2.73.78.47 M6.47.4.42.47.8.78.77.47 M.83.4.42.447.8.65.72.447 M32.343.69.76.4.378.2.39.4 M3.53.39.4.48.79.69.69.48 M46.239.65.66.3.26.2.4.3 M6.68.6.6.93.33.6.27.93 M.23.55.57.725.258.88.9.725 M32.689..4.988.59.392.395.989 M3.94.53.53.764.3.92.7.764 M46.892.95.95 2.48.727.95.222 2.49 M6.88.82.83.59.86.56.9.52 M.298.7.73..33.7.49.99 M32.225.42.23 3.626.849.455.654 3.783 M3.73.68.7.24.23.23.58.23 M46.552.28.4 3.484.8.262.56 3.49 M6.42.9.7 2.3.247.22.263 2.28 (a) Quadrangles, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.8.8.9.4.34.32.33.4 M32.3.27.27.78.39.36.37.78 M3.22.2.2.25.32.3.3.25 M46.27.23.24.78.48.43.44.78 M6.3.26.26.22.5.45.47.22 M.25.24.28.239.44.42.47.238 M32.88.46.48.45.4.67.72.452 M3.34.28.28.238.5.48.47.238 M46.64.4.42.469.95.79.77.469 M6.43.44.42.47.8.73.77.47 M.33.32.4.447.56.52.72.447 M32.35.7.8.4.9.9.38.42 M3.52.39.4.48.78.7.76.48 M46.22.66.67.29.258.2.44.3 M6.68.6.6.93.33.6.6.93 M.4.4.57.725.66.62.98.725 M32.583..5.989.534.25.382.986 M3.93.53.53.693.32.94.7.693 M46.798.95.2.95.69.9.222.949 M6.86.84.83.377.84.62.9.377 M.49.45.73..77.79.6.99 M32.8.37.427 3.785.784.46.64 3.56 M3.4.68.7.24.57.2.47.23 M46.399.33.6 3.466.52.262.42 3.395 M6.4.9.9 2.29.247.25.274 2.29 (b) Quadrangles, Gauss-Legendre-Lobatto Method Table B.4: Results of experiments conducted to verify hypothesis stated in Section 4.2. 
TEX corresponds to kernels using the texture cache (Section 4.2.1), SPA to kernels without sparsity elimination (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2), and NAI is the naive 3-loop implementation (Section 4.2.4). Results for quadrilateral element matrices in single precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Order = Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.32.28.28.22.59.53.58.23 M32.33.32.33.76.39.4.48.76 M3.33.32.33.76.42.4.43.76 M46.28.28.28.22.57.52.53.22 M6.45.42.42.52.72.69.8.52 M.8.76.73.52.247.3.54.36 M32.25..43.5.324.248.336.694 M3.6.73.94.59.54.4.79.33 M46.26.93.98.94.352.8.222.95 M6.36.22.3 3.2.332.237.437 3.383 M.69.46.59 4.35.848.387.522 4.9 M32.2.355.479 8.64.87.6 2.433 7.972 M3.49.48.988 4.8.628.475.7 4.85 M46.96.227.39 8.422.664.79.548 8.368 M6.49.256.883 2.38.378.28 2.32 2.43 M 2.74.245.377 2.53.423.955 3.666 2.53 M32 4.855.468 7.293 3.48 3.24.797 59.5 3.47 M3.948.25 2.599 2.29.56.889 3.67 2.29 M46 5.236.449 2.486 3. 3.733 2.5 8.829 3.9 M6.68.484.993 36.5 2.56 2.58.28 36.7 M 5.237.398 3.4 3.7 3.476.656 8.347 3.6 M32 3.22 6.76 23.47 89.84 6.948 4.774 35.66 9.8 M3 2.25.49 6.354 3.6 2.77.537 8.384 3.2 M46 5.87.342 3.938 89.97 8.864 4.87 24.79 89.9 M6 2.356.936 3.84 9.5 4.546 3.886 89.88 M 2.93 9.33 65.7 5.999 3.72 65.2 M32 3.42 6.3 227.8 8.98 226.9 M3.2 4.3 65.5 2.758 65. M46 3.797 5.3 226.67 7.549 226.74 M6.44 6.977 94.24 5.782 94. 
(c) Hexahedra, Gauss-Legendre Method Order = 2 Order = 3 Order = 4 Order = 5 Order = 6 Matrix β = β TEX CSE SPA NAI TEX CSE SPA NAI M.74.7.72.36.4.28.6.37 M32.29..48.69.38.242.356.695 M3.7.74.95.6.55.4.84.34 M46.26.94.98.955.353.99.22.93 M6.37.2.29 3.33.332.237.449 3.378 M.44.34.62 4.82.366.359.56 4.359 M32.57.28.9 7.964.6.584 2.6 7.98 M3.42.45.93 4.238.63.486.87 4.36 M46.673.227.94 8.358.577.73.89 8.44 M6.427.253.87 2.37.367.9 2.359 2.39 M.298.27.36 2.53.979.428 3.629 2.55 M32 4.625.52 8.42 3.39 3.28.683 57.89 3.47 M3.8.25 2.595 2.29.22.865 3.523 2.3 M46 5.62.457 3.864 3.2 3.64 2.478 8.83 3.3 M6.69.482.962 36.53 2.56 2.57.33 36.5 M.468.324 3.247 3.8.32.722 8.355 3.5 M32.97 5.998 28.43 9.8 6.67 4.5 328.64 9.5 M3 2.27.436 6.357 3.8 2.75.494 8.383 3.22 M46 2.5.259.92 89.93 6.98 4.333 24.68 89.98 M6.954.938 3.96 9.35 4.48 3.862 24.67 9.28 M.2.7 9.4 65.2 2.47.999 64.98 M32 3.35 69. 227.4 3.27 8.579 226.96 M3.72 4.32 65.24 3.289 2.6 65.5 M46 3.275 3.66 226.8 3.58 7.236 226.42 M6.35 6.788 94. 6.63 5.785 94.9 (d) Hexahedra, Gauss-Legendre-Lobatto Method Table B.4: Results of experiments conducted to verify hypothesis stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.), SPA to kernels without sparsity elimnation (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). Results for hexahedral element matrices in single precision on GTX 78 Ti. Reported values are averages of 3 runs reproducible within 2% in [ms]. APPENDIX B. RESULTS FOR INDIVIDUAL OPTIMISATIONS 82
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(e) Triangles, Williams-Shunn

(f) Tetrahedra, Shunn-Ham

Table B.4: Results of experiments conducted to verify the hypotheses stated in Section 4.2. TEX corresponds to kernels using the texture cache (Section 4.2.1), SPA to kernels without sparsity elimination (Section 4.2.3), CSE to kernels with common sub-expression elimination (Section 4.2.2) and NAI is the naive 3-loop implementation (Section 4.2.4). Results for triangular and tetrahedral element matrices in single precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
Appendix C Plots of Results for Individual Optimisations
[Figure C.1 comprises four scatter panels: (a) GTX 780 Ti, single precision, β = 0; (b) Tesla K40c, double precision, β ≠ 0; (c) GTX 780 Ti, double precision, β ≠ 0; (d) GTX 780 Ti, β ≠ 0. Plot data are not reproducible in text.]

Figure C.1: Comparison of kernels embedding values of the operator matrix in the code and those accessing it through the texture unit. Positive and neutral effects of value embedding on the set of benchmark matrices are indicated in green.
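The embedding strategy compared in Figure C.1 writes the operator's entries into the kernel source as literal constants instead of loading them through the texture unit. A minimal, hypothetical Python sketch of such a generator (the function name and output format are illustrative, not GiMMiK's actual code) that also drops multiplications by zero:

```python
def generate_kernel_body(a):
    """Emit unrolled C-like statements for b = A x, with A's non-zero
    entries embedded as literals and multiplications by zero skipped."""
    lines = []
    for i, row in enumerate(a):
        terms = [f"{v!r} * x[{j}]" for j, v in enumerate(row) if v != 0.0]
        lines.append(f"b[{i}] = {' + '.join(terms) if terms else '0.0'};")
    return "\n".join(lines)

# Illustrative operator matrix; rows with zeros produce shorter statements.
A = [[2.0, 0.0, 1.5],
     [0.0, 0.0, 0.0],
     [3.0, 4.0, 0.0]]
src = generate_kernel_body(A)
print(src)
```

Because every coefficient is a compile-time literal, the device compiler can place it in the instruction stream or constant cache, which is the effect these plots measure.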
[Figure C.2 comprises six scatter panels: (a) Tesla K40c, single precision, β = 0; (b) GTX 780 Ti, single precision, β = 0; (c) Tesla K40c, double precision, β ≠ 0; (d) GTX 780 Ti, double precision, β ≠ 0; (e) Tesla K40c, single precision, β ≠ 0; (f) GTX 780 Ti, single precision, β ≠ 0. Plot data are not reproducible in text.]

Figure C.2: Comparison of kernels eliminating common sub-expressions from the operator matrix and those performing no such optimisation. Positive effects of common sub-expression elimination on the set of benchmark matrices are indicated in green.
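Common sub-expression elimination, as compared in Figure C.2, computes each repeated product once and reuses it. A minimal, hypothetical sketch of one simple form of the idea (hoisting repeated coefficient-times-input products into temporaries; this is illustrative, not GiMMiK's actual implementation):

```python
from collections import Counter

def generate_with_cse(a):
    """Hoist products (coefficient, input element) that occur more than
    once into temporaries, then emit the unrolled row sums."""
    counts = Counter((v, j) for row in a for j, v in enumerate(row) if v != 0.0)
    temps = {}
    for key in sorted(counts):
        if counts[key] > 1:
            temps[key] = f"t{len(temps)}"
    lines = [f"double {temps[(v, j)]} = {v!r} * x[{j}];" for (v, j) in temps]
    for i, row in enumerate(a):
        terms = [temps.get((v, j), f"{v!r} * x[{j}]")
                 for j, v in enumerate(row) if v != 0.0]
        lines.append(f"b[{i}] = {' + '.join(terms) if terms else '0.0'};")
    return "\n".join(lines)

# 2.0 * x[0] and 1.5 * x[2] each appear twice, so both become temporaries.
A = [[2.0, 2.0, 0.0],
     [2.0, 0.0, 1.5],
     [0.0, 3.0, 1.5]]
print(generate_with_cse(A))
```

The trade-off visible in the plots is that temporaries reduce the FLOP count but raise register pressure, so the benefit is not uniform across matrices.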
[Figure C.3 comprises six scatter panels: (a) Tesla K40c, single precision, β = 0; (b) GTX 780 Ti, single precision, β = 0; (c) Tesla K40c, double precision, β ≠ 0; (d) GTX 780 Ti, double precision, β ≠ 0; (e) Tesla K40c, single precision, β ≠ 0; (f) GTX 780 Ti, single precision, β ≠ 0. Plot data are not reproducible in text.]

Figure C.3: Comparison of kernels eliminating sparsity from the operator matrix and those performing all of the multiplications by zero. Positive and neutral effects of sparsity elimination on the set of benchmark matrices are indicated in green.
[Figure C.4 comprises six scatter panels: (a) Tesla K40c, single precision, β = 0; (b) GTX 780 Ti, single precision, β = 0; (c) Tesla K40c, double precision, β ≠ 0; (d) GTX 780 Ti, double precision, β ≠ 0; (e) Tesla K40c, single precision, β ≠ 0; (f) GTX 780 Ti, single precision, β ≠ 0. Plot data are not reproducible in text.]

Figure C.4: Comparison of NVIDIA cublas and the naive 3-loop matrix multiplication kernel. Cases where cublas performs better are indicated in green.
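For reference, the NAI baseline in these comparisons performs the textbook triple loop. A plain Python sketch of the arithmetic it carries out (the thesis's actual baseline is a CUDA/OpenCL kernel; this shows only the computation):

```python
def naive_matmul(a, b):
    """Textbook 3-loop matrix multiply: c[i][j] = sum_k a[i][k] * b[k][j]."""
    n, k_dim, m = len(a), len(a[0]), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0
            for k in range(k_dim):
                acc += a[i][k] * b[k][j]
            c[i][j] = acc
    return c
```

Every multiplication is performed regardless of sparsity, and the operator entries are loaded from memory on each use, which is why the generated kernels can outperform it so widely.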
Appendix D Benchmarking Results for GiMMiK Kernels
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(a) Quadrangles, Gauss-Legendre Method

(b) Quadrangles, Gauss-Legendre-Lobatto Method

Table D.1: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for quadrilateral element matrices in double precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(c) Hexahedra, Gauss-Legendre Method

(d) Hexahedra, Gauss-Legendre-Lobatto Method

Table D.1: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for hexahedral element matrices in double precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(e) Triangles, Williams-Shunn

(f) Tetrahedra, Shunn-Ham Method

Table D.1: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for triangular and tetrahedral element matrices in double precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(a) Quadrangles, Gauss-Legendre Method

(b) Quadrangles, Gauss-Legendre-Lobatto Method

Table D.2: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for quadrilateral element matrices in single precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(c) Hexahedra, Gauss-Legendre Method

(d) Hexahedra, Gauss-Legendre-Lobatto Method

Table D.2: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for hexahedral element matrices in single precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(e) Triangles, Williams-Shunn

(f) Tetrahedra, Shunn-Ham Method

Table D.2: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for triangular and tetrahedral element matrices in single precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(a) Quadrangles, Gauss-Legendre Method

(b) Quadrangles, Gauss-Legendre-Lobatto Method

Table D.3: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for quadrilateral element matrices in double precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(c) Hexahedra, Gauss-Legendre Method

(d) Hexahedra, Gauss-Legendre-Lobatto Method

Table D.3: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for hexahedral element matrices in double precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(e) Triangles, Williams-Shunn

(f) Tetrahedra, Shunn-Ham

Table D.3: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for triangular and tetrahedral element matrices in double precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(a) Quadrangles, Gauss-Legendre Method

(b) Quadrangles, Gauss-Legendre-Lobatto Method

Table D.4: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for quadrilateral element matrices in single precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(c) Hexahedra, Gauss-Legendre Method

(d) Hexahedra, Gauss-Legendre-Lobatto Method

Table D.4: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for hexahedral element matrices in single precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0 and β ≠ 0; the numeric entries were corrupted during text extraction and are omitted.]

(e) Triangles, Williams-Shunn Method

(f) Tetrahedra, Shunn-Ham Method

Table D.4: Benchmarking results for GiMMiK CUDA (GCU) and OpenCL (GCL) kernels, cublas (CU) and clblas (CL) for triangular and tetrahedral element matrices in single precision on the GTX 780 Ti. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
[Per-matrix timings for operator matrices M0, M132, M3, M460 and M6 across polynomial orders at β = 0; the numeric entries were corrupted during text extraction and are omitted.]

(a) Quadrangles, Gauss-Legendre Method

(b) Quadrangles, Gauss-Legendre-Lobatto Method

(c) Hexahedra, Gauss-Legendre Method

(d) Hexahedra, Gauss-Legendre-Lobatto Method

Table D.5: Benchmarking results for GiMMiK OpenCL (GCL) kernels and clblas (CL) for quadrilateral and hexahedral element matrices in double precision on the Tesla K40c. Reported values are averages of 3 runs, reproducible within 2%, in [ms].
(e) Triangles, Williams-Shunn Method
(f) Tetrahedra, Shunn-Ham Method

Table D.5: Benchmarking results for GiMMiK OpenCL (GCL) kernels and clblas (CL) for triangular and tetrahedral element matrices in double precision on the FirePro W9100. Reported values are averages of 3 runs, reproducible to within 2%, in [ms].
Appendix E

Plots of Benchmarking and Profiling Results for GiMMiK Kernels
Figure E.1: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision, β ≠ 0, on the Tesla K40c.
Figure E.2: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β ≠ 0, on the Tesla K40c.
Figure E.3: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision, β ≠ 0, on the GTX 780 Ti.
Figure E.4: Plots illustrating the speedup of GiMMiK's CUDA kernels over cublas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β ≠ 0, on the GTX 780 Ti.
Figure E.5: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision, β = 0, on the Tesla K40c.

Figure E.6: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β = 0, on the Tesla K40c.
Figure E.7: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision, β = 0, on the GTX 780 Ti.

Figure E.8: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β = 0, on the GTX 780 Ti.
Figure E.9: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision, β ≠ 0, on the Tesla K40c.

Figure E.10: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β ≠ 0, on the Tesla K40c.
Figure E.11: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for double precision, β ≠ 0, on the GTX 780 Ti.

Figure E.12: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β ≠ 0, on the GTX 780 Ti.
Figure E.13: Plots illustrating the speedup of GiMMiK's OpenCL kernels over clblas, the achieved percentage of the peak floating-point rate and the achieved percentage of the peak memory bandwidth. The metric of interest is represented through the size and colour intensity of the data points. Speedups smaller than 1 are denoted with crosses. These plots are for single precision, β = 0, on the FirePro W9100.
Appendix F

Speedups of Individual Matrices Stacked Together to Mimic PyFR

From the practical standpoint of running a PyFR simulation, the most interesting case is that of matrices for the third order of accuracy, as this order provides a good balance between the accuracy and the cost of a simulation. For the sake of brevity, only stacked plots for the third order of accuracy are shown in this appendix. The procedure used to generate this data is described in Section 5.4. Table F.1 gives the results of aggregating the benchmarking results for individual matrices to mimic the matrix multiplication steps executed by PyFR.
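The aggregation can be sketched as follows: the benchmarked time of each operator-matrix kernel is summed over the multiplication steps that PyFR issues in one time step, once per backend, and the per-step totals are compared. The matrix names follow PyFR's operator naming; all timing values below are illustrative placeholders, not measured data.

```python
# Sketch of the stacking procedure (cf. Section 5.4): sum per-matrix
# kernel times [ms] over the matrix-multiplication steps of one PyFR
# time step, then compare backends.  The numbers are illustrative only.

def combined_step_time(step_timings):
    """Total time [ms] of one mimicked PyFR time step."""
    return sum(step_timings.values())

# Hypothetical per-matrix times [ms] for the PyFR operator matrices.
gimmik = {"M0": 0.12, "M3": 0.08, "M6": 0.21, "M132": 0.30, "M460": 0.25}
cublas = {"M0": 0.40, "M3": 0.15, "M6": 0.35, "M132": 0.33, "M460": 0.41}

t_gim = combined_step_time(gimmik)   # GIM column of Table F.1
t_cu = combined_step_time(cublas)    # CU column of Table F.1
speedup = t_cu / t_gim               # overall speedup of GiMMiK over cublas
```

The MIN column is produced the same way, except that each per-matrix entry is the device-imposed lower bound rather than a measured time.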
For each device (Tesla K40c, GTX 780 Ti), precision (double, single) and order of accuracy, the table reports three times, MIN, CU and GIM, for each element type: Quad GL, Quad GLL, Hex GL, Hex GLL, Tri WS and Tet SH.

Table F.1: Benchmarking results for individual matrices combined together to mimic the matrix multiplication steps executed by PyFR. MIN is the minimum time required by any dense GEMM based on the peak floating-point rate and memory bandwidth of the device. CU is the matrix-multiplication step performed by cublas GEMM and GIM is the step performed by GiMMiK. Times are reported in [ms].
Panels: Quad, Gauss-Legendre; Quad, Gauss-Legendre-Lobatto; Hex, Gauss-Legendre. Each panel plots time [ms] per matrix (M0, M3, M6, M132, M460) for GiMMiK (gi) and cublas (cu) in double (d) and single (s) precision on the K40c and GTX 780 Ti.

Figure F.1: Combined time for execution of a single time step in PyFR on a quadrilateral element mesh at 3rd order of accuracy, and the theoretical limits imposed by the devices.
Panels: Hex, Gauss-Legendre-Lobatto; Tri, Williams-Shunn; Tet, Shunn-Ham. Each panel plots time [ms] per matrix (M0, M3, M6, M132, M460) for GiMMiK (gi) and cublas (cu) in double (d) and single (s) precision on the K40c and GTX 780 Ti.

Figure F.1 (continued): Combined time for execution of a single time step in PyFR at 3rd order of accuracy, and the theoretical limits imposed by the devices.
Appendix G

Performance Bounds for GiMMiK Kernels
(a) Tesla K40c, β = 0. (b) GTX 780 Ti, β = 0.

Figure G.1: Memory bandwidth bound (blue) and floating-point rate bound (red) for single precision, β = 0 kernels for a set of benchmark matrices.
(c) Tesla K40c, β ≠ 0. (d) GTX 780 Ti, β ≠ 0.

Figure G.1 (continued): Memory bandwidth bound (blue) and floating-point rate bound (red) for double precision, β ≠ 0 kernels for a set of benchmark matrices.
(e) Tesla K40c, β ≠ 0. (f) GTX 780 Ti, β ≠ 0.

Figure G.1 (continued): Memory bandwidth bound (blue) and floating-point rate bound (red) for single precision, β ≠ 0 kernels for a set of benchmark matrices.
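Bounds of this kind are typically roofline-style estimates: a kernel can finish no faster than its floating-point work divided by the device's peak rate, nor faster than its memory traffic divided by the peak bandwidth. The sketch below illustrates this under stated assumptions; it is not the thesis's exact model, the peak rates are illustrative figures rather than the measured specifications of any device above, and the byte accounting (read B, write C, with A embedded in the generated code) is an assumption.

```python
# Roofline-style sketch of the two bounds for C = A @ B, where A is a
# small m x k operator matrix embedded in the kernel and B is a k x n
# panel.  GiMMiK skips multiplications by zero, so the FLOP count is
# driven by nnz, the number of non-zeros in A.

def gemm_time_bounds(m, k, n, nnz, peak_flops, peak_bw,
                     itemsize=8, beta_zero=True):
    """Return (flop-bound, memory-bound) minimum times in seconds."""
    flops = 2.0 * nnz * n                     # one mul + one add per non-zero, per column of B
    bytes_moved = itemsize * (k * n + m * n)  # read B, write C; A lives in the code itself
    if not beta_zero:
        bytes_moved += itemsize * m * n       # beta != 0 additionally reads C
    return flops / peak_flops, bytes_moved / peak_bw

# e.g. a dense 96 x 64 operator applied to a panel of 10**6 columns,
# with illustrative peaks of 1.43 TFLOP/s and 288 GB/s (double precision).
t_flop, t_mem = gemm_time_bounds(96, 64, 10**6, nnz=96 * 64,
                                 peak_flops=1.43e12, peak_bw=288e9)
t_min = max(t_flop, t_mem)  # the tighter of the two bounds governs
```

For a sparse operator, nnz shrinks and the flop bound falls while the memory bound is unchanged, which is why the memory bandwidth bound tends to dominate for the sparser benchmark matrices.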