AMD APP SDK v2.9 FAQ
1 General Questions

1. Do I need to use additional software with the SDK?
To run an OpenCL application, you must have an OpenCL runtime on your system. If your system includes a recent AMD discrete GPU or an APU, you should also install the latest Catalyst drivers, which can be downloaded from AMD.com. Information on supported devices can be found at developer.amd.com/appsdk. If your system does not include a recent AMD discrete GPU or APU, the SDK installs a CPU-only OpenCL runtime. We also recommend using the debugging, profiling, and analysis tools contained in the AMD CodeXL heterogeneous compute tools suite.

2. Which versions of the OpenCL standard does this SDK support?
AMD APP SDK v2.9 supports the development of applications using the OpenCL Specification v1.2.

3. Will applications developed to execute on OpenCL 1.1 still operate in an OpenCL 1.2 environment?
OpenCL is designed to be backwards compatible. The OpenCL 1.2 runtime delivered with the AMD Catalyst drivers runs any OpenCL 1.1-compliant application. However, an OpenCL 1.2-compliant application will not execute on an OpenCL 1.1 runtime if it uses APIs that are only supported by OpenCL 1.2. An application can check what a platform supports at runtime by querying CL_PLATFORM_VERSION (a short sketch appears after question 5 below).

4. Does AMD provide any additional OpenCL samples, other than those contained within the SDK?
The most recent versions of all of the samples contained within the SDK are also available for individual download from the developer.amd.com/appsdk Samples & Demos page. This page also contains additional samples that either were too large to include in the SDK, or that have been developed since the most recent SDK release. Check this web page for new, updated, or large samples.

5. How often can I expect to get AMD APP SDK updates?
Developers can expect the AMD APP SDK to be updated two to three times a year. Actual release intervals may vary depending on available new features and product updates. AMD is committed to providing developers with regular updates so they can take advantage of the latest developments in AMD APP technology.
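As a companion to question 3, the following is a minimal sketch (not part of the original FAQ) of how an application can inspect the OpenCL version string reported by each platform before relying on OpenCL 1.2-only APIs. It assumes only the standard OpenCL headers and an installed OpenCL runtime; the fixed array size is an arbitrary simplification.

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(0, NULL, &numPlatforms);   // first ask how many platforms exist
        if (numPlatforms == 0) {
            printf("No OpenCL platforms found.\n");
            return 1;
        }
        if (numPlatforms > 8) numPlatforms = 8;     // keep the example's storage fixed

        cl_platform_id platforms[8];
        clGetPlatformIDs(numPlatforms, platforms, NULL);

        for (cl_uint i = 0; i < numPlatforms; ++i) {
            char version[256] = {0};
            // CL_PLATFORM_VERSION returns a string such as "OpenCL 1.2 AMD-APP (...)".
            clGetPlatformInfo(platforms[i], CL_PLATFORM_VERSION,
                              sizeof(version), version, NULL);
            printf("Platform %u: %s\n", i, version);
        }
        return 0;
    }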
6. What is the difference between the CPU and GPU components of OpenCL that are bundled with the AMD APP SDK?
The CPU component uses the compatible CPU cores in your system to accelerate your OpenCL compute kernels; the GPU component uses the compatible GPU cores in your system to accelerate your OpenCL compute kernels.

7. What CPUs does the AMD APP SDK v2.9 with OpenCL 1.2 support work on?
The CPU component of OpenCL bundled with the AMD APP SDK works with any x86 CPU with SSE3 or later, as well as SSE2.x or later. AMD CPUs have supported SSE3 (and later) for many processor generations. Some examples of AMD CPUs that support SSE3 (or later) are the AMD Athlon 64 (starting with the Venice/San Diego steppings), AMD Athlon 64 X2, AMD Athlon 64 FX (starting with the San Diego stepping), AMD Opteron (starting with the E4 stepping), AMD Sempron (starting with the Palermo stepping), AMD Phenom, AMD Turion 64, and AMD Turion 64 X2.

8. What APUs and GPUs does the AMD APP SDK v2.9 with OpenCL 1.2 support work on?
For the list of supported APUs and GPUs, see the AMD APP SDK v2.9 System Requirements list at developer.amd.com/appsdk.

9. Can my OpenCL code run on GPUs from other vendors?
At this time, AMD does not plan to have the AMD APP SDK support GPU products from other vendors; however, since OpenCL is an industry-standard programming interface, programs written in OpenCL 1.2 can be recompiled and run with any OpenCL-compliant compiler and runtime.

10. What version of MS Visual Studio is supported?
The AMD APP SDK v2.9 with OpenCL 1.2 supports Microsoft Visual Studio 2010 Professional Edition and Microsoft Visual Studio 2012.

11. Is it possible to run multiple AMD APP applications (compute and graphics) concurrently?
Multiple AMD APP applications can be run concurrently, as long as they do not access the same GPU at the same time. AMD APP applications that attempt to access the same GPU at the same time are automatically serialized by the runtime system.

12. Which graphics driver is required for the current AMD APP SDK v2.9 with OpenCL 1.2 CPU support?
For the minimum required graphics driver, see the AMD APP SDK v2.9 System Requirements list at developer.amd.com/appsdk. In general, it is advised that you update your system to use the most recent graphics drivers available for it.

13. How does OpenCL compare to other APIs and programming platforms for parallel computing, such as OpenMP and MPI? Which one should I use?
OpenCL is designed to target parallelism within a single system and provide portability to multiple different types of devices (GPUs, multi-core CPUs, etc.). OpenMP targets multi-core CPUs and SMP systems. MPI is a message-passing protocol most often used for communication between nodes; it is a popular parallel programming model for clusters of machines. Each programming model has its advantages. It is anticipated that developers will mix APIs, for example, programming a cluster of machines containing GPUs with both MPI and OpenCL.
2 OpenCL Questions

14. If I write my code on the CPU version, does it work on the GPU version, or do I have to make changes?
Assuming the size limitations for CPUs are considered, the code works on both the CPU and GPU components. Performance tuning, however, is different for each.

15. What is the precision of mathematical operations?
See Chapter 7, OpenCL Numerical Compliance, of the OpenCL 1.2 Specification for the exact precision requirements of mathematical operations.

16. Are byte-addressable stores supported?
Byte-addressable stores are supported.

17. Are long integers supported?
Yes, 64-bit integers are supported.

18. Are operations on vectors supported?
Yes, operations on vectors are supported.

19. Is swizzling supported?
Yes, swizzling (the rearranging of elements in a vector) is supported.

20. How does one verify if OpenCL has been installed correctly on the system?
Run clinfo from the command-line interface. The clinfo tool shows the available OpenCL devices on a system. (A minimal enumeration sketch appears after question 22 below.)

21. How do I know if I have installed the latest version of the AMD APP SDK?
On installation of the SDK, if an internet connection is available, the installer states whether or not a newer SDK is available in the file VersionInfo.txt in the directory C:\Users\<username>\Documents\AMD APP. Alternatively, you can check for the latest available version of the AMD APP SDK at developer.amd.com/appsdk.

22. What is OpenCL?
OpenCL (Open Computing Language) is the first truly open and royalty-free programming standard for general-purpose computations on heterogeneous systems. OpenCL lets programmers preserve their expensive source code investment and easily target both multi-core CPUs and the latest GPUs, such as those from AMD. Developed in an open standards committee with representatives from major industry vendors, OpenCL gives users a cross-vendor, non-proprietary solution for accelerating their applications on their CPU and GPU cores.
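To complement clinfo (question 20), here is a minimal enumeration sketch, not taken from the SDK samples, that lists the OpenCL devices visible to the runtime. It assumes only the standard OpenCL headers; the fixed array sizes are arbitrary simplifications.

    #include <CL/cl.h>
    #include <cstdio>

    int main() {
        cl_platform_id platforms[8];
        cl_uint numPlatforms = 0;
        clGetPlatformIDs(8, platforms, &numPlatforms);          // up to 8 platforms
        if (numPlatforms > 8) numPlatforms = 8;

        for (cl_uint p = 0; p < numPlatforms; ++p) {
            char name[256] = {0};
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
            printf("Platform: %s\n", name);

            cl_device_id devices[16];
            cl_uint numDevices = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &numDevices);
            if (numDevices > 16) numDevices = 16;

            for (cl_uint d = 0; d < numDevices; ++d) {
                char devName[256] = {0};
                cl_device_type type = 0;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(devName), devName, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
                printf("  Device: %s (%s)\n", devName,
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
            }
        }
        return 0;
    }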
23. How much does the AMD OpenCL development platform cost?
AMD bundles support for OpenCL as part of its AMD APP SDK product offering. The AMD APP SDK is offered to developers and users free of charge.

24. What operating systems does the AMD APP SDK v2.9 with OpenCL 1.2 support?
AMD APP SDK v2.9 runs on 32-bit and 64-bit versions of Windows and Linux. For the exact list of supported operating systems, see the AMD APP SDK v2.9 System Requirements list at developer.amd.com/appsdk.

25. Can I write an OpenCL application that works on both CPU and GPU?
Applications that program to the core OpenCL 1.2 API and kernel language should be able to target both CPUs and GPUs. At runtime, the appropriate device (CPU or GPU) must be selected by the application.

26. Does the AMD OpenCL compiler automatically vectorize for SSE on the CPU?
The CPU component of OpenCL that is bundled with the AMD APP SDK takes advantage of SSE3 instructions on the CPU. It also takes advantage of AVX instructions where supported. In addition to AVX, the OpenCL math library functions also leverage XOP and FMA4 capabilities on CPUs that support them.

27. Does the AMD APP SDK v2.9 with OpenCL 1.2 support work on multiple GPUs (ATI CrossFire)?
OpenCL applications can explicitly invoke separate compute kernels on multiple compatible GPUs in a single system. The partitioning of the algorithm into multiple parallel compute kernels must be done by the developer. It is recommended that ATI CrossFire be turned off in most system configurations so that AMD APP applications can access all available GPUs in the system. ATI CrossFire technology allows multiple AMD GPUs to work together on a single graphics-rendering task. This method does not apply to AMD APP computational tasks because it is not compatible with the compute model used for AMD APP applications.

28. Can I ship pre-compiled OpenCL application binaries that work on either CPU or GPU?
By using the OpenCL runtime APIs, developers can write OpenCL applications that detect the available compatible CPUs and GPUs in the system. This lets developers pre-compile applications into binaries that work dynamically on either the CPUs or the GPUs present in the target system. Including LLVM IR in the binary provides a means for the binary to support devices for which the application was not explicitly pre-compiled.

29. Is the OpenCL double precision optional extension supported?
The Khronos and AMD double precision extensions are supported on certain devices. Your application can use the OpenCL API to query whether this functionality is supported on the device in use.
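For question 29, the following is a hedged sketch (not an SDK sample) of one way to test for the Khronos (cl_khr_fp64) or AMD (cl_amd_fp64) double precision extensions; it assumes a cl_device_id has already been obtained, for example with clGetDeviceIDs as in the enumeration sketch above.

    #include <CL/cl.h>
    #include <cstring>

    // Returns true if the given device advertises a double precision extension.
    bool supportsFp64(cl_device_id device) {
        char extensions[4096] = {0};
        clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                        sizeof(extensions), extensions, NULL);
        return strstr(extensions, "cl_khr_fp64") != NULL ||
               strstr(extensions, "cl_amd_fp64") != NULL;
    }

A kernel that uses the double type can then guard the code with the usual pragma, for example "#pragma OPENCL EXTENSION cl_khr_fp64 : enable".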
30. Is it possible to write OpenCL code that scales transparently over multiple devices?
For OpenCL programs that target only multi-core CPUs, scaling can be done transparently; however, scaling across multiple GPUs requires the developer to explicitly partition the algorithm into multiple compute kernels, as well as explicitly launch the compute kernels onto each compatible GPU.

31. What should I do if I get wrong results on the Apple platform with AMD devices?
Apple handles support for the Apple platform; please contact them.

32. Is it possible to dynamically index into a vector?
No, this is not possible because a vector is not an array, but a representation of a hardware register.

33. What is the difference between local int a[4] and int a[4]?
local int a[4] uses hardware local memory, which is a small, low-latency, high-bandwidth memory; int a[4] uses per-thread hardware scratch memory, which is located in uncached global memory.

34. How come my program runs slower in OpenCL than in CUDA/Brook+/IL?
When comparing performance, it is better to compare code optimized for our OpenCL platform against code optimized for another vendor's OpenCL platform. By comparing the same toolchain across different vendors, you can find out which vendor's hardware works best for your problem set.

35. Why do read-write images not exist in OpenCL?
OpenCL has a memory consistency model that requires certain constraints (see the OpenCL Specification for more information). Since images are handled by special functional hardware units, those units differ for reading and writing. This is different from pointers, which for the most part use the same hardware units and can guarantee the consistency that OpenCL requires.

36. Does prefetching work on the GPU?
Prefetch is not needed on the GPU because the hardware has a built-in mechanism to hide latency when many work-groups are running. The LDS can be used as a software-controlled cache.

37. How do you determine the maximum number of concurrent work-groups?
The maximum number of concurrent work-groups is determined by resource usage. This includes the number of registers, the amount of LDS space, and the number of threads per work-group. There is no way to directly specify the number of registers used by a kernel. Each SIMD has a 64-wide register file, with each column consisting of 256 x 32 x 4 registers.

38. Is it possible to tell OpenCL not to use all CPU cores?
Yes, use the device fission extension. Also, OpenCL 1.2 introduces the sub-device concept: the CPU can be partitioned into sub-devices/cores, and the program can choose which sub-device/core to use. The Device Fission feature of OpenCL is demonstrated in the DeviceFission APP SDK sample.
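As a sketch of the OpenCL 1.2 sub-device approach mentioned in question 38 (not taken from the DeviceFission sample itself), the following splits the first CPU device into two halves; the partition size and error handling are simplified assumptions.

    #include <CL/cl.h>

    int main() {
        cl_platform_id platform;
        cl_device_id cpu;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL);

        cl_uint units = 0;
        clGetDeviceInfo(cpu, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(units), &units, NULL);

        // CL_DEVICE_PARTITION_EQUALLY takes the number of compute units per
        // sub-device; units/2 yields two halves on a CPU with an even core count.
        cl_device_partition_property props[] = {
            CL_DEVICE_PARTITION_EQUALLY,
            (cl_device_partition_property)(units / 2),
            0
        };
        cl_device_id subDevices[2];
        cl_uint numCreated = 0;
        cl_int err = clCreateSubDevices(cpu, props, 2, subDevices, &numCreated);

        // Each sub-device can now get its own context and queue, so a kernel
        // can be restricted to a subset of the CPU cores.
        return (err == CL_SUCCESS && numCreated >= 2) ? 0 : 1;
    }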
3 OpenCL Optimizations

39. How do I use constants in a kernel for best performance?
For performance using constants, highest to lowest performance is achieved using:
- Literal values.
- Constant pointer with compile-time constant indexing.
- Constant pointer with runtime constant indexing that is the same for all threads.
- Constant pointer with linear access indexing that is the same for all threads.
- Constant pointer with linear access indexing that is different between threads.
- Constant pointer with random access indexing.

40. Why are literal values the fastest way to use constants?
Up to 96 bits of literal values are embedded in the instruction; thus, in theory, there is no limit on the number of usable literals. In practice, the limit is 16K unique literals in a compilation unit.

41. Why does a * b + c not generate a mad instruction?
Depending on the hardware and the floating-point precision, the compiler may not generate a mad instruction for the computation of a * b + c because of the floating-point precision requirements in the OpenCL specification. Developers who want to exploit the mad instruction for performance can replace that computation with the mad() built-in function in OpenCL (a small kernel sketch appears after question 44 below).

42. What is the preferred work-group size?
The preferred work-group size on the AMD platform is a multiple of 64.

43. What is more efficient, the ternary operator ?: or the select function?
The select function compiles to the single-cycle instruction cmov_logical; in most cases, ?: also compiles to the same instruction. In some cases, when one of the operands is in memory, the ?: operator is compiled down to an IF/ELSE block. An IF/ELSE block takes more than a single instruction to execute.

44. What is the difference between 24-bit and 32-bit integer operations?
24-bit operations are faster because they use floating-point hardware and can execute on all compute units. Many 32-bit integer operations also run on all stream processors, but if both a 24-bit and a 32-bit version exist for the same instruction, the 32-bit version executes at a rate of only one per cycle.
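The kernel below is an illustrative sketch for question 41, not an SDK sample: it contrasts the strict a * b + c form with the mad() built-in, which permits the implementation to fuse the operation, possibly at reduced precision, for speed.

    // OpenCL C kernel (illustrative only).
    __kernel void mad_example(__global const float* a,
                              __global const float* b,
                              __global const float* c,
                              __global float* out)
    {
        size_t i = get_global_id(0);

        // The compiler may keep this as separate multiply and add instructions
        // to honor the precision rules described in question 41.
        float strict = a[i] * b[i] + c[i];

        // mad() explicitly allows a fused, possibly lower-precision result.
        float fused = mad(a[i], b[i], c[i]);

        out[i] = strict - fused;   // usually 0.0f, but not guaranteed by the spec
    }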
4 Hardware Information

45. How are 8/16-bit operations handled in hardware?
The 8/16-bit operations are emulated with 32-bit registers.

46. Do 24-bit integers exist in hardware?
No. There are 24-bit instructions, such as MUL24/MAD24, but the smallest integer in hardware registers is 32 bits.

47. What are the benefits of using 8/16-bit types over 32-bit integers?
8/16-bit types take less memory than a 32-bit integer type, increasing the amount of data you are able to load with a single instruction. The OpenCL compiler up-converts to 32 bits on load and down-converts to the correct type on store. (A short kernel sketch appears after question 53 below.)

48. What is the difference between a GPR and a shared register?
Although they are physically equivalent, the difference is whether the register offset in the hardware is absolute to the register file or relative to the wavefront ID.

49. How often are wavefronts created?
Wavefronts are created by the hardware to execute as long as resources are available. If they are created but cannot execute immediately, they are put in a wait queue, where they stay until currently running wavefronts have finished.

50. What is the maximum number of wavefronts?
The maximum number of wavefronts is determined by whichever resource limits the number of wavefronts that can be spawned. This can be the number of registers, the amount of local memory, the required stack size, or other factors. Developers are encouraged to profile the OpenCL application using CodeXL to determine the resources that limit the number of wavefronts and thereby affect performance.

51. Why do I get blue or black screens when executing longer-running kernels?
The GPU is not a preemptable device. If you are using the GPU as your display device, ensure that a compute program does not use the GPU past a certain time limit set by Windows. Exceeding the time limit causes the watchdog timer to trigger; this can result in undefined program results.

52. What is the cost of a clause switch?
In general, the latency of a clause switch is around 40 cycles. Note that this is relevant only for Evergreen and Northern Islands devices.

53. How can I hide clause switch latency?
By executing multiple wavefronts in parallel. Note that this is relevant only for Evergreen and Northern Islands devices.
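As a hedged illustration of question 47 (not an SDK sample), this kernel loads four 8-bit values with a single 32-bit-wide access; the compiler up-converts them to 32-bit registers for arithmetic and down-converts on store.

    // OpenCL C kernel (illustrative only).
    __kernel void brighten(__global const uchar4* src,
                           __global uchar4* dst,
                           uint offset)
    {
        size_t i = get_global_id(0);
        uchar4 pixel = src[i];                       // one load brings in four 8-bit values
        uint4 widened = convert_uint4(pixel) + (uint4)(offset);   // math in 32-bit lanes
        dst[i] = convert_uchar4_sat(widened);        // saturating down-conversion on store
    }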
54. How can I reduce clause switches?
Clause switches are almost directly related to source program control flow. By reducing source program control flow, clause switches can also be reduced. This is relevant only for Evergreen and Northern Islands devices.

55. How does the hardware execute a loop on a wavefront?
The loop ends execution for a wavefront only once every thread in the wavefront has broken out of the loop. Once a thread breaks out of the loop, all of its execution results are masked, but the execution still occurs.

56. How does flow control work with wavefronts?
There are no flow control units for individual threads, so the whole wavefront must execute a branch if any thread in the wavefront executes the branch. If the condition is false, the results are not written to memory, but the execution still occurs.

57. What is the constant buffer size on GPU hardware?
64 KB.

58. What happens with out-of-bound memory operations?
Writes are dropped, and reads return a pre-defined value.

5 Build Environment for the APP SDK samples

59. Can I use the SDK with Microsoft Visual Studio 2012 Express?
Due to limitations in Microsoft Visual Studio Express, it is only possible to use the build files for individual samples. Microsoft Visual Studio Express does not support building all of the samples at the same time. The project files that build all of the samples are only supported by full versions of Microsoft Visual Studio 2010 or 2012.

60. Why is CMake preferred over typical make files?
CMake is a cross-platform, open-source build system that can be freely downloaded from cmake.org. CMake thus provides users with added flexibility.

6 Bolt

61. What is Bolt?
Bolt is a C++ template library optimized for heterogeneous computing. Bolt is designed to provide high-performance library implementations for common algorithms such as scan, reduce, transform, and sort. The Bolt interface was modeled on the C++ Standard Template Library (STL). Developers familiar with the STL will recognize many of the Bolt APIs and customization techniques.

62. Where can I learn more about Bolt?
You can learn more about Bolt on the Bolt project page.
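As a brief sketch of the STL-like Bolt interface described in question 61 (the include path is assumed from the Bolt 1.x distribution, so treat it as an assumption), sorting a std::vector looks very much like calling std::sort:

    #include <bolt/cl/sort.h>   // header name assumed from the Bolt 1.x distribution
    #include <vector>
    #include <cstdlib>

    int main() {
        std::vector<int> data(1 << 20);
        for (size_t i = 0; i < data.size(); ++i)
            data[i] = std::rand();

        // Same call shape as std::sort; Bolt dispatches to the GPU when one is available.
        bolt::cl::sort(data.begin(), data.end());
        return 0;
    }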
63. Why does Bolt need TBB libraries?
Bolt supports parallelization using Intel Threading Building Blocks (TBB). One can switch between the OpenCL/C++ AMP and TBB code paths by changing the Bolt control structure.

64. Where should one report the issues found in Bolt?
Issues found in Bolt can be reported on the Bolt project's issue tracker.

65. What is the selection order of the devices in Bolt?
As of the Bolt 1.1 release, the default mode is "Automatic", which means Bolt tries the OpenCL/C++ AMP path first, then TBB, then the serial CPU path. Control goes to the other paths only if the selected path is not available. The selection order is as follows:
i. Run on the GPU if an AMD GPU is found and the AMD APP SDK is installed.
ii. If an AMD GPU is not found, run on the multi-core CPU if Intel TBB is installed.
iii. If neither an AMD GPU nor Intel TBB is found, run the serial CPU path.
(A short control-structure sketch appears after question 68 below.)

66. How do I force the APP SDK Bolt samples to run on a specific code path (GPU, Multicore TBB, or Serial CPU)?
The APP SDK Bolt samples provide the "--device" option to run on the GPU, the multi-core CPU (TBB), or the serial CPU. The usage is as follows:
For Bolt OpenCL samples: <BoltSample>.exe --device [auto | OpenCL | SerialCpu | MultiCoreCpu]
For Bolt C++ AMP samples: <BoltSample>.exe --device [auto | GPU | SerialCpu | MultiCoreCpu]

67. Is installing TBB necessary?
No. The TBB libraries are needed only if one wants to make use of the multi-core CPU path supported in Bolt.

68. Do all APP SDK Bolt samples support running all 3 Bolt code paths (GPU, Multicore TBB, Serial CPU)?
Except for the introductory samples, BoltIntro and BoltAMPIntro, all the other APP SDK Bolt samples support running in all 3 code paths: GPU, Multicore TBB, and Serial CPU. These Bolt projects include additional build configurations (Debug_TBB and Release_TBB) that enable the TBB path and set the precompiler flag ENABLE_TBB. Binaries generated in these build configurations have "_TBB" as a suffix and can be used to run in the Multicore TBB path in addition to the GPU and Serial CPU paths. An additional requirement for running the multicore path is to have the environment variable TBB_ROOT point to the install location of Intel TBB. Samples compiled with the regular Debug and Release configurations can only be used to run in the GPU and Serial CPU modes.
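For questions 63 and 65, the following sketch shows the control-structure idea in application code. It assumes the Bolt 1.1 bolt::cl::control interface with setForceRunMode as described in the Bolt documentation; treat the exact header paths and enumerator names as assumptions.

    #include <bolt/cl/control.h>      // header names assumed from the Bolt 1.x distribution
    #include <bolt/cl/transform.h>
    #include <bolt/cl/functional.h>
    #include <vector>

    int main() {
        std::vector<int> in(1024, 2), out(1024);

        // Force the multi-core TBB path instead of the default "Automatic" selection.
        bolt::cl::control ctl = bolt::cl::control::getDefault();
        ctl.setForceRunMode(bolt::cl::control::MultiCoreCpu);

        // Bolt algorithms accept the control object as an optional first argument.
        bolt::cl::transform(ctl, in.begin(), in.end(), out.begin(),
                            bolt::cl::negate<int>());
        return 0;
    }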
69. Do all the Bolt 1.1 APIs support the Multicore TBB path?
Yes. Beginning with the Bolt 1.1 release, all the Bolt APIs support the Multicore TBB path.

70. Which GPU compute technologies are supported by Bolt?
APP SDK 2.9 is aligned with the Bolt 1.1 libraries. The GPU compute technologies supported by Bolt 1.1 are OpenCL and C++ AMP.

71. Does Bolt require linking to a library?
Yes, Bolt includes a small static library that must be linked into a user program to properly resolve symbols. Since Bolt is a template library, almost all functionality is written in header files, but OpenCL follows an online compilation model, which makes sense to implement in a library rather than in header files.

72. Does Bolt depend on any other libraries?
Yes, Bolt has dependencies on Boost. Bolt also depends on the Intel TBB libraries for execution on the multi-core CPU path.

73. What algorithms does Bolt currently support?
For the complete list of functions supported by Bolt 1.1 and the supported code paths, see the Bolt documentation.

74. What version of OpenCL does Bolt currently require?
Bolt uses features available in OpenCL v1.2.

75. Does Bolt require the AMD OpenCL runtime?
Yes. Bolt requires the use of templates in kernel code, and at the time of this writing, AMD is the only vendor to provide this support.

76. Which Catalyst package should I use with Bolt?
Generally speaking, downloading and installing the latest Catalyst package provides the most recent OpenCL runtime; the most recent Catalyst beta available at the time of this writing is recommended.

77. Which license is Bolt licensed under?
The Apache License, Version 2.0.

78. When should I use device_vector vs. regular host memory?
bolt::cl::device_vector is used to manage device-local memory and can deliver higher performance on discrete GPU systems. However, the host memory interfaces eliminate the need to create and manage device_vectors. If memory is re-used across multiple Bolt calls or is referenced by other kernels, using device_vector delivers higher performance.
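As a hedged sketch for question 78 (the include path and constructor form are assumed from the Bolt 1.x documentation), data that will be re-used across several Bolt calls can be kept on the device in a bolt::cl::device_vector:

    #include <bolt/cl/device_vector.h>   // header and constructor form assumed from Bolt 1.x
    #include <bolt/cl/reduce.h>
    #include <vector>

    int main() {
        std::vector<int> host(1 << 20, 1);

        // Copy the host data into device-local memory once...
        bolt::cl::device_vector<int> dv(host.begin(), host.end());

        // ...then reuse it across multiple Bolt calls without re-copying.
        int sum  = bolt::cl::reduce(dv.begin(), dv.end(), 0);
        int sum2 = bolt::cl::reduce(dv.begin(), dv.end(), 0);
        return (sum == sum2) ? 0 : 1;
    }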
79. How do I measure the performance of OpenCL Bolt library calls?
The first time that a Bolt library routine is called, the runtime calls the OpenCL compiler to compile the kernel. Each unique template instantiation reuses the same OpenCL compilation, so two Bolt calls with the same functors are compiled only one time. When measuring the performance of Bolt, it is important to exclude the first call (which carries the compilation overhead) from the timing measurement. Here are two ways to time Bolt function calls. The first example pulls the compiling call outside the timer loop:

    std::vector<int> a(1000000, 1), z(1000000);

    // Warm-up call: triggers the one-time OpenCL kernel compilation.
    bolt::cl::transform(a.begin(), a.end(), z.begin(), bolt::cl::negate<int>());

    starttimer(...);
    for (int i = 0; i < 100; i++) {
        bolt::cl::transform(a.begin(), a.end(), z.begin(), bolt::cl::negate<int>());
    }
    stoptimer(...);

A second alternative is to explicitly exclude the first compilation from the timing region:

    // Explicitly start the timer after the first iteration:
    std::vector<int> a(1000000, 1), z(1000000);
    for (int i = 0; i < 100; i++) {
        if (i == 1) { starttimer(...); }
        bolt::cl::transform(a.begin(), a.end(), z.begin(), bolt::cl::negate<int>());
    }
    stoptimer(...);

80. The Bolt library APIs accept host data structures such as std::vector. Are these copied to the GPU or kept in host memory?
For host data structures, Bolt creates an OpenCL buffer with the CL_MEM_USE_HOST_PTR flag and uses the OpenCL map and unmap APIs to manage host and device access. On the AMD OpenCL implementation, this results in the data being kept in host memory when the host memory is aligned on a 256-byte boundary. In other cases, the runtime copies the data to the GPU as needed. See section 4.5 of the AMD Accelerated Parallel Processing OpenCL Programming Guide for more information on memory optimization. Higher performance may be obtained by aligning the host data structures to 256-byte boundaries or by using the device_vector class for finer control over the allocation properties.

81. The Bolt APIs also accept device_vector inputs. When should I use device_vectors for my inputs?
The bolt::cl::device_vector interface is a thin wrapper around the cl::buffer class; it provides an interface similar to std::vector. Also, the device_vector optionally accepts a flags argument that is passed to the OpenCL buffer creation. The following two flags can be useful:
CL_MEM_ALLOC_HOST_PTR: Forces memory to be allocated in host memory, even on discrete GPUs. GPU access to the data is slower (over the PCIe bus), but the allocation avoids the overhead of copying data to, and from, the GPU. This can be useful when the data is used in only a single Bolt call before being processed again on the CPU.
CL_MEM_USE_PERSISTENT_MEM_AMD: Provides device-resident zero-copy memory. Use this for data that is write-only by the CPU. See section 4.5 of the AMD Accelerated Parallel Processing OpenCL Programming Guide for more information.

7 C++ AMP

82. What is C++ AMP?
C++ Accelerated Massive Parallelism (C++ AMP) accelerates the execution of C++ code by taking advantage of data-parallel hardware, such as the compute units on an APU. C++ AMP is an open specification that extends regular C++ through the use of the restrict keyword to denote portions of code to be executed on the accelerated device. (A minimal restrict(amp) sketch appears after question 88 below.)

83. What AMD devices support C++ AMP?
Any AMD APU or GPU that supports Microsoft DirectX 11 Feature Level 11.0 and higher.

84. Can you run a C++ AMP program on a CPU?
You can run a C++ AMP program on a CPU, but its usage is not recommended in a production environment. For more information, see the MSDN documentation on C++ AMP.

85. Where can I get support for C++ AMP?
C++ AMP is supported by the C++ compiler in Microsoft Visual Studio 2012.

86. Is C++ AMP supported on Linux?
At this time, C++ AMP is not supported on Linux; however, because C++ AMP is defined as an open standard, Microsoft has opened the door for implementations on operating systems other than Windows.

87. Where can I learn more about C++ AMP?
There is an MSDN page on C++ AMP. Also, the book C++ AMP: Accelerated Massive Parallelism with Microsoft Visual C++, by Kate Gregory and Ade Miller, may be useful.

8 Aparapi

88. What is Aparapi?
Aparapi is an API for expressing data-parallel algorithms in Java and a runtime component capable of converting Java bytecode to OpenCL for execution on the GPU. If Aparapi cannot execute on the GPU at runtime, it executes the developer's algorithm in a Java thread pool. For appropriate workloads, this extends Java's "Write Once, Run Anywhere" to include GPU devices.
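Returning to question 82, this is a minimal illustrative sketch (not an SDK sample) of the restrict keyword: a lambda marked restrict(amp) is offloaded to a DirectX 11 accelerator through the array_view class in the concurrency namespace.

    #include <amp.h>
    #include <vector>

    // Scales every element of v on the accelerator (GPU/APU) when one is available.
    void scale(std::vector<float>& v, float factor) {
        concurrency::array_view<float, 1> av(static_cast<int>(v.size()), v);

        concurrency::parallel_for_each(av.extent,
            [=](concurrency::index<1> idx) restrict(amp) {
                av[idx] *= factor;   // runs on the data-parallel device
            });

        av.synchronize();            // copy results back to the host vector
    }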
89. What AMD devices support Aparapi?
Any AMD APU, CPU, or GPU that supports OpenCL 1.1 and higher will work. See the AMD APP SDK v2.9 System Requirements list at developer.amd.com/appsdk.

90. Where can I get support for Aparapi?
See the Aparapi project page.

91. Is Aparapi supported on Linux?
Yes.

92. Which JDK/JRE is recommended for Aparapi?
The Oracle JDK/JRE, although not a requirement for Aparapi, is highly recommended for performance and stability reasons.

93. Where can I learn more about Aparapi?
You can find out more about Aparapi on the Aparapi project page.

9 OpenCV

94. Where can the OpenCV libraries be downloaded from?
OpenCV can be installed either by downloading the pre-built binaries or by building from the sources available at opencv.org.

95. Which version of OpenCV does APP SDK 2.9 support?
The APP SDK 2.9 OpenCV samples require OpenCV to be installed as a prerequisite. The pre-built packages of OpenCV do not include the OpenCV-CL libraries. Hence, to compile and run the OpenCV samples in APP SDK 2.9, you need to build from the source files of OpenCV 2.4.4 with the OpenCL flag (WITH_OPENCL=ON) enabled in CMake.

96. Do the OpenCV-CL APIs support interoperability with raw OpenCL kernels?
Yes. The CartoonFilter sample in the APP SDK demonstrates this.

97. I am getting the opencv2/core/core.hpp: No such file or directory error while building.
Verify that the OPENCV_DIR environment variable is set/exported to the installed OpenCV directory and that the OCVCL_VER environment variable is set to the OpenCV version number (244). For details, see the OpenCV section in the AMD APP SDK Getting Started Guide.

98. I am not able to run the built examples. I am getting the "error while loading shared libraries: libopencv_core.so.2.x: cannot open shared object file: No such file or directory" error.
Export LD_LIBRARY_PATH (on Linux) or set PATH (on Windows) to the installed OpenCV lib directory.
99. I am getting the error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script).
The reason for this error could be that the tools named in the message were not installed before building OpenCV. These tools are required for displaying images (cv::imshow calls). Install the packages listed in the message and rebuild the OpenCV library.

100. Do the OpenCV samples depend on any other library apart from the OpenCV binaries and OpenCL?
The Gesture Recognition sample depends on an additional library called OpenNI. The OpenNI library is used to extract frames from the input video.

101. Where can one download the OpenNI libraries from?
The OpenNI libraries can be downloaded from the OpenNI website.

102. What version of the OpenNI library is preferred?
The OpenNI 2 (Beta) library is preferred.

103. I am getting the "Cannot find OpenNI.h" or "Cannot find OpenNI2.lib" error.
For 32-bit builds, set OPENNI2_INCLUDE and OPENNI2_LIB, and for 64-bit builds, set OPENNI2_INCLUDE64 and OPENNI2_LIB64, to the corresponding include and library locations.

104. Where can I learn more about the OpenNI APIs?
You can learn more about the OpenNI APIs in the OpenNI documentation.

Contact
Advanced Micro Devices, Inc.
One AMD Place
P.O. Box 3453
Sunnyvale, CA

For AMD Accelerated Parallel Processing:
URL: developer.amd.com/appsdk
Developing: developer.amd.com/
Forum: developer.amd.com/openclforum

The contents of this document are provided in connection with Advanced Micro Devices, Inc. ("AMD") products. AMD makes no representations or warranties with respect to the accuracy or completeness of the contents of this publication and reserves the right to make changes to specifications and product descriptions at any time without notice. The information contained herein may be of a preliminary or advance nature and is subject to change without notice. No license, whether express, implied, arising by estoppel or otherwise, to any intellectual property rights is granted by this publication. Except as set forth in AMD's Standard Terms and Conditions of Sale, AMD assumes no liability whatsoever, and disclaims any express or implied warranty, relating to its products including, but not limited to, the implied warranty of merchantability, fitness for a particular purpose, or infringement of any intellectual property right. AMD's products are not designed, intended, authorized or warranted for use as components in systems intended for surgical implant into the body, or in other applications intended to support or sustain life, or in any other application in which the failure of AMD's product could create a situation where personal injury, death, or severe property or environmental damage may occur. AMD reserves the right to discontinue or make changes to its products at any time without notice.

Copyright and Trademarks
2013 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, ATI, the ATI logo, Radeon, FireStream, and combinations thereof are trademarks of Advanced Micro Devices, Inc. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos. Other names are for informational purposes only and may be trademarks of their respective owners.