Choose a topic to test your knowledge and improve your High Performance Computing (HPC) skills
A CUDA program consists of two primary components: a host and a _____.
The kernel code is identified by the ________ qualifier with a void return type.
Calling a kernel is typically referred to as _________.
The BlockPerGrid and ThreadPerBlock parameters are related to the ________ model supported by CUDA.
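The first few blanks above can be illustrated with a minimal sketch (the kernel and variable names here are hypothetical):

```cuda
// __global__ marks kernel code: it runs on the device, is launched from the
// host, and must have a void return type.
__global__ void scale(float *data, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    data[i] *= factor;
}

// Host side: "calling" a kernel is a launch, and the execution configuration
// in triple angle brackets reflects CUDA's grid/block threading model:
//     scale<<<blocksPerGrid, threadsPerBlock>>>(d_data, 2.0f);
```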
_______ is callable from the device only.
____ is callable from the host.
______ is callable from the host.
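A sketch of the three function-type qualifiers these blanks refer to (the function names are hypothetical):

```cuda
// __device__ functions are callable from the device only.
__device__ float square(float x) { return x * x; }

// __global__ functions (kernels) are callable from the host and run on the device.
__global__ void apply(float *v) { v[threadIdx.x] = square(v[threadIdx.x]); }

// __host__ functions are ordinary CPU functions, callable from the host
// (this is also the default when no qualifier is given).
__host__ void setup(void) { }
```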
CUDA supports ____________ in which code in a single thread is executed by all other threads.
In CUDA, a single invoked kernel is referred to as a _____.
A grid is composed of ________ of threads.
A block is composed of multiple _______.
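The grid/block/thread hierarchy behind these two questions can be sketched as follows (hypothetical kernel):

```cuda
__global__ void index_demo(int *out) {
    // A grid is composed of blocks of threads; a block is composed of
    // multiple threads. gridDim.x counts the blocks in the grid,
    // blockDim.x counts the threads per block, and together they give
    // each thread a unique global index.
    int global_id = blockIdx.x * blockDim.x + threadIdx.x;
    out[global_id] = global_id;
}
```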
A solution to the problem of representing parallelism in an algorithm is ____.
Host code in a CUDA application cannot reset a device. State true or false.
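For the statement above: host code can in fact reset a device through the CUDA runtime API, as in this minimal sketch:

```cuda
#include <cuda_runtime.h>

int main(void) {
    cudaSetDevice(0);   // host code selects a device...
    /* ... allocate memory, launch kernels, copy results ... */
    cudaDeviceReset();  // ...and can reset it, destroying all device state
    return 0;
}
```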
Any condition that causes a processor to stall is called _____.
The time lost due to a branch instruction is often referred to as _____.
The ___ method is used in centralized systems to perform out-of-order execution.
The computer cluster architecture emerged as an alternative for ____.
NVIDIA CUDA Warp is made up of how many threads?
Out-of-order execution of instructions is not possible on GPUs. State true or false.
CUDA supports programming in ____.
FADD, FMAD, FMIN, FMAX are ----- supported by the scalar processors of an NVIDIA GPU.
Each streaming multiprocessor (SM) of CUDA hardware has ------ scalar processors (SP).
Each NVIDIA GPU has ------ streaming multiprocessors.
CUDA provides ------- warp and thread scheduling. Also, the overhead of thread creation is on the order of ----.
Each warp of a GPU receives a single instruction and "broadcasts" it to all of its threads. It is a ---- operation.
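A warp (32 threads on NVIDIA hardware) executes one broadcast instruction at a time; this sketch shows the branch-divergence consequence (hypothetical kernel):

```cuda
__global__ void divergent(int *out) {
    // All threads of a warp receive the same instruction, SIMD-style.
    // A data-dependent branch therefore serializes the warp: even lanes
    // execute one path while odd lanes idle, then the roles swap.
    if (threadIdx.x % 2 == 0)
        out[threadIdx.x] = 1;
    else
        out[threadIdx.x] = 2;
}
```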
What are the limitations of a CUDA kernel?
What is a Unified Virtual Machine?
_____ became the first language specifically designed by a GPU Company to facilitate general purpose computing on ____.
The CUDA architecture consists of --------- for parallel computing kernels and functions.
CUDA stands for --------, designed by NVIDIA.
The host processor spawns multithreaded tasks (or kernels, as they are known in CUDA) onto the GPU device. State true or false.
The NVIDIA G80 is a ---- CUDA core device, the NVIDIA G200 is a ---- CUDA core device, and the NVIDIA Fermi is a ---- CUDA core device
NVIDIA 8-series GPUs offer -------- .
IADD, IMUL24, IMAD24, IMIN, IMAX are ----------- supported by Scalar Processors of NVIDIA GPU.
The CUDA hardware programming model supports: a) a fully general data-parallel architecture; b) general thread launch; c) global load-store; d) parallel data cache; e) scalar architecture; f) integer and bit operations.
In the CUDA memory model, the following memory types are available: a) registers; b) local memory; c) shared memory; d) global memory; e) constant memory; f) texture memory.
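A sketch touching most of the memory types listed above (the names are hypothetical):

```cuda
__constant__ float coeff;            // constant memory: read-only in kernels

__global__ void memory_demo(const float *in, float *out) {  // in/out: global memory
    __shared__ float tile[256];      // shared memory: visible to one block
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    float r = in[i] * coeff;         // r lives in a register; registers that
                                     // spill are backed by local memory
    tile[threadIdx.x] = r;
    __syncthreads();
    out[i] = tile[threadIdx.x];      // texture memory (not shown) is a cached
}                                    // read-only path into global memory
```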
What is the CUDA C equivalent of this general C program: int main(void) { printf("Hello, World! "); return 0; }
Which function runs on the device (i.e., the GPU)? a) __global__ void kernel ( void ) { } b) int main ( void ) { ... return 0; }
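A commonly cited CUDA C counterpart of the plain C Hello World, for reference:

```cuda
#include <stdio.h>

// An empty kernel: marked __global__, so it runs on the device (GPU)
// even though it is launched from host code in main.
__global__ void kernel(void) { }

int main(void) {
    kernel<<<1, 1>>>();         // a launch from host code onto the device
    printf("Hello, World!\n");  // the rest of main is ordinary host C code
    return 0;
}
```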
If a is a host variable and dev_a is a device (GPU) variable, select the correct statement to allocate memory for dev_a:
If a is a host variable and dev_a is a device (GPU) variable, select the correct statement to copy input from a to dev_a:
What does the triple angle bracket mark in a statement inside the main function indicate?
What makes CUDA code run in parallel?
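The cudaMalloc/cudaMemcpy/launch pattern behind the last few questions, sketched as a complete vector addition (the variable names follow the questions above):

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

#define N 256

__global__ void add(int *a, int *b, int *c) {
    int i = threadIdx.x;   // each thread computes one element; this per-thread
    c[i] = a[i] + b[i];    // indexing is what makes the code run in parallel
}

int main(void) {
    int a[N], b[N], c[N];
    int *dev_a, *dev_b, *dev_c;
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    cudaMalloc((void **)&dev_a, N * sizeof(int));  // allocate device memory
    cudaMalloc((void **)&dev_b, N * sizeof(int));
    cudaMalloc((void **)&dev_c, N * sizeof(int));

    cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);  // host -> device
    cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

    add<<<1, N>>>(dev_a, dev_b, dev_c);  // one block of N parallel threads

    cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);  // device -> host
    printf("c[10] = %d\n", c[10]);

    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
    return 0;
}
```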
In ___________, the number of elements to be sorted is small enough to fit into the process's main memory.
_____________ algorithms use auxiliary storage (such as tapes and hard disks) for sorting because the number of elements to be sorted is too large to fit into memory.
____ can be comparison-based or noncomparison-based.
The fundamental operation of comparison-based sorting is ________.
The performance of quicksort depends critically on the quality of the ______.
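To make the pivot question concrete, here is a sketch of comparison-based quicksort in plain C with median-of-three pivot selection, one common way to keep pivot quality high on adversarial inputs:

```c
/* Comparison-based quicksort: the fundamental operation is compare-exchange,
   and performance depends critically on pivot quality. Median-of-three
   pivot selection avoids the worst case on already-sorted inputs. */
static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

void quicksort(int *a, int lo, int hi) {
    if (lo >= hi) return;
    int mid = lo + (hi - lo) / 2;
    /* order a[lo], a[mid], a[hi]; the median lands in a[mid] */
    if (a[mid] < a[lo]) swap_int(&a[mid], &a[lo]);
    if (a[hi]  < a[lo]) swap_int(&a[hi],  &a[lo]);
    if (a[hi]  < a[mid]) swap_int(&a[hi], &a[mid]);
    int pivot = a[mid];
    int i = lo, j = hi;
    while (i <= j) {            /* Hoare-style partition around the pivot */
        while (a[i] < pivot) i++;
        while (a[j] > pivot) j--;
        if (i <= j) { swap_int(&a[i], &a[j]); i++; j--; }
    }
    quicksort(a, lo, j);
    quicksort(a, i, hi);
}
```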
The main advantage of ______ is that its storage requirement is linear in the depth of the state space being searched.
___ algorithms use a heuristic to guide search.
Graph search involves a closed list, where the major operation is a _______.
Breadth First Search is equivalent to which traversal of a binary tree?
What is the time complexity of Breadth First Search? (V = number of vertices, E = number of edges)
Which of the following is not an application of Breadth First Search?
In BFS, how many times is a node visited?
Which of the following is not a stable sorting algorithm in its typical implementation?
Which of the following is not true about comparison-based sorting algorithms?
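A plain-C BFS sketch tying these questions together: it is a level-order traversal, each node is visited exactly once, and with adjacency lists the cost is O(V + E) (the graph encoding here is hypothetical):

```c
#define MAXV 16

/* Breadth First Search over adjacency lists. Each vertex enters the FIFO
   queue at most once (so it is visited exactly once) and each edge is
   scanned once, giving O(V + E) time; the visiting order is the
   level-order traversal of the BFS tree rooted at src. */
int bfs(int adj[MAXV][MAXV], int deg[MAXV], int src, int order[MAXV]) {
    int visited[MAXV] = {0};
    int queue[MAXV], head = 0, tail = 0, count = 0;
    visited[src] = 1;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];
        order[count++] = u;
        for (int k = 0; k < deg[u]; k++) {
            int v = adj[u][k];
            if (!visited[v]) {
                visited[v] = 1;
                queue[tail++] = v;
            }
        }
    }
    return count;  /* number of vertices reached */
}
```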
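Stability can be made concrete with a small sketch: insertion sort (stable in its typical implementation) keeps equal keys in their original order because it only shifts elements on a strict comparison:

```c
typedef struct { int key; char tag; } Item;

/* Stable insertion sort: records with equal keys never pass one another
   because the shifting loop uses a strict '>' comparison. */
void insertion_sort(Item *a, int n) {
    for (int i = 1; i < n; i++) {
        Item x = a[i];
        int j = i - 1;
        while (j >= 0 && a[j].key > x.key) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = x;
    }
}
```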
Mathematically, efficiency is ____.
The cost of a parallel system is sometimes referred to as the ____ product.
In the scaling characteristics of parallel programs, Ts is ____.
Speedup tends to saturate and efficiency _____ as a consequence of Amdahl's law.
Speedup obtained when the problem size is _______ linearly with the number of processing elements.
The n × n matrix is partitioned among n processors, with each processor storing a complete ___ of the matrix.
Cost-optimal parallel systems have an efficiency of ___.
The n × n matrix is partitioned among n² processors such that each processor owns a _____ element.
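The metrics behind these questions, under the standard definitions (serial time Ts, parallel time Tp, p processors):

```c
/* Standard parallel-performance metrics:
   speedup    S = Ts / Tp
   efficiency E = S / p     (cost-optimal systems keep E at a constant)
   cost       C = p * Tp    (the processor-time product)                 */
double speedup(double ts, double tp)           { return ts / tp; }
double efficiency(double ts, double tp, int p) { return speedup(ts, tp) / p; }
double cost(double tp, int p)                  { return p * tp; }
```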
How many basic communication operations are used in matrix-vector multiplication?
The DNS algorithm for matrix multiplication uses ____.
In pipelined execution, the steps contain ____.
The cost of the parallel algorithm is higher than the sequential run time by a factor of __.
The load imbalance problem in parallel Gaussian elimination can be alleviated by using a ____ mapping.
A parallel algorithm is evaluated by its runtime as a function of ____.
For a problem consisting of W units of work, p __ W processors can be used optimally.
C(W) __ Θ(W) for optimality (a necessary condition).
Many interactions in practical parallel programs occur in a _____ pattern.
Efficient implementation of basic communication operations can improve ____.
Efficient use of basic communication operations can reduce ____.
Group communication operations are built using _____ messaging primitives.
One processor has a piece of data that it needs to send to everyone; this operation is ____.
The dual of one-to-all is ____.
Data items must be combined piece-wise and the result made available at ____.
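A sketch of one-to-all broadcast by recursive doubling, the pattern behind the last few questions (run in reverse with a combine step it becomes its dual, all-to-one reduction):

```c
/* Simulates one-to-all broadcast from process 0 among p processes using
   recursive doubling: in each round, every process that already holds the
   data sends it to one partner, so the holder count doubles and everyone
   is reached in ceil(log2(p)) communication rounds. */
int broadcast_rounds(int has[], int p) {
    int rounds = 0;
    for (int stride = 1; stride < p; stride *= 2, rounds++)
        for (int i = 0; i < stride && i + stride < p; i++)
            if (has[i])
                has[i + stride] = 1;  /* process i sends to i + stride */
    return rounds;
}
```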