🧪 High Performance Computing (HPC) MCQ Quiz Hub
High Performance Computing (HPC) MCQ Set 2
Choose a topic to test your knowledge and improve your High Performance Computing (HPC) skills
1. The simplest way to send p-1 messages from the source to the other p-1 processors is ____
algorithm
communication
concurrency
receiver
2. In an eight-node ring, node ____ is the source of the broadcast
1
2
8
0
3. The processors compute the ______ product of the vector element and the local matrix
local
global
both
none
4. One-to-all broadcast uses
recursive doubling
simple algorithm
both
none
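A minimal sketch of recursive doubling, assuming C-style host code and p = 2^d nodes (our illustration, not part of the original quiz): in step i every informed node forwards the message across bit i of its id, so the number of informed nodes doubles and the broadcast finishes in log2 p steps instead of the p - 1 sends of the naive one-message-at-a-time approach.

    #include <stdio.h>

    /* Simulate one-to-all broadcast by recursive doubling on p = 2^d nodes.
       has_msg[i] records whether node i already holds the message. */
    int main(void) {
        enum { D = 3, P = 1 << D };          /* 8 nodes, node 0 is the source */
        int has_msg[P] = {1};                /* only the source starts informed */
        for (int step = 0; step < D; ++step) {
            for (int node = 0; node < P; ++node) {
                /* the partner differs from node in bit 'step' */
                int partner = node ^ (1 << step);
                if (has_msg[node]) has_msg[partner] = 1;
            }
            int informed = 0;
            for (int i = 0; i < P; ++i) informed += has_msg[i];
            printf("after step %d: %d of %d nodes informed\n", step + 1, informed, P);
        }
        return 0;
    }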
5. In broadcast and reduction on a balanced binary tree, reduction is done in ______
recursive order
straight order
vertical order
parallel order
6. if "X" is the message to broadcast it initially resides at the source node
1
2
8
0
7. The logical operators used in the algorithm are
xor
and
both
none
8. A generalization of broadcast, in which each processor is
source as well as destination
only source
only destination
none
9. The algorithm terminates in _____ steps
p
p+1
p+2
p-1
10. Each node first sends to one of its neighbours the data it needs to ____
broadcast
identify
verify
none
11. The second communication phase is a columnwise ______ broadcast of consolidated messages
all-to-all
one -to-all
all-to-one
point-to-point
12. All nodes collect _____ messages corresponding to the √p nodes of their respective rows
√p
p
p+1
p-1
13. It is not possible to port the ____ to a higher-dimensional network
algorithm
hypercube
both
none
14. If we port the algorithm to a higher-dimensional network, it would cause
error
contention
recursion
none
15. In the scatter operation, a ____ node sends a message to every other node
single
double
triple
none
16. The gather operation is exactly the inverse of the _____
scatter operation
recursion operation
execution
none
17. The communication pattern is similar to all-to-all broadcast, except in the _____
reverse order
parallel order
straight order
vertical order
18. Group communication operations are built using which primitives?
one to all
all to all
point to point
none of these
19. ____ can be performed in an identical fashion by inverting the process.
recursive doubling
reduction
broadcast
none of these
20. Broadcast and reduction operations on a mesh are performed
along the rows
along the columns
both a and b concurrently
none of these
21. The cost analysis on a ring is
(ts + twm)(p - 1)
(ts - twm)(p + 1)
(tw + tsm)(p - 1)
(tw - tsm)(p + 1)
22. The cost analysis on a mesh is
2ts(sqrt(p) + 1) + twm(p - 1)
2tw(sqrt(p) + 1) + tsm(p - 1)
2tw(sqrt(p) - 1) + tsm(p - 1)
2ts(sqrt(p) - 1) + twm(p - 1)
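With the usual cost model (an assumption here, not stated in the quiz): t_s the startup time, t_w the per-word transfer time, m the message size in words, and p the number of processes, the correct options above are the all-to-all broadcast costs on a ring and on a 2D mesh:

    T_{\text{ring}} = (t_s + t_w m)(p - 1)

    T_{\text{mesh}} = (t_s + t_w m)(\sqrt{p} - 1) + (t_s + t_w m\sqrt{p})(\sqrt{p} - 1)
                    = 2 t_s (\sqrt{p} - 1) + t_w m (p - 1)

The mesh cost adds a rowwise phase with messages of size m (question 29) and a columnwise phase with consolidated messages of size m√p (question 30).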
23. Communication between two directly linked nodes is
cut-through routing
store-and-forward routing
nearest neighbour communication
none
24. All-to-one communication (reduction) is the dual of ______ broadcast.
all-to-all
one-to-all
one-to-one
all-to-one
25. Which is known as Reduction?
all-to-one
all-to-all
one-to-one
one-to-all
26. Which is known as Broadcast?
one-to-one
one-to-all
all-to-all
all-to-one
27. The dual of all-to-all broadcast is
all-to-all reduction
all-to-one reduction
both
none
28. The all-to-all broadcast algorithm for the 2D mesh is based on the
linear array algorithm
ring algorithm
both
none
29. In the first phase of 2D mesh all-to-all, the message size is ___
p
m*sqrt(p)
m
p*sqrt(m)
30. In the second phase of 2D mesh all-to-all, the message size is ___
m
p*sqrt(m)
p
m*sqrt(p)
31. In all-to-all on a hypercube, the size of the message to be transmitted at the next step is ____ by concatenating the received message with the current data
doubled
tripled
halved
no change
32. The all-to-all broadcast on a hypercube needs ____ steps
p
sqrt(p) - 1
log p
none
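A size-only sketch of all-to-all broadcast on a hypercube (a C-style simulation of ours, not from the quiz source): at step i each node exchanges its accumulated buffer with the neighbour across dimension i and concatenates, so the transmitted message doubles each step (question 31) and all p pieces arrive after log p steps (question 32).

    #include <stdio.h>

    /* Simulate all-to-all broadcast on a d-dimensional hypercube by tracking
       only buffer sizes: step i exchanges across dimension i and concatenates. */
    int main(void) {
        enum { D = 3, P = 1 << D };               /* 8-node hypercube */
        int size[P], next[P];
        for (int i = 0; i < P; ++i) size[i] = 1;  /* each node starts with one message */
        for (int step = 0; step < D; ++step) {
            for (int node = 0; node < P; ++node) {
                int partner = node ^ (1 << step);
                next[node] = size[node] + size[partner];  /* concatenate with partner's buffer */
            }
            for (int i = 0; i < P; ++i) size[i] = next[i];
            printf("after step %d: each node holds %d messages\n", step + 1, size[0]);
        }
        return 0;
    }

The scatter operation of question 35 is the mirror image: the buffer travelling away from the source halves at every step.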
33. The one-to-all personalized communication operation is commonly called the ___
gather operation
concatenation
scatter operation
none
34. The dual of the scatter operation is the
concatenation
gather operation
both
none
35. In the scatter operation on a hypercube, at each step the size of the messages communicated is ____
tripled
halved
doubled
no change
36. Which is also called "Total Exchange"?
all-to-all broadcast
all-to-all personalized communication
all-to-one reduction
none
37. All-to-all personalized communication can be used in ____
Fourier transform
matrix transpose
sample sort
All of the above
38. In collective communication operations, collective means
involve group of processors
involve group of algorithms
involve group of variables
none of these
39. The efficiency of a data-parallel algorithm depends on the
efficient implementation of the algorithm
efficient implementation of the operation
both
none
40. All processes participate in a single ______ interaction operation.
global
local
wide
variable
41. Subsets of processes participate in ______ interaction.
global
local
wide
variable
42. The goal of a good algorithm is to implement a commonly used _____ pattern.
communication
interaction
parallel
regular
43. Reduction can be used to find the sum, product, maximum, or minimum of a _____ of numbers.
tuple
list
sets
all of above
44. The source ____ is the bottleneck.
process
algorithm
list
tuple
45. Using only connections between single pairs of nodes at a time is
good utilization
poor utilization
massive utilization
medium utilization
46. Having all processes that already have the data send it again is
recursive doubling
naive approach
reduction
All of the above
47. The ____ do not snoop the messages going through them.
nodes
variables
tuple
list
48. Accumulating results and sending them with the same pattern is
broadcast
naive approach
recursive doubling
reduction
49. Every node on the linear array has the data and broadcasts on the columns with the linear array algorithm in _____
parallel
vertical
horizontal
all
50. Using different links every time and forwarding in parallel again is
better for congestion
better for reduction
better for communication
better for algorithm
51. In a balanced binary tree, the number of processing nodes is equal to the
leaves
number of elements
branch
none
52. In one-to-all broadcast there is a
divide and conquer type algorithm
sorting type algorithm
searching type algorithm
simple algorithm
53. For the sake of simplicity, the number of nodes is assumed to be a power of
1
2
3
4
54. Nodes with zero in the i least significant bits participate in the _______
algorithm
broadcast
communication
searching
55. Every node has to know when to communicate, that is, it must
call the procedure
call for broadcast
call for communication
call the congestion
56. The procedure is distributed and requires only point-to-point _______
synchronization
communication
both
none
57. Renaming relative to the source is done by _____ with the source.
xor
xnor
and
nand
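The renaming in questions 54 and 57 is a plain XOR with the source id. A one-line illustration of ours (the helper name is hypothetical): since XOR is its own inverse, the source maps to virtual node 0 and hypercube neighbours stay neighbours.

    /* Virtual id of 'node' when the broadcast source is 'source'. */
    int virtual_id(int node, int source) { return node ^ source; }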
58. A task dependency graph is ____
directed
undirected
directed acyclic
undirected acyclic
59. In a task dependency graph, the longest directed path between any pair of start and finish nodes is called the ____
total work
critical path
task path
60. Which of the following is not a granularity type?
coarse grain
large grain
medium grain
fine grain
61. Which of the following is an example of data decomposition?
matrix multiplication
merge sort
quick sort
15 puzzle
62. Which problems can be handled by recursive decomposition?
backtracking
greedy method
divide and conquer problem
branch and bound
63. In this decomposition, problem decomposition goes hand in hand with its execution
data decomposition
recursive decomposition
exploratory decomposition
speculative decomposition
64. The procedure is distributed and requires only point-to-point _______
synchronization
communication
both
none
65. Renaming relative to the source is done by _____ with the source.
xor
xnor
and
nand
66. Which of the following is not a granularity type?
coarse grain
large grain
medium grain
fine grain
67. Which of the following is not an example of exploratory decomposition?
n queens problem
15 puzzle problem
tic tac toe
quick sort
68. In ____, tasks are defined before starting the execution of the algorithm.
dynamic task
static task
regular task
one way task
69. Which of the following is not an array distribution method of data partitioning?
block
cyclic
block cyclic
chunk
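For the three legitimate options of question 69, a small sketch with hypothetical helper names (ours): which of nproc processes owns element i of an n-element array, with b the block size of the block-cyclic scheme.

    /* Owner of array element i under the standard array distributions. */
    int owner_block(int i, int n, int nproc)        { return i / ((n + nproc - 1) / nproc); }
    int owner_cyclic(int i, int nproc)              { return i % nproc; }
    int owner_block_cyclic(int i, int b, int nproc) { return (i / b) % nproc; }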
70. Blocking optimization is used to improve temporal locality and reduce
hit miss
misses
hit rate
cache misses
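Question 70 refers to loop blocking (tiling). A minimal sketch, assuming a 256 x 256 matrix and a tile size of 32 (both illustrative): the blocked loop nest reuses each tile while it is still resident in cache, improving temporal locality and reducing cache misses.

    #define N  256  /* matrix dimension: illustrative */
    #define TB 32   /* tile size: illustrative, tuned to the cache in practice */

    /* C += A * B, computed tile by tile so each TB x TB tile stays cache-resident. */
    void matmul_blocked(const float *A, const float *B, float *C) {
        for (int ii = 0; ii < N; ii += TB)
            for (int kk = 0; kk < N; kk += TB)
                for (int jj = 0; jj < N; jj += TB)
                    for (int i = ii; i < ii + TB; ++i)
                        for (int k = kk; k < kk + TB; ++k)
                            for (int j = jj; j < jj + TB; ++j)
                                C[i * N + j] += A[i * N + k] * B[k * N + j];
    }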
71. In CUDA, the 'unifying theme' of every form of parallelism is the
cda thread
pta thread
cuda thread
cud thread
72. Threads are blocked together and executed in sets of 32 threads, called a
thread block
32 thread
32 block
unit block
73. When is the topological sort of a graph unique?
when there exists a Hamiltonian path in the graph
in the presence of multiple nodes with indegree 0
in the presence of single node with indegree 0
in the presence of single node with outdegree 0
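The logic behind question 73, sketched with Kahn's algorithm (our illustration, with an assumed 4-vertex demo graph): the topological order is unique exactly when a single vertex has indegree 0 at every step, which forces consecutive vertices to be adjacent, i.e. the graph contains a Hamiltonian path.

    #include <stdio.h>

    #define N 4  /* number of vertices: assumption for the demo */

    /* Kahn's algorithm; returns 1 if the topological order is unique,
       i.e. exactly one vertex has indegree 0 at every step. */
    int topo_is_unique(int adj[N][N]) {
        int indeg[N] = {0}, removed[N] = {0};
        for (int u = 0; u < N; ++u)
            for (int v = 0; v < N; ++v) indeg[v] += adj[u][v];
        for (int step = 0; step < N; ++step) {
            int zero = -1, count = 0;
            for (int v = 0; v < N; ++v)
                if (!removed[v] && indeg[v] == 0) { zero = v; ++count; }
            if (count != 1) return 0;  /* 0 means a cycle, >1 means multiple orders */
            removed[zero] = 1;
            for (int v = 0; v < N; ++v) indeg[v] -= adj[zero][v];
        }
        return 1;
    }

    int main(void) {
        /* Path 0 -> 1 -> 2 -> 3: a Hamiltonian path, so the order is unique. */
        int adj[N][N] = { {0,1,0,0}, {0,0,1,0}, {0,0,0,1}, {0,0,0,0} };
        printf("unique: %d\n", topo_is_unique(adj));
        return 0;
    }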
74. What is a high-performance multi-core processor that can be used to accelerate a wide variety of applications using parallel computing?
cpu
dsp
gpu
clu
75. A good mapping does not depend on which of the following factors?
knowledge of task sizes
the size of data associated with tasks
characteristics of inter-task interactions
task overhead
76. Which of the following is not a form of parallelism supported by CUDA?
vector parallelism - floating point computations are executed in parallel on wide vector units
thread level task parallelism - different threads execute different tasks
block and grid level parallelism - different blocks or grids execute different tasks
data parallelism - different threads and blocks process different parts of data in memory
77. The style of parallelism supported on GPUs is best described as
misd - multiple instruction single data
simt - single instruction multiple thread
sisd - single instruction single data
mimd
78. Which of the following correctly describes a GPU kernel?
a kernel may contain a mix of host and gpu code
all thread blocks involved in the same computation use the same kernel
a kernel is part of the gpu's internal micro-operating system, allowing it to act as an independent host
kernel may contain only host code
79. A code known as a grid, which runs on the GPU, consists of a set of
32 thread
unit block
32 block
thread block
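Questions 71, 72, and 76-79 all circle the CUDA hierarchy: a kernel launches as a grid of thread blocks, every block runs the same kernel code, and the hardware issues threads in groups of 32 (warps). A minimal sketch of ours (the names scale, n, and threads are illustrative; the API calls are standard CUDA):

    #include <cuda_runtime.h>

    /* Every thread block executes this same kernel (question 78);
       each CUDA thread handles one element (question 71). */
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
        if (i < n) x[i] *= a;
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr;
        cudaMalloc(&x, n * sizeof(float));
        int threads = 256;                       /* 8 warps of 32 threads per block */
        int blocks = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(x, 2.0f, n);  /* the grid that runs on the GPU (question 79) */
        cudaDeviceSynchronize();
        cudaFree(x);
        return 0;
    }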
80. Which of the following is not a parallel algorithm model?
data parallel model
task graph model
task model
work pool model