🧪 High Performance Computing (HPC) MCQ Quiz Hub

High Performance Computing (HPC) MCQ Set 2

Choose a topic to test your knowledge and improve your High Performance Computing (HPC) skills

1. The simplest way to send p-1 messages from the source to the other p-1 processors is




2. In an eight-node ring, node ____ is the source of the broadcast




3. The processors compute the ______ product of the vector element and the local matrix




4. One-to-all broadcast uses
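
Questions 1-4 concern one-to-all broadcast, where a source delivers the same message to the other p-1 processors. As a concrete reference, here is a minimal MPI sketch (the value 42 and source rank 0 are illustrative assumptions, not part of the quiz):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int x = (rank == 0) ? 42 : 0;   /* the message initially resides at the source */
        /* One-to-all broadcast: the source (rank 0) delivers x to all other ranks */
        MPI_Bcast(&x, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d now has x = %d\n", rank, x);

        MPI_Finalize();
        return 0;
    }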




5. In broadcast and reduction on a balanced binary tree, reduction is done in ______




6. If "X" is the message to broadcast, it initially resides at the source node




7. The logical operators used in the algorithm are




8. The generalization of broadcast in which each processor is




9. The algorithm terminates in _____ steps




10. Each node first sends to one of its neighbours the data it needs to....




11. The second communication phase is a columnwise ______ broadcast of the consolidated messages




12. All nodes collect _____ messages corresponding to the √p nodes of their respective rows




13. It is not possible to port ____ to a higher-dimensional network




14. If we port the algorithm to a higher-dimensional network, it would cause




15. In the scatter operation, ____ node sends a message to every other node




16. The gather operation is exactly the inverse of _____
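
For questions 15-16, a sketch of scatter and its exact inverse, gather, using MPI collectives inside the same kind of MPI program as the broadcast sketch above (the block size of 4 elements per process is an illustrative assumption):

    /* Scatter: the source node sends a distinct block to every other node. */
    int p, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *full = NULL;
    int mine[4];                                        /* this node's block */
    if (rank == 0) full = malloc(4 * p * sizeof(int));  /* needs <stdlib.h> */

    MPI_Scatter(full, 4, MPI_INT, mine, 4, MPI_INT, 0, MPI_COMM_WORLD);
    /* ... each node works on its own block ... */
    /* Gather: exactly the inverse, collecting the blocks back at the root. */
    MPI_Gather(mine, 4, MPI_INT, full, 4, MPI_INT, 0, MPI_COMM_WORLD);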




17. Similar communication pattern to all-to-all broadcast, except in the _____




18. Group communication operations are built using which primitives?




19. __ can be performed in an identical fashion by inverting the process.




20. Broadcast and reduction operations on a mesh are performed




21. Cost analysis on a ring is




22. Cost analysis on a mesh is
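
The answer options for questions 21-22 are not reproduced here; assuming the quiz follows the standard treatment in Grama et al., with startup time t_s, per-word transfer time t_w, message size m, and p nodes, the all-to-all broadcast costs are:

    Ring:     T = (t_s + t_w m)(p - 1)
    2D mesh:  T = 2 t_s (√p - 1) + t_w m (p - 1)
              = rowwise phase (t_s + t_w m)(√p - 1)
                + columnwise phase (t_s + t_w m √p)(√p - 1)

while one-to-all broadcast and all-to-one reduction cost T = (t_s + t_w m) log p on both topologies.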




23. Communication between two directly linked nodes




24. All-to-one communication (reduction) is the dual of ______ broadcast.




25. Which is known as Reduction?




26. Which is known as Broadcast?
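
For questions 24-26, the MPI form of all-to-one reduction, the dual of one-to-all broadcast (the sum operation and root rank 0 are illustrative choices):

    /* All-to-one reduction: every rank contributes a value; the values are
       combined with an associative operation (here a sum) at rank 0. */
    int mine = rank;                 /* this rank's contribution, illustrative */
    int total = 0;
    MPI_Reduce(&mine, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);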




27. The dual of all-to-all broadcast is




28. The all-to-all broadcast algorithm for the 2D mesh is based on the




29. In the first phase of 2D mesh all-to-all, the message size is ___




30. In the second phase of 2D mesh all-to-all, the message size is ___
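
For questions 29-30: in the standard two-phase mesh algorithm each node starts with a message of size m, so the rowwise first phase communicates messages of size m, and after consolidation the columnwise second phase communicates messages of size m√p.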




31. In all-to-all on a hypercube, the size of the message to be transmitted at the next step is ____ by concatenating the received message with the current data




32. The all-to-all broadcast on a hypercube needs ____ steps




33. The one-to-all personalized communication operation is commonly called ___




34. The dual of the scatter operation is the




35. In the scatter operation on a hypercube, at each step the size of the messages communicated is ____




36. Which is also called "Total Exchange"?




37. All-to-all personalized communication can be used in ____
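
Questions 33-37 concern personalized communication. All-to-all personalized communication ("total exchange") sends a distinct message from every node to every other node; in MPI it is a single collective (the fixed buffer bound and one-int-per-destination layout are illustrative assumptions):

    /* Total exchange: rank i sends block j of sendbuf to rank j and receives
       block i from every rank into recvbuf (one int per rank here). */
    enum { MAX_P = 64 };                    /* illustrative upper bound on p */
    int sendbuf[MAX_P], recvbuf[MAX_P];
    /* ... fill sendbuf[j] with the message destined for rank j ... */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);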




38. In collective communication operations, collective means




39. The efficiency of a data-parallel algorithm depends on the




40. All processes participate in a single ______ interaction operation.




41. Subsets of processes participate in ______ interaction.




42. The goal of a good algorithm is to implement a commonly used _____ pattern.




43. Reduction can be used to find the sum, product, maximum, or minimum of _____ of numbers.




44. The source ____ is the bottleneck.




45. Only using connections between single pairs of nodes at a time is




46. Letting all processes that already have the data send it again is




47. The ____ do not snoop the messages going through them.




48. Accumulating results and sending with the same pattern is...




49. Every node on the linear array has the data and broadcasts on the columns with the linear array algorithm in _____




50. Using different links every time and forwarding in parallel again is




51. In a balanced binary tree, the number of processing nodes is equal to




52. In one-to-all broadcast there is




53. For the sake of simplicity, the number of nodes is a power of




54. Nodes with zero in the i least significant bits participate in _______
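
Questions 53-54 describe the classic hypercube one-to-all broadcast. A sketch with MPI point-to-point calls, assuming rank 0 is the source and p is a power of 2; in iteration i, exactly the nodes whose i least significant bits are zero take part:

    void hypercube_bcast(void *buf, int count, MPI_Datatype type, MPI_Comm comm) {
        int rank, p, d = 0;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &p);
        while ((1 << d) < p) d++;                /* d = log2(p) dimensions */
        for (int i = d - 1; i >= 0; i--) {
            if ((rank & ((1 << i) - 1)) == 0) {  /* low i bits are zero */
                int partner = rank ^ (1 << i);   /* neighbour across dim i */
                if ((rank & (1 << i)) == 0)
                    MPI_Send(buf, count, type, partner, 0, comm);
                else
                    MPI_Recv(buf, count, type, partner, 0, comm, MPI_STATUS_IGNORE);
            }
        }
    }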




55. Every node has to know when to communicate, that is




56. The procedure is distributed and requires only point-to-point _______




57. Renaming relative to the source is _____ the source.




58. A task dependency graph is ------------------




59. In a task dependency graph, the longest directed path between any pair of start and finish nodes is called --------------
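
For question 59: the longest directed path between a start and a finish node is the critical path, and its length bounds the best possible parallel time. For example, a graph of 10 unit-weight tasks whose critical path contains 4 tasks has average degree of concurrency 10/4 = 2.5.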




60. Which of the following is not a granularity type?




61. Which of the following is an example of data decomposition?




62. Which problems can be handled by recursive decomposition?




63. In this decomposition, problem decomposition goes hand in hand with its execution
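
Question 62 points at divide-and-conquer problems; quicksort is the textbook example of recursive decomposition, where each call yields independent subtasks (a sequential C sketch, with the two recursive calls marking the tasks that could run concurrently):

    /* Recursive decomposition: each call splits the array into two
       independent subproblems around a pivot (Lomuto partition). */
    void quicksort(int *a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;   /* pivot into place */
        quicksort(a, lo, i - 1);                 /* independent subtask 1 */
        quicksort(a, i + 1, hi);                 /* independent subtask 2 */
    }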




64. The procedure is distributed and requires only point-to-point _______




65. Renaming relative to the source is _____ the source.




66. Which of the following is not a granularity type?




67. Which of the following is not an example of exploratory decomposition?




68. In ------------ tasks are defined before starting the execution of the algorithm




69. Which of the following is not an array distribution method of data partitioning?




70. Blocking optimization is used to improve temporal locality, to reduce




71. In CUDA, the 'unifying theme' of every form of parallelism is




72. Threads scheduled together and executed in sets of 32 threads are called a




73. When is the topological sort of a graph unique?




74. What is a high-performance multi-core processor that can be used to accelerate a wide variety of applications using parallel computing?




75. A good mapping does not depend on which of the following factors?




76. Which of the following is not a form of parallelism supported by CUDA?




77. The style of parallelism supported on GPUs is best described as




78. Which of the following correctly describes a GPU kernel?




79. A kernel may contain only host code
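
Questions 76-79 deal with GPU kernels: a kernel is device code (not host code), marked __global__ in CUDA and launched from the host over a grid of thread blocks, which the hardware executes in groups of 32 threads (warps). A minimal sketch (the array name, scale factor, and launch shape are illustrative):

    /* Kernel: device code only; one logical thread per array element (SIMT). */
    __global__ void scale(float *x, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= a;        /* guard: n need not fill the last warp */
    }

    /* Host-side launch: 256 threads per block = 8 warps of 32 threads each. */
    /* scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n); */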




80. Which of the following is not a parallel algorithm model?