A computer system of a parallel computer is capable of _____________
1. decentralized computing
2. parallel computing
3. centralized computing
4. All of these
Which of the following issue(s) is/are true about sorting techniques with parallel computing?
1. large sequence is the issue
2. where to store the output sequence is the issue
3. small sequence is the issue
4. None of the above
In a 4x4 mesh topology, ______ message-passing cycles will be required to complete an all-to-all reduction.
1.4
2.6
3.8
4.10
Which of the following statements about GPUs is/are true?
1. a grid contains blocks
2. a block contains threads
3. all the mentioned options
4. sm stands for streaming multiprocessor
Speculative decomposition consists of ___?
1.conservative approaches
2.optimistic approaches
3.both a and b
4.only b
Task characteristics include?
1. task generation.
2.task sizes.
3.size of data associated with tasks.
4.All of the above
The gather operation is exactly the inverse of the ?
1.scatter operation
2.broadcast operation
3. prefix sum
4.reduction operation
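The scatter/gather duality can be illustrated with a minimal plain-Python sketch (the helper functions below are hypothetical, not from any MPI library): scatter splits the root's data into per-process chunks, and gather reassembles those chunks in order.

```python
# Minimal sketch of scatter/gather semantics (hypothetical helpers).

def scatter(data, p):
    """Root splits its data into p equal chunks, one per process."""
    n = len(data) // p
    return [data[i * n:(i + 1) * n] for i in range(p)]

def gather(chunks):
    """Inverse of scatter: root reassembles the per-process chunks."""
    return [x for chunk in chunks for x in chunk]

data = list(range(8))
chunks = scatter(data, 4)      # [[0, 1], [2, 3], [4, 5], [6, 7]]
assert gather(chunks) == data  # gather exactly undoes scatter
```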
The graph of tasks (nodes) and their interactions/data exchange (edges)?
1. is referred to as a task interaction graph
2.is referred to as a task communication graph
3.is referred to as a task interface graph
4.None of the above
The principal parameters that determine the communication latency are as follows:
1. startup time (ts), per-hop time (th), per-word transfer time (tw)
2. startup time (ts), per-word transfer time (tw)
3. startup time (ts), per-hop time (th)
4. startup time (ts), message packet size (w)
The time taken by all-to-all broadcast on a ring is:
1. T = (ts + tw m)(p − 1)
2. T = ts log p + tw m(p − 1)
3. T = 2ts(√p − 1) − tw m(p − 1)
4. T = 2ts(√p − 1) + tw m(p − 1)
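To make the ring formula T = (ts + tw m)(p − 1) concrete, here is a small illustrative calculation (all parameter values below are made up for demonstration, not taken from the question):

```python
# All-to-all broadcast time on a ring: T = (ts + tw*m) * (p - 1).
ts = 2.0   # startup time per message (illustrative)
tw = 0.5   # per-word transfer time (illustrative)
m = 4.0    # message size in words (illustrative)
p = 8      # number of nodes in the ring

T = (ts + tw * m) * (p - 1)
print(T)  # (2.0 + 2.0) * 7 = 28.0
```

The (p − 1) factor reflects that on a ring every node must forward each message around all other nodes, one neighbor per step.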
Type of parallelism that is naturally expressed by independent tasks in a task-dependency graph is called _______ parallelism?
1. task
2.instruction
3.data
4.program
Which decomposition technique uses the divide-and-conquer strategy?
1. recursive decomposition
2. data decomposition
3.exploratory decomposition
4.speculative decomposition
Which is an alternative option for latency hiding?
1.increase cpu frequency
2.multithreading
3.increase bandwidth
4.increase memory
Which of these steps can create conflict among the processors?
1.synchronized computation of local variables
2.concurrent write
3.concurrent read
4.None of the above
Which one is not a characteristic of NUMA multiprocessors?
1. it allows shared memory computing
2. memory units are placed in physically different location
3.all memory units are mapped to one common virtual global memory
4. processors access their independent local memories
A feature of a task-dependency graph that determines the average degree of concurrency for a given granularity is its ___________ path?
1.critical
2.easy
3.difficult
4.ambiguous
A d-dimensional hypercube has?
1. 2^d nodes
2. 2d nodes
3. 2^n nodes
4. n nodes
A multiprocessor machine which is capable of executing multiple instructions on multiple data sets?
1.sisd
2.simd
3.mimd
4. misd
A pipeline is like .................... ?
1. an automobile assembly line
2.house pipeline
3.both a and b
4.a gas line
A processor performing fetch or decode of a different instruction during the execution of another instruction is called ______?
1.super-scaling
2.pipe-lining
3.parallel computation
4. none of these
A simple application of exploratory decomposition is _____?
1.the solution to a 15 puzzle
2. the solution to 20 puzzle
3.the solution to any puzzle
4.None of the above
Average Degree of Concurrency is...
1.the average number of tasks that can run concurrently over the entire duration of execution of the process.
2.the average time that can run concurrently over the entire duration of execution of the process.
3.the average in degree of task dependency graph.
4. the average out degree of task dependency graph.
In a task-dependency graph, the critical path is defined as ________?
1. the longest path between any pair of finish nodes.
2. the longest directed path between any pair of start & finish node.
3.the shortest path between any pair of finish nodes.
4.the number of maximum nodes level in graph.
CUDA helps execute code in parallel using __________
1. cpu
2. gpu
3. rom
4. cache memory
Decomposition Techniques are?
1.recursive decomposition
2.data decomposition
3.exploratory decomposition
4. all of the above
Fine-grain threading is considered as a ______ threading?
1. instruction-level
2.loop level
3.task-level
4.function-level
The misses that arise due to interprocessor communication are called?
1. hit rate
2. coherence misses
3. commit misses
4. parallel processing
In All-to-all Broadcast on a Mesh, operation performs in which sequence?
1.rowwise, columnwise
2.columnwise, rowwise
3.columnwise, columnwise
4.rowwise, rowwise
In message passing, messages are sent and received between?
1. task or processes
2.task and execution
3. processor and instruction
4.instruction and decode
In parallel DFS, processes have the following roles. (Select multiple choices if applicable)
1. donor
2.active
3.idle
4.passive
In the scatter operation?
1. a single node sends a unique message of size m to every other node
2. a single node sends the same message of size m to every other node
3. a single node sends a unique message of size m to the next node
4. none of the above
In a thread-function execution scenario, a thread is a ___________
1.work
2.worker
3.task
4.none of the above
In which type of application system do distributed systems run well?
1. hpc
2. distributed framework
3. hrc
4.None of the above
In which of the following operation, a single node sends a unique message of size m to every other node?
1.gather
2.scatter
3.one to all personalized communication
4.both a and c
Interaction overheads can be minimized by ____?
1. maximize data locality
2. maximize volume of data exchange
3. increase bandwidth
4. minimize social media contents
Mappings are determined by?
1.task dependency
2.task interaction graphs
3.both a and b
4.None of the above
Messages get smaller in ______ and stay constant in ______.
1.gather, broadcast
2. scatter , broadcast
3.scatter, gather
4. broadcast, gather
A multiprocessor is a system with multiple CPUs, capable of independently executing different tasks in parallel. In which category does every processor and memory module have similar access time?
1.uma
2.microprocessor
3.multiprocessor
4.numa
NUMA architecture uses _______in design?
1.cache
2.shared memory
3.message passing
4.distributed memory
Nvidia GPUs are based on which of the following architectures?
1.mimd
2.simd
3.sisd
4.misd
Which of the following is a parallel algorithm model?
1.data parallel model
2.bit model
3. data model
4.network model
Parallel algorithms often require a single process to send identical data to all other processes or to a subset of them. This operation is known as _________?
1.one-to-all broadcast
2.all to one broadcast
3.one-to-all reduction
4.all to one reduction
Parallel computing means to divide the job into several __________?
1.bit
2.data
3.instruction
4.task
Parallel processing may occur?
1.in the instruction stream
2.in the data stream
3. both (a) and (b)
4.None of the above
Partitioning on a series is done after ______________
1. local arrangement
2. process assignments
3. global arrangement
4. none of the above
Pipeline implements ?
1.fetch instruction
2.decode instruction
3.fetch operand
4. all of above
Scatter is _________
1.one to all broadcast communication
2.all to all broadcast communication
3. one to all personalized communication
4.None of the above
Select how the overhead function (To) is calculated.
1. To = p n Tp − Ts
2. To = p Tp − Ts
3. To = Tp − p Ts
4. To = Tp − Ts
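The overhead function can be checked with a small illustrative calculation (the timing values below are made up for demonstration): with serial time Ts and parallel time Tp on p processors, the total overhead is To = p Tp − Ts.

```python
# Overhead function To = p*Tp - Ts (illustrative values, not from the question).
p = 4       # number of processors
Ts = 100.0  # serial runtime
Tp = 30.0   # parallel runtime per processor

To = p * Tp - Ts
print(To)  # 20.0 time units spent on communication, idling, and excess work
```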
Select the parameters on which the parallel runtime of a program depends.
1.number of processors
2. communication parameters of the machine
3.all of the above
4.input size
Speedup is defined as a ratio of?
1. S = Ts / Tp
2. S = Tp / Ts
3. Ts = S / Tp
4. Tp = S / Ts
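The speedup ratio S = Ts / Tp can be illustrated with a quick calculation (timing values below are made up for demonstration):

```python
# Speedup S = Ts / Tp (illustrative values).
Ts = 100.0  # serial runtime
Tp = 25.0   # parallel runtime on p processors

S = Ts / Tp
print(S)  # 4.0: the parallel version runs four times faster
```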
Suppose there are 16 elements in a series then how many phases will be required to sort the series using parallel odd-even bubble sort?
1.8
2.4
3.5
4.15
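For reference, here is a sequential simulation of odd-even transposition sort (a minimal sketch; note that textbooks differ in whether a "phase" means one odd or even pass, or one odd-even pair, so the count the question expects may differ from the n-pass bound used below):

```python
# Odd-even transposition sort: alternating even and odd compare-exchange passes.
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    phases = 0
    for phase in range(n):   # n passes suffice to sort n elements
        start = phase % 2    # even pass: pairs (0,1),(2,3)...; odd pass: (1,2),(3,4)...
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
        phases += 1
    return a, phases

result, phases = odd_even_sort([5, 1, 4, 2, 8, 0, 3, 7])
print(result, phases)  # sorted list after 8 passes
```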
Systems that do not have parallel processing capabilities are?
1.sisd
2.simd
3.mimd
4.All of the above
The dual of one-to-all broadcast is ?
1.all-to-one reduction
2.all-to-one receiver
3.all-to-one sum
4.none of above
The first step in developing a parallel algorithm is _________?
1.to decompose the problem into tasks that can be executed concurrently
2.execute directly
3.execute indirectly
4.none of above
The length of the longest path in a task dependency graph is called?
1.the critical path length
2.the critical data length
3. the critical bit length
4.none of above
The number and size of tasks into which a problem is decomposed determines the ________?
1.granularity
2.task
3.dependency graph
4.decomposition
The number and size of tasks into which a problem is decomposed determines the?
1.fine-granularity
2.coarse-granularity
3.sub task
4.granularity
The number of tasks into which a problem is decomposed determines its?
1.granularity
2.priority
3.modernity
4.None of the above
The Owner Computes Rule generally states that the process assigned a particular data item is responsible for?
1.all computation associated with it
2.only one computation
3.only two computation
4. only occasional computation
The pattern of___________ among tasks is captured by what is known as a task-interaction graph?
1. interaction
2.communication
3. optimization
4.flow
The Prefix Sum Operation can be implemented using the ?
1.all-to-all broadcast kernel.
2.all-to-one broadcast kernel.
3.one-to-all broadcast kernel
4.scatter kernel
The prefix-sum operation can be implemented using the ________ kernel.
1.all-to-all broadcast
2.one-to-all broadcast
3. all-to-one broadcast
4.all-to-all reduction
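The prefix sum (scan) can be sketched in two ways: a sequential reference loop, and a step-doubling version whose communication pattern mirrors the hypercube/all-to-all exchange, finishing in about log p steps (a minimal illustration in plain Python, simulating the per-process values in a list):

```python
# Sequential reference: out[i] = a[0] + ... + a[i].
def prefix_sum(a):
    out, total = [], 0
    for x in a:
        total += x
        out.append(total)
    return out

# Step-doubling scan: in step with distance d, "process" i adds the
# value received from process i - d, like one hypercube exchange round.
def scan_doubling(a):
    a = list(a)
    d = 1
    while d < len(a):
        a = [a[i] + (a[i - d] if i >= d else 0) for i in range(len(a))]
        d *= 2
    return a

print(prefix_sum([1, 2, 3, 4]))     # [1, 3, 6, 10]
print(scan_doubling([1, 2, 3, 4]))  # [1, 3, 6, 10]
```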
Speedup is ________.
1. the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements
2. the ratio of the time taken to solve a problem on a parallel computer with p identical processing elements to the time required to solve the same problem on a single processor
3. the ratio of the number of processors to the size of the data
4. None of the above
The time taken by all-to-all broadcast on a hypercube is:
1. T = (ts + tw m)(p − 1)
2. T = ts log p + tw m(p − 1)
3. T = 2ts(√p − 1) − tw m(p − 1)
4. T = 2ts(√p − 1) + tw m(p − 1)
The time taken by all-to-all broadcast on a mesh is:
1. T = (ts + tw m)(p − 1)
2. T = ts log p + tw m(p − 1)
3. T = 2ts(√p − 1) − tw m(p − 1)
4. T = 2ts(√p − 1) + tw m(p − 1)
The time that elapses from the moment the first processor starts to the moment the last processor finishes execution is called as .
1.parallel runtime
2.overhead runtime
3. excess runtime
4.serial runtime
The ______ time collectively spent by all the processing elements: Tall = p TP?
1. total
2.average
3.mean
4.sum
To which class of systems does the von Neumann computer belong?
1.simd (single instruction multiple data)
2.mimd (multiple instruction multiple data)
3.misd (multiple instruction single data)
4. sisd (single instruction single data)
VLIW stands for ?
1. very long instruction word
2. very long instruction width
3. very large instruction word
4.very long instruction width
What is Critical Path?
1.the length of the longest path in a task dependency graph is called the critical path length.
2. the length of the smallest path in a task dependency graph is called the critical path length.
3. path with loop
4.none of the mentioned.
What is the ratio of the time taken to solve a problem on a single processor to the time required to solve the same problem on a parallel computer with p identical processing elements?
1.overall time
2.speedup
3.scaleup
4.efficiency
Which are different sources of Overheads in Parallel Programs?
1.interprocess interactions
2.process idling
3.all mentioned options
4. excess computation
Which of the following is not a parallel algorithm model?
1. the data parallel model
2.the work pool model
3.the task graph model
4.the speculative model
Which of the following method is used to avoid Interaction Overheads?
1.maximizing data locality
2.minimizing data locality
3.increase memory size
4.None of the above
Which of these is not a source of overhead in parallel computing?
1.non-uniform load distribution
2.less local memory requirement in distributed computing
3.synchronization among threads in shared memory computing
4.None of the above
Which one is not a limitation of a distributed memory parallel system?
1. higher communication time
2.cache coherency
3.synchronization overheads
4.None of the above
Which task decomposition technique is suitable for the 15-puzzle problem?
1. data decomposition
2.exploratory decomposition
3.speculative decomposition
4.recursive decomposition
Writing parallel programs is referred to as?
1.parallel computation
2.parallel processes
3.parallel development
4.parallel programming
______ communication model is generally seen in a tightly coupled system.
1.message passing
2.shared-address space
3.client-server
4.distributed network
_____ is a method for inducing concurrency in problems that can be solved using the divide-and-conquer strategy?
1. exploratory decomposition
2. speculative decomposition
3. data decomposition
4. recursive decomposition