What is the difference between shared memory systems and distributed shared memory systems?

Question added by معاذ الاحمد , Customer Service Representative , Webhelp
Date Posted: 2015/08/16
Verginia Floralde
by Verginia Floralde , Messenger , Saloon

A shared memory system is relatively easy to program, since all processors share a single view of the data. A distributed shared memory system implements the shared-memory model on top of physically distributed memory.

Distributed shared memory is a memory architecture in which the memories are physically separate but can be addressed as one logically shared address space, whereas in shared memory the memory is physically shared and is likewise addressed as a single logically shared address space.

Deleted user
by Deleted user

There are two issues to consider regarding the terms shared memory and distributed memory: one is what they mean as programming abstractions, and the other is what they mean in terms of how the hardware is actually implemented.

In the past there were true shared memory cache-coherent multiprocessor systems. The processors communicated with each other and with shared main memory over a shared bus. This meant that any access from any processor to main memory had equal latency. Today these types of systems are not manufactured. Instead there are various point-to-point links between processing elements and memory elements (this is the reason for non-uniform memory access, or NUMA). However, the idea of communicating directly through memory remains a useful programming abstraction. So in many systems this is handled by the hardware and the programmer does not need to insert any special directives. Common programming techniques that use these abstractions are OpenMP and Pthreads.

Distributed memory has traditionally been associated with processors performing computation on local memory and then using explicit messages to transfer data to and from remote processors. This adds complexity for the programmer, but simplifies the hardware implementation because the system no longer has to maintain the illusion that all memory is actually shared. This type of programming has traditionally been used with supercomputers that have hundreds or thousands of processing elements. A commonly used technique is MPI.

However, supercomputers are not the only systems with distributed memory. Another example is GPGPU programming, which is available for many desktop and laptop systems sold today. Both CUDA and OpenCL require the programmer to explicitly manage sharing between the CPU and the GPU (or other accelerator, in the case of OpenCL). This is largely because when GPU programming started, GPU and CPU memory were separated by the PCI bus, which has a very long latency compared to computation on locally attached memory. So the programming models were developed assuming that the memory was separate (or distributed), and communication between the two processing elements (CPU and GPU) required explicit transfers. Now that many systems have GPU and CPU elements on the same die, there are proposals to give GPGPU programming an interface that is more like shared memory.

alaa liswe
by alaa liswe , ِAdministrative Assistant , Arab Open University

There are two broad options for organizing memory in a parallel system. One option uses a single address space. Systems based on this concept, otherwise known as shared-memory systems, allow processor communication through variables stored in a shared address space.

The other alternative employs a scheme by which each processor has its own memory module. Such a distributed-memory system (cluster) is constructed by connecting each component with a high-speed communications network. Processors communicate to each other over the network.

The architectural differences between shared-memory systems and distributed-memory systems have implications on how each is programmed. With a shared-memory multiprocessor, different processors can access the same variables. This makes referencing data stored in memory similar to traditional single-processor programs, but adds the complexity of shared data integrity. A distributed-memory system introduces a different problem: how to distribute a computational task to multiple processors with distinct memory spaces and reassemble the results from each processor into one solution.

Distributed computing is a way of combining the processing power of thousands of small computers (i.e., PCs) to solve very complex problems that are too large for traditional supercomputers, which are very expensive to build and run.

Mohammed khalid
by Mohammed khalid , IT Engineer , Not yet

Shared memory is memory that may be simultaneously accessed by multiple programs, with the intent of providing communication among them. In distributed shared memory, different physical memories are logically shared over one large address space (as with virtual memory).

Egedi Kingsley
by Egedi Kingsley , N-POWER VOLUNTEER CORPS , N-POWER VOLUNTEER CORPS

In computer science, distributed shared memory (DSM) is a form of memory architecture where the (physically separate) memories can be addressed as one (logically shared) address space. Here, "shared" does not mean that there is a single centralized memory; it means that the address space is shared (the same physical address on two processors refers to the same location in memory).[1] Distributed global address space (DGAS) is a similar term for a wide class of software and hardware implementations in which each node of a cluster has access to shared memory in addition to its own non-shared private memory.

A distributed-memory system (often called a multicomputer) consists of multiple independent processing nodes with local memory modules, connected by a general interconnection network.

Software DSM systems can be implemented in an operating system or as a programming library, and can be thought of as extensions of the underlying virtual memory architecture. When implemented in the operating system, such systems are transparent to the developer, meaning the underlying distributed memory is completely hidden from users. In contrast, software DSM systems implemented at the library or language level are not transparent, and developers usually have to program differently. However, these systems offer a more portable approach to DSM implementation. A distributed shared memory system implements the shared-memory model on a physically distributed memory system.

SUNDAY NCHOR
by SUNDAY NCHOR , FRONTIER OIL LIMITED

In shared memory systems, communication of data values between processors happens through memory, supported by hardware in the memory interface. Interfacing many processors may lead to long and variable memory latency. The distinguishing characteristic of distributed memory is that communication is done in software by data transmission instructions, so the machine-level instruction set has send/receive instructions as well as read/write. The long and variable latency of the interconnection network is not associated with memory and may be masked by software that assembles and transmits long messages. However, to move an intermediate datum from its producer to its consumer, a distributed-memory machine ideally sends it to the consumer as soon as it is produced, while a shared memory system stores it in memory to be picked up by the consumer when it is needed.

Deleted user
by Deleted user

Shared memory can be a useful programming model for multithreaded programs. The threads run on the same machine and all communicate through a common address space. 

Distributed shared memory uses software techniques to let separate machines share (what appears to be) a common address space anyway.
