Matches in SemOpenAlex for { <https://semopenalex.org/work/W16612963> ?p ?o ?g. }
Showing items 1 to 45 of 45, with 100 items per page.
- W16612963 abstract "Mether is a Distributed Shared Memory (DSM) that runs on Sun workstations under the SunOS 4.0 operating system. User programs access the Mether address space in a way indistinguishable from other memory. Mether was inspired by the MemNet DSM, but unlike MemNet, Mether consists of software communicating over a conventional Ethernet. The kernel part of Mether actually does no data transmission over the network; data transmission is accomplished by a user-level server. The kernel driver has no preference for a server, and indeed does not know that servers exist. The kernel driver has been made very safe, and in fact panic is not in its dictionary. The Mether system supports a distributed shared memory. It is distributed in the sense that the pages of memory are not all at one workstation, but rather move around the network in a demand-paged fashion. It is shared in the sense that processes throughout the network have read, write, and execute access to it. And it is memory in the sense that user programs access the data in a way indistinguishable from other memory. The memory is never paged to disk, but the delay of accessing a page over the network is approximately the same as that of a paging disk. Two examples of Mether programs are shown in Figures 1 and 2. Note that, aside from the call to methersetup, these programs look quite ordinary. One program prints out the value of the first 278 bytes of Mether memory; the other clears the first page of the Mether memory and then increments each byte 128 times. If the first program is running, the values displayed increase. You can run either program on any host that supports Mether. The writer takes about 8 seconds to run, whether the watcher is running or not. In fact, the writer usually runs a little faster if the watcher is on another machine. As the examples show, programs that access this memory can pretend that it is normal memory. If they do, they may pay a substantial performance penalty. As shown in [4], programs that use DSM without modification rarely show the sort of performance gain found on a conventional shared-memory multiprocessor. Programs must be more careful; if they are, they can communicate across the network at apparent memory speeds. The memory is accessed by opening a special file. Once the file is opened, the user program executes an mmap system call and maps the area into its address space. From that point on the process may treat the memory as it would any other memory. A function library is provided to make the use of Mether totally transparent. (This work was done while the author was at the University of Delaware, Newark, DE. Sun and SunOS are trademarks of Sun Microsystems.) [Figure 1: A program that watches Mether memory. Figure 2: A program that writes to a Mether page.] If the process is the only one using an area of the memory, then it will run at full memory speed. If other processes on the same processor are using the same area, they will all run at full speed, unless one of the other processes locks an area of the shared memory. If processes on other processors simply read the memory infrequently, there will be a small impact on writes as messages are sent out to the other processors invalidating their copy (or, in the current protocol, updating their copy). If many processors write the same location frequently, then there will be a substantial performance degradation, probably only allowing a few thousand operations per second. 
Mether is non-blocking, so the processor will not be slowed down, just the processes accessing the contended-for location. Mether is inspired by a high-speed memory-mapped network built at the University of Delaware by Delp and Farber. We give a cursory description of MemNet below; for more details see [1], [2], and [3]. MemNet is a memory-mapped network. MemNet provides the user with (in the current implementation) a two-megabyte contiguous region of memory which is shared among a set of processors. The sharing is accomplished using dedicated page-management hardware communicating via a high-speed token ring. When a MemNet page is needed and it is not present in the local interface, a message is sent over the token ring requesting the page. The hardware provides consistency between pages. The algorithm used is similar to those used for snooping caches: when a chunk is written, all other copies of that chunk are invalidated before the write completes. For performance reasons the pages are only 32 bytes long. This size was decided upon as the optimal tradeoff between transmission time and several other factors. For a complete performance analysis, see [1]. On a system such as MemNet the global address space is much larger than any single interface's memory. A problem that must be addressed is what to do in the event a chunk cannot find an interface with room for it. Some interface must always keep the space open for that particular chunk (address) in the MemNet address space. To address this problem MemNet supports the notion of reserved memory. Reserved memory is the set of chunks for which a particular interface is responsible. Space will always be available for these chunks in the interface's reserved area. If no space can be found for a chunk on any interface in a non-reserved area, the chunk will end up back in the reserved memory of the interface which is its home. If MemNet did not support reserved memory, chunks might be lost as interfaces filled up with multiple copies of chunks. In general a MemNet interface will have a fair share (i.e., on a system with 10 interfaces, 10%) of its memory as reserved memory, with the rest of the memory available for other chunks. Mether supports reserved memory too, on a per-page basis. In fact, a page must be in the reserved memory of some Mether interface for it to be created. In other words, pages are created only from the reserved space, and only when they are referenced. When a non-reserved page is referenced for the first time on a processor, a request for that page is sent out. Only if that page is in some processor's reserved address space will space for it be allocated. One difference between MemNet and Mether is that Mether blocks the process when a page is unavailable, whereas MemNet blocks the processor. This difference is more important than it might at first seem. On MemNet, hot spots can consume the processor, the network, and all the processors on the network. It is essential that algorithms be well-behaved; otherwise the processors on the network can, in the absolute worst case, run orders of magnitude slower than normal. On Mether only the processes requesting the information are affected. Other processes, processors, and the network operate normally. We wanted to gain experience with a DSM that ran on more than the three processors available on the existing MemNet network. Our goal is to build a DSM that matches MemNet's best-case and worst-case performance. 
In the best case, MemNet runs at memory speeds; in the worst case, it is several orders of magnitude slower. One reason that Mether makes no attempt to minimize paging latency is that we want to get as close to the MemNet environment as possible and explore ways in which to use that environment correctly. We will describe Mether in further detail below, after which we will describe the factors that constrained the design. Mether is driven by MemNet-inspired constraints, but there were a number of other constraints as well, driven by both technical and political realities." @default.
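The abstract above says Mether memory is reached by opening a special file and issuing an mmap system call, after which the region behaves like ordinary memory. Below is a minimal sketch of that access pattern only; the device path `/dev/mether`, the mapping length, and the flags are illustrative assumptions, not details taken from the paper.

```c
/* Hedged sketch of the open + mmap access pattern the abstract describes.
 * The device path and MAP_LEN are assumptions for illustration only. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAP_LEN (8 * 4096)          /* assumed mapping size */

int main(void)
{
    int fd = open("/dev/mether", O_RDWR);   /* assumed device name */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map the shared region; from here on it is used like ordinary memory. */
    unsigned char *mem = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    mem[0] = 42;                    /* reads and writes go through the DSM */
    printf("first byte: %d\n", mem[0]);

    munmap(mem, MAP_LEN);
    close(fd);
    return 0;
}
```

Used this way, a read or write of `mem[i]` may have to be satisfied over the network, which is why the abstract warns that programs treating the region as ordinary memory can pay a latency comparable to a paging disk.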
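The figure listings quoted in the abstract are garbled in this record. The sketch below reconstructs their gist as the prose describes it: a watcher that repeatedly displays the first 278 bytes of Mether memory, and a writer that clears the first page and then increments each byte 128 times. methersetup() is the library call the abstract names; its signature, the page size, and the local-buffer stub standing in for the mapped region are assumptions made so the sketch is self-contained.

```c
/* Hedged reconstruction of the gist of Figures 1 and 2 (code garbled in the
 * source).  methersetup() is stubbed with a local buffer here; the real
 * library call would map the shared Mether region instead. */
#include <curses.h>
#include <stdlib.h>
#include <string.h>

#define NBYTES   278                /* the abstract's "first 278 bytes" */
#define PAGESIZE 8192               /* assumed Mether page size */

static unsigned char *mether;       /* stand-in for the mapped Mether region */

static void methersetup(void)       /* stub; real call maps the DSM region */
{
    mether = calloc(PAGESIZE, 1);
}

/* Figure 1 (watcher): repeatedly display the first bytes of Mether memory. */
static void watcher(void)
{
    initscr();
    methersetup();
    while (1) {
        move(0, 0);
        for (int i = 0; i < NBYTES; i++)
            printw("%3d ", mether[i]);
        refresh();
    }
}

/* Figure 2 (writer): clear the first page, then increment each byte 128 times. */
static void writer(void)
{
    methersetup();
    memset(mether, 0, PAGESIZE);
    for (int pass = 0; pass < 128; pass++)
        for (int i = 0; i < PAGESIZE; i++)
            mether[i]++;
}

int main(int argc, char **argv)
{
    if (argc > 1 && strcmp(argv[1], "write") == 0)
        writer();
    else
        watcher();
    endwin();
    return 0;
}
```

The hypothetical `write` argument selects the writer role; otherwise the program acts as the watcher. Per the abstract, the writer takes about 8 seconds whether or not a watcher is running, and if both run, the watcher's display shows the values increasing.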
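The abstract compares MemNet's consistency algorithm to a snooping cache: a write to a chunk invalidates every other copy before it completes. The toy model below illustrates that rule only; it is not MemNet's hardware implementation, and the node count is an arbitrary assumption (the 32-byte chunk size is from the abstract).

```c
/* Toy model of the invalidate-before-write rule the abstract attributes to
 * MemNet.  Node count is assumed; chunk size follows the abstract. */
#include <stdbool.h>
#include <stdio.h>

#define NNODES    4
#define CHUNKSIZE 32                /* MemNet chunks are 32 bytes */

struct copy {
    bool valid;
    unsigned char data[CHUNKSIZE];
};

static struct copy copies[NNODES];  /* one cached copy of the chunk per node */

static void write_chunk(int node, int off, unsigned char value)
{
    /* Invalidate every other node's copy first, as a snooping cache would. */
    for (int n = 0; n < NNODES; n++)
        if (n != node)
            copies[n].valid = false;

    copies[node].valid = true;
    copies[node].data[off] = value; /* only now does the write complete */
}

int main(void)
{
    copies[1].valid = true;         /* node 1 starts with a (stale) copy */
    write_chunk(0, 0, 7);           /* node 0 writes offset 0 */
    printf("node 1 copy valid after write? %s\n",
           copies[1].valid ? "yes" : "no");
    return 0;
}
```

Note that the abstract says Mether's current protocol updates, rather than invalidates, remote copies when other processors only read infrequently, so this invalidate-on-write model reflects MemNet's described behavior rather than Mether's.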
- W16612963 created "2016-06-24" @default.
- W16612963 creator A5049153806 @default.
- W16612963 creator A5074163062 @default.
- W16612963 date "1993-01-01" @default.
- W16612963 modified "2023-09-24" @default.
- W16612963 title "The Mether System: Distributed Shared Memory for SunOS 4.0" @default.
- W16612963 hasPublicationYear "1993" @default.
- W16612963 type Work @default.
- W16612963 sameAs 16612963 @default.
- W16612963 citedByCount "13" @default.
- W16612963 countsByYear W166129632012 @default.
- W16612963 countsByYear W166129632015 @default.
- W16612963 crossrefType "journal-article" @default.
- W16612963 hasAuthorship W16612963A5049153806 @default.
- W16612963 hasAuthorship W16612963A5074163062 @default.
- W16612963 hasConcept C41008148 @default.
- W16612963 hasConceptScore W16612963C41008148 @default.
- W16612963 hasLocation W166129631 @default.
- W16612963 hasOpenAccess W16612963 @default.
- W16612963 hasPrimaryLocation W166129631 @default.
- W16612963 hasRelatedWork W150004887 @default.
- W16612963 hasRelatedWork W1502515215 @default.
- W16612963 hasRelatedWork W152903609 @default.
- W16612963 hasRelatedWork W1968554407 @default.
- W16612963 hasRelatedWork W1987225815 @default.
- W16612963 hasRelatedWork W2014328611 @default.
- W16612963 hasRelatedWork W2021804287 @default.
- W16612963 hasRelatedWork W2029856430 @default.
- W16612963 hasRelatedWork W2032186805 @default.
- W16612963 hasRelatedWork W2044902313 @default.
- W16612963 hasRelatedWork W2070343246 @default.
- W16612963 hasRelatedWork W2072160515 @default.
- W16612963 hasRelatedWork W2080404327 @default.
- W16612963 hasRelatedWork W2094642242 @default.
- W16612963 hasRelatedWork W2110856615 @default.
- W16612963 hasRelatedWork W2118444975 @default.
- W16612963 hasRelatedWork W2126990153 @default.
- W16612963 hasRelatedWork W2182987586 @default.
- W16612963 hasRelatedWork W286037355 @default.
- W16612963 hasRelatedWork W2139794249 @default.
- W16612963 isParatext "false" @default.
- W16612963 isRetracted "false" @default.
- W16612963 magId "16612963" @default.
- W16612963 workType "article" @default.