single system image
{{Short description|Cluster-dedicated operating system}}
In distributed computing, a single system image (SSI) cluster is a cluster of machines that appears to be one single system.{{Citation
| last = Pfister
| first = Gregory F.
| year = 1998
| title = In search of clusters
| isbn = 978-0-13-899709-0
| publisher = Prentice Hall PTR
| location = Upper Saddle River, NJ
| oclc = 38300954
| url-access = registration
| url =weblink
}}{{citation
|author1=Buyya, Rajkumar |author2=Cortes, Toni |author3=Jin, Hai | year = 2001
| title = Single System Image
| journal = International Journal of High Performance Computing Applications
| volume = 15
| issue = 2
| pages = 124
| doi = 10.1177/109434200101500205
|s2cid=38921084 | url =weblink
}}{{citation
|author1=Healy, Philip |author2=Lynn, Theo |author3=Barrett, Enda |author4=Morrison, John P. | year = 2016
| title = Single system image: A survey
| journal = Journal of Parallel and Distributed Computing
| volume = 90-91
| pages = 35–51
| doi = 10.1016/j.jpdc.2016.01.004
| url =weblink|hdl=10468/4932 }} The concept is often considered synonymous with that of a distributed operating system,
{{Citation
| title = Distributed systems: concepts and design
| year = 2005
|author1=Coulouris, George F |author2=Dollimore, Jean |author3=Kindberg, Tim | isbn = 978-0-321-26354-4
| publisher = Addison Wesley
| page = 223
| url =weblink
}}{{Citation
|author1=Bolosky, William J. |author2=Draves, Richard P. |author3=Fitzgerald, Robert P. |author4=Fraser, Christopher W. |author5=Jones, Michael B. |author6=Knoblock, Todd B. |author7=Rashid, Rick | contribution = Operating System Directions for the Next Millennium | title = 6th Workshop on Hot Topics in Operating Systems (HotOS-VI) | place = Cape Cod, MA | pages = 106–110 | date = 1997-05-05| doi = 10.1109/HOTOS.1997.595191 | citeseerx = 10.1.1.50.9538
| s2cid=15380352 }} but a single image may be presented for more limited purposes, such as job scheduling, which may be achieved by means of an additional layer of software over conventional [[operating system]]s running on each [[Node (networking)|node]].{{Citation
| title=Grid And Cluster Computing
| author= Prabhu, C.S.R.
| isbn=978-81-203-3428-1
| publisher=Phi Learning
| year=2009
| pages=256
| url=https://books.google.com/books?id=EIVdVtGHv-0C&dq=%22distributed+operating+system%22+%22single+system+image%22&pg=PA177
}} The interest in SSI clusters is based on the perception that they may be simpler to use and administer than more specialized clusters. Different SSI systems may provide a more or less complete illusion of a single system.

Features of SSI clustering systems

Different SSI systems may, depending on their intended usage, provide some subset of these features.

Process migration

Many SSI systems provide process migration.{{citation
| last = Smith | first = Jonathan M.
| year = 1988
| title = A survey of process migration mechanisms
| journal = ACM SIGOPS Operating Systems Review
| volume = 22
| issue = 3
| pages = 28–40
| doi = 10.1145/47671.47673
| url =weblink| citeseerx = 10.1.1.127.8095
| s2cid = 6611633
}}
Processes may start on one node and be moved to another node, possibly for resource balancing or administrative reasons; for example, it may be necessary to move long-running processes off a node that is to be shut down for maintenance. As processes are moved from one node to another, other associated resources (for example, IPC resources) may be moved with them.

Process checkpointing

Some SSI systems allow checkpointing of running processes, allowing their current state to be saved and reloaded at a later date. Checkpointing is particularly useful in clusters used for high-performance computing, as it avoids lost work in case of a cluster or node restart. Checkpointing can be seen as related to migration: migrating a process from one node to another can be implemented by first checkpointing the process, then restarting it on another node. Alternatively, checkpointing can be considered as migration to disk.
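
The checkpoint/restart idea can be sketched at the application level. The toy below is illustrative only (real SSI systems checkpoint at the kernel level, capturing memory, file descriptors and IPC state transparently): a resumable computation serializes its progress with Python's pickle, and a later run, possibly on a different node, continues from the saved state.

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    """Serialize the computation's state so a later run can resume it."""
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path):
    """Reload a previously saved state, e.g. after copying it to a new node."""
    with open(path, "rb") as f:
        return pickle.load(f)

def long_running_sum(n, state=None):
    """Sum 0..n-1; the first run stops halfway so its state can be
    checkpointed, mimicking a node going down for maintenance."""
    i, total = state if state is not None else (0, 0)
    while i < n:
        total += i
        i += 1
        if i == n // 2 and state is None:
            return ("checkpoint", (i, total))
    return ("done", total)

path = os.path.join(tempfile.mkdtemp(), "job.ckpt")
status, partial = long_running_sum(10)                # first run stops halfway
checkpoint(partial, path)                             # save state to disk
status, result = long_running_sum(10, restore(path))  # "another node" resumes
```

Resuming from the checkpoint yields the same result (45 for n = 10) as an uninterrupted run, which is exactly the property a cluster or node restart must preserve.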

Single process space

Some SSI systems provide the illusion that all processes are running on the same machine: the process management tools (e.g. "ps" and "kill" on Unix-like systems) operate on all processes in the cluster.
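
The bookkeeping this requires can be illustrated with a toy model; the node names, PIDs and commands below are invented. The point is that a cluster-wide "ps" must merge per-node process tables and disambiguate local PIDs, while a cluster-wide "kill" must route the request to the owning node.

```python
# Every node keeps a local process table; the SSI layer merges them
# into one cluster-wide view. All entries here are made up.
node_tables = {
    "node1": {101: "httpd", 102: "cron"},
    "node2": {101: "postgres", 340: "backup"},
}

def cluster_ps(tables):
    """A cluster-wide "ps": merge per-node tables, pairing each local PID
    with its node, since PIDs are only unique per node."""
    return {(node, pid): cmd
            for node, procs in tables.items()
            for pid, cmd in procs.items()}

def cluster_kill(tables, node, pid):
    """A cluster-wide "kill": route the request to the owning node."""
    tables[node].pop(pid, None)

view = cluster_ps(node_tables)          # all processes, regardless of node
cluster_kill(node_tables, "node2", 340)  # removes the process on its node
```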

Single root

Most SSI systems provide a single view of the file system. This may be achieved by a simple NFS server, shared disk devices or even file replication.

The advantage of a single root view is that processes may be run on any available node and access needed files with no special precautions. If the cluster implements process migration, a single root view enables direct access to the files from the node where the process is currently running.

Some SSI systems provide a way of "breaking the illusion", having some node-specific files even in a single root. HP TruCluster provides a "context dependent symbolic link" (CDSL), which points to different files depending on the node that accesses it. HP VMScluster provides a search list logical name with node-specific files occluding cluster-shared files where necessary. This capability may be necessary to deal with heterogeneous clusters, where not all nodes have the same configuration. In more complex configurations, such as multiple nodes of multiple architectures over multiple sites, several local disks may combine to form the logical single root.
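
The CDSL idea can be sketched in user space. The path layout and placeholder syntax below are simplified stand-ins for TruCluster's actual scheme, and the substitution that the kernel performs at access time is done explicitly here:

```python
import os
import tempfile

def resolve_cdsl(target, node):
    """Expand the per-node placeholder, as the kernel would at access time."""
    return target.replace("{memb}", node)

root = tempfile.mkdtemp()

# Two nodes share the root but each keeps a private copy of a config file.
for node in ("node1", "node2"):
    member_etc = os.path.join(root, "members", node, "etc")
    os.makedirs(member_etc)
    with open(os.path.join(member_etc, "hostname.conf"), "w") as f:
        f.write(node)

# One cluster-wide path; the placeholder makes it context dependent.
cdsl = os.path.join(root, "members", "{memb}", "etc", "hostname.conf")

def read_as(node):
    """Read the file the way the given node would see it."""
    with open(resolve_cdsl(cdsl, node)) as f:
        return f.read()
```

Every node opens the same cluster-wide path, yet each sees its own copy, which is how node-specific configuration survives inside a single root.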

Single I/O space

Some SSI systems allow all nodes to access the I/O devices (e.g. tapes, disks, serial lines and so on) of other nodes. There may be some restrictions on the kinds of accesses allowed (for example, OpenSSI cannot mount disk devices from one node on another node).

Single IPC space

Some SSI systems allow processes on different nodes to communicate using inter-process communication mechanisms as if they were running on the same machine. On some SSI systems this can even include shared memory (which can be emulated in software with distributed shared memory).

In most cases inter-node IPC will be slower than IPC on the same machine, possibly drastically slower for shared memory. Some SSI clusters include special hardware to reduce this slowdown.
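
As a rough illustration of IPC layered over a network, Python's standard multiprocessing.managers module can export an ordinary queue over a socket, so that a client (which in a real cluster would run on another node) uses the same put/get interface as for a local queue. Both ends run in one process here purely for demonstration; the authkey is a made-up example value.

```python
import queue
import threading
from multiprocessing.managers import BaseManager

shared_q = queue.Queue()  # lives on the "server" node

class QueueServer(BaseManager):
    pass

QueueServer.register("get_queue", callable=lambda: shared_q)

# Serve the queue on a socket; port 0 picks a free ephemeral port.
manager = QueueServer(address=("127.0.0.1", 0), authkey=b"cluster-secret")
server = manager.get_server()
threading.Thread(target=server.serve_forever, daemon=True).start()

class QueueClient(BaseManager):
    pass

QueueClient.register("get_queue")   # the client only needs the name

client = QueueClient(address=server.address, authkey=b"cluster-secret")
client.connect()
remote_q = client.get_queue()       # a proxy speaking to the server socket

remote_q.put("hello from another node")
message = shared_q.get(timeout=5)   # arrived via a real socket round trip
```

The overhead the paragraph above describes is visible here: every put and get on the proxy crosses a socket, so it is inherently slower than operating on the queue directly.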

Cluster IP address

Some SSI systems provide a "cluster IP address", a single address visible from outside the cluster that can be used to contact the cluster as if it were one machine. This can be used for load balancing inbound calls to the cluster, directing them to lightly loaded nodes, or for redundancy, moving the cluster address from one machine to another as nodes join or leave the cluster. ("Leaving a cluster" is often a euphemism for crashing.)
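
A user-space sketch of a cluster-IP director: one front-end socket accepts all inbound connections and relays each to a backend node chosen round-robin, much as Linux Virtual Server does in the kernel. The two echo "nodes" below are stand-ins for real cluster members.

```python
import itertools
import socket
import threading

def backend_node(tag):
    """A stand-in node: answers every connection with its own name."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen()
    def serve():
        while True:
            conn, _ = srv.accept()
            with conn:
                conn.sendall(tag)
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

backends = itertools.cycle([backend_node(b"node1"), backend_node(b"node2")])

front = socket.socket()              # the single advertised "cluster" address
front.bind(("127.0.0.1", 0))
front.listen()

def director():
    """Accept on the cluster address, relay each connection to one node."""
    while True:
        conn, _ = front.accept()
        with conn, socket.create_connection(next(backends)) as node:
            conn.sendall(node.recv(64))

threading.Thread(target=director, daemon=True).start()

def ask_cluster():
    """A client outside the cluster only ever sees the front-end address."""
    with socket.create_connection(front.getsockname()) as s:
        return s.recv(64)

answers = {ask_cluster(), ask_cluster()}   # served by two different nodes
```

Moving the front-end address to a surviving machine when a node fails gives the redundancy use described above; here only the load-balancing half is sketched.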

Examples

Examples here vary from commercial platforms with scaling capabilities to packages/frameworks for creating distributed systems, as well as those that actually implement a single system image.

{|class="wikitable sortable"
|+SSI Properties of different clustering systems
|-
!Name
!Process migration
!Process checkpoint
!Single process space
!Single root
!Single I/O space
!Single IPC space
!Cluster IP address (many Linux-based SSI clusters can use the Linux Virtual Server to implement a single cluster IP address)
!Source Model
!Latest release date (green means the software is actively developed)
!Supported OS
|-
|Amoeba (development is carried forward by Dr. Stefan Bosse at BSS Lab) {{webarchive|url=https://web.archive.org/web/20090203124419weblink |date=2009-02-03 }}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Unk}}
| {{Yes}}
| {{Unk}}
| {{Yes|Open}}
| {{No|{{dts|1996|07|30}}}}
| Native
|-
|AIX TCF
| {{Unk}}
| {{Unk}}
| {{Unk}}
| {{Yes}}
| {{Unk}}
| {{Unk}}
| {{Unk}}
| {{No|Closed}}
| {{No|{{dts|1990|03|30}}}} ("AIX PS/2 OS", weblink)
| AIX PS/2 1.2
|-
|NonStop Guardian (Guardian90 TR90.8; based on R&D by Tandem Computers c/o Andrea Borr at weblink)
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{No|Closed}}
| {{Yes|{{dts|2018||}}}}
| NonStop OS
|-
|Inferno
| {{No}}
| {{No}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Unk}}
| {{Yes|Open}}
| {{Yes|{{dts|2015|03|04}}}}
| Native, Windows, Irix, Linux, OS X, FreeBSD, Solaris, Plan 9
|-
|Kerrighed
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Unk}}
| {{Yes}}
| {{Unk}}
| {{Yes|Open}}
| {{No|{{dts|2010|06|14}}}}
| Linux 2.6.30
|-
|LinuxPMI (a successor to openMosix)
| {{Yes}}
| {{Yes}}
| {{No}}
| {{Yes}}
| {{No}}
| {{No}}
| {{Unk}}
| {{Yes|Open}}
| {{No|{{dts|2006|06|18}}}}
| Linux 2.6.17
|-
|LOCUS (used to create IBM AIX TCF)
| {{Yes}}
| {{Unk}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes|Yes (LOCUS used named pipes for IPC)}}
| {{Unk}}
| {{No|Closed}}
| {{No|{{dts|1988}}}}
| Native
|-
|MOSIX
| {{Yes}}
| {{Yes}}
| {{No}}
| {{Yes}}
| {{No}}
| {{No}}
| {{Unk}}
| {{No|Closed}}
| {{Yes|{{dts|2017|10|24}}}}
| Linux
|-
|openMosix (a fork of MOSIX)
| {{Yes}}
| {{Yes}}
| {{No}}
| {{Yes}}
| {{No}}
| {{No}}
| {{Unk}}
| {{Yes|Open}}
| {{No|{{dts|2004|12|10}}}}
| Linux 2.4.26
|-
|Open-Sharedroot (a shared-root cluster from ATIX)
| {{No}}
| {{No}}
| {{No}}
| {{Yes}}
| {{No}}
| {{No}}
| {{Yes}}
| {{Yes|Open}}
| {{No|{{dts|2011|09|01}}}} (Open-Sharedroot GitHub repository, weblink)
| Linux
|-
|OpenSSI
| {{Yes}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes|Open}}
| {{No|{{dts|2010|02|18}}}}
| Linux 2.6.10 (Debian, Fedora)
|-
|Plan 9
| {{No}}{{Citation
| last1 = Pike | first1 = Rob
| last2 = Presotto | first2 = Dave
| last3 = Thompson | first3 = Ken
| last4 = Trickey | first4 = Howard
| contribution = Plan 9 from Bell Labs
| series = In Proceedings of the Summer 1990 UKUUG Conference
| pages = 8
| year = 1990
| quote = Process migration is also deliberately absent from Plan 9.
}}
| {{No}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes|Open}}
| {{Yes|{{dts|2015|01|09}}}}
| Native
|-
|Sprite
| {{Yes}}
| {{Unk}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{No}}
| {{Unk}}
| {{Yes|Open}}
| {{No|{{dts|1992}}}}
| Native
|-
|TidalScale
| {{Yes}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{No|Closed}}
| {{Yes|{{dts|2020|08|17}}}}
| Linux, FreeBSD
|-
|TruCluster
| {{No}}
| {{Unk}}
| {{No}}
| {{Yes}}
| {{No}}
| {{No}}
| {{Yes}}
| {{No|Closed}}
| {{No|{{dts|2010|10|1}}}}
| Tru64
|-
|VMScluster
| {{No}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{No|Closed}}
| {{Yes|{{dts|2024|01|25}}}}
| OpenVMS
|-
|z/VM
| {{Yes}}
| {{No}}
| {{Yes}}
| {{No}}
| {{No}}
| {{Yes}}
| {{Unk}}
| {{No|Closed}}
| {{Yes|{{dts|2022|09|16}}}}
| Native
|-
|UnixWare NonStop Clusters (a base for OpenSSI)
| {{Yes}}
| {{No}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{Yes}}
| {{No|Closed}}
| {{No|{{dts|2000|6|}}}}
| UnixWare
|}

References

{{Reflist}}
