Peer-to-Peer Computing: Principles and Applications



Principles and Applications

Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures. Berners-Lee made a proposal for an information management system on 12 March 1989, and he implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the Internet in mid-November of the same year.

The World Wide Web (WWW), commonly known as the Web, is an information system in which documents and other web resources are identified by Uniform Resource Locators (URLs), which may be interlinked by hypertext, and are accessible over the Internet.

The resources of the WWW may be accessed by users through a software application called a web browser.

An earlier system, USENET, a distributed messaging system that is often described as an early peer-to-peer architecture, was established a decade before the Web. It was developed in 1979 as a system that enforces a decentralized model of control. The basic model is a client-server model from the user or client perspective that offers a self-organizing approach to newsgroup servers.

However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of e-mail clients and their direct connections is strictly a client-server relationship.

Usenet is a worldwide distributed discussion system available on computers. Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980. Users read and post messages to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects and is the precursor to the Internet forums that are widely used today. Discussions are threaded, as with web forums and BBSs, though posts are stored on the server sequentially.

The name comes from the term "users network".

Decentralized computing is the allocation of resources, both hardware and software, to each individual workstation or office location. In contrast, centralized computing exists when the majority of functions are carried out, or obtained, from a remote centralized location. Decentralized computing is a trend in modern-day business environments.

This is the opposite of centralized computing, which was prevalent during the early days of computers. A decentralized computer system has many benefits over a conventional centralized network.


Desktop computers have advanced so rapidly that their potential performance far exceeds the requirements of most business applications, which leaves most desktop computers idle much of the time. A decentralized system can use the potential of these machines to maximize efficiency. However, it is debatable whether such networks increase overall effectiveness.

A news server is a collection of software used to handle Usenet articles.

It may also refer to a computer itself which is primarily or solely used for handling Usenet. A reader server provides an interface to read and post articles, generally with the assistance of a news client. A transit server exchanges articles with other servers. Most servers can provide both functions.

In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster.


A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client-server model, where communication is usually to and from a central server.

A typical example of a file transfer that uses the client-server model is the File Transfer Protocol (FTP) service, in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests. Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network.
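
To make the dual client/server role concrete, here is a minimal sketch in Python, not taken from any particular P2P system: a single process listens for incoming peer connections while also dialing out to another peer, exactly as a client would. The address and port are invented for illustration.

    import socket
    import threading
    import time

    def serve(host: str, port: int) -> None:
        """Accept connections from other peers and send a greeting (the 'server' role)."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((host, port))
            srv.listen()
            while True:
                conn, _addr = srv.accept()
                with conn:
                    conn.sendall(b"hello from peer\n")

    def dial(host: str, port: int) -> bytes:
        """Connect to another peer and read its greeting (the 'client' role)."""
        with socket.create_connection((host, port), timeout=5) as c:
            return c.recv(1024)

    if __name__ == "__main__":
        # The same process plays both roles: a listening thread plus an outgoing call.
        threading.Thread(target=serve, args=("127.0.0.1", 9000), daemon=True).start()
        time.sleep(0.2)                     # give the listener a moment to bind
        print(dial("127.0.0.1", 9000))      # dials itself here purely for demonstration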

Overlays are used for indexing and peer discovery, and they make the P2P system independent of the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured, structured, or a hybrid between the two.


Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data.

Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers, and any peer searching for it is likely to find it. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful.
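
A small simulation can make this trade-off concrete. The sketch below is illustrative only: the overlay topology, the file placement, and the time-to-live (TTL) budget are all invented, but it shows why popular content is found quickly while rare content can be missed.

    from collections import deque

    # Adjacency list: which peers each peer is connected to in the overlay.
    NEIGHBORS = {
        "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
        "D": ["B", "C", "E"], "E": ["D"],
    }
    # Which peer holds which files; the rare file lives only on E.
    FILES = {"A": {"popular.mp3"}, "B": {"popular.mp3"}, "E": {"rare.flac"}}

    def flood_search(origin: str, wanted: str, ttl: int) -> set[str]:
        """Breadth-first flood: forward the query to neighbors until the TTL expires."""
        hits, seen = set(), {origin}
        queue = deque([(origin, ttl)])
        while queue:
            peer, remaining = queue.popleft()
            if wanted in FILES.get(peer, set()):
                hits.add(peer)
            if remaining == 0:
                continue                      # query dies here; distant data is missed
            for nbr in NEIGHBORS[peer]:
                if nbr not in seen:           # each peer processes a query only once
                    seen.add(nbr)
                    queue.append((nbr, remaining - 1))
        return hits

    print(flood_search("A", "popular.mp3", ttl=2))   # {'A', 'B'}: found at several peers
    print(flood_search("A", "rare.flac", ttl=1))     # set(): TTL too small to reach E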

The most common type of structured P2P network implements a distributed hash table (DHT), [18] [19] in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e., with large numbers of nodes frequently joining and leaving the network). Some prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system.
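
The core key-to-peer assignment behind a DHT can be sketched briefly. The following is not Chord or Kademlia, just the consistent-hashing idea under simplifying assumptions (SHA-1 truncated to 32 bits, invented peer names): peers and file keys are hashed onto the same ring, and each key is owned by the first peer clockwise from its position.

    import hashlib
    from bisect import bisect_right

    def ring_position(name: str, bits: int = 32) -> int:
        """Map an identifier onto the ring [0, 2^bits)."""
        digest = hashlib.sha1(name.encode()).digest()
        return int.from_bytes(digest[:4], "big") % (1 << bits)

    class Ring:
        def __init__(self, peers: list[str]) -> None:
            # Keep (position, peer) pairs sorted so lookup is a binary search.
            self.points = sorted((ring_position(p), p) for p in peers)

        def owner(self, key: str) -> str:
            """First peer at or after the key's position, wrapping around the ring."""
            pos = ring_position(key)
            idx = bisect_right(self.points, (pos, "")) % len(self.points)
            return self.points[idx][1]

    ring = Ring(["peer-1", "peer-2", "peer-3", "peer-4"])
    print(ring.owner("song.mp3"))
    # Adding or removing one peer only moves the keys on its arc of the ring,
    # but under churn every node must keep repairing its neighbor lists,
    # which is exactly the robustness cost described above.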

Hybrid models are a combination of peer-to-peer and client-server models.


Spotify was an example of a hybrid model [until 2014]. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks.

CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunal Sripanidkulchai. All of the information is retained at the server.

This system makes use of the fact that the bottleneck is more likely in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are "close in IP" to its neighbors (same prefix range) in an attempt to use locality. If multiple peers are found with the same file, it designates that the node choose the fastest of its neighbors.
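
Here is a hedged sketch of that locality heuristic, not CoopNet's actual algorithm: candidate peers are ranked by the length of the IP prefix they share with the requester, and the fastest peer among the closest is chosen. The addresses and latency figures are invented.

    import ipaddress

    def prefix_match(a: str, b: str) -> int:
        """Length of the common leading bit prefix of two IPv4 addresses."""
        xa = int(ipaddress.IPv4Address(a))
        xb = int(ipaddress.IPv4Address(b))
        diff = xa ^ xb
        return 32 if diff == 0 else 32 - diff.bit_length()

    def pick_peer(requester: str, candidates: dict[str, float]) -> str:
        """candidates maps peer IP -> measured latency in ms (lower is faster)."""
        best = max(prefix_match(requester, ip) for ip in candidates)
        closest = [ip for ip in candidates if prefix_match(requester, ip) == best]
        return min(closest, key=candidates.get)   # fastest among the IP-closest peers

    peers = {"203.0.113.7": 80.0, "198.51.100.2": 15.0, "203.0.113.99": 40.0}
    print(pick_peer("203.0.113.50", peers))
    # -> 203.0.113.7: prefix closeness wins first; speed only breaks ties
    #    among peers at the same (closest) prefix distance.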

Streaming media is transmitted by having clients cache the previous stream and then transmit it piece-wise to new nodes.

Peer-to-peer systems pose unique challenges from a computer security perspective. Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits.

Also, since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial of service attacks.


Examples of common routing attacks include "incorrect lookup routing", whereby malicious nodes deliberately forward requests incorrectly or return false results; "incorrect routing updates", where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information; and "incorrect routing network partition", where new nodes bootstrap via a malicious node, which places them in a partition of the network that is populated by other malicious nodes. The prevalence of malware varies between different peer-to-peer protocols. Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network.

For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. Modern hashing, chunk verification, and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts.

The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client-server based system. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client-server architecture, clients share only their demands with the system, but not their resources.
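
The chunk-verification idea can be sketched generically; this is not any specific network's scheme, and the chunk size and use of SHA-256 are illustrative assumptions. Each chunk received from a peer is checked against a trusted hash list before it is accepted, so a poisoned chunk is rejected instead of corrupting the whole file.

    import hashlib

    CHUNK_SIZE = 256 * 1024  # 256 KiB; real networks pick their own sizes

    def chunk_hashes(data: bytes) -> list[str]:
        """Hashes published by a trusted source alongside the file metadata."""
        return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
                for i in range(0, len(data), CHUNK_SIZE)]

    def verify_chunk(index: int, chunk: bytes, trusted: list[str]) -> bool:
        """Accept a chunk from a peer only if its hash matches the trusted list."""
        return hashlib.sha256(chunk).hexdigest() == trusted[index]

    original = b"x" * (2 * CHUNK_SIZE)
    trusted = chunk_hashes(original)
    good = original[:CHUNK_SIZE]
    poisoned = b"y" * CHUNK_SIZE                 # a faked chunk from a malicious peer
    print(verify_chunk(0, good, trusted))        # True: keep it
    print(verify_chunk(1, poisoned, trusted))    # False: discard and re-request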

In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. There are both advantages and disadvantages in P2P networks related to the topic of data backup , recovery, and availability.

In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. Although server-client networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host.

A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point. In this sense, the community of users in a P2P network is completely responsible for deciding what content is available.

Unpopular files will eventually disappear and become unavailable as more people stop sharing them. Popular files, however, will be highly and easily distributed. Popular files on a P2P network actually have more stability and availability than files on central networks. In a centralized network, a simple loss of connection between the server and clients is enough to cause a failure, but in P2P networks, the connections between every node must be lost in order to cause a data sharing failure.
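
As a rough illustration of this claim, assume a file is held by n peers, each independently online with probability p (a strong simplification; real peer uptimes are neither independent nor uniform). The chance that at least one copy is reachable is then 1 - (1 - p)^n, which climbs toward certainty as the file becomes more popular.

    def availability(n_peers: int, p_online: float) -> float:
        """Probability that at least one of n independent peers is online."""
        return 1 - (1 - p_online) ** n_peers

    for n in (1, 5, 50):
        print(f"{n:>2} peers at 30% uptime -> {availability(n, 0.3):.4f}")
    #  1 peer  -> 0.3000  (like a single flaky server)
    #  5 peers -> 0.8319
    # 50 peers -> 1.0000  (popular files are effectively always available)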

In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. In P2P networks, clients both provide and use resources.
