The key feature of TCP is its ability to provide a reliable, bi-directional, virtual channel between any two hosts on the Internet. Since the protocol works over the IP network, which provides only best-effort delivery of packets, the TCP standard specifies a sliding-window-based flow control. This flow control involves several steps. First, the sender buffers all data before transmission and assigns a sequence number to each buffered byte. Contiguous blocks of the buffered data are packetized into TCP packets, each of which carries the sequence number of the first data byte in the packet.
Second, a portion of the prepared packets is transmitted to the receiver using the IP protocol. As soon as the sender receives delivery confirmation for one or more data packets, it transmits a new portion of packets (the window “slides” along the sender’s buffer, Figure 1).
Finally, the sender retains responsibility for a data block until the receiver explicitly confirms its delivery. As a result, the sender may decide that a particular unacknowledged data block has been lost and start recovery procedures.
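The sliding-window mechanics described above can be sketched as follows. This is a simplified illustration, not real TCP: the names `MSS` and `WINDOW`, and the tiny sizes chosen, are assumptions made for readability.

```python
MSS = 4        # bytes per "packet" (deliberately tiny, for readability)
WINDOW = 8     # sliding window size in bytes (assumed)

def packetize(data, start, end):
    """Split the bytes in [start, end) into (sequence number, payload) packets."""
    pkts = []
    seq = start
    while seq < end:
        payload = data[seq:min(seq + MSS, end)]
        pkts.append((seq, payload))
        seq += len(payload)
    return pkts

data = b"hello world, tcp!"
snd_una = 0                                       # oldest unacknowledged byte
window_end = min(len(data), snd_una + WINDOW)
in_flight = packetize(data, snd_una, window_end)  # what may be sent right now
print(in_flight)                                  # → [(0, b'hell'), (4, b'o wo')]

# An ACK for the first packet slides the window forward,
# exposing fresh bytes that may now be transmitted:
snd_una += len(in_flight[0][1])
new_end = min(len(data), snd_una + WINDOW)
newly_allowed = packetize(data, window_end, new_end)
print(newly_allowed)                              # → [(8, b'rld,')]
```

Note that the sender keeps `in_flight` data buffered until it is acknowledged, since any of those packets may need to be retransmitted.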
To acknowledge data delivery, the receiver forms an ACK packet that carries one of two kinds of sequence-number information. The first, a cumulative ACK, indicates that all data blocks with smaller sequence numbers have already been delivered. The second, a selective ACK, explicitly indicates the ranges of sequence numbers of delivered data packets. Strictly speaking, TCP does not have a separate ACK packet, but rather uses flags and option fields in the common TCP header for acknowledgment purposes (a TCP packet can be both a data packet and an ACK packet at the same time).
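The difference between the two kinds of acknowledgment can be illustrated with a small sketch: given the byte ranges the receiver holds, the cumulative ACK is the first byte still missing in order, while the out-of-order ranges would be reported selectively (SACK-style). The function name and range representation are assumptions for this example.

```python
def build_ack(received):
    """received: sorted, non-overlapping (start, end) byte ranges, end exclusive.
    Returns (cumulative ACK number, list of selectively acknowledged ranges)."""
    cum = 0
    i = 0
    # Cumulative ACK: advance while ranges are contiguous from byte 0.
    while i < len(received) and received[i][0] <= cum:
        cum = max(cum, received[i][1])
        i += 1
    sack = received[i:]   # out-of-order data, reported via selective ACK
    return cum, sack

cum, sack = build_ack([(0, 1000), (1000, 2000), (3000, 4000)])
print(cum, sack)   # → 2000 [(3000, 4000)]
```

Here the cumulative ACK of 2000 tells the sender that everything below byte 2000 arrived, while the selective range reveals that bytes 2000–2999 (and only those) are missing.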
Without loss of generality, we will discuss ACK packets as a separate entity. Although sliding-window-based flow control is simple, it must balance several conflicting objectives. On one hand, maximizing throughput requires maximizing the size of the sliding window. (It can be shown that the maximum throughput of a TCP flow depends directly on the sliding window size and inversely on the RTT (round-trip time) of the network path.)
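This throughput relationship is easy to see with a back-of-the-envelope calculation: at most one window of data can be in flight per round trip, so throughput ≈ window / RTT. The concrete numbers below are illustrative assumptions.

```python
window_bytes = 65_535          # the classic maximum TCP receive window
rtt_seconds = 0.1              # assumed 100 ms round-trip time
throughput = window_bytes / rtt_seconds
print(f"{throughput / 1e6:.2f} MB/s")   # → 0.66 MB/s
```

Notably, this ceiling holds regardless of link speed: on a 100 ms path, a sender limited to a 64 KB window cannot exceed roughly 0.66 MB/s, which is why larger windows are desirable on fast, long-delay paths.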
On the other hand, if the sliding window (usually referred to as the congestion window) is too large, there is a high probability of packet loss, because both the network and the receiver have resource limitations. Thus, minimizing packet losses requires minimizing the sliding window.
Therefore, the problem is to find an optimal value for the congestion window that provides good throughput yet does not overwhelm the network or the receiver. Additionally, TCP should be able to recover from packet losses in a timely fashion: the shorter the interval between packet transmission and loss detection, the faster TCP can recover.
However, this interval cannot be too short, or the sender may detect a loss prematurely and retransmit the same packet unnecessarily. Such overreaction wastes network resources and may itself induce congestion in the network. Thus, when and how a sender detects packet losses is another hard problem for TCP.
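TCP addresses this timing trade-off with a retransmission timeout (RTO) derived from smoothed RTT measurements. The sketch below follows the standard estimator in the style of RFC 6298; the class name is an assumption, but the constants (1/8, 1/4, K = 4, 1-second minimum) are the usual recommended values.

```python
class RtoEstimator:
    """Smoothed RTT / RTO estimator in the style of RFC 6298."""

    def __init__(self, alpha=1 / 8, beta=1 / 4, k=4, min_rto=1.0):
        self.alpha, self.beta, self.k, self.min_rto = alpha, beta, k, min_rto
        self.srtt = None      # smoothed round-trip time
        self.rttvar = None    # round-trip time variation

    def sample(self, rtt):
        """Feed one RTT measurement (seconds); return the updated RTO."""
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.beta) * self.rttvar + self.beta * abs(self.srtt - rtt)
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * rtt
        return max(self.min_rto, self.srtt + self.k * self.rttvar)

est = RtoEstimator()
rto = est.sample(0.2)    # first sample: 200 ms RTT
print(rto)               # → 1.0 (srtt + 4*rttvar = 0.6, clamped to the 1 s floor)
```

The variance term makes the timeout conservative on jittery paths, which is exactly the hedge against the premature retransmissions described above.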
1.2 Proposed System
The proposed system overcomes the problems of the existing system by considering the rate-limiting factor in order to avoid congestion in the network. It builds on the host-to-host congestion control proposals that form the foundation for all currently known host-to-host algorithms. This foundation includes:
• The basic principle of probing the available network resources.
• Loss-based and delay-based techniques to estimate the congestion state in the network.
• Techniques to detect packet losses quickly.
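The first principle above, probing the available network resources, is classically realized as additive-increase/multiplicative-decrease (AIMD): grow the window steadily while the network accepts the traffic, and cut it sharply on a loss signal. The sketch below is a minimal illustration; the function name and parameters are assumptions, not taken from any RFC.

```python
def aimd(events, cwnd=1.0, incr=1.0, decr=0.5):
    """events: sequence of 'ack' or 'loss' signals; returns the cwnd trace."""
    trace = []
    for ev in events:
        if ev == "ack":
            cwnd += incr                     # additive increase: probe for spare capacity
        else:
            cwnd = max(1.0, cwnd * decr)     # multiplicative decrease on loss
        trace.append(cwnd)
    return trace

print(aimd(["ack", "ack", "ack", "loss", "ack"]))
# → [2.0, 3.0, 4.0, 2.0, 3.0]
```

The asymmetry (slow growth, fast backoff) is what lets many competing flows converge toward a fair share of the bottleneck.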
The objectives of the project are to:
• Avoid packet loss by reducing the congestion that occurs during data transmission in the network.
• Increase the efficiency and throughput of the original TCP.
• Avoid unnecessary assumptions of server crashes by using the RTO (Retransmission Time Out).
• Determine the optimal transmission rate rapidly, thereby eliminating the slow-start problem.
2.2 Literature Survey on Different Research Papers
Engineering research is dynamic in nature: different people work on some areas to some extent and leave room for others to do further work. A great deal of research has been carried out by several scholars on host-to-host congestion control for TCP. The challenges these scholars identified, using different algorithms and techniques, are described below:
2.2.1. Literature on Host to Host Congestion Control for TCP
This paper presents a survey of various congestion control approaches that preserve the original host-to-host idea of TCP. The surveyed solutions focus on a variety of problems, starting with the basic problem of eliminating congestion collapse, and including the problem of effectively using the available network resources in different types of environments.
The first approach in this survey, Tahoe, introduces the basic technique of gradually probing network resources and relying on packet loss to detect that the network limit has been reached.
However, although this technique solves the congestion problem, it uses the network quite inefficiently. As the survey shows, solutions to the efficiency problem include algorithms that:
• refine the core congestion control principle by making more optimistic assumptions about the network (Reno, NewReno);
• refine the TCP protocol to include extended reporting abilities of the receiver (SACK, DSACK), which allows the sender to estimate the network state more precisely (FACK, RR-TCP); or
• introduce alternative concepts for network state estimation through delay measurements (DUAL, Vegas, Veno).
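The delay-based idea behind Vegas-style schemes can be sketched as follows: compare expected throughput (window / base RTT) with actual throughput (window / current RTT); a growing gap means packets are sitting in router queues, so the sender should back off before any loss occurs. This is a simplified illustration; the function name and the alpha/beta thresholds are assumptions in the spirit of TCP Vegas, not an exact implementation.

```python
def vegas_signal(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
    """Estimate queueing from RTT inflation and suggest a window action."""
    expected = cwnd / base_rtt              # rate if no queueing occurred
    actual = cwnd / current_rtt             # rate actually observed
    diff = (expected - actual) * base_rtt   # ≈ packets queued in the network
    if diff < alpha:
        return "increase"    # network looks underused
    if diff > beta:
        return "decrease"    # queues are building: back off before loss
    return "hold"

print(vegas_signal(cwnd=20, base_rtt=0.1, current_rtt=0.12))   # → hold
```

Unlike loss-based schemes, this estimator reacts to congestion while it is still forming, which is why delay-based variants tend to keep queues (and therefore latency) small.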
Alexander Afanasyev, Neil Tilley, Peter Reiher, and Leonard Kleinrock proposed to employ more intelligent techniques to make congestion control aggressive only when the network is considered congestion-free and conservative during a congestion state. Two proposals, BIC and CUBIC, use packet loss to establish an approximated network resource limit. (Alexander Afanasyev, Neil Tilley, Peter Reiher, and Leonard Kleinrock, 2010)
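CUBIC's use of the loss point as an approximated resource limit can be illustrated with its window growth curve: after a reduction, the window grows as a cubic function of elapsed time, fast at first, then flattening as it approaches W_max (the window at which the last loss occurred). The sketch below uses the commonly cited constants C = 0.4 and beta = 0.7; the function name is an assumption.

```python
def cubic_window(t, w_max, c=0.4, beta=0.7):
    """CUBIC-style window size t seconds after a loss-triggered reduction."""
    # K is the time needed to climb back to w_max from beta * w_max.
    k = ((w_max * (1 - beta)) / c) ** (1 / 3)
    return c * (t - k) ** 3 + w_max

w_max = 100.0
k = ((w_max * 0.3) / 0.4) ** (1 / 3)
# At t = K the curve is exactly back at the previous loss point:
print(round(cubic_window(k, w_max), 6))   # → 100.0
```

The cubic shape is the "intelligent" behaviour referred to above: growth is conservative in the plateau around W_max (where congestion previously occurred) and aggressive far from it, probing quickly for newly available capacity.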
2.2.2 Literature on Exploration and Evaluation of traditional TCP Congestion Control Techniques
The congestion control mechanism used in this paper is divided into four main phases: slow start, congestion avoidance, fast retransmit, and fast recovery. Most data applications are built on top of TCP, since TCP provides end-to-end reliability by retransmitting lost segments. TCP was originally designed for wired networks, where packet losses are due to network congestion; hence, the TCP window size is adjusted upon detection of packet losses. There are several different congestion control algorithms, implemented in different TCP versions.
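The first two of these phases can be sketched in a few lines: during slow start the window doubles each RTT until it reaches the slow-start threshold (ssthresh), after which congestion avoidance grows it by only one segment per RTT. All values below are in segments, and the concrete numbers are assumptions for illustration.

```python
def grow(cwnd, ssthresh):
    """One RTT of window growth: slow start below ssthresh, linear above."""
    if cwnd < ssthresh:
        return cwnd * 2      # slow start: exponential probing
    return cwnd + 1          # congestion avoidance: cautious linear growth

cwnd, ssthresh, trace = 1, 8, []
for _ in range(6):           # simulate six RTTs
    cwnd = grow(cwnd, ssthresh)
    trace.append(cwnd)
print(trace)                 # → [2, 4, 8, 9, 10, 11]
```

The trace shows the characteristic shape: rapid exponential ramp-up to ssthresh, then a gentle linear climb while the sender probes for the congestion point.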
The ordinary TCP assumes that 99% of packet losses in the network are caused by congestion, and the remainder by damage (Ghassan A. Abed, Mahamod Ismail and Kasmiran Jumari, 2012).
2.2.3 Literature on Taxonomy of Congestion Control Techniques for TCP in Wired and Wireless Networks
The TCP congestion control techniques used in this paper are the following algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. The paper presented a comprehensive review of more than thirty end-to-end TCP congestion control techniques in both wired and wireless networks. Based on these classifications, we can conclude that a plethora of TCP congestion control techniques use the SCP with a loss-based scheme. In future work, the joint technique of prediction, compression, and network coding will be studied (Lee Chin Kho, Xavier Défago, Azman Osman Lim and Yasuo Tan, 2013).
2.3 Research Gap
Based on the survey of the different literature in the table above, I deduce that the research gap is QoS. The QoS parameters that are most affected are packet loss, queueing delay, and congestion collapse.