CSMA/CD
- -Carrier Sense
- If a device detects a signal from another device, it waits for a specified amount of time before attempting to transmit.
- "listening before transmit" mode
- -Multi-access
- Using a shared medium to communicate
- -Collision Detection
- When a device is in listening mode, it can detect when a collision occurs on the shared media, because all devices can detect an increase in the amplitude of the signal above the normal level.
- -Jam Signal
- When a collision is detected, the transmitting devices send out a jamming signal. The jamming signal notifies the other devices of a collision, so that they invoke a backoff algorithm.
- -Backoff
- Causes all devices to stop transmitting for a random amount of time, which allows the collision signals to subside.
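A minimal Python sketch of the backoff algorithm described above (truncated binary exponential backoff). The 512-bit slot time, the 16-attempt limit, and the exponent cap of 10 are the classic Ethernet values and are assumed here only for illustration:

    import random

    SLOT_TIME_BITS = 512      # classic Ethernet slot time, in bit times
    MAX_ATTEMPTS = 16         # the frame is aborted after 16 collisions

    def backoff_delay(collision_count):
        """Random wait (in bit times) after the Nth collision on this frame."""
        if collision_count > MAX_ATTEMPTS:
            raise RuntimeError("excessive collisions, frame aborted")
        # The wait window doubles with each collision but stops growing
        # after the 10th collision (the "truncated" part of the algorithm).
        k = min(collision_count, 10)
        slots = random.randint(0, 2 ** k - 1)
        return slots * SLOT_TIME_BITS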
Ethernet Communications
- -Unicast
- Communication in which a frame is sent from one host and addressed to one specific destination. In unicast transmission, there is just one sender and one receiver.
- -Broadcast
- Communication in which a frame is sent from one address to all other addresses. In this case, there is just one sender, but the information is sent to all connected receivers.
- ex: ARP
- -Multicast
- Communication in which a frame is sent to a specific group of devices or clients. Multicast transmission clients must be members of a logical multicast group to receive the information.
- ex: Video, Voice
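As a rough illustration of the three delivery types, the destination MAC address itself tells a receiver how a frame is being delivered: the broadcast address is all ones, and a multicast address has the group bit (the least-significant bit of the first octet) set. A small Python sketch, with example addresses chosen only for illustration:

    def frame_delivery_type(dest_mac: bytes) -> str:
        """Classify an Ethernet destination address (6 raw bytes)."""
        if dest_mac == b"\xff" * 6:
            return "broadcast"        # all ones: every station on the LAN accepts it
        if dest_mac[0] & 0x01:
            return "multicast"        # group bit set: members of the multicast group accept it
        return "unicast"              # one specific receiver

    print(frame_delivery_type(bytes.fromhex("ffffffffffff")))  # broadcast, e.g. an ARP request
    print(frame_delivery_type(bytes.fromhex("01005e0000fb")))  # multicast, e.g. video/voice groups
    print(frame_delivery_type(bytes.fromhex("00059a3c7800")))  # unicast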
Ethernet Frame
- -Preamble
- 7 bytes
- Start Frame Delimiter (SFD): 1 byte
- Used for synchronization between the sending and receiving devices.
- Essentially, the first few bytes tell the receivers to get ready to receive a new frame.
- -Destination MAC Address Field
- 6 bytes
- identifies the intended recipient
- assists the device in determining if a frame is addressed to it
- The address in the frame is compared to the MAC address in the device. If there is a match, the device accepts the frame.
- -Source MAC Address Field
- The Source MAC Address field (6 bytes) identifies the frame's originating NIC or interface. Switches use this address to add to their lookup tables.
- -Length / Type Field
- 2 bytes
- Frame Type if:
- the 2-byte value is greater than or equal to 0x0600 (1536): the contents are decoded according to the protocol indicated
- Length if:
- the 2-byte value is less than 0x0600
- -Data and Pad Fields
- 46 to 1500 bytes
- contain the encapsulated data from a higher layer, which is a generic Layer 3 PDU, or more commonly, an IPv4 packet.
- all frames must be at least 64 bytes
- the minimum length aids the detection of collisions
- if the frame is smaller than 64 bytes, padding is added until it reaches the minimum length
- -Frame Check Sequence Field
- 4 bytes
- detects errors in frames
- uses CRC
- sending device includes CRC
- receiving device generates a CRC to check the consistency of the message
- If there is a match the frame is accepted
- Else it is dropped
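A minimal Python sketch of the fields above, assuming zlib.crc32 as a stand-in for the Ethernet CRC-32 (the real FCS uses the same polynomial but a specific bit ordering on the wire); the preamble and SFD are omitted because they are not covered by the FCS:

    import struct, zlib

    def build_frame(dst: bytes, src: bytes, eth_type: int, payload: bytes) -> bytes:
        """Destination + source + type/length + padded data + FCS."""
        # Length/Type rule: >= 0x0600 means an EtherType (e.g. 0x0800 = IPv4),
        # anything smaller is interpreted as the length of the data field.
        type_or_len = eth_type if eth_type >= 0x0600 else len(payload)
        if len(payload) < 46:                       # pad so the frame reaches the 64-byte minimum
            payload = payload + b"\x00" * (46 - len(payload))
        header = dst + src + struct.pack("!H", type_or_len)
        fcs = struct.pack("<I", zlib.crc32(header + payload))   # 4-byte frame check sequence
        return header + payload + fcs

    def fcs_ok(frame: bytes) -> bool:
        """Receiver side: regenerate the CRC and compare it with the trailer."""
        return struct.pack("<I", zlib.crc32(frame[:-4])) == frame[-4:]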
MAC Address
- 48 bits or 6 bytes
- expressed as 12 hexadecimal digits
- 00-05-9A-3C-78-00
- 00:05:9A:3C:78:00
- 0005.9A3C.7800
- All devices connected to an Ethernet LAN have MAC-addressed interfaces.
- Used for determining if a message should be passed to the upper layers for processing.
- A MAC address is made up of
- -OUI (Organizational Unique Identifier)
- 24 bits long (3 bytes)
- First part of MAC address
- Made up of:
- Broadcast field
- Indicates to the receiving interface that the frame is destined for all or a group of end stations on the LAN segment
- Local Field
- The Local Bit indicates if the 24-bit vendor number can be modified locally
- OUI Number
- 22 bits long
- Assigned by IEEE
- Identifies the manufacturer of the NIC card.
- -Vendor Assigned Number
- 24 bits long
- Assigned by vendor
- uniquely identifies the Ethernet hardware
- Can be a Burned In Address or modified by software indicated by the local bit.
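A small Python sketch that pulls a MAC address apart into the pieces described above (OUI, vendor-assigned number, and the broadcast/group and local bits carried in the first octet):

    def parse_mac(mac: str) -> dict:
        """Split a MAC address string into its component fields."""
        octets = bytes.fromhex(mac.replace(":", "").replace("-", "").replace(".", ""))
        return {
            "oui": octets[:3].hex("-"),              # first 24 bits: OUI
            "vendor_assigned": octets[3:].hex("-"),  # last 24 bits: vendor-assigned number
            "group_bit": bool(octets[0] & 0x01),     # set for broadcast/multicast destinations
            "local_bit": bool(octets[0] & 0x02),     # set if the address is locally administered
        }

    print(parse_mac("00:05:9A:3C:78:00"))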
Duplex Settings
- -Half Duplex
- Unidirectional Data Flow
- Higher potential for collision
- Hub connectivity
- Similar to walkie-talkies, two-way radios
- Implements CSMA/CD to help reduce potential for collisions
- Efficiency is typically rated at 50 to 60 percent of a 10-Mb/s bandwidth.
- -Full Duplex
- Point to point only
- Bidirectional
- Attached to dedicated switch port
- Requires full-duplex support on both ends
- Collision-free
- Collision detect circuit disabled
- Frames sent by the two connected end nodes cannot collide because the end nodes use two separate circuits in the network cable.
Switchport Settings
- -auto
- The auto option sets autonegotiation of duplex mode. With autonegotiation enabled, the two ports communicate to decide the best mode of operation.
- -full
- The full option sets full-duplex mode.
- -half
- The half option sets half-duplex mode.
- For Fast Ethernet and 10/100/1000 ports, the default is auto
- For 100BASE-FX ports, the default is full
- The 10/100/1000 ports operate in either half- or full-duplex mode when they are set to 10 or 100 Mb/s, but only in full-duplex mode when set to 1000 Mb/s
- -auto-MDIX
- automatic medium-dependent interface crossover
- the switch detects the required cable type for copper Ethernet connections and configures the interfaces accordingly.
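A sketch of applying these settings from Python, assuming the third-party netmiko library (pip install netmiko) and a reachable Cisco IOS switch; the address, credentials, and interface name below are placeholders:

    from netmiko import ConnectHandler

    switch = {
        "device_type": "cisco_ios",
        "host": "192.0.2.10",        # placeholder management address
        "username": "admin",         # placeholder credentials
        "password": "secret",
    }

    commands = [
        "interface FastEthernet0/1",
        "duplex auto",               # autonegotiate duplex mode
        "speed auto",                # autonegotiate speed
        "mdix auto",                 # enable auto-MDIX cable detection
    ]

    with ConnectHandler(**switch) as conn:
        print(conn.send_config_set(commands))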
Design Considerations
- -Bandwidth and Throughput
- full bandwidth for transmission is available only after any collisions have been resolved.
- the net throughput of the port (the average data that is effectively transmitted) will be considerably reduced as a function of how many other nodes want to use the network
- A hub offers no mechanism to eliminate or reduce these collisions, and the bandwidth available to any one node for transmission is correspondingly reduced. As a result, the number of nodes sharing the Ethernet network has an effect on the throughput or productivity of the network, as illustrated below.
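A back-of-the-envelope illustration of that point, assuming (for the example only) a 10 Mb/s hub, 20 contending hosts, and the roughly 50 percent half-duplex efficiency mentioned earlier:

    def per_node_share(link_mbps: float, nodes: int, efficiency: float = 1.0) -> float:
        """Average bandwidth left for each node on a shared (hub-based) segment."""
        return link_mbps * efficiency / nodes

    print(per_node_share(10, 20, 0.5))   # -> 0.25 Mb/s per host, on average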
- -Collision Domains
- The network area where frames originate and collide is called the collision domain. All shared media environments, such as those created by using hubs, are collision domains.
- To reduce the number of nodes on a given network segment, you can create separate physical network segments, called collision domains.
- When a host is connected to a switch port, the switch creates a dedicated connection. This connection is considered an individual collision domain, because traffic is kept separate from all other traffic, thereby eliminating the potential for a collision.
- For example, if a 12-port switch has a device connected to each port, 12 collision domains are created.
- The switch creates the connection that is referred to as a microsegment. The microsegment behaves as if the network has only two hosts, one host sending and one receiving, providing maximum utilization of the available bandwidth.
- Switches reduce collisions and improve bandwidth use on network segments because they provide dedicated bandwidth to each network segment.
- -Broadcast Domains
- A collection of interconnected switches forms a single broadcast domain.
- Only a Layer 3 entity, such as a router, or a virtual LAN (VLAN), can stop a Layer 3 broadcast domain.
- Routers and VLANs are used to segment both collision and broadcast domains.
- When a device wants to send out a Layer 2 broadcast, the destination MAC address in the frame is set to all ones. By setting the destination to this value, all the devices accept and process the broadcasted frame.
- The broadcast domain at Layer 2 is referred to as the MAC broadcast domain. The MAC broadcast domain consists of all devices on the LAN that receive frame broadcasts by a host to all other machines on the LAN.
- When a switch receives a broadcast frame, it forwards the frame to each of its ports, except the incoming port where the switch received the broadcast frame. Each attached device recognizes the broadcast frame and processes it. This leads to reduced network efficiency, because bandwidth is used to propagate the broadcast traffic.
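A toy Python sketch of the flooding behavior just described: a broadcast (or any destination the switch has not yet learned) is sent out every port except the one it arrived on, while a known unicast goes out a single port. The MAC-table format here is an assumption for illustration:

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def egress_ports(ports, in_port, dest_mac, mac_table):
        """Return the ports a frame is forwarded out of (simplified switch logic)."""
        if dest_mac == BROADCAST or dest_mac not in mac_table:
            return [p for p in ports if p != in_port]   # flood, excluding the ingress port
        return [mac_table[dest_mac]]                    # known unicast: one egress port

    print(egress_ports([1, 2, 3, 4], in_port=1, dest_mac=BROADCAST, mac_table={}))   # [2, 3, 4]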
- -Network Latency
- Latency is the time a frame or a packet takes to travel from the source station to the final destination.
- The predominant cause of network latency in a switched LAN is more a function of the media over which the traffic travels, the routing protocols in use, and the types of applications running on the network.
- Latency has at least three sources.
- the time it takes the source NIC to place voltage pulses on the wire, and the time it takes the destination NIC to interpret these pulses. This is sometimes called NIC delay, typically around 1 microsecond for a 10BASE-T NIC.
- the actual propagation delay as the signal takes time to travel through the cable. Typically, this is about 0.556 microseconds per 100 m for Cat 5 UTP. Longer cable and slower nominal velocity of propagation (NVP) result in more propagation delay.
- latency is added by the network devices in the path between the two end devices; these are either Layer 1, Layer 2, or Layer 3 devices
- Switch-based latency may also be due to an oversubscribed switch fabric.
- Latency does not depend solely on distance and number of devices. For example, if three properly configured switches separate two computers, the computers may experience less latency than if two properly configured routers separated them. This is because routers conduct more complex and time-intensive functions.
- For example, a router must analyze Layer 3 data, while switches just analyze the Layer 2 data. Since Layer 2 data is present earlier in the frame structure than the Layer 3 data, switches can process the frame more quickly. Switches also support the high transmission rates of voice, video, and data networks by employing application-specific integrated circuits (ASIC) to provide hardware support for many networking tasks. Additional switch features such as port-based memory buffering, port level QoS, and congestion management, also help to reduce network latency.
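A rough worked example combining the three latency sources listed above. The NIC delay and the Cat 5 propagation figure come from these notes; the per-switch forwarding delay is an assumed placeholder:

    NIC_DELAY_US = 1.0         # ~1 microsecond per 10BASE-T NIC
    PROP_US_PER_100M = 0.556   # Cat 5 UTP propagation delay per 100 m
    SWITCH_DELAY_US = 10.0     # assumed per-switch forwarding delay (placeholder)

    def one_way_latency_us(cable_m, switches):
        """Sum NIC, propagation, and device delay for a simple switched path."""
        nic = 2 * NIC_DELAY_US                       # sending NIC + receiving NIC
        propagation = PROP_US_PER_100M * cable_m / 100
        devices = switches * SWITCH_DELAY_US
        return nic + propagation + devices

    print(one_way_latency_us(cable_m=100, switches=3))   # ~32.6 microseconds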
- -Network Congestion
- These are the most common causes of network congestion:
- Increasingly powerful computer and network technologies. Today, CPUs, buses, and peripherals are much faster and more powerful than those used in early LANs, therefore they can send more data at higher rates through the network, and they can process more data at higher rates.
- Increasing volume of network traffic. Network traffic is now more common because remote resources are necessary to carry out basic work. Additionally, broadcast messages, such as address resolution queries sent out by ARP, can adversely affect end-station and network performance.
- High-bandwidth applications. Desktop publishing, engineering design, video on demand (VoD), electronic learning (e-learning), and streaming video all require considerable processing power and speed.
- -LAN Segmentation
- LANs are segmented into a number of smaller collision and broadcast domains using routers and switches.
- Previously, bridges were used, but this type of network equipment is rarely seen in a modern switched LAN.
- Bridges are generally used to segment a LAN into a couple of smaller segments.
- Switches are generally used to segment a large LAN into many smaller segments.
- Bridges have only a few ports for LAN connectivity, whereas switches have many.
- Even though the LAN switch reduces the size of collision domains, all hosts connected to the switch, and in the same VLAN, are still in the same broadcast domain.
- Because routers do not forward broadcast traffic by default, they can be used to create broadcast domains.
LAN Design Considerations
- -Controlling Network Latency
- Switches can introduce latency when they are oversubscribed on a busy network.
- The use of higher layer devices can also increase latency on a network. When a Layer 3 device, such as a router, needs to examine the Layer 3 addressing information contained within the frame, it must read further into the frame than a Layer 2 device, which creates a longer processing time.
- -Removing Bottlenecks
- Bottlenecks on a network are places where high network congestion results in slow performance.
- Higher-capacity links (for example, upgrading from 100 Mb/s to 1000 Mb/s connections) and using multiple links with link aggregation technologies (for example, combining two links as if they were one, to double a connection's capacity) can help to reduce the bottlenecks created by inter-switch links and router links.
Switch Forwarding Methods
- -Store-and-Forward Switching
- In store-and-forward switching, when the switch receives the frame, it stores the data in buffers until the complete frame has been received.
- During the storage process, the switch analyzes the frame for information about its destination.
- In this process, the switch also performs an error check using the Cyclic Redundancy Check (CRC) trailer portion of the Ethernet frame.
- CRC uses a mathematical formula, based on the number of bits (1s) in the frame, to determine whether the received frame has an error.
- After confirming the integrity of the frame, the frame is forwarded out the appropriate port toward its destination.
- When an error is detected in a frame, the switch discards the frame. Discarding frames with errors reduces the amount of bandwidth consumed by corrupt data.
- Store-and-forward switching is required for Quality of Service (QoS) analysis on converged networks where frame classification for traffic prioritization is necessary. For example, voice over IP data streams need to have priority over web-browsing traffic.
- -Cut-through Switching
- The switch acts upon the data as soon as it is received, even if the transmission is not complete.
- The switch buffers just enough of the frame to read the destination MAC address so that it can determine to which port to forward the data.
- The switch looks up the destination MAC address in its switching table, determines the outgoing interface port, and forwards the frame onto its destination through the designated switch port.
- The switch does not perform any error checking on the frame. Because the switch does not have to wait for the entire frame to be completely buffered, and because the switch does not perform any error checking, cut-through switching is faster than store-and-forward switching.
- However, because the switch does not perform any error checking, it forwards corrupt frames throughout the network.
- The corrupt frames consume bandwidth while they are being forwarded. The destination NIC eventually discards the corrupt frames.
- There are two variants of cut-through switching:
- -Fast Forward Switching
- Fast-forward switching immediately forwards a packet after reading the destination address.
- Fast-forward switching offers the lowest level of latency.
- There may be times when packets are relayed with errors. This occurs infrequently, and the destination network adapter discards the faulty packet upon receipt.
- In fast-forward mode, latency is measured from the first bit received to the first bit transmitted.
- -Fragment Free Switching
- In fragment-free switching, the switch stores the first 64 bytes of the frame before forwarding.
- Fragment-free switching can be viewed as a compromise between store-and-forward switching and cut-through switching.
- The reason fragment-free switching stores only the first 64 bytes of the frame is that most network errors and collisions occur during the first 64 bytes.
- Fragment-free switching tries to enhance cut-through switching by performing a small error check on the first 64 bytes of the frame to ensure that a collision has not occurred before forwarding the frame.
- Some switches are configured to perform cut-through switching on a per-port basis until a user-defined error threshold is reached, and then they automatically change to store-and-forward switching. When the error rate falls below the threshold, the port automatically changes back to cut-through switching.
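A compact Python sketch contrasting the methods above by how much of a frame each one must buffer before it starts forwarding; only store-and-forward verifies the FCS before transmitting:

    def bytes_buffered_before_forwarding(method: str, frame_len: int) -> int:
        """How much of the frame each method must receive before forwarding starts."""
        if method == "fast-forward":        # cut-through: only the 6-byte destination MAC
            return 6
        if method == "fragment-free":       # wait out the 64-byte collision window
            return 64
        if method == "store-and-forward":   # buffer the whole frame, then check the FCS
            return frame_len
        raise ValueError(method)

    for m in ("fast-forward", "fragment-free", "store-and-forward"):
        print(m, bytes_buffered_before_forwarding(m, frame_len=1518))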
Symmetric and Asymmetric Switching
- -Asymmetric
- Provides switched connections between ports of unlike bandwidth, such as a combination of 100 Mb/s and 1000 Mb/s ports.
- -Symmetric
- Provides switched connections between ports with the same bandwidth, such as all 100 Mb/s or all 1000 Mb/s ports.
Memory Buffering
- -Port-based Memory Buffering
- Frames are stored in queues that are linked to specific incoming and outgoing ports.
- A frame is transmitted to the outgoing port only when all the frames ahead of it in the queue have been successfully transmitted.
- It is possible for a single frame to delay the transmission of all the frames in memory because of a busy destination port.
- -Shared Memory Buffering
- Shared memory buffering deposits all frames into a common memory buffer which all the ports on the switch share.
- The frames in the buffer are linked dynamically to the destination port. This allows the packet to be received on one port and then transmitted on another port, without moving it to a different queue.
- The switch keeps a map of frame to port links showing where a packet needs to be transmitted. The map link is cleared after the frame has been successfully transmitted. The number of frames stored in the buffer is restricted by the size of the entire memory buffer and not limited to a single port buffer.
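A toy Python sketch of the shared buffer and frame-to-port map described above; the key point is that a frame waiting for a busy port does not block frames destined for other ports:

    shared_buffer = []                       # one memory pool used by every port

    def enqueue(frame: bytes, out_port: int) -> None:
        """Store the frame once, tagged with the port it must leave on."""
        shared_buffer.append((out_port, frame))

    def drain(ready_port: int) -> list:
        """Transmit every buffered frame mapped to the port that is now free."""
        global shared_buffer
        sent = [f for p, f in shared_buffer if p == ready_port]
        shared_buffer = [(p, f) for p, f in shared_buffer if p != ready_port]  # clear those map entries
        return sent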
Layer 2 and Layer 3 Switching
- -Layer 2 LAN switch
- Performs switching and filtering based only on the Layer 2 (data link layer) MAC address.
- -Layer 3 switch
- Functions like a Layer 2 switch, but can also use Layer 3 IP address information to make forwarding decisions within the LAN.
- -Layer 3 switch and Router Comparison
- Routers are capable of performing packet forwarding tasks not found on Layer 3 switches, such as establishing remote access connections to remote networks and devices.
- Dedicated routers are more flexible in their support of WAN interface cards (WIC), making them the preferred, and sometimes only, choice for connecting to a WAN.
- Layer 3 switches can provide basic routing functions in a LAN and reduce the need for dedicated routers.